Using Canvas and D3

Packt
21 Aug 2014
9 min read
This article by Pablo Navarro Castillo, author of Mastering D3.js, explains that most of the time we use D3 to create SVG-based charts and visualizations. If the number of elements to render is huge, or if we need to render raster images, it can be more convenient to render our visualizations using the HTML5 canvas element. In this article, we will learn how to use the force layout of D3 and the canvas element to create an animated network visualization with random data. (For more resources related to this topic, see here.)

Creating figures with canvas

The HTML canvas element allows you to create raster graphics using JavaScript. It was first introduced in HTML5. It enjoys more widespread support than SVG, and can be used as a fallback option. Before diving deeper into integrating canvas and D3, we will construct a small example with canvas.

The canvas element should have the width and height attributes. This alone will create an invisible figure of the specified size:

<!-- Canvas Element -->
<canvas id="canvas-demo" width="650px" height="60px"></canvas>

If the browser supports the canvas element, it will ignore any element inside the canvas tags. On the other hand, if the browser doesn't support canvas, it will ignore the canvas tags, but it will interpret the content of the element. This behavior provides a quick way to handle the lack of canvas support:

<!-- Canvas Element -->
<canvas id="canvas-demo" width="650px" height="60px">
    <!-- Fallback image -->
    <img src="img/fallback-img.png" width="650" height="60">
</canvas>

If the browser doesn't support canvas, the fallback image will be displayed. Note that unlike the <img> element, the canvas closing tag (</canvas>) is mandatory. To create figures with canvas, we don't need special libraries; we can create the shapes using the canvas API:

<script>
    // Graphic variables
    var barw = 65,
        barh = 60;

    // Get the canvas element with the canvas-demo ID.
    var canvas = document.getElementById('canvas-demo');

    // Get the rendering context.
    var context = canvas.getContext('2d');

    // Array with colors, to have one rectangle of each color.
    var color = ['#5c3566', '#6c475b', '#7c584f', '#8c6a44', '#9c7c39',
                 '#ad8d2d', '#bd9f22', '#cdb117', '#ddc20b', '#edd400'];

    // Set the fill color and render ten rectangles.
    for (var k = 0; k < 10; k += 1) {
        // Set the fill color.
        context.fillStyle = color[k];
        // Create a rectangle in incremental positions.
        context.fillRect(k * barw, 0, barw, barh);
    }
</script>

We use the DOM API to access the canvas element with the canvas-demo ID and to get the rendering context. Then we set the color using the fillStyle method and use the fillRect canvas method to create a small rectangle. Note that we need to set fillStyle every time, or all the following shapes will have the same color. The script renders a series of rectangles, each one filled with a different color, as shown in the following figure:

A graphic created with canvas

Canvas uses the same coordinate system as SVG, with the origin in the top-left corner, the horizontal axis increasing to the right, and the vertical axis increasing downwards. Instead of using the DOM API to get the canvas node, we could have used D3 to create the node, set its attributes, and create scales for the color and position of the shapes. Note that the shapes drawn with canvas don't exist in the DOM tree, so we can't use the usual D3 pattern of creating a selection, binding the data items, and appending the elements when we are using canvas.
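As noted above, the same figure could have been built by letting D3 create the canvas node and by using scales for the position and color of the bars. The following is a minimal sketch of that alternative, assuming a <div id="chart"></div> container and the D3 v3 scale API used throughout this article; it is an illustration, not code from the book.

// A hedged sketch: creating the demo canvas with D3 and using scales
// for the x position and fill color of each bar. The container ID is
// an assumption for this example.
var barw = 65,
    barh = 60,
    colors = ['#5c3566', '#6c475b', '#7c584f', '#8c6a44', '#9c7c39',
              '#ad8d2d', '#bd9f22', '#cdb117', '#ddc20b', '#edd400'];

// Append the canvas element with D3 and retrieve the underlying node.
var canvas = d3.select('div#chart').append('canvas')
    .attr('width', 10 * barw)
    .attr('height', barh)
    .node();

var context = canvas.getContext('2d');

// Scales for the horizontal position and the fill color of each bar.
var xScale = d3.scale.linear().domain([0, 10]).range([0, 10 * barw]),
    cScale = d3.scale.ordinal().domain(d3.range(10)).range(colors);

// Draw one rectangle per color, as in the DOM API version above.
d3.range(10).forEach(function (k) {
    context.fillStyle = cScale(k);
    context.fillRect(xScale(k), 0, barw, barh);
});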
Creating shapes

Canvas has fewer primitives than SVG. In fact, almost all the shapes must be drawn with paths, and more steps are needed to create a path. To create a shape, we need to open the path, move the cursor to the desired location, create the shape, and close the path. Then, we can draw the path by filling the shape or rendering the outline. For instance, to draw a red semicircle centered at (325, 30) with a radius of 20, we write the following code:

// Create a red semicircle.
context.beginPath();
context.fillStyle = '#ff0000';
context.moveTo(325, 30);
context.arc(325, 30, 20, Math.PI / 2, 3 * Math.PI / 2);
context.fill();

The moveTo method is a bit redundant here, because the arc method moves the cursor implicitly. The arguments of the arc method are the x and y coordinates of the arc center, the radius, and the starting and ending angles of the arc. There is also an optional Boolean argument to indicate whether the arc should be drawn counterclockwise. A basic shape created with the canvas API is shown in the following screenshot:

Integrating canvas and D3

We will create a small network chart using the force layout of D3 and canvas instead of SVG. To make the graph look more interesting, we will generate the data randomly. We will generate 250 sparsely connected nodes. The nodes and links will be stored as attributes of the data object:

// Number of Nodes
var nNodes = 250,
    createLink = false;

// Dataset Structure
var data = {nodes: [], links: []};

We will append nodes and links to our dataset. We will create nodes with a radius attribute, randomly assigning it a value of either 2 or 4, as follows:

// Iterate in the nodes
for (var k = 0; k < nNodes; k += 1) {
    // Create a node with a random radius.
    data.nodes.push({radius: (Math.random() > 0.3) ? 2 : 4});
    // Create random links between the nodes.
}

We will create a link with a probability of 0.1, and only if the difference between the source and target indexes is less than 8. The idea behind creating links this way is to have only a few connections between the nodes:

// Create random links between the nodes.
for (var j = k + 1; j < nNodes; j += 1) {
    // Create a link with probability 0.1
    createLink = (Math.random() < 0.1) && (Math.abs(k - j) < 8);
    if (createLink) {
        // Append a link with variable distance between the nodes
        data.links.push({
            source: k,
            target: j,
            dist: 2 * Math.abs(k - j) + 10
        });
    }
}

We will use the radius attribute to set the size of the nodes. The links will contain the distance between the nodes and the indexes of the source and target nodes. We will create variables to set the width and height of the figure:

// Figure width and height
var width = 650,
    height = 300;

We can now create and configure the force layout. As we did in the previous section, we will set the charge strength to be proportional to the area of each node. This time, we will also set the distance between the links, using the linkDistance method of the layout:

// Create and configure the force layout
var force = d3.layout.force()
    .size([width, height])
    .nodes(data.nodes)
    .links(data.links)
    .charge(function(d) { return -1.2 * d.radius * d.radius; })
    .linkDistance(function(d) { return d.dist; })
    .start();

We can create a canvas element now. Note that we should use the node method to get the canvas element, because the append and attr methods both return a selection, which doesn't have the canvas API methods:

// Create a canvas element and set its size.
var canvas = d3.select('div#canvas-force').append('canvas')
    .attr('width', width + 'px')
    .attr('height', height + 'px')
    .node();

We get the rendering context. Each canvas element has its own rendering context. We will use the '2d' context to draw two-dimensional figures. At the time of writing this, some browsers also support the webgl context; more details are available at https://developer.mozilla.org/en-US/docs/Web/WebGL/Getting_started_with_WebGL. We get the '2d' context as follows:

// Get the canvas context.
var context = canvas.getContext('2d');

We register an event listener for the force layout's tick event. As canvas doesn't remember previously created shapes, we need to clear the figure and redraw all the elements on each tick:

force.on('tick', function() {
    // Clear the complete figure.
    context.clearRect(0, 0, width, height);

    // Draw the links ...
    // Draw the nodes ...
});

The clearRect method clears the figure within the specified rectangle; in this case, we clear the entire canvas. We can draw the links using the lineTo method. We iterate through the links, beginning a new path for each link, moving the cursor to the position of the source node, and creating a line towards the target node. We draw the line with the stroke method:

// Draw the links
data.links.forEach(function(d) {
    // Draw a line from source to target.
    context.beginPath();
    context.moveTo(d.source.x, d.source.y);
    context.lineTo(d.target.x, d.target.y);
    context.stroke();
});

We iterate through the nodes and draw each one. We use the arc method to represent each node with a black circle:

// Draw the nodes
data.nodes.forEach(function(d, i) {
    // Draw a complete arc for each node.
    context.beginPath();
    context.arc(d.x, d.y, d.radius, 0, 2 * Math.PI, true);
    context.fill();
});

We obtain a constellation of disconnected network graphs, slowly gravitating towards the center of the figure. Using the force layout and canvas to create a network chart is shown in the following screenshot:

One might think that erasing all the shapes and redrawing them again and again could have a negative impact on performance. In fact, it is sometimes faster to draw the figures using canvas, because the browser doesn't have to manage the DOM tree of the SVG elements (though it still has to redraw them if the SVG elements change).

Summary

In this article, we learned how to use the force layout of D3 and the canvas element to create an animated network visualization with random data.

Resources for Article:

Further resources on this subject:

Interacting with Data for Dashboards [Article]
Kendo UI DataViz – Advance Charting [Article]
Visualizing my Social Graph with d3.js [Article]

Unit and Functional Tests

Packt
21 Aug 2014
13 min read
In this article by Belén Cruz Zapata and Antonio Hernández Niñirola, authors of the book Testing and Securing Android Studio Applications, you will learn how to use unit tests that allow developers to quickly verify the state and behavior of an activity on its own. (For more resources related to this topic, see here.)

Testing activities

There are two possible modes of testing activities:

Functional testing: In functional testing, the activity being tested is created using the system infrastructure. The test code can communicate with the Android system, send events to the UI, or launch another activity.

Unit testing: In unit testing, the activity being tested is created with minimal connection to the system infrastructure. The activity is tested in isolation.

In this article, we will explore the Android testing API to learn about the classes and methods that will help you test the activities of your application.

The test case classes

The Android testing API is based on JUnit. Android JUnit extensions are included in the android.test package. The following figure presents the main classes that are involved when testing activities. Let's learn more about these classes:

TestCase: This JUnit class belongs to the junit.framework package. It represents a general test case. This class is extended by the Android API.

InstrumentationTestCase: This class and its subclasses belong to the android.test package. It represents a test case that has access to instrumentation.

ActivityTestCase: This class is used to test activities, but for more useful features, you should use one of its subclasses instead of the main class.

ActivityInstrumentationTestCase2: This class provides functional testing of an activity and is parameterized with the activity under test. For example, to evaluate your MainActivity, you have to create a test class named MainActivityTest that extends the ActivityInstrumentationTestCase2 class, shown as follows:

public class MainActivityTest extends ActivityInstrumentationTestCase2<MainActivity>

ActivityUnitTestCase: This class provides unit testing of an activity and is parameterized with the activity under test. For example, to evaluate your MainActivity, you can create a test class named MainActivityUnitTest that extends the ActivityUnitTestCase class, shown as follows:

public class MainActivityUnitTest extends ActivityUnitTestCase<MainActivity>

There is a new term that has emerged from the previous classes: Instrumentation.

Instrumentation

The execution of an application is ruled by its life cycle, which is determined by the Android system. For example, the life cycle of an activity is controlled by the invocation of some methods: onCreate(), onResume(), onDestroy(), and so on. These methods are called by the Android system, and your code cannot invoke them, except while testing. The mechanism that allows your test code to invoke callback methods is known as Android instrumentation. Android instrumentation is a set of methods to control a component independently of its normal lifecycle. To invoke the callback methods from your test code, you have to use the classes that are instrumented. For example, to start the activity under test, you can use the getActivity() method that returns the activity instance. For each test method invocation, the activity will not be created until the first time this method is called. Instrumentation is necessary to test activities, considering that the lifecycle of an activity is based on these callback methods.
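Putting the classes above into practice, the following is a minimal, hedged sketch of what a functional MainActivityTest could look like once getActivity() is used to obtain the activity under test; the constructor, setUp(), and the simple test method shown here are illustrative (imports omitted for brevity) rather than code taken from the book.

// A minimal sketch of a functional test for MainActivity (imports omitted).
public class MainActivityTest
        extends ActivityInstrumentationTestCase2<MainActivity> {

    private MainActivity mActivity;

    public MainActivityTest() {
        super(MainActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        // getActivity() starts the activity under test the first time
        // it is called and returns the same instance afterwards.
        mActivity = getActivity();
    }

    public void testActivityIsCreated() {
        assertNotNull(mActivity);
    }
}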
The lifecycle callback methods include the UI events as well. From an instrumented test case, you can use the getInstrumentation() method to get access to an Instrumentation object. This class provides methods related to the system's interaction with the application. The complete documentation about this class can be found at http://developer.android.com/reference/android/app/Instrumentation.html. Some of the most important methods are as follows:

The addMonitor method: This method adds a monitor to get information about a particular type of Intent and can be used to look for the creation of an activity. A monitor can be created by indicating an IntentFilter or the name of the activity to monitor. Optionally, the monitor can block the activity start in order to return its canned result. You can use the following call definitions to add a monitor:

ActivityMonitor addMonitor(IntentFilter filter, ActivityResult result, boolean block)
ActivityMonitor addMonitor(String cls, ActivityResult result, boolean block)

The following line of code is an example of adding a monitor:

Instrumentation.ActivityMonitor monitor =
    getInstrumentation().addMonitor(SecondActivity.class.getName(), null, false);

The activity lifecycle methods: The methods to invoke the activity lifecycle callbacks are callActivityOnCreate, callActivityOnDestroy, callActivityOnPause, callActivityOnRestart, callActivityOnResume, callActivityOnStart, finish, and so on. For example, you can pause an activity using the following line of code:

getInstrumentation().callActivityOnPause(mActivity);

The getTargetContext method: This method returns the context for the application.

The startActivitySync method: This method starts a new activity and waits for it to begin running. The function returns when the new activity has gone through the full initialization after the call to its onCreate method.

The waitForIdleSync method: This method waits for the application to be idle synchronously.

The test case methods

JUnit's TestCase class provides the following protected methods that can be overridden by the subclasses:

setUp(): This method is used to initialize the fixture state of the test case. It is executed before every test method is run. If you override this method, the first line of code should call the superclass. A standard setUp method should follow the given code definition:

@Override
protected void setUp() throws Exception {
    super.setUp();
    // Initialize the fixture state
}

tearDown(): This method is used to tear down the fixture state of the test case. You should use this method to release resources. It is executed after running every test method. If you override this method, the last line of code should call the superclass, shown as follows:

@Override
protected void tearDown() throws Exception {
    // Tear down the fixture state
    super.tearDown();
}

The fixture state is usually implemented as a group of member variables, but it can also consist of database or network connections. If you open or initialize connections in the setUp method, they should be closed or released in the tearDown method. When testing activities in Android, you have to initialize the activity under test in the setUp method. This can be done with the getActivity() method.
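Putting the instrumentation methods and the setUp() initialization together, here is a hedged sketch of a test method, assumed to live inside the MainActivityTest class sketched earlier, that uses an ActivityMonitor to check that pressing the validation button launches SecondActivity. The bValidate button ID comes from the example application described later in this article; the five-second timeout and the click dispatched through runOnUiThread (a pattern covered in the UI testing section below) are illustrative assumptions.

public void testValidateLaunchesSecondActivity() {
    // Register a monitor for SecondActivity before triggering the navigation.
    Instrumentation.ActivityMonitor monitor = getInstrumentation()
            .addMonitor(SecondActivity.class.getName(), null, false);

    // Click the button that is expected to start SecondActivity;
    // performClick() must run on the UI thread.
    final View validate = mActivity.findViewById(R.id.bValidate);
    mActivity.runOnUiThread(new Runnable() {
        public void run() {
            validate.performClick();
        }
    });

    // Wait up to five seconds for the monitored activity to be created.
    Activity secondActivity = getInstrumentation()
            .waitForMonitorWithTimeout(monitor, 5000);
    assertNotNull(secondActivity);
    assertEquals(1, monitor.getHits());

    secondActivity.finish();
    getInstrumentation().removeMonitor(monitor);
}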
The Assert class and methods

JUnit's TestCase class extends the Assert class, which provides a set of assert methods to check for certain conditions. When an assert method fails, an AssertionFailedError is thrown. The test runner will handle the multiple assertion exceptions to present the testing results. Optionally, you can specify the error message that will be shown if the assert fails. You can read the Android reference of the Assert class to examine all the available methods at http://developer.android.com/reference/junit/framework/Assert.html. The assertion methods provided by the Assert superclass are as follows:

assertEquals: This method checks whether the two values provided are equal. It receives the actual and expected values that are to be compared with each other. This method is overloaded to support values of different types, such as short, String, char, int, byte, boolean, float, double, long, or Object. For example, the following assertion method throws an exception since both values are not equal:

assertEquals(true, false);

assertTrue or assertFalse: These methods check whether the given Boolean condition is true or false.

assertNull or assertNotNull: These methods check whether an object is null or not.

assertSame or assertNotSame: These methods check whether two references refer to the same object or not.

fail: This method fails a test. It can be used to make sure that a part of code is never reached, for example, if you want to test that a method throws an exception when it receives a wrong value, as shown in the following code snippet:

try {
    dontAcceptNullValuesMethod(null);
    fail("No exception was thrown");
} catch (NullPointerException e) {
    // OK
}

The Android testing API, which extends JUnit, provides additional and more powerful assertion classes: ViewAsserts and MoreAsserts.

The ViewAsserts class

The assertion methods offered by JUnit's Assert class are not enough if you want to test some special Android objects, such as the ones related to the UI. The ViewAsserts class implements more sophisticated methods related to the Android views, that is, for the View objects. The whole list of assertion methods can be explored in the Android reference for this class at http://developer.android.com/reference/android/test/ViewAsserts.html. Some of them are described as follows:

assertBottomAligned, assertLeftAligned, assertRightAligned, or assertTopAligned(View first, View second): These methods check that the two specified View objects are bottom, left, right, or top aligned, respectively.

assertGroupContains or assertGroupNotContains(ViewGroup parent, View child): These methods check whether the specified ViewGroup object contains the specified child View.

assertHasScreenCoordinates(View origin, View view, int x, int y): This method checks that the specified View object has a particular position on the origin screen.

assertHorizontalCenterAligned or assertVerticalCenterAligned(View reference, View view): These methods check that the specified View object is horizontally or vertically aligned with respect to the reference view.

assertOffScreenAbove or assertOffScreenBelow(View origin, View view): These methods check that the specified View object is above or below the visible screen.

assertOnScreen(View origin, View view): This method checks that the specified View object is loaded on the screen even if it is not visible.

The MoreAsserts class

The Android API extends some of the basic assertion methods from the Assert class to provide some additional methods.
Some of the methods included in the MoreAsserts class are as follows:

assertContainsRegex(String expectedRegex, String actual): This method checks that the actual given string contains a match of the expected regular expression (regex).

assertContentsInAnyOrder(Iterable<?> actual, Object... expected): This method checks that the iterable object contains the given objects, in any order.

assertContentsInOrder(Iterable<?> actual, Object... expected): This method checks that the iterable object contains the given objects, in the same order.

assertEmpty: This method checks whether a collection is empty.

assertEquals: This method extends the assertEquals method from JUnit to cover collections: Set objects, int arrays, String arrays, Object arrays, and so on.

assertMatchesRegex(String expectedRegex, String actual): This method checks whether the expected regex matches the given actual string exactly.

Opposite methods such as assertNotContainsRegex, assertNotEmpty, assertNotEquals, and assertNotMatchesRegex are included as well. All these methods are overloaded to optionally include a custom error message. The Android reference about the MoreAsserts class can be inspected to learn more about these assert methods at http://developer.android.com/reference/android/test/MoreAsserts.html.

UI testing and TouchUtils

The test code and the application under test are executed in two different threads, although both threads run in the same process. When testing the UI of an application, UI objects can be referenced from the test code, but you cannot change their properties or send events to them from the test thread. There are two strategies to invoke methods that should run in the UI thread:

Activity.runOnUiThread(): This method creates a Runnable object in the UI thread, in which you can add the code in the run() method. For example, if you want to request the focus of a UI component:

public void testComponent() {
    mActivity.runOnUiThread(
        new Runnable() {
            public void run() {
                mComponent.requestFocus();
            }
        }
    );
    ...
}

@UiThreadTest: This annotation affects the whole method, because the entire method is executed on the UI thread. Considering that the annotation refers to an entire method, statements that do not interact with the UI are not allowed in it. For example, consider the previous example using this annotation, shown as follows:

@UiThreadTest
public void testComponent() {
    mComponent.requestFocus();
    ...
}

There is also a helper class that provides methods to perform touch interactions on the views of your application: TouchUtils. The touch events are sent to the UI thread safely from the test thread; therefore, the methods of the TouchUtils class should not be invoked in the UI thread. Some of the methods provided by this helper class are as follows:

The clickView method: This method simulates a click on the center of a view.

The drag, dragQuarterScreenDown, dragViewBy, dragViewTo, and dragViewToTop methods: These methods simulate a click on a UI element and then drag it accordingly.

The longClickView method: This method simulates a long press click on the center of a view.

The scrollToTop or scrollToBottom methods: These methods scroll a ViewGroup to the top or bottom.
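To make the TouchUtils and ViewAsserts helpers concrete, here is a hedged sketch of a test method, again assumed to live inside the functional MainActivityTest class; the bPlus button ID comes from the example application described in the next section, and the combination of assertions is illustrative.

public void testPlusButtonIsOnScreenAndClickable() {
    View root = mActivity.getWindow().getDecorView();
    View plusButton = mActivity.findViewById(R.id.bPlus);

    // ViewAsserts: the button must be laid out somewhere on the screen.
    ViewAsserts.assertOnScreen(root, plusButton);

    // TouchUtils: simulate a real tap on the center of the button.
    // This call is made from the test thread, not the UI thread.
    TouchUtils.clickView(this, plusButton);
}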
The mock object classes

The Android testing API provides some classes to create mock system objects. Mock objects are fake objects that simulate the behavior of real objects but are totally controlled by the test. They allow isolation of tests from the rest of the system. Mock objects can, for example, simulate a part of the system that has not been implemented yet, or a part that is not practical to test.

In Android, the following mock classes can be found: MockApplication, MockContext, MockContentProvider, MockCursor, MockDialogInterface, MockPackageManager, MockResources, and MockContentResolver. These classes are under the android.test.mock package. The methods of these objects are nonfunctional and throw an exception if they are called. You have to override the methods that you want to use.

Creating an activity test

In this section, we will create an example application so that we can learn how to implement the test cases to evaluate it. Some of the methods presented in the previous sections will be put into practice. You can download the example code files from your account at http://www.packtpub.com.

Our example is a simple alarm application that consists of two activities: MainActivity and SecondActivity. The MainActivity implements a self-built digital clock using text views and buttons. The purpose of creating a self-built digital clock is to have more code and elements to use in our tests. The layout of MainActivity is a relative layout that includes two text views: one for the hour (the tvHour ID) and one for the minutes (the tvMinute ID). There are two buttons below the clock: one to subtract 10 minutes from the clock (the bMinus ID) and one to add 10 minutes to the clock (the bPlus ID). There is also an edit text field to specify the alarm name. Finally, there is a button to launch the second activity (the bValidate ID). Each button has a pertinent method that receives the click event when the button is pressed. The layout looks like the following screenshot:

The SecondActivity receives the hour from the MainActivity and shows its value in a text view, simulating that the alarm was saved. The objective of creating this second activity is to be able to test the launch of another activity in our test case.

Summary

In this article, you learned how to use unit tests that allow developers to quickly verify the state and behavior of an activity on its own.

Resources for Article:

Further resources on this subject:

Creating Dynamic UI with Android Fragments [article]
Saying Hello to Unity and Android [article]
Augmented Reality [article]

AngularJS

Packt
20 Aug 2014
15 min read
In this article, by Rodrigo Branas, author of the book AngularJS Essentials, we will go through the basics of AngularJS. Created by Miško Hevery and Adam Abrons in 2009, AngularJS is an open source, client-side JavaScript framework that promotes a high-productivity web development experience. It was built on the belief that declarative programming is the best choice to construct the user interface, while imperative programming is much better suited to implementing the application's business logic. To achieve that, AngularJS empowers traditional HTML by extending its current vocabulary, making the life of developers easier. The result is the development of expressive, reusable, and maintainable application components, leaving behind a lot of unnecessary code and keeping the team focused on the valuable and important things. (For more resources related to this topic, see here.)

Architectural concepts

It's been a long time since the famous Model-View-Controller pattern, also known as MVC, started to be widely used in the software development industry, thereby becoming one of the legends of enterprise architecture design. Basically, the model represents the knowledge that the view is responsible for presenting, while the controller mediates their relationship. However, these concepts are a little bit abstract, and this pattern may have different implementations depending on the language, platform, and purposes of the application. After a lot of discussion about which architectural pattern the framework follows, its authors declared that, from now on, AngularJS adopts Model-View-Whatever (MVW). Regardless of the name, the most important benefit is that the framework provides a clear separation of concerns between the application layers, providing modularity, flexibility, and testability.

In terms of concepts, a typical AngularJS application consists primarily of a view, model, and controller, but there are other important components, such as services, directives, and filters.

The view, also called the template, is entirely written in HTML, which creates a great opportunity for web designers and JavaScript developers to work side by side. It also takes advantage of the directives mechanism, a kind of extension of the HTML vocabulary that brings the ability to perform programming language tasks, such as iterating over an array or even evaluating an expression conditionally.

Behind the view, there is the controller. At first, the controller contains all the business logic implementation used by the view. However, as the application grows, it becomes really important to perform some refactoring activities, such as moving code from the controller to other components like services, in order to keep the cohesion high.

The connection between the view and the controller is done by a shared object called the scope. It is located between them and is used to exchange information related to the model.

The model is a simple Plain-Old-JavaScript-Object (POJO). It looks very clear and easy to understand, bringing simplicity to the development by not requiring any special syntax to be created.

Setting up the framework

The configuration process is very simple. In order to set up the framework, we start by importing the angular.js script into our HTML file. After that, we need to create the application module by calling the module function, from Angular's API, with its name and dependencies.
With the module already created, we just need to place the ng-app attribute, with the module's name, inside the html element or any other element that surrounds the application. This attribute is important because it supports the initialization process of the framework.

In the following code, there is an introductory application about a parking lot. At first, we are able to add and also list the parked cars, storing their plates in memory. Throughout the book, we will evolve this parking control application by incorporating each newly studied concept.

index.html

<!doctype html>
<!-- Declaring the ng-app -->
<html ng-app="parking">
  <head>
    <title>Parking</title>
    <!-- Importing the angular.js script -->
    <script src="angular.js"></script>
    <script>
      // Creating the module called parking
      var parking = angular.module("parking", []);
      // Registering the parkingCtrl to the parking module
      parking.controller("parkingCtrl", function ($scope) {
        // Binding the car's array to the scope
        $scope.cars = [
          {plate: '6MBV006'},
          {plate: '5BBM299'},
          {plate: '5AOJ230'}
        ];
        // Binding the park function to the scope
        $scope.park = function (car) {
          $scope.cars.push(angular.copy(car));
          delete $scope.car;
        };
      });
    </script>
  </head>
  <!-- Attaching the view to the parkingCtrl -->
  <body ng-controller="parkingCtrl">
    <h3>[Packt] Parking</h3>
    <table>
      <thead>
        <tr>
          <th>Plate</th>
        </tr>
      </thead>
      <tbody>
        <!-- Iterating over the cars -->
        <tr ng-repeat="car in cars">
          <!-- Showing the car's plate -->
          <td>{{car.plate}}</td>
        </tr>
      </tbody>
    </table>
    <!-- Binding the car object, with plate, to the scope -->
    <input type="text" ng-model="car.plate"/>
    <!-- Binding the park function to the click event -->
    <button ng-click="park(car)">Park</button>
  </body>
</html>

The ngController directive was used to bind the parkingCtrl to the view, while ngRepeat iterated over the cars array. Also, we employed expressions like {{car.plate}} to display the plate of the car. Finally, to add new cars, we applied the ngModel directive, which creates a new object called car with the plate property, passing it as a parameter of the park function, called through the ngClick directive.

To improve page loading performance, it is recommended to use the minified and obfuscated version of the script, which can be identified by angular.min.js. Both the minified and regular distributions of the framework can be found on the official site of AngularJS, that is, http://www.angularjs.org, or they can be referenced directly from the Google Content Delivery Network (CDN).

What is a directive?

A directive is an extension of the HTML vocabulary that allows the creation of new behaviors. This technology lets developers create reusable components that can be used within the whole application, and even provide their own custom components. It may be applied as an attribute, element, class, and even as a comment, by using the camelCase syntax. However, because HTML is case-insensitive, we need to use the lowercase form. For the ngModel directive, we can use ng-model, ng:model, ng_model, data-ng-model, and x-ng-model in the HTML markup.

Using AngularJS built-in directives

By default, the framework brings a basic set of directives to iterate over an array, execute a custom behavior when an element is clicked, show a given element based on a conditional expression, and many others.

ngBind

This directive is generally applied to a span element and replaces the content of the element with the result of the provided expression.
It has the same meaning as that of the double curly markup, for example, {{expression}}. Why would anyone like to use this directive when a less verbose alternative is available? This is because, while the page is being compiled, there is a moment when the raw state of the expressions is shown. Since the directive is defined in an attribute of the element, it is invisible to the user. Here is an example of the ngBind directive usage:

index.html

<!doctype html>
<html ng-app="parking">
  <head>
    <title>[Packt] Parking</title>
    <script src="angular.js"></script>
    <script>
      var parking = angular.module("parking", []);
      parking.controller("parkingCtrl", function ($scope) {
        $scope.appTitle = "[Packt] Parking";
      });
    </script>
  </head>
  <body ng-controller="parkingCtrl">
    <h3 ng-bind="appTitle"></h3>
  </body>
</html>

ngRepeat

The ngRepeat directive is really useful to iterate over arrays and objects. It can be used with any kind of element, such as the rows of a table, the elements of a list, and even the options of a select. We must provide a special repeat expression that describes the array to iterate over and the variable that will hold each item in the iteration. The most basic expression format allows us to iterate over an array, attributing each element to a variable:

variable in array

In the following code, we will iterate over the cars array and assign each element to the car variable:

index.html

<!doctype html>
<html ng-app="parking">
  <head>
    <title>[Packt] Parking</title>
    <script src="angular.js"></script>
    <script>
      var parking = angular.module("parking", []);
      parking.controller("parkingCtrl", function ($scope) {
        $scope.appTitle = "[Packt] Parking";
        $scope.cars = [];
      });
    </script>
  </head>
  <body ng-controller="parkingCtrl">
    <h3 ng-bind="appTitle"></h3>
    <table>
      <thead>
        <tr>
          <th>Plate</th>
          <th>Entrance</th>
        </tr>
      </thead>
      <tbody>
        <tr ng-repeat="car in cars">
          <td><span ng-bind="car.plate"></span></td>
          <td><span ng-bind="car.entrance"></span></td>
        </tr>
      </tbody>
    </table>
  </body>
</html>

ngModel

The ngModel directive attaches the element to a property in the scope, binding the view to the model. In this case, the element can be an input (of any type), select, or textarea.

<input type="text" ng-model="car.plate" placeholder="What's the plate?" />

There is an important piece of advice regarding the use of this directive. We must pay attention to the purpose of the field that is using the ngModel directive. Whenever the field takes part in the construction of an object, we must declare to which object the property should be attached. In this case, the object that is being constructed is a car, so we use car.plate inside the directive expression. However, sometimes there is an input field that is just used to change a flag, allowing the control of the state of a dialog or another UI component. In these cases, we may use the ngModel directive without any object, as long as it will not be used together with other properties or persisted.

ngClick and other event directives

The ngClick directive is one of the most useful kinds of directives in the framework. It allows you to bind any custom behavior to the click event of the element.
The following code is an example of the usage of the ngClick directive calling a function:

index.html

<!doctype html>
<html ng-app="parking">
  <head>
    <title>[Packt] Parking</title>
    <script src="angular.js"></script>
    <script>
      var parking = angular.module("parking", []);
      parking.controller("parkingCtrl", function ($scope) {
        $scope.appTitle = "[Packt] Parking";
        $scope.cars = [];
        $scope.park = function (car) {
          car.entrance = new Date();
          $scope.cars.push(car);
          delete $scope.car;
        };
      });
    </script>
  </head>
  <body ng-controller="parkingCtrl">
    <h3 ng-bind="appTitle"></h3>
    <table>
      <thead>
        <tr>
          <th>Plate</th>
          <th>Entrance</th>
        </tr>
      </thead>
      <tbody>
        <tr ng-repeat="car in cars">
          <td><span ng-bind="car.plate"></span></td>
          <td><span ng-bind="car.entrance"></span></td>
        </tr>
      </tbody>
    </table>
    <input type="text" ng-model="car.plate" placeholder="What's the plate?" />
    <button ng-click="park(car)">Park</button>
  </body>
</html>

Here, there is another pitfall. Inside the ngClick directive, we call the park function, passing the car as a parameter. Since we have access to the scope through the controller, wouldn't it be easier to access it directly, without passing any parameter at all? Keep in mind that we must take care of the coupling level between the view and the controller. One way to keep it low is to avoid reading the scope object directly from the controller, and instead pass everything the controller needs as parameters from the view. This increases the controller's testability and also makes things clearer and more explicit. Other directives that have the same behavior, but are triggered by other events, are ngBlur, ngChange, ngCopy, ngCut, ngDblClick, ngFocus, ngKeyPress, ngKeyDown, ngKeyUp, ngMousedown, ngMouseenter, ngMouseleave, ngMousemove, ngMouseover, ngMouseup, and ngPaste.

Filters

Filters, together with other technologies like directives and expressions, are responsible for the extraordinary expressiveness of the framework. They let us easily manipulate and transform any value, not only combined with expressions inside a template, but also injected into other components like controllers and services. They are really useful when we need to format dates and money according to our current locale, or even to support the filtering feature of a grid component. Filters are the perfect answer when we need to easily perform any kind of data manipulation.

currency

The currency filter is used to format a number based on a currency. The basic usage of this filter is without any parameter:

{{ 10 | currency}}

The result of the evaluation will be the number $10.00, formatted and prefixed with the dollar sign. In order to achieve the correct output, in this case R$10,00 instead of $10.00, we need to configure the Brazilian (PT-BR) locale, available inside the AngularJS distribution package. There, we can find locales for most countries, and we just need to import the appropriate one into our application, such as:

<script src="js/lib/angular-locale_pt-br.js"></script>

After importing the locale, we no longer need to use the currency symbol because it is already wrapped inside. Besides the currency, the locale also defines the configuration of many other variables, like the days of the week and the months, which is very useful when combined with the next filter, used to format dates.

date

The date filter is one of the most useful filters of the framework. Generally, a date value comes from the database or any other source in a raw and generic format. Thus, filters like this are essential to any kind of application.
Basically, we can use this filter by declaring it inside any expression. In the following example, we use the filter on a date variable attached to the scope:

{{ car.entrance | date }}

The output will be Dec 10, 2013. However, there are thousands of combinations that we can make with the optional format mask:

{{ car.entrance | date:'MMMM dd/MM/yyyy HH:mm:ss' }}

Using this format, the output changes to December 10/12/2013 21:42:10.

filter

Have you ever needed to filter a list of data? This filter performs exactly this task, acting over an array and applying any filtering criteria. Now, let's include in our car parking application a field to search any parked car and use this filter to do the job.

index.html

<input type="text" ng-model="criteria" placeholder="What are you looking for?" />
<table>
  <thead>
    <tr>
      <th></th>
      <th>Plate</th>
      <th>Color</th>
      <th>Entrance</th>
    </tr>
  </thead>
  <tbody>
    <tr ng-class="{selected: car.selected}" ng-repeat="car in cars | filter:criteria">
      <td>
        <input type="checkbox" ng-model="car.selected" />
      </td>
      <td>{{car.plate}}</td>
      <td>{{car.color}}</td>
      <td>{{car.entrance | date:'dd/MM/yyyy hh:mm'}}</td>
    </tr>
  </tbody>
</table>

The result is really impressive: with just an input field and the filter declaration, we did the job.

Integrating the backend with AJAX

AJAX, also known as Asynchronous JavaScript and XML, is a technology that allows applications to send and retrieve data from the server asynchronously, without refreshing the page. The $http service wraps the low-level interaction with the XMLHttpRequest object, providing an easy way to perform calls. This service can be called by just passing a configuration object, used to set important information such as the method, the URL of the requested resource, the data to be sent, and many others:

$http({method: "GET", url: "/resource"})
  .success(function (data, status, headers, config, statusText) {
  })
  .error(function (data, status, headers, config, statusText) {
  });

To make it easier to use, the following shortcut methods are available for this service. In this case, the configuration object is optional:

$http.get(url, [config])
$http.post(url, data, [config])
$http.put(url, data, [config])
$http.head(url, [config])
$http.delete(url, [config])
$http.jsonp(url, [config])

Now, it's time to integrate our parking application with the backend by calling the resource cars with the GET method. It will retrieve the cars, binding them to the $scope object. In case something goes wrong, we are going to log it to the console.

controllers.js

parking.controller("parkingCtrl", function ($scope, $http) {
  $scope.appTitle = "[Packt] Parking";

  $scope.park = function (car) {
    car.entrance = new Date();
    $scope.cars.push(car);
    delete $scope.car;
  };

  var retrieveCars = function () {
    $http.get("/cars")
      .success(function(data, status, headers, config) {
        $scope.cars = data;
      })
      .error(function(data, status, headers, config) {
        switch(status) {
          case 401: {
            $scope.message = "You must be authenticated!";
            break;
          }
          case 500: {
            $scope.message = "Something went wrong!";
            break;
          }
        }
        console.log(data, status);
      });
  };

  retrieveCars();
});

Summary

This article introduced you to the fundamentals of AngularJS in order to design and construct reusable, maintainable, and modular web applications.

Resources for Article:

Further resources on this subject:

AngularJS Project [article]
Working with Live Data and AngularJS [article]
CreateJS – Performing Animation and Transforming Function [article]

Amazon DynamoDB - Modelling relationships, Error handling

Packt
20 Aug 2014
7 min read
In this article by Tanmay Deshpande, author of Mastering DynamoDB, we are going to revise our concepts about DynamoDB and try to discover more about its features and implementation. (For more resources related to this topic, see here.)

Amazon DynamoDB is a fully managed, cloud-hosted, NoSQL database. It provides fast and predictable performance with the ability to scale seamlessly. It allows you to store and retrieve any amount of data, serving any level of network traffic without any operational burden. DynamoDB gives numerous other advantages, such as consistent and predictable performance, flexible data modeling, and durability.

With just a few clicks on the Amazon Web Services console, you can create your own DynamoDB database table and scale the provisioned throughput up or down without taking down your application even for a millisecond. DynamoDB uses solid state disks (SSDs) to store the data, which ensures the durability of the work you are doing. It also automatically replicates the data across other AWS Availability Zones, which provides built-in high availability and reliability.

Before we start discussing the details of DynamoDB, let's try to understand what NoSQL databases are and when to choose DynamoDB over an RDBMS. With the rise in data volume, variety, and velocity, RDBMSs were neither designed to cope with the scale and flexibility challenges developers face when building modern applications, nor were they able to take advantage of cheap commodity hardware. Also, we need to provide a schema before we start adding data, which restricts developers from making their applications flexible. On the other hand, NoSQL databases are fast, provide flexible schema operations, and make effective use of cheap storage. Considering all these things, NoSQL is quickly becoming popular amongst the developer community. But one has to be very cautious about when to go for NoSQL and when to stick to an RDBMS. Sticking to relational databases makes sense when you know the schema is mostly static, strong consistency is a must, and the data is not going to be that big in volume. But when you want to build an application that is Internet scalable, the schema is likely to evolve over time, the storage is going to be really big, and the operations involved can be eventually consistent, then NoSQL is the way to go.

There are various types of NoSQL databases. The following is a list of NoSQL database types and popular examples:

Document Store – MongoDB, CouchDB, MarkLogic, etc.
Column Store – HBase, Cassandra, etc.
Key Value Store – DynamoDB, Azure, Redis, etc.
Graph Databases – Neo4J, DEX, etc.

Most of these NoSQL solutions are open source, except for a few, such as DynamoDB and Azure, which are available as services over the Internet. DynamoDB, being a key-value store, indexes data only on the primary key, and one needs to go through the primary key to access a certain attribute. Let's start learning more about DynamoDB by having a look at its history.

DynamoDB's History

Amazon's e-commerce platform had a huge set of decoupled services, developed and managed individually, and each service had an API to be used and consumed by the others. Earlier, each service had direct database access, which was a major bottleneck. In terms of scalability, Amazon's requirements were more than any third-party vendor could provide at that time. DynamoDB was built to address Amazon's high availability, extreme scalability, and durability needs.
Earlier, Amazon used to store its production data in relational databases, and services had been provided for all the required operations. But later, they realized that most of the services accessed data only through its primary key and did not need complex queries to fetch the required data; plus, maintaining these RDBMS systems required high-end hardware and skilled personnel. So, to overcome all such issues, Amazon's engineering team built a NoSQL database that addresses the above-mentioned concerns. In 2007, Amazon released a research paper on Dynamo, which combined the best ideas from the database and key-value store worlds and was the inspiration for many open source projects at the time; Cassandra, Voldemort, and Riak were some of them. You can find the above-mentioned paper at http://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf.

Even though Dynamo had great features that would take care of all the engineering needs, it was not widely accepted at that time within Amazon itself, as it was not a fully managed service. When Amazon released S3 and SimpleDB, engineering teams were quite excited to adopt those compared to Dynamo, as Dynamo was a bit expensive at that time due to SSDs. So finally, after rounds of improvement, Amazon released DynamoDB as a cloud-based service, and since then it has been one of the most widely used NoSQL databases. Before its release to the public cloud in 2012, DynamoDB had been the core storage service for Amazon's e-commerce platform, starting with the shopping cart and session management services. Any downtime or degradation in performance had a major impact on Amazon's business, any financial impact was strictly not acceptable, and DynamoDB proved itself to be the best choice in the end. Now, let's try to understand DynamoDB in more detail.

What is DynamoDB?

DynamoDB is a fully managed, Internet scalable, easily administered, and cost-effective NoSQL database. It is part of the database-as-a-service offering of Amazon Web Services. The above diagram shows how Amazon offers its various cloud services and where DynamoDB is exactly placed. AWS RDS is a relational database as a service over the Internet from Amazon, while SimpleDB and DynamoDB are NoSQL databases as services. Both SimpleDB and DynamoDB are fully managed, non-relational services.

DynamoDB is built for fast, seamless scalability and high performance. It runs on SSDs to provide faster responses and has no limits on request capacity and storage. It automatically partitions your data throughout the cluster to meet expectations, whereas SimpleDB has a storage limit of 10 GB and can only take a limited number of requests per second. Also, in SimpleDB, we have to manage our own partitions. So, depending upon your needs, you have to choose the correct solution.

To use DynamoDB, the first and foremost requirement is having an AWS account. Through the easy-to-use AWS Management Console, you can directly create new tables, providing the necessary information, and can start loading data into the tables in a few minutes.

Modelling Relationships

Like in any other database, modeling relationships is quite interesting, even though DynamoDB is a NoSQL database. Most of the time, people get confused about how to model the relationships between various tables; in this section, we will make an effort to simplify this problem. To understand the relationships better, let's look at them using our example of a book store, where we have entities like Book, Author, Publisher, and so on.
One to One

In this type of relationship, one entity record of a table is related to only one entity record of another table. In our book store application, we have the BookInfo and BookDetails tables. The BookInfo table can hold short information about the book, which can be used to display book information on a web page, whereas the BookDetails table would be used when someone explicitly needs to see all the details of the book. This design helps keep our system healthy, as even if there are high requests on one table, the other table will always be up and running to fulfil what it is supposed to do. The following diagram shows what the table structure would look like.

One to many

In this type of relationship, one record from an entity is related to more than one record in another entity. In the book store application, we can have a Publisher Book table, which would keep information about the relationship between books and publishers. Here, we can have Publisher Id as the hash key and Book Id as the range key. The following diagram shows what the table structure would look like.

Many to many

A many to many relationship means that many records from an entity are related to many records from another entity. In the case of the book store application, we can say that a book can be authored by multiple authors and an author can write multiple books. In this case, we should use two tables, with both hash and range keys.
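As a rough sketch of how the one-to-many Publisher Book table above could be created, with Publisher Id as the hash key and Book Id as the range key, the following uses the AWS SDK for Java. The table name, attribute types, throughput values, and client construction are illustrative assumptions, not taken from the book.

import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.CreateTableRequest;
import com.amazonaws.services.dynamodbv2.model.KeySchemaElement;
import com.amazonaws.services.dynamodbv2.model.KeyType;
import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
import com.amazonaws.services.dynamodbv2.model.ScalarAttributeType;

public class CreatePublisherBookTable {
    public static void main(String[] args) {
        // Credentials and region are picked up from the default provider chain.
        AmazonDynamoDBClient client = new AmazonDynamoDBClient();

        // PublisherId is the hash key and BookId is the range key,
        // matching the one-to-many layout described above.
        CreateTableRequest request = new CreateTableRequest()
                .withTableName("PublisherBook")
                .withKeySchema(
                        new KeySchemaElement("PublisherId", KeyType.HASH),
                        new KeySchemaElement("BookId", KeyType.RANGE))
                .withAttributeDefinitions(
                        new AttributeDefinition("PublisherId", ScalarAttributeType.S),
                        new AttributeDefinition("BookId", ScalarAttributeType.S))
                .withProvisionedThroughput(new ProvisionedThroughput(1L, 1L));

        client.createTable(request);
    }
}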

Transforming data in the service

Packt
20 Aug 2014
4 min read
This article, written by Jim Lavin, author of the book AngularJS Services, will cover ways to transform data. Sometimes, you need to return a subset of your data for a directive or controller, or you need to translate your data into another format for use by an external service. This can be handled in several different ways; you can use AngularJS filters, or you could use an external library such as underscore or lodash. (For more resources related to this topic, see here.)

How often you need to do such transformations will help you decide which route to take. If you are going to transform data just a few times, it isn't necessary to add another library to your application; however, if you are going to do it often, using a library such as underscore or lodash will be a big help. We are going to limit our discussion to using AngularJS filters to handle transforming our data.

Filters are an often-overlooked component in the AngularJS arsenal. Often, developers end up writing a lot of methods in a controller or service to filter an array of objects that is iterated over in an ngRepeat directive, when a simple filter could have easily been written, applied to the ngRepeat directive, and removed the excess code from the service or controller.

First, let's look at creating a filter that will reduce your data based on a property of the object, which is one of the simplest filters to create. This filter is designed to be used as an option to the ngRepeat directive to limit the number of items displayed by the directive. The following fermentableType filter expects an array of fermentable objects as the input parameter and a type value to filter on as the arg parameter. If the fermentable's type value matches the arg parameter passed into the filter, it is pushed onto the resultant array, which will in turn cause the object to be included in the set provided to the ngRepeat directive:

angular.module('brew-everywhere').filter('fermentableType', function () {
  return function (input, arg) {
    var result = [];
    angular.forEach(input, function (item) {
      if (item.type === arg) {
        result.push(item);
      }
    });
    return result;
  };
});

To use the filter, you include it in your partial in an ngRepeat directive as follows:

<table class="table table-bordered">
  <thead>
    <tr>
      <th>Name</th>
      <th>Type</th>
      <th>Potential</th>
      <th>SRM</th>
      <th>Amount</th>
      <th>&nbsp;</th>
    </tr>
  </thead>
  <tbody>
    <tr ng-repeat="fermentable in fermentables | fermentableType:'Grain'">
      <td class="col-xs-4">{{fermentable.name}}</td>
      <td class="col-xs-2">{{fermentable.type}}</td>
      <td class="col-xs-2">{{fermentable.potential}}</td>
      <td class="col-xs-2">{{fermentable.color}}</td>
    </tr>
  </tbody>
</table>

The result of calling fermentableType with the value Grain is that only those fermentable objects that have a type property with a value of Grain will be displayed.

Using filters to reduce an array of objects can be as simple or as complex as you like. The next filter we are going to look at is one that uses an object to reduce the fermentable object array based on the properties of the passed-in object. The following filterFermentable filter expects an array of fermentable objects as input and an object that defines the various properties, and their required values, that are needed to return a matching object. To build the resulting array of objects, you walk through each object and compare each property with those of the object passed in as the arg parameter. If all the properties match, the object is added to the array, which is then returned:
angular.module('brew-everywhere').filter('filterFermentable', function () {
  return function (input, arg) {
    var result = [];
    angular.forEach(input, function (item) {
      var add = true;
      for (var key in arg) {
        if (item.hasOwnProperty(key)) {
          if (item[key] !== arg[key]) {
            add = false;
          }
        }
      }
      if (add) {
        result.push(item);
      }
    });
    return result;
  };
});
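As mentioned earlier, filters can also be injected into other components such as controllers and services. The following is a minimal sketch of calling filterFermentable from a controller through the $filter service; the controller name and the sample data are assumptions added for illustration, not part of the original example.

// Hedged sketch: using the filterFermentable filter from a controller.
// The controller name and sample data below are illustrative assumptions.
angular.module('brew-everywhere').controller('grainBillCtrl',
  function ($scope, $filter) {
    $scope.fermentables = [
      {name: 'Pale Malt', type: 'Grain', color: 3},
      {name: 'Candi Sugar', type: 'Sugar', color: 1}
    ];

    // $filter('filterFermentable') returns the filter function itself,
    // so it can be invoked with the same input and arg parameters.
    $scope.grains = $filter('filterFermentable')($scope.fermentables,
      {type: 'Grain'});
  });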

What's Your Input?

Packt
20 Aug 2014
7 min read
This article by Venita Pereira, the author of the book Learning Unity 2D Game Development by Example, teaches us all about the various input types and states of a game. We will then go on to learn how to create buttons and the game controls by using code snippets for input detection.

"Computers are finite machines; when given the same input, they always produce the same output." – Greg M. Perry, Sams Teach Yourself Beginning Programming in 24 Hours

(For more resources related to this topic, see here.)

Overview

The list of topics that will be covered in this article is as follows:

Input versus output
Input types
Output types
Input Manager
Input detection
Buttons
Game controls

Input versus output

We will be looking at exactly what both input and output in games entail. We will look at their functions, importance, and differences.

Input in games

Input may not seem a very important part of a game at first glance, but in fact it is very important, as input in games determines how the player will interact with the game. All the controls in our game, such as moving, special abilities, and so forth, depend on what controls and game mechanics we would like in our game and the way we would like them to function.

Most games have a standard control setup for moving your character. This helps usability, because if players are already familiar with the controls, then the game is accessible to a much wider audience. This is particularly noticeable with games of the same genre and platform. For instance, endless runner games usually make use of the tilt mechanic, which is made possible by the features of the mobile device. However, there are variations and additions to the pre-existing control mechanics; for example, many other endless runners make use of the simple swipe mechanic, and there are those that make use of both.

When designing our games, we can be creative and unique with our controls, thereby innovating a game, but the controls still need to be intuitive for our target players. When first designing our game, we need to know who our target audience of players includes. If we would like our game to be played by young children, for instance, then we need to ensure that they are able to understand, learn, and remember the controls. Otherwise, instead of enjoying the game, they will get frustrated and stop playing it entirely. As an example, a young player may hold a touchscreen device with their fingers over the screen, thereby preventing the input from working correctly, depending on whether the game was designed to take this into account and support it.

Different audiences of players interact with a game differently. Likewise, if a player is more familiar with the controls on a specific device, then they may struggle with different controls. It is important to create prototypes to test the input controls of a game thoroughly. Developing a well-designed input system that supports usability and accessibility will make our game more immersive.

Output in games

Output is the direct opposite of input; it provides the necessary information to the player. However, output is just as essential to a game as input. It provides feedback to the player, letting them know how they are doing. Output lets the player know whether they have done an action correctly or have done something wrong, how they have performed, and their progression in the form of goals/missions/objectives. Without feedback, a player would feel lost.
The player would potentially see the game as being unclear, buggy, or even broken. For certain types of games, output forms the heart of the game. The input in a game gets processed by the game to provide some form of output, which then provides feedback to the player, helping them learn from their actions. This is the cycle of the game's input-output system. The following diagram represents the cycle of input and output:

Input types

There are many different input types that we can utilize in our games. These various input types can form part of the exciting features that our games have to offer. The following image displays the different input types. The most widely used input types in games include the following:

Keyboard: Key presses from a keyboard are supported by Unity and can be used as input controls in PC games as well as games on any other device that supports a keyboard.
Mouse: Mouse clicks, motion (of the mouse), and coordinates are all inputs that are supported by Unity.
Game controller: This is an input device that generally includes buttons (including shoulder and trigger buttons), a directional pad, and analog sticks. Game controller input is supported by Unity.
Joystick: A joystick has a stick that pivots on a base and provides movement input in the form of direction and angle. It also has a trigger, throttle, and extra buttons. It is commonly used in flight simulation games to simulate the control device in an aircraft's cockpit, and in other simulation games that simulate controlling machines such as trucks and cranes. Modern game controllers make use of a variation of joysticks known as analog sticks and are therefore treated as the same class of input device as joysticks by Unity. Joystick input is supported by Unity.
Microphone: This provides audio input commands for a game. Unity supports basic microphone input; for greater fidelity, a third-party audio recognition tool would be required.
Camera: This provides visual input for a game using image recognition. Unity has webcam support to access RGB data, and for more advanced features, third-party tools would be required.
Touchscreen: This provides multiple touch inputs from the player's finger presses on the device's screen. This is supported by Unity.
Accelerometer: This provides the proper acceleration force at which the device is moved and is supported by Unity.
Gyroscope: This provides the orientation of the device as input and is supported by Unity.
GPS: This provides the geographical location of the device as input and is supported by Unity.
Stylus: Stylus input is similar to touchscreen input in that you use a stylus to interact with the screen; however, it provides greater precision. The latest version of Unity supports the Android stylus.
Motion controller: This provides the player's motions as input. Unity does not support this, so third-party tools would be required.

Output types

The main output types in games are as follows:

Visual output
Audio output
Controller vibration

Unity supports all three.

Visual output

The Head-Up Display (HUD) is the gaming term for the game's Graphical User Interface (GUI), which presents the essential information, feedback, and progress to the player as visual output, as shown in the following image:

HUD, viewed June 22, 2014, http://opengameart.org/content/golden-ui

Other visual output includes images, animations, particle effects, and transitions.
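As a small, hedged illustration of HUD-style visual output, the following sketch draws two labels with Unity's immediate-mode GUI. This is only one of several ways to build a HUD, and the score and health fields are hypothetical placeholders rather than values from any particular game:

using UnityEngine;

// A minimal HUD sketch using Unity's immediate-mode GUI.
// The score and health values are hypothetical placeholders.
public class SimpleHud : MonoBehaviour
{
    public int score = 0;
    public int health = 100;

    void OnGUI()
    {
        // Draw simple text labels in the top-left corner of the screen.
        GUI.Label(new Rect(10, 10, 200, 25), "Score: " + score);
        GUI.Label(new Rect(10, 35, 200, 25), "Health: " + health);
    }
}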
Audio

Audio is what can be heard through an audio output, such as a speaker; it provides feedback that supports and emphasizes the visual output and, therefore, increases immersion. The following image displays a speaker:

Speaker, viewed June 22, 2014, http://pixabay.com/en/loudspeaker-speakers-sound-music-146583/

Controller vibration

Controller vibration provides physical feedback, for instance when the player collides with an object, or environmental feedback such as an earthquake, providing even more immersion, as in the following image:

A game that is designed to provide output meaningfully is not only clearer and more enjoyable; it can truly bring the world to life and make it genuinely engaging for the player.
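To tie the input types above back to code, here is a minimal sketch of polling several of them from a Unity script. It is illustrative only: the axis names assume Unity's default Input Manager settings, and the speed value is an arbitrary placeholder.

using UnityEngine;

// Illustrative input polling; assumes the default Input Manager axes ("Horizontal", "Vertical").
public class InputProbe : MonoBehaviour
{
    public float speed = 5f; // arbitrary placeholder value

    void Update()
    {
        // Keyboard or game controller: read the default movement axes.
        float h = Input.GetAxis("Horizontal");
        float v = Input.GetAxis("Vertical");
        transform.Translate(new Vector3(h, v, 0f) * speed * Time.deltaTime);

        // Mouse click or touch tap: both are reported as button 0 by Unity.
        if (Input.GetMouseButtonDown(0))
        {
            Debug.Log("Tap or click at " + Input.mousePosition);
        }

        // Touchscreen: inspect individual touches on mobile devices.
        if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
        {
            Debug.Log("First touch began at " + Input.GetTouch(0).position);
        }

        // Accelerometer: tilt input on mobile devices
        // (in a real game you would apply this to movement rather than log it).
        Vector3 tilt = Input.acceleration;
    }
}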

Coverage with Apache Karaf Pax Exam tests

Packt
20 Aug 2014
3 min read
This article by Achim Nierbeck, one of the authors of Apache Karaf Cookbook, describes how we can find the coverage of a test. (For more resources related to this topic, see here.)

Apart from testing the application, it is usually also a requirement to know how well the unit and integration tests actually cover the code. A couple of code coverage technologies are available; this recipe covers how to set up your test environment to find the coverage of the tests.

Getting ready

The sources of this recipe are available at https://github.com/jgoodyear/ApacheKarafCookbook/tree/master/chapter10/chapter10-recipe4.

How to do it…

To find out about the coverage of the tests, a code coverage tool is needed. We will use the Java Code Coverage Library (JaCoCo), as it has a Maven plugin for automated coverage analysis. First, the Maven coordinates for the plugin are added as shown in the following code:

<groupId>org.jacoco</groupId>
<artifactId>jacoco-maven-plugin</artifactId>
<version>0.7.0.201403182114</version>

We need to prepare the code first so it can be covered by the agent, as follows:

<execution>
  <id>prepare-agent-integration</id>
  <goals>
    <goal>prepare-agent-integration</goal>
  </goals>
  <phase>pre-integration-test</phase>
  <configuration>
    <propertyName>jcoverage.command</propertyName>
    <includes>
      <include>com.packt.*</include>
    </includes>
    <append>true</append>
  </configuration>
</execution>

This will include the com.packt package and its subpackages. After the integration tests are done, the test report needs to be generated as follows:

<execution>
  <id>report</id>
  <goals>
    <goal>report-integration</goal>
  </goals>
</execution>

Besides these additions to the POM configuration, you need to add the VM options to the configuration of the Apache Karaf test. Without these options set on the virtual machine that executes the test, the executing environment doesn't know about the coverage agent and, therefore, no coverage is recorded. This can be done as follows:

private static Option addCodeCoverageOption() {
    String coverageCommand = System.getProperty(COVERAGE_COMMAND);
    if (coverageCommand != null) {
        return CoreOptions.vmOption(coverageCommand);
    }
    return null;
}

The resulting coverage report looks like the following screenshot. It shows the coverage of the CalculatorImpl class and its methods. While the add method has been called by the test, the sub method wasn't, which results in zero coverage for that method.

How it works…

First, the coverage agent is prepared; its command line is inserted into the jcoverage.command property. This property is passed to the test by adding it as a vmOption. This way, the coverage agent is attached to the Java Virtual Machine and tracks the coverage of the test execution. After the test has run successfully, the report is generated by the jacoco-maven-plugin.

All of this works fine with a single Maven module. A multimodule project setup will require additional work, especially if you want to combine unit and integration test coverage. More details can be found at http://www.eclemma.org/jacoco/index.html.

Summary

This recipe describes how we can set up a test environment to find the coverage of a test.

Resources for Article:

Further resources on this subject: Configuring Apache and Nginx [Article] Apache Geronimo Logging [Article] Apache Karaf – Provisioning and Clusters [Article]
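To round the recipe off, here is a sketch of how the addCodeCoverageOption() method can be wired into a Pax Exam configuration method. This is an outline under assumptions rather than the recipe's own code: the Karaf distribution coordinates, the unpack directory, and the surrounding test class are placeholders you would replace with your own setup.

// Imports assumed: org.ops4j.pax.exam.*, org.ops4j.pax.exam.karaf.options.KarafDistributionOption,
// java.io.File, java.util.*
@Configuration
public Option[] config() {
    List<Option> options = new ArrayList<Option>();
    // Placeholder Karaf distribution setup; replace with your own artifact coordinates.
    options.add(KarafDistributionOption.karafDistributionConfiguration()
        .frameworkUrl(CoreOptions.maven()
            .groupId("org.apache.karaf")
            .artifactId("apache-karaf")
            .type("tar.gz")
            .versionAsInProject())
        .name("Apache Karaf")
        .unpackDirectory(new File("target/exam")));
    // Only add the JaCoCo agent when the jcoverage.command property was set by the Maven plugin.
    Option coverage = addCodeCoverageOption();
    if (coverage != null) {
        options.add(coverage);
    }
    return options.toArray(new Option[options.size()]);
}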

Now You're Ready!

Packt
20 Aug 2014
14 min read
In this article by Ryan John, author of the book Canvas LMS Course Design, we will have a look at the key points encountered during the course-building process, along with connections to educational philosophies and practices that support technology as a powerful way to enhance teaching and learning. (For more resources related to this topic, see here.) As you finish teaching your course, you will be well served to export your course to keep as a backup, to upload and reteach later within Canvas to a new group of students, or to import into another LMS. After covering how to export your course, we will tie everything we've learned together through a discussion of how Canvas can help you and your students achieve educational goals while acquiring important 21st century skills. Overall, we will cover the following topics: Exporting your course from Canvas to your computer Connecting Canvas to education in the 21st century Exporting your course Now that your course is complete, you will want to export the course from Canvas to your computer. When you export your course, Canvas compiles all the information from your course and allows you to download a single file to your computer. This file will contain all of the information for your course, and you can use this file as a master template for each time you or your colleagues teach the course. Exporting your course is helpful for two main reasons: It is wise to save a back-up version of your course on a computer. After all the hard work you have put into building and teaching your course, it is always a good decision to export your course and save it to a computer. If you are using a Free for Teachers account, your course will remain intact and accessible online until you choose to delete it. However, if you use Canvas through your institution, each institution has different procedures and policies in place regarding what happens to courses when they are complete. Exporting and saving your course will preserve your hard work and protect it from any accidental or unintended deletion. Once you have exported your course, you will be able to import your course into Canvas at any point in the future. You are also able to import your course into other LMSs such as Moodle or BlackBoard. You might wish to import your course back into Canvas if your course is removed from your institution-specific Canvas account upon completion. You will have a copy of the course to import for the next time you are scheduled to teach the same course. You might build and teach a course using a Free for Teachers account, and then later wish to import that version of the course into an institution-specific Canvas account or another LMS. Exporting your course does not remove the course from Canvas—your course will still be accessible on the Canvas site unless it is automatically deleted by your institution or if you choose to delete it. To export your entire course, complete the following steps: Click on the Settings tab at the bottom of the left-hand side menu, as pictured in the following screenshot: On the Settings page, look to the right-hand side menu. Click on the Export Course Content button, which is highlighted in the following screenshot: A screen will appear asking you whether you would like to export the Course or export a Quiz. To export your entire course, select the Course option and then click on Create Export, as shown in the following screenshot: Once you click on Create Export, a progress bar will appear. 
As indicated in the message below the progress bar, the export might take a while to complete, and you can leave the page while Canvas exports the content. The following screenshot displays this progress bar and message: When the export is complete, you will receive an e-mail from notifications@instructure.com that resembles the following screenshot. Click on the Click to view exports link in the e-mail: A new window or tab will appear in your browser that shows your Content Exports. Below the heading of the page, you will see your course export listed with a link that reads Click here to download, as pictured in the following screenshot. Go ahead and click on the link, and the course export file will be downloaded to your computer. Your course export file will be downloaded to your computer as a single .imscc file. You can then move the downloaded file to a folder on your computer's hard drive for later access. Your course export is complete, and you can save the exported file for later use. To access the content stored in the exported .imscc file, you will need to import the file back into Canvas or another LMS. You might notice an option to Conclude this Course on the course Settings page if your institution has not hidden or disabled this option. In most cases, it is not necessary to conclude your course if you have set the correct course start and end dates in your Course Details. Concluding your course prevents you from altering grades or accessing course content, and you cannot unconclude your course on your own. Some institutions conclude courses automatically, which is why it is always best to export your course to preserve your work. Now that we have covered the last how-to aspects of Canvas, let's close with some ways to apply the skills we have learned in this book to contemporary educational practices, philosophies, and requirements that you might encounter in your teaching. Connecting Canvas to education in the 21st century While learning how to use the features of Canvas, it is easy to forget the main purpose of Canvas' existence—to better serve your students and you in the process of education. In the midst of rapidly evolving technology, students and teachers alike require skills that are as adaptable and fluid as the technologies and new ideas swirling around them. While the development of various technologies might seem daunting, those involved in education in the 21st century have access to new and exciting tools that have never before existed. As an educator seeking to refine your craft, utilizing tools such as Canvas can help you and your students develop the skills that are becoming increasingly necessary to live and thrive in the 21st century. As attainment of these skills is indeed proving more and more valuable in recent years, many educational systems have begun to require evidence that instructors are cognizant of these skills and actively working to ensure that students are reaching valuable goals. Enacting the Framework for 21st Century Learning As education across the world continues to evolve through time, the development of frameworks, methods, and philosophies of teaching have shaped the world of formal education. In recent years, one such approach that has gained prominence in the United States' education systems is the Framework for 21st Century Learning, which was developed over the last decade through the work of the Partnership for 21st Century Skills (P21). 
This partnership between education, business, community, and government leaders was founded to help educators provide children in Kindergarten through 12th Grade (K-12) with the skills they would need going forward into the 21st century. Though the focus of P21 is on children in grades K-12, the concepts and knowledge articulated in the Framework for 21st Century Learning are valuable for learners at all levels, including those in higher education. In the following sections, we will apply our knowledge of Canvas to the desired 21st century student outcomes, as articulated in the P21 Framework for 21st Century Learning, to brainstorm the ways in which Canvas can help prepare your students for the future. Core subjects and 21st century themes The Framework for 21st Century Learning describes the importance of learning certain core subjects including English, reading or language arts, world languages, the arts, Mathematics, Economics, Science, Geography, History, Government, and Civics. In connecting these core subjects to the use of Canvas, the features of Canvas and the tips throughout this book should enable you to successfully teach courses in any of these subjects. In tandem with teaching and learning within the core subjects, P21 also advocates for schools to "promote understanding of academic content at much higher levels by weaving 21st century interdisciplinary themes into core subjects." The following examples offer insight and ideas for ways in which Canvas can help you integrate these interdisciplinary themes into your course. As you read through the following suggestions and ideas, think about strategies that you might be able to implement into your existing curriculum to enhance its effectiveness and help your students engage with the P21 skills: Global awareness: Since it is accessible from anywhere with an Internet connection, Canvas opens the opportunity for a myriad of interactions across the globe. Utilizing Canvas as the platform for a purely online course enables students from around the world to enroll in your course. As a distance-learning tool in colleges, universities, or continuing education departments, Canvas has the capacity to unite students from anywhere in the world to directly interact with one another: You might utilize the graded discussion feature for students to post a reflection about a class reading that considers their personal cultural background and how that affects their perception of the content. Taking it a step further, you might require students to post a reply comment on other students' reflections to further spark discussion, collaboration, and cross-cultural connections. As a reminder, it is always best to include an overview of online discussion etiquette somewhere within your course—you might consider adding a "Netiquette" section to your syllabus to maintain focus and a professional tone within these discussions. You might set up a conference through Canvas with an international colleague as a guest lecturer for a course in any subject. As a prerequisite assignment, you might ask students to prepare three questions to ask the guest lecturer to facilitate a real-time international discussion within your class. 
Financial, economic, business, and entrepreneurial literacy: As the world becomes increasingly digitized, accessing and incorporating current content from the Web is a great way to incorporate financial, economic, business, and entrepreneurial literacy into your course: In a Math course, you might consider creating a course module centered around the stock market. Within the module, you could build custom content pages offering direct instruction and introductions to specific topics. You could upload course readings and embed videos of interviews with experts with the YouTube app. You could link to live steam websites of the movement of the markets and create quizzes to assess students' understanding. Civic literacy: In fostering students' understanding of their role within their communities, Canvas can serve as a conduit of information regarding civic responsibilities, procedures, and actions: You might create a discussion assignment in which students search the Internet for a news article about a current event and post a reflection with connections to other content covered in the course. Offering guidance in your instructions to address how local and national citizenship impacts students' engagement with the event or incident could deepen the nature of responses you receive. Since discussion posts are visible to all participants in your course, a follow-up assignment might be for students to read one of the articles posted by another student and critique or respond to their reflection. Health literacy: Canvas can allow you to facilitate the exploration of health and wellness through the wide array of submission options for assignments. By utilizing the variety of assignment types you can create within Canvas, students are able to explore course content in new and meaningful ways: In a studio art class, you can create an out-of-class assignment to be submitted to Canvas in which students research the history, nature, and benefits of art therapy online and then create and upload a video sharing their personal relationship with art and connecting it to what they have found in the art therapy stories of others. Environmental literacy: As a cloud-based LMS, Canvas allows you to share files and course content with your students while maintaining and fostering an awareness of environmental sustainability: In any course you teach that involves readings uploaded to Canvas, encourage your students to download the readings to their computers or mobile devices rather than printing the content onto paper. Downloading documents to read on a device instead of printing them saves paper, reduces waste, and helps foster sustainable environmental habits. For PDF files embedded into content pages on Canvas, students can click on the preview icon that appears next to the document link and read the file directly on the content page without downloading or printing anything. Make a conscious effort to mention or address the environmental impacts of online learning versus traditional classroom settings, perhaps during a synchronous class conference or on a discussion board. Learning and innovation skills A number of specific elements combined can enable students to develop learning and innovation skills to prepare them for the increasingly "complex life and work environments in the 21st century." 
The communication setup of Canvas allows for quick and direct interactions while offering students the opportunity to contemplate and revise their contributions before posting to the course, submitting an assignment, or interacting with other students. This flexibility, combined with the ways in which you design your assignments, can help incorporate the following elements into your course to ensure the development of learning and innovation skills: Creativity and innovation: There are many ways in which the features of Canvas can help your students develop their creativity and innovation. As you build your course, finding ways for students to think creatively, work creatively with others, and implement innovations can guide the creation of your course assignments: You might consider assigning groups of students to assemble a content page within Canvas dedicated to a chosen or assigned topic. Do so by creating a content page, and then enable any user within the course to edit the page. Allowing students to experiment with the capabilities of the Rich Content Editor, embedding outside content and synthesizing ideas within Canvas allows each group's creativity to shine. As a follow-up assignment, you might choose to have students transfer the content of their content page to a public website or blog using sites such as Wikispaces, Wix, or Weebly. Once the sites are created, students can post their group site to a Canvas discussion page, where other students can view and interact with the work of their peers. Asking students to disseminate the class sites to friends or family around the globe could create international connections stemming from the creativity and innovation of your students' web content. Critical thinking and problem solving: As your students learn to overcome obstacles and find multiple solutions to complex problems, Canvas offers a place for students to work together to develop their critical thinking and problem-solving skills: Assign pairs of students to debate and posit solutions to a global issue that connects to topics within your course. Ask students to use the Conversations feature of Canvas to debate the issue privately, finding supporting evidence in various forms from around the Internet. Using the Collaborations feature, ask each pair of students to assemble and submit a final e-report on the topic, presenting the various solutions they came up with as well as supporting evidence in various electronic forms such as articles, videos, news clips, and websites. Communication and collaboration: With the seemingly impersonal nature of electronic communication, communication skills are incredibly important to maintain intended meanings across multiple means of communication. As the nature of online collaboration and communication poses challenges for understanding, connotation, and meaning, honing communication skills becomes increasingly important: As a follow-up assignment to the preceding debate suggestion, use the conferences tool in Canvas to set up a full class debate. During the debate, ask each pair of students to present their final e-report to the class, followed by a group discussion of each pair's findings, solutions, and conclusions. You might find it useful for each pair to explain their process and describe the challenges and/or benefits of collaborating and communicating via the Internet in contrast to collaborating and communicating in person.

Importing Dynamic Data

Packt
20 Aug 2014
19 min read
In this article by Chad Adams, author of the book Learning Python Data Visualization, we will go over the finer points of pulling data from the Web using the Python language and its built-in libraries and cover parsing XML, JSON, and JSONP data. (For more resources related to this topic, see here.) Since we now have an understanding of how to work with the pygal library and building charts and graphics in general, this is the time to start looking at building an application using Python. In this article, we will take a look at the fundamentals of pulling data from the Web, parsing the data, and adding it to our code base and formatting the data into a useable format, and we will look at how to carry those fundamentals over to our Python code. We will also cover parsing XML and JSON data. Pulling data from the Web For many non-developers, it may seem like witchcraft that developers are magically able to pull data from an online resource and integrate that with an iPhone app, or a Windows Store app, or pull data to a cloud resource that is able to generate various versions of the data upon request. To be fair, they do have a general understanding; data is pulled from the Web and formatted to their app of choice. They just may not get the full background of how that process workflow happens. It's the same case with some developers as well—many developers mainly work on a technology that only works on a locked down environment, or generally, don't use the Internet for their applications. Again, they understand the logic behind it; somehow an RSS feed gets pulled into an application. In many languages, the same task is done in various ways, usually depending on which language is used. Let's take a look at a few examples using Packt's own news RSS feed, using an iOS app pulling in data via Objective-C. Now, if you're reading this and not familiar with Objective-C, that's OK, the important thing is that we have the inner XML contents of an XML file showing up in an iPhone application: #import "ViewController.h" @interfaceViewController () @property (weak, nonatomic) IBOutletUITextView *output; @end @implementationViewController - (void)viewDidLoad { [super viewDidLoad]; // Do any additional setup after loading the view, typically from a nib. NSURL *packtURL = [NSURLURLWithString:@"http://www.packtpub.com/rss.xml"]; NSURLRequest *request = [NSURLRequestrequestWithURL:packtURL]; NSURLConnection *connection = [[NSURLConnectionalloc] initWithRequest:requestdelegate:selfstartImmediately:YES]; [connection start]; } - (void)connection:(NSURLConnection *)connection didReceiveData:(NSData *)data { NSString *downloadstring = [[NSStringalloc] initWithData:dataencoding:NSUTF8StringEncoding]; [self.outputsetText:downloadstring]; } - (void)didReceiveMemoryWarning { [superdidReceiveMemoryWarning]; // Dispose of any resources that can be recreated. } @end Here, we can see in iPhone Simulator that our XML output is pulled dynamically through HTTP from the Web to our iPhone simulator. This is what we'll want to get started with doing in Python: The XML refresher Extensible Markup Language (XML) is a data markup language that sets a series of rules and hierarchy to a data group, which is stored as a static file. Typically, servers update these XML files on the Web periodically to be reused as data sources. XML is really simple to pick up as it's similar to HTML. You can start with the document declaration in this case: <?xml version="1.0" encoding="utf-8"?> Next, a root node is set. 
A node is like an HTML tag (which is also called a node). You can tell it's a node by the brackets around the node's name. For example, here's a node named root: <root></root> Note that we close the node by creating a same-named node with a backslash. We can also add parameters to the node and assign a value, as shown in the following root node: <root parameter="value"></root> Data in XML is set through a hierarchy. To declare that hierarchy, we create another node and place that inside the parent node, as shown in the following code: <root parameter="value"> <subnode>Subnode's value</subnode> </root> In the preceding parent node, we created a subnode. Inside the subnode, we have an inner value called Subnode's value. Now, in programmatical terms, getting data from an XML data file is a process called parsing. With parsing, we specify where in the XML structure we would like to get a specific value; for instance, we can crawl the XML structure and get the inner contents like this: /root/subnode This is commonly referred to as XPath syntax, a cross-language way of going through an XML file. For more on XML and XPath, check out the full spec at: http://www.w3.org/TR/REC-xml/ and here http://www.w3.org/TR/xpath/ respectively. RSS and the ATOM Really simple syndication (RSS) is simply a variation of XML. RSS is a spec that defines specific nodes that are common for sending data. Typically, many blog feeds include an RSS option for users to pull down the latest information from those sites. Some of the nodes used in RSS include rss, channel, item, title, description, pubDate, link, and GUID. Looking at our iPhone example in this article from the Pulling data from the Web section, we can see what a typical RSS structure entails. RSS feeds are usually easy to spot since the spec requires the root node to be named rss for it to be a true RSS file. In some cases, some websites and services will use .rss rather than .xml; this is typically fine since most readers for RSS content will pull in the RSS data like an XML file, just like in the iPhone example. Another form of XML is called ATOM. ATOM was another spec similar to RSS, but developed much later than RSS. Because of this, ATOM has more features than RSS: XML namespacing, specified content formats (video, or audio-specific URLs), support for internationalization, and multilanguage support, just to name a few. ATOM does have a few different nodes compared to RSS; for instance, the root node to an RSS feed would be <rss>. ATOM's root starts with <feed>, so it's pretty easy to spot the difference. Another difference is that ATOM can also end in .atom or .xml. For more on the RSS and ATOM spec, check out the following sites: http://www.rssboard.org/rss-specification http://tools.ietf.org/html/rfc4287 Understanding HTTP All these samples from the RSS feed of the Packt Publishing website show one commonality that's used regardless of the technology coded in, and that is the method used to pull down these static files is through the Hypertext Transfer Protocol (HTTP). HTTP is the foundation of Internet communication. It's a protocol with two distinct types of requests: a request for data or GET and a push of data called a POST. Typically, when we download data using HTTP, we use the GET method of HTTP in order to pull down the data. The GET request will return a string or another data type if we mention a specific type. We can either use this value directly or save to a variable. 
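Before moving on to POST requests, here is a minimal sketch of the XPath-style lookup described in the XML refresher above, using Python's built-in ElementTree module; the root and subnode names are simply the illustrative ones from that example.

# -*- coding: utf-8 -*-
from xml.etree import ElementTree

#The small illustrative document from the XML refresher.
xml_text = """<root parameter="value">
    <subnode>Subnode's value</subnode>
</root>"""

#Parse from a string rather than a file or HTTP response.
root = ElementTree.fromstring(xml_text)

#An XPath-style lookup relative to the root node.
print(root.find("subnode").text)   #prints: Subnode's value
print(root.get("parameter"))       #prints: value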
With a POST request, we are sending values to a service that handles any incoming values; say we created a new blog post's title and needed to add it to a list of current titles, a common way of doing that is with URL parameters. A URL parameter is an existing URL but with a suffixed key-value pair. The following is a mock example of a POST request with a URL parameter:

http://www.yourwebsite.com/blogtitles/?addtitle=Your%20New%20Title

If our service is set up correctly, it will scan the POST request for a key of addtitle and read the value, in this case: Your New Title. We may notice %20 in our title for our request. This is an escape character that allows us to send a value with spaces; in this case, %20 is a placeholder for a space in our value.

Using HTTP in Python

The RSS samples from the Packt Publishing website show a few commonalities we use in programming when working with HTTP; one is that we always account for the possibility of something going wrong with a connection, and we always close our request when finished. Here's an example of how the same RSS feed request is done in Python using a built-in library called urllib2:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import urllib2

try:
    #Open the file via HTTP.
    response = urllib2.urlopen('http://www.packtpub.com/rss.xml')
    #Read the file to a variable we named 'xml'.
    xml = response.read()
    #Print to the console.
    print(xml)
    #Finally, close our open network.
    response.close()
except:
    #If we have an issue, show a message and alert the user.
    print('Unable to connect to RSS...')

If we look in the following console output, we can see the XML contents just as we saw in our iOS code example:

In the example, notice that we wrapped our HTTP request in a try except block. For those coming from another language, except can be considered the same as a catch statement. This allows us to detect if an error occurs, which might be an incorrect URL or lack of connectivity, for example, and to programmatically set an alternate result for our Python script.

Parsing XML in Python with HTTP

Even with our Python version of the script, we are still only returning a string, which by itself isn't of much use for grabbing individual values. In order to grab specific strings and values from an XML file pulled through HTTP, we need to parse it. Luckily, Python has a built-in object for this in its main library, called ElementTree, which is part of the XML library in Python. Let's incorporate ElementTree into our example and see how that works:

# -*- coding: utf-8 -*-
import urllib2
from xml.etree import ElementTree

try:
    #Open the file via HTTP.
    response = urllib2.urlopen('http://www.packtpub.com/rss.xml')
    tree = ElementTree.parse(response)
    root = tree.getroot()
    #Create an 'Element' group from our XPath using findall.
    news_post_title = root.findall("channel//title")
    #Iterate over all our searched elements and print the inner text for each.
    for title in news_post_title:
        print title.text
    #Finally, close our open network.
    response.close()
except Exception as e:
    #If we have an issue, show a message and alert the user.
    print(e)

The following screenshot shows the results of our script:

As we can see, our output shows the title of each blog post. Notice how we used channel//title for our findall() method. This is using XPath, which allows us to write, in a shorthand manner, how to traverse an XML structure. It works like this; let's use the http://www.packtpub.com feed as an example.
We have a root, followed by channel, and then a series of title elements. The findall() method found each matching element, saved each one as an Element type specific to the ElementTree XML library in Python, and collected them into an array. We can then use a for in loop to iterate over each one and print the inner text using the text property specific to the Element type.

You may notice in the recent example that I extended except with a bit of extra code and added Exception as e. This helps us debug issues and print them to a console, or display more in-depth feedback to the user. A generic Exception lets the built-in warnings and errors from the Python libraries be printed back through a console or handled in code. There are also more specific exceptions we can use, such as IOError, which is specific to file reading and writing.

About JSON

Now, another data type that's becoming more and more common when working with web data is JSON. JSON is an acronym for JavaScript Object Notation and, as the name implies, is written using JavaScript's object notation. It has become popular in recent years with the rise of mobile apps and Rich Internet Applications (RIA). JSON is great for JavaScript developers; it's easier to work with in HTML content compared to XML. Because of this, JSON is becoming a more common data type for web and mobile application development.

Parsing JSON in Python with HTTP

Parsing JSON data in Python is a pretty similar process; however, in this case, our ElementTree library isn't needed, since it only works with XML. We need a library designed to parse JSON using Python. Luckily, the Python library creators already have a library for us, simply called json. Let's build an example similar to our XML code using the json import; of course, we need to use a different data source since we won't be working with XML.

One thing we may note is that there aren't many public JSON feeds to use, and many of them require a code that gives a developer permission to generate a JSON feed through a developer API, such as Twitter's JSON API. To avoid this, we will use a sample URL from Google's Books API, which will show demo data for Pride and Prejudice by Jane Austen. We can view the JSON (or download the file) by typing in the following URL:

https://www.googleapis.com/books/v1/volumes/s1gVAAAAYAAJ

Notice the API uses HTTPS; many JSON APIs are moving to secure methods of transmitting data, so be sure to use the secure form of HTTP, that is, HTTPS. Let's take a look at the JSON output:
The story is as much a social critique as it is a love story, and the prose crackles with Austen's wry wit.", "readingModes": { "text": true, "image": true }, "pageCount": 401, "printedPageCount": 448, "dimensions": { "height": "18.00 cm" }, "printType": "BOOK", "averageRating": 4.0, "ratingsCount": 433, "contentVersion": "1.1.5.0.full.3", "imageLinks": { "smallThumbnail": "http://bks8.books.google.com/books?id=s1gVAAAAYAAJ&printsec =frontcover&img=1&zoom=5&edge=curl&imgtk=AFLRE73F8btNqKpVjGX6q7V3XS77 QA2PftQUxcEbU3T3njKNxezDql_KgVkofGxCPD3zG1yq39u0XI8s4wjrqFahrWQ- 5Epbwfzfkoahl12bMQih5szbaOw&source=gbs_api", "thumbnail": "http://bks8.books.google.com/books?id=s1gVAAAAYAAJ&printsec= frontcover&img=1&zoom=1&edge=curl&imgtk=AFLRE70tVS8zpcFltWh_ 7K_5Nh8BYugm2RgBSLg4vr9tKRaZAYoAs64RK9aqfLRECSJq7ATs_j38JRI3D4P48-2g_ k4-EY8CRNVReZguZFMk1zaXlzhMNCw&source=gbs_api", "small": "http://bks8.books.google.com/books?id=s1gVAAAAYAAJ&printsec =frontcover&img=1&zoom=2&edge=curl&imgtk=AFLRE71qcidjIs37x0jN2dGPstn 6u2pgeXGWZpS1ajrGgkGCbed356114HPD5DNxcR5XfJtvU5DKy5odwGgkrwYl9gC9fo3y- GM74ZIR2Dc-BqxoDuUANHg&source=gbs_api", "medium": "http://bks8.books.google.com/books?id=s1gVAAAAYAAJ&printsec= frontcover&img=1&zoom=3&edge=curl&imgtk=AFLRE73hIRCiGRbfTb0uNIIXKW 4vjrqAnDBSks_ne7_wHx3STluyMa0fsPVptBRW4yNxNKOJWjA4Od5GIbEKytZAR3Nmw_ XTmaqjA9CazeaRofqFskVjZP0&source=gbs_api", "large": "http://bks8.books.google.com/books?id=s1gVAAAAYAAJ&printsec= frontcover&img=1&zoom=4&edge=curl&imgtk=AFLRE73mlnrDv-rFsL- n2AEKcOODZmtHDHH0QN56oG5wZsy9XdUgXNnJ_SmZ0sHGOxUv4sWK6GnMRjQm2eEwnxIV4dcF9eBhghMcsx -S2DdZoqgopJHk6Ts&source=gbs_api", "extraLarge": "http://bks8.books.google.com/books?id=s1gVAAAAYAAJ&printsec= frontcover&img=1&zoom=6&edge=curl&imgtk=AFLRE73KIXHChszn TbrXnXDGVs3SHtYpl8tGncDPX_7GH0gd7sq7SA03aoBR0mDC4-euzb4UCIDiDNLYZUBJwMJxVX_ cKG5OAraACPLa2QLDcfVkc1pcbC0&source=gbs_api" }, "language": "en", "previewLink": "http://books.google.com/books?id=s1gVAAAAYAAJ&hl=&source=gbs_api", "infoLink": "http://books.google.com/books?id=s1gVAAAAYAAJ&hl=&source=gbs_api", "canonicalVolumeLink": "http://books.google.com/books/about/ Pride_and_Prejudice.html?hl=&id=s1gVAAAAYAAJ" }, "layerInfo": { "layers": [ { "layerId": "geo", "volumeAnnotationsVersion": "6" } ] }, "saleInfo": { "country": "US", "saleability": "FREE", "isEbook": true, "buyLink": "http://books.google.com/books?id=s1gVAAAAYAAJ&hl=&buy=&source=gbs_api" }, "accessInfo": { "country": "US", "viewability": "ALL_PAGES", "embeddable": true, "publicDomain": true, "textToSpeechPermission": "ALLOWED", "epub": { "isAvailable": true, "downloadLink": "http://books.google.com/books/download /Pride_and_Prejudice.epub?id=s1gVAAAAYAAJ&hl=&output=epub &source=gbs_api" }, "pdf": { "isAvailable": true, "downloadLink": "http://books.google.com/books/download/Pride_and_Prejudice.pdf ?id=s1gVAAAAYAAJ&hl=&output=pdf&sig=ACfU3U3dQw5JDWdbVgk2VRHyDjVMT4oIaA &source=gbs_api" }, "webReaderLink": "http://books.google.com/books/reader ?id=s1gVAAAAYAAJ&hl=&printsec=frontcover& output=reader&source=gbs_api", "accessViewStatus": "FULL_PUBLIC_DOMAIN", "quoteSharingAllowed": false } } One downside to JSON is that it can be hard to read complex data. So, if we run across a complex JSON feed, we can visualize it using a JSON Visualizer. Visual Studio includes one with all its editions, and a web search will also show multiple online sites where you can paste JSON and an easy-to-understand data structure will be displayed. 
Here's an example using http://jsonviewer.stack.hu/ loading our example JSON URL: Next, let's reuse some of our existing Python code using our urllib2 library to request our JSON feed, and then we will parse it with the Python's JSON library. Let's parse the volumeInfo node of the book by starting with the JSON (root) node that is followed by volumeInfo as the subnode. Here's our example from the XML section, reworked using JSON to parse all child elements: # -*- coding: utf-8 -*- import urllib2 import json try: #Set a URL variable. url = 'https://www.googleapis.com/books/v1/volumes/s1gVAAAAYAAJ' #Open the file via HTTP. response = urllib2.urlopen(url) #Read the request as one string. bookdata = response.read() #Convert the string to a JSON object in Python. data = json.loads(bookdata) for r in data ['volumeInfo']: print r #Close our response. response.close() except: #If we have an issue show a message and alert the user. print('Unable to connect to JSON API...') Here's our output. It should match the child nodes of volumeInfo, which it does in the output screen, as shown in the following screenshot: Well done! Now, let's grab the value for title. Look at the following example and notice we have two brackets: one for volumeInfo and another for title. This is similar to navigating our XML hierarchy: # -*- coding: utf-8 -*- import urllib2 import json try: #Set a URL variable. url = 'https://www.googleapis.com/books/v1/volumes/s1gVAAAAYAAJ' #Open the file via HTTP. response = urllib2.urlopen(url) #Read the request as one string. bookdata = response.read() #Convert the string to a JSON object in Python. data = json.loads(bookdata) print data['volumeInfo']['title'] #Close our response. response.close() except Exception as e: #If we have an issue show a message and alert the user. #'Unable to connect to JSON API...' print(e) The following screenshot shows the results of our script: As you can see in the preceding screenshot, we return one line with Pride and Prejudice parsed from our JSON data. About JSONP JSONP, or JSON with Padding, is actually JSON but it is set up differently compared to traditional JSON files. JSONP is a workaround for web cross-browser scripting. Some web services can serve up JSONP rather than pure JSON JavaScript files. The issue with that is JSONP isn't compatible with many JSON Python-based parsers including one covered here, so you will want to avoid JSONP style JSON whenever possible. So how can we spot JSONP files; do they have a different extension? No, it's simply a wrapper for JSON data; here's an example without JSONP: /* *Regular JSON */ { authorname: 'Chad Adams' } The same example with JSONP: /* * JSONP */ callback({ authorname: 'Chad Adams' }); Notice we wrapped our JSON data with a function wrapper, or a callback. Typically, this is what breaks in our parsers and is a giveaway that this is a JSONP-formatted JSON file. In JavaScript, we can even call it in code like this: /* * Using JSONP in JavaScript */ callback = function (data) { alert(data.authorname); }; JSONP with Python We can get around a JSONP data source though, if we need to; it just requires a bit of work. We can use the str.replace() method in Python to strip out the callback before running the string through our JSON parser. If we were parsing our example JSONP file in our JSON parser example, the string would look something like this: #Convert the string to a JSON object in Python. data = json.loads(bookdata.replace('callback(', '').) 
    .replace(')', ''))

Summary

In this article, we covered HTTP concepts and methodologies for pulling strings and data from the Web. We learned how to do that with Python using the urllib2 library, and parsed XML data and JSON data. We discussed the differences between JSON and JSONP, and how to work around JSONP if needed.

Resources for Article:

Further resources on this subject: Introspecting Maya, Python, and PyMEL [Article] Exact Inference Using Graphical Models [Article] Automating Your System Administration and Deployment Tasks Over SSH [Article]
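To see the JSONP workaround end to end, here is a small, self-contained sketch. The callback name and the sample JSONP string are hypothetical stand-ins for a real response that would be fetched with urllib2 as in the earlier examples.

# -*- coding: utf-8 -*-
import json

#A hypothetical JSONP response, as it might arrive from a web service.
jsonp_data = "callback({\"authorname\": \"Chad Adams\"});"

#Strip the callback wrapper so the standard json parser can handle it.
stripped = jsonp_data.replace('callback(', '').replace(');', '')

#Parse the remaining pure JSON string.
data = json.loads(stripped)
print(data['authorname'])   #prints: Chad Adams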

BPMS Components

Packt
19 Aug 2014
8 min read
In this article by Mariano Nicolas De Maio, the author of jBPM6 Developer Guide, we will look into the various components of a Business Process Management (BPM) system. (For more resources related to this topic, see here.) BPM systems are pieces of software created with the sole purpose of guiding your processes through the BPM cycle. They were originally monolithic systems in charge of every aspect of a process, where they had to be heavily migrated from visual representations to executable definitions. They've come a long way from there, but we usually relate them to the same old picture in our heads when a system that runs all your business processes is mentioned. Nowadays, nothing is further from the truth. Modern BPM Systems are not monolithic environments; they're coordination agents. If a task is finished, they will know what to do next. If a decision needs to be made regarding the next step, they manage it. If a group of tasks can be concurrent, they turn them into parallel tasks. If a process's execution is efficient, they will perform the processing 0.1 percent of the time in the process engine and 99.9 percent of the time on tasks in external systems. This is because they will have no heavy executions within, only derivations to other systems. Also, they will be able to do this from nothing but a specific diagram for each process and specific connectors to external components. In order to empower us to do so, they need to provide us with a structure and a set of tools that we'll start defining to understand how BPM systems' internal mechanisms work, and specifically, how jBPM6 implements these tools. Components of a BPMS All big systems become manageable when we divide their complexities into smaller pieces, which makes them easier to understand and implement. BPM systems apply this by dividing each function in a different module and interconnecting them within a special structure that (in the case of jBPM6) looks something like the following figure: BPMS' internal structure Each component in the preceding figure resolves one particular function inside the BPMS architecture, and we'll see a detailed explanation on each one of them. The execution node The execution node, as seen from a black box perspective, is the component that receives the process definitions (a description of each step that must be followed; from here on, we'll just refer to them as processes). Then, it executes all the necessary steps in the established way, keeping track of each step, variable, and decision that has to be taken in each process's execution (we'll start calling these process instances). The execution node along with its modules are shown in the following figure: The execution node is composed of a set of low-level modules: the semantic module and the process engine. The semantic module The semantic module is in charge of defining each of the specific language semantics, that is, what each word means and how it will be translated to the internal structures that the process engine can execute. It consists of a series of parsers to understand different languages. It is flexible enough to allow you to extend and support multiple languages; it also allows the user to change the way already defined languages are to be interpreted for special use cases. It is a common component of most of the BPMSes out there, and in jBPM6, it allows you to add the extensions of the process interpretations to the module. 
This is so that you can add your own language parsers, and define your very own text-based process definition language or extend existing ones. The process engine The process engine is the module that is in charge of the actual execution of our business processes. It creates new process instances and keeps track of their state and their internal steps. Its job is to expose methods to inject process definitions and to create, start, and continue our process instances. Understanding how the process engine works internally is a very important task for the people involved in BPM's stage 4, that is, runtime. This is where different configurations can be used to improve performance, integrate with other systems, provide fault tolerance, clustering, and many other functionalities. Process Engine structure In the case of jBPM6, process definitions and process instances have similar structures but completely different objectives. Process definitions only show the steps it should follow and the internal structures of the process, keeping track of all the parameters it should have. Process instances, on the other hand, should carry all of the information of each process's execution, and have a strategy for handling each step of the process and keep track of all its actual internal values. Process definition structures These structures are static representations of our business processes. However, from the process engine's internal perspective, these representations are far from the actual process structure that the engine is prepared to handle. In order for the engine to get those structures generated, it requires the previously described semantic module to transform those representations into the required object structure. The following figure shows how this parsing process happens as well as the resultant structure: Using a process modeler, business analysts can draw business processes by dragging-and-dropping different activities from the modeler palette. For jBPM6, there is a web-based modeler designed to draw Scalable Vector Graphics (SVG) files; this is a type of image file that has the particularity of storing the image information using XML text, which is later transformed into valid BPMN2 files. Note that both BPMN2 and jBPM6 are not tied up together. On one hand, the BPMN2 standard can be used by other process engine provides such as Activiti or Oracle BPM Suite. Also, because of the semantic module, jBPM6 could easily work with other parsers to virtually translate any form of textual representation of a process to its internal structures. In the internal structures, we have a root component (called Process in our case, which is finally implemented in a class called RuleFlowProcess) that will contain all the steps that are represented inside the process definition. From the jBPM6 perspective, you can manually create these structures using nothing but the objects provided by the engine. 
Inside the jBPM6-Quickstart project, you will find a code snippet doing exactly this in the createProcessDefinition() method of the ProgrammedProcessExecutionTest class: //Process Definition RuleFlowProcess process = new RuleFlowProcess(); process.setId("myProgramaticProcess"); //Start Task StartNode startTask = new StartNode(); startTask.setId(1); //Script Task ActionNode scriptTask = new ActionNode(); scriptTask.setId(2); DroolsAction action = new DroolsAction(); action.setMetaData("Action", new Action() { @Override public void execute(ProcessContext context) throws Exception { System.out.println("Executing the Action!!"); } }); scriptTask.setAction(action); //End Task EndNode endTask = new EndNode(); endTask.setId(3); //Adding the connections to the nodes and the nodes to the processes new ConnectionImpl(startTask, "DROOLS_DEFAULT", scriptTask, "DROOLS_DEFAULT"); new ConnectionImpl(scriptTask, "DROOLS_DEFAULT", endTask, "DROOLS_DEFAULT"); process.addNode(startTask); process.addNode(scriptTask); process.addNode(endTask); Using this code, we can manually create the object structures to represent the process shown in the following figure: This process contains three components: a start node, a script node, and an end node. In this case, this simple process is in charge of executing a simple action. The start and end tasks simply specify a sequence. Even if this is a correct way to create a process definition, it is not the recommended one (unless you're making a low-level functionality test). Real-world, complex processes are better off being designed in a process modeler, with visual tools, and exported to standard representations such as BPMN 2.0. The output of both the cases is the same; a process object that will be understandable by the jBPM6 runtime. While we analyze how the process instance structures are created and how they are executed, this will do. Process instance structures Process instances represent the running processes and all the information being handled by them. Every time you want to start a process execution, the engine will create a process instance. Each particular instance will keep track of all the activities that are being created by its execution. In jBPM6, the structure is very similar to that of the process definitions, with one root structure (the ProcessInstance object) in charge of keeping all the information and NodeInstance objects to keep track of live nodes. The following code shows a simplification of the methods of the ProcessInstance implementation: public class RuleFlowProcessInstance implements ProcessInstance { public RuleFlowProcess getRuleFlowProcess() { ... } public long getId() { ... } public void start() { ... } public int getState() { ... } public void setVariable(String name, Object value) { ... } public Collection<NodeInstance> getNodeInstances() { ... } public Object getVariable(String name) { ... } } After its creation, the engine calls the start() method of ProcessInstance. This method seeks StartNode of the process and triggers it. Depending on the execution of the path and how different nodes connect between each other, other nodes will get triggered until they reach a safe state where the execution of the process is completed or awaiting external data. You can access the internal parameters that the process instance has through the getVariable and setVariable methods. They provide local information from the particular process instance scope. 
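To show how these structures are driven from application code, here is a minimal sketch of starting an instance of the myProgramaticProcess definition through a KIE session. It is an illustrative outline only: it assumes the process definition is packaged in a KIE module with a default session declared in kmodule.xml, and the customer variable is a hypothetical process parameter.

import java.util.HashMap;
import java.util.Map;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.process.ProcessInstance;

public class ProcessStarter {
    public static void main(String[] args) {
        // Boot the KIE environment from the classpath (assumes a kmodule.xml with a default session).
        KieServices ks = KieServices.Factory.get();
        KieContainer container = ks.getKieClasspathContainer();
        KieSession session = container.newKieSession();

        // Hypothetical process variables passed into the instance.
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("customer", "ACME");

        // Create and start a process instance from the definition's id.
        ProcessInstance instance = session.startProcess("myProgramaticProcess", params);
        System.out.println("Process state: " + instance.getState()); // 2 == STATE_COMPLETED

        session.dispose();
    }
}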
Summary

In this article, we saw the basic components required to set up a BPM system. With these components in place, we are ready to explore the structure and working of a BPM system in more detail.

Resources for Article:

Further resources on this subject: jBPM for Developers: Part 1 [Article] Configuring JBoss Application Server 5 [Article] JBoss jBPM Concepts and jBPM Process Definition Language (jPDL) [Article]
Internet of Things with Xively
Packt
19 Aug 2014
2 min read
In this article by Marco Schwartz, author of the book Arduino Networking, we will visualize the data we recorded with Xively. (For more resources related to this topic, see here.)

Visualizing the recorded data

We are now going to visualize the data we recorded with Xively. Go back to the device page on the Xively website; you should see that some data has been recorded in the different channels, as shown in the following screenshot:

By clicking on one of these channels, you can also display the data graphically. For example, the following screenshot shows the temperature channel after a few measurements:

After a while, you will have more points for the temperature measurements, as shown in the following screenshot:

You can do the same for the humidity measurements; the following screenshot shows them:

Note that by clicking on the time icon, you can change the time axis to display a longer or shorter time range. If you don't see any data being displayed, go back to the Arduino IDE and make sure that the answer coming from the Xively server is a 200 OK message, as we saw in the previous section.

Summary

In this article, we visited the Xively website, learned how to display the recorded data graphically, and saw this data arrive in real time.

Resources for Article: Further resources on this subject: Avoiding Obstacles Using Sensors [Article] Hardware configuration [Article] Writing Tag Content [Article]
Social Media and Magento
Packt
19 Aug 2014
17 min read
Social networks such as Twitter and Facebook are ever popular and can be a great source of new customers if used correctly on your store. This article by Richard Carter, the author of Learning Magento Theme Development, covers the following topics:

Integrating a Twitter feed into your Magento store
Integrating a Facebook Like Box into your Magento store
Including social share buttons in your product pages
Integrating product videos from YouTube into the product page

(For more resources related to this topic, see here.)

Integrating a Twitter feed into your Magento store

If you're active on Twitter, it can be worthwhile to let your customers know. While you can't (yet, anyway!) accept payment for your goods through Twitter, it can be a great way to develop a long-term relationship with your store's customers and increase repeat orders. One way to tell customers you're active on Twitter is to place a Twitter feed that contains some of your recent tweets on your store's home page. While you need to be careful not to get in the way of your store's true content, such as your most recent products and offers, you could add the Twitter feed in the footer of your website.

Creating your Twitter widget

To embed your tweets, you will need to create a Twitter widget. Log in to your Twitter account, navigate to https://twitter.com/settings/widgets, and follow the instructions given there to create a widget that contains your most recent tweets. This will create a block of code for you that looks similar to the following:

<a class="twitter-timeline" href="https://twitter.com/RichardCarter" data-widget-id="123456789999999999">Tweets by @RichardCarter</a>
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script>

Embedding your Twitter feed into a Magento template

Once you have the Twitter widget code, you're ready to embed it into one of Magento's template files. This Twitter feed will be embedded in your store's footer area. So, open your theme's /app/design/frontend/default/m18/template/page/html/footer.phtml file and add the highlighted section of the following code:

<div class="footer-about footer-col">
  <?php echo $this->getLayout()->createBlock('cms/block')->setBlockId('footer_about')->toHtml(); ?>
  <?php
  $_helper = Mage::helper('catalog/category');
  $_categories = $_helper->getStoreCategories();
  if (count($_categories) > 0): ?>
  <ul>
    <?php foreach($_categories as $_category): ?>
    <li>
      <a href="<?php echo $_helper->getCategoryUrl($_category) ?>">
        <?php echo $_category->getName() ?>
      </a>
    </li>
    <?php endforeach; ?>
  </ul>
  <?php endif; ?>
  <a class="twitter-timeline" href="https://twitter.com/RichardCarter" data-widget-id="123456789999999999">Tweets by @RichardCarter</a>
  <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script>
</div>

The result of the preceding code is a Twitter feed similar to the following one embedded on your store:

As you can see, the Twitter widget is quite cumbersome, so it's wise to be sparing when adding it to your website.
Sometimes, a simple Twitter icon that links to your account is all you need!   Integrating a Facebook Like Box into your Magento store   Facebook is one of the world's most popular social networks; with careful integration, you can help drive your customers to your Facebook page and increase long term interaction. This will drive repeat sales and new potential customers to your store. One way to integrate your store's Facebook page into your Magento site is to embed your Facebook page's news feed into it.   Getting the embedding code from Facebook   Getting the necessary code for embedding from Facebook is relatively easy; navigate to the Facebook Developers website at https://developers.facebook.com/docs/plugins/like-box-for-pages. Here, you are presented with a form. Complete the form to generate your embedding code; enter your Facebook page's URL in the Facebook Page URL field (the following example uses Magento's Facebook page):   Click on the Get Code button on the screen to tell Facebook to generate the code you will need, and you will see a pop up with the code appear as shown in the following screenshot:   Adding the embed code into your Magento templates   Now that you have the embedding code from Facebook, you can alter your templates to include the code snippets. The first block of code for the JavaScript SDK is required in the header.phtml file in your theme's directory at /app/design/frontend/default/m18/template/page/html/. Then, add it at the top of the file:   <div id="fb-root"></div> <script>(function(d, s, id) {   var js, fjs = d.getElementsByTagName(s)[0]; if (d.getElementById(id)) return; js = d.createElement(s); js.id = id;   js.src = "//connect.facebook.net/en_GB/all.js#xfbml=1"; fjs.parentNode.insertBefore(js, fjs); }(document, 'script', 'facebook-jssdk'));</script>   Next, you can add the second code snippet provided by the Facebook Developers site where you want the Facebook Like Box to appear in your page. For flexibility, you can create a static block in Magento's CMS tool to contain this code and then use the Magento XML layout to assign the static block to a template's sidebar.   Navigate to CMS | Static Blocks in Magento's administration panel and add a new static block by clicking on the Add New Block button at the top-right corner of the screen. Enter a suitable name for the new static block in the Block Title field and give it a value facebook in the Identifier field. Disable Magento's rich text editor tool by clicking on the Show / Hide Editor button above the Content field.   Enter in the Content field the second snippet of code the Facebook Developers website provided, which will be similar to the following code:   <div class="fb-like-box" data-href="https://www.facebook.com/Magento" data-width="195" data-colorscheme="light" data-show-faces="true" data-header="true" data-stream="false" data-show-border="true"></div> Once complete, your new block should look like the following screenshot:   Click on the Save Block button to create a new block for your Facebook widget. Now that you have created the block, you can alter your Magento theme's layout files to include the block in the right-hand column of your store.   Next, open your theme's local.xml file located at /app/design/frontend/default/m18/layout/ and add the following highlighted block of XML to it. 
This will add the static block that contains the Facebook widget:   <reference name="right">   <block type="cms/block" name="cms_facebook">   <action method="setBlockId"><block_id>facebook</block_id></action>   </block>   <!--other layout instructions -->   </reference>   If you save this change and refresh your Magento store on a page that uses the right-hand column page layout, you will see your new Facebook widget appear in the right-hand column. This is shown in the following screenshot:   Including social share buttons in your product pages   Particularly if you are selling to consumers rather than other businesses, you can make use of social share buttons in your product pages to help customers share the products they love with their friends on social networks such as Facebook and Twitter. One of the most convenient ways to do this is to use a third-party service such as AddThis, which also allows you to track your most shared content. This is useful to learn which products are your most-shared products within your store! Styling the product page a little further   Before you begin to integrate the share buttons, you can style your product page to provide a little more layout and distinction between the blocks of content. Open your theme's styles.css file and append the following CSS (located at /skin/frontend/default/m18/css/) to provide a column for the product image and a column for the introductory content of the product:   .product-img-box, .product-shop {   float: left;   margin: 1%;   padding: 1%;   width: 46%;   }   You can also add some additional CSS to style some of the elements that appear on the product view page in your Magento store:   .product-name { margin-bottom: 10px; }   .or {   color: #888; display: block; margin-top: 10px;   }   .add-to-box { background: #f2f2f2; border-radius: 10px; margin-bottom: 10px; padding: 10px; }   .more-views ul { list-style-type: none;   }   If you refresh a product page on your store, you will see the new layout take effect: Integrating AddThis   Now that you have styled the product page a little, you can integrate AddThis with your Magento store. You will need to get a code snippet from the AddThis website at http://www.addthis.com/get/sharing. Your snippet will look something similar to the following code:   <div class="addthis_toolboxaddthis_default_style ">   <a class="addthis_button_facebook_like" fb:like:layout="button_ count"></a>   <a class="addthis_button_tweet"></a>   <a class="addthis_button_pinterest_pinit" pi:pinit:layout="horizontal"></a>   <a class="addthis_counteraddthis_pill_style"></a> </div>   <script type="text/javascript">var addthis_config = {"data_track_ addressbar":true};</script>   <script type="text/javascript" src="//s7.addthis.com/js/300/addthis_ widget.js#pubid=youraddthisusername"></script>   Once the preceding code is included in a page, this produces a social share tool that will look similar to the following screenshot:   Copy the product view template from the view.phtml file from /app/design/frontend/base/default/catalog/product/ to /app/design/frontend/default/m18/catalog/product/ and open your theme's view.phtml file for editing. You probably don't want the share buttons to obstruct the page name, add-to-cart area, or the brief description field. So, positioning the social share tool underneath those items is usually a good idea. 
Locate the snippet in your view.phtml file that has the following code:   <?php if ($_product->getShortDescription()):?>   <div class="short-description">   <h2><?php echo $this->__('Quick Overview') ?></h2>   <div class="std"><?php echo $_helper->productAttribute($_product, nl2br($_product->getShortDescription()), 'short_description') ?></div>   </div>   <?phpendif;?>   Below this block, you can insert your AddThis social share tool highlighted in the following code so that the code is similar to the following block of code (the youraddthisusername value on the last line becomes your AddThis account's username):   <?php if ($_product->getShortDescription()):?>   <div class="short-description">   <h2><?php echo $this->__('Quick Overview') ?></h2>   <div class="std"><?php echo $_helper->productAttribute($_product, nl2br($_product->getShortDescription()), 'short_description') ?></div>   </div>   <?phpendif;?>   <div class="addthis_toolboxaddthis_default_style ">   <a class="addthis_button_facebook_like" fb:like:layout="button_ count"></a>   <a class="addthis_button_tweet"></a>   <a class="addthis_button_pinterest_pinit" pi:pinit:layout="horizontal"></a>   <a class="addthis_counteraddthis_pill_style"></a> </div>   <script type="text/javascript">var addthis_config = {"data_track_ addressbar":true};</script>   <script type="text/javascript" src="//s7.addthis.com/js/300/addthis_ widget.js#pubid=youraddthisusername"></script>   If you want to reuse this block in multiple places throughout your store, consider adding this to a static block in Magento and using Magento's XML layout to add the block as required.   Once again, refresh the product page on your Magento store and you will see the AddThis toolbar appear as shown in the following screenshot. It allows your customers to begin sharing their favorite products on their preferred social networking sites.     If you can't see your changes, don't forget to clear your caches by navigating to System | Cache Management. If you want to provide some space between other elements and the AddThis toolbar, add the following CSS to your theme's styles.css file:   .addthis_toolbox {   margin: 10px 0;   }   The resulting product page will now look similar to the following screenshot. You have successfully integrated social sharing tools on your Magento store's product page:     Integrating product videos from YouTube into the product page   An increasingly common occurrence on ecommerce stores is the use of video in addition to product photography. The use of videos in product pages can help customers overcome any fears they're not buying the right item and give them a better chance to see the quality of the product they're buying. You can, of course, simply add the HTML provided by YouTube's embedding tool to your product description. However, if you want to insert your video on a specific page within your product template, you can follow the steps described in this section. Product attributes in Magento   Magento products are constructed from a number of attributes (different fields), such as product name, description, and price. Magento allows you to customize the attributes assigned to products, so you can add new fields to contain more information on your product. Using this method, you can add a new Video attribute that will contain the video embedding HTML from YouTube and then insert it into your store's product page template.   
An attribute value is text or other content that relates to the attribute; for example, the attribute value for the Product Name attribute might be Blue T-shirt. Magento allows you to create different types of attribute:

• Text Field: This is used for short lines of text.
• Text Area: This is used for longer blocks of text.
• Date: This is used to allow a date to be specified.
• Yes/No: This is used to allow a Boolean true or false value to be assigned to the attribute.
• Dropdown: This is used to allow just one selection from a list of options to be selected.
• Multiple Select: This is used for a combination box type to allow one or more selections to be made from a list of options provided.
• Price: This is used to allow a value other than the product's price, special price, tier price, and cost. These fields inherit your store's currency settings.
• Fixed Product Tax: This is required in some jurisdictions for certain types of products (for example, those that require an environmental tax to be added).

Creating a new attribute for your video field

Navigate to Catalog | Attributes | Manage Attributes in your Magento store's control panel. From there, click on the Add New Attribute button located near the top-right corner of your screen:

In the Attribute Properties panel, enter a value in the Attribute Code field that will be used internally in Magento to refer to this attribute. Remember the value you enter here, as you will require it in the next step! We will use video as the Attribute Code value in this example (this is shown in the following screenshot). You can leave the remaining settings in this panel as they are to allow this newly created attribute to be used with all types of products within your store.

In the Frontend Properties panel, ensure that Allow HTML Tags on Frontend is set to Yes (you'll need this enabled to allow you to paste the YouTube embedding HTML into your store and for it to work in the template). This is shown in the following screenshot:

Now select the Manage Labels / Options tab in the left-hand column of your screen and enter a value in the Admin and Default Store View fields in the Manage Titles panel:

Then, click on the Save Attribute button located near the top-right corner of the screen. Finally, navigate to Catalog | Attributes | Manage Attribute Sets and select the attribute set you wish to add your new video attribute to (we will use the Default attribute set for this example). In the right-hand column of this screen, you will see the list of Unassigned Attributes with the newly created video attribute in this list:

Drag-and-drop this attribute into the Groups column under the General group, as shown in the following screenshot:

Click on the Save Attribute Set button at the top-right corner of the screen to add the new video attribute to the attribute set.

Adding a YouTube video to a product using the new attribute

Once you have added the new attribute to your Magento store, you can add a video to a product. Navigate to Catalog | Manage Products and select a product to edit (ensure that it uses one of the attribute sets you added the new video attribute to). The new Video field will be visible under the General tab:

Insert the embedding code from the YouTube video you wish to use on your product page into this field.
The embed code will look like the following:

<iframe width="320" height="240" src="https://www.youtube.com/embed/dQw4w9WgXcQ?rel=0" frameborder="0" allowfullscreen></iframe>

Once you have done that, click on the Save button to save the changes to the product.

Inserting the video attribute into your product view template

Your final task is to allow the content of the video attribute to be displayed in your product page templates in Magento. Open your theme's view.phtml file from /app/design/frontend/default/m18/catalog/product/ and locate the following snippet of code:

<div class="product-img-box">
  <?php echo $this->getChildHtml('media') ?>
</div>

Add the following highlighted code to the preceding snippet to check whether a video exists for the product and show it if it does:

<div class="product-img-box">
  <?php
  $_videoHtml = $_product->getResource()->getAttribute('video')->getFrontend()->getValue($_product);
  if ($_videoHtml) echo $_videoHtml;
  ?>
  <?php echo $this->getChildHtml('media') ?>
</div>

If you now refresh the product page that you have added a video to, you will see that the video appears in the same column as the product image. This is shown in the following screenshot:

Summary

In this article, we looked at expanding the customization of your Magento theme to include elements from social networking sites. You learned about integrating a Twitter feed and a Facebook feed into your Magento store, including social share buttons in your product pages, and integrating product videos from YouTube.

Resources for Article: Further resources on this subject: Optimizing Magento Performance — Using HHVM [article] Installing Magento [article] Magento Fundamentals for Developers [article]
The Bootstrap grid system
Packt
19 Aug 2014
3 min read
This article is written by Pieter van der Westhuizen, the author of Bootstrap for ASP.NET MVC. Many websites are reporting an increasing amount of mobile traffic, and this trend is expected to continue over the coming years. The Bootstrap grid system is mobile-first, which means it is designed to target devices with smaller displays and then grow as the display size increases. Fortunately, this is not something you need to be too concerned about, as Bootstrap takes care of most of the heavy lifting. (For more resources related to this topic, see here.)

Bootstrap grid options

Bootstrap 3 introduced a number of predefined grid classes in order to specify the sizes of columns in your design. These class names are listed in the following table:

Class name | Type of device | Resolution | Container width | Column width
col-xs-* | Phones | Less than 768 px | Auto | Auto
col-sm-* | Tablets | Larger than 768 px | 750 px | 60 px
col-md-* | Desktops | Larger than 992 px | 970 px | 78 px
col-lg-* | High-resolution desktops | Larger than 1200 px | 1170 px | 95 px

The Bootstrap grid is divided into 12 columns. When laying out your web page, keep in mind that all columns combined should total 12. To illustrate this, consider the following HTML code:

<div class="container">
  <div class="row">
    <div class="col-md-3" style="background-color:green;"><h3>green</h3></div>
    <div class="col-md-6" style="background-color:red;"><h3>red</h3></div>
    <div class="col-md-3" style="background-color:blue;"><h3>blue</h3></div>
  </div>
</div>

In the preceding code, we have a <div> element, container, with one child <div> element, row. The row div element in turn has three columns. You will notice that two of the columns have a class name of col-md-3 and one of the columns has a class name of col-md-6. When combined, they add up to 12. The preceding code will work well on all devices with a resolution of 992 pixels or higher. To preserve the preceding layout on devices with smaller resolutions, you'll need to combine the various CSS grid classes. For example, to allow our layout to work on tablets, phones, and medium-sized desktop displays, change the HTML to the following code:

<div class="container">
  <div class="row">
    <div class="col-xs-3 col-sm-3 col-md-3" style="background-color:green;"><h3>green</h3></div>
    <div class="col-xs-6 col-sm-6 col-md-6" style="background-color:red;"><h3>red</h3></div>
    <div class="col-xs-3 col-sm-3 col-md-3" style="background-color:blue;"><h3>blue</h3></div>
  </div>
</div>

By adding the col-xs-* and col-sm-* class names to the div elements, we'll ensure that our layout will appear the same across a wide range of device resolutions.

Bootstrap HTML elements

Bootstrap provides a host of different HTML elements that are styled and ready to use. These elements include the following (a brief illustrative sketch follows this list):

Tables
Buttons
Forms
Images
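As a short, hedged illustration (not taken from the original article), the following snippet shows how some of these elements are styled simply by adding Bootstrap 3 class names such as table, btn, form-control, and img-responsive; the logo.png path is just a placeholder:

<!-- A striped table styled by Bootstrap's table classes -->
<table class="table table-striped">
  <tr><th>Product</th><th>Price</th></tr>
  <tr><td>T-shirt</td><td>$10</td></tr>
</table>

<!-- A primary action button -->
<button type="button" class="btn btn-primary">Buy now</button>

<!-- A simple form input with Bootstrap's form styling -->
<form role="form">
  <div class="form-group">
    <label for="email">Email address</label>
    <input type="email" class="form-control" id="email" placeholder="Enter email">
  </div>
</form>

<!-- An image that scales with its container -->
<img src="logo.png" class="img-responsive" alt="Logo">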
AngularJS Project
Packt
19 Aug 2014
14 min read
This article by Jonathan Spratley, the author of book, Learning Yeoman, covers the steps of how to create an AngularJS project and previewing our application. (For more resources related to this topic, see here.) Anatomy of an Angular project Generally in a single page application (SPA), you create modules that contain a set of functionality, such as a view to display data, a model to store data, and a controller to manage the relationship between the two. Angular incorporates the basic principles of the MVC pattern into how it builds client-side web applications. The major Angular concepts are as follows: Templates : A template is used to write plain HTML with the use of directives and JavaScript expressions Directives : A directive is a reusable component that extends HTML with the custom attributes and elements Models : A model is the data that is displayed to the user and manipulated by the user Scopes : A scope is the context in which the model is stored and made available to controllers, directives, and expressions Expressions : An expression allows access to variables and functions defined on the scope Filters : A filter formats data from an expression for visual display to the user Views : A view is the visual representation of a model displayed to the user, also known as the Document Object Model (DOM) Controllers : A controller is the business logic that manages the view Injector : The injector is the dependency injection container that handles all dependencies Modules : A module is what configures the injector by specifying what dependencies the module needs Services : A service is a piece of reusable business logic that is independent of views Compiler : The compiler handles parsing templates and instantiating directives and expressions Data binding : Data binding handles keeping model data in sync with the view Why Angular? AngularJS is an open source JavaScript framework known as the Superheroic JavaScript MVC Framework, which is actively maintained by the folks over at Google. Angular attempts to minimize the effort in creating web applications by teaching the browser's new tricks. This enables the developers to use declarative markup (known as directives or expressions) to handle attaching the custom logic behind DOM elements. Angular includes many built-in features that allow easy implementation of the following: Two-way data binding in views using double mustaches {{ }} DOM control for repeating, showing, or hiding DOM fragments Form submission and validation handling Reusable HTML components with self-contained logic Access to RESTful and JSONP API services The major benefit of Angular is the ability to create individual modules that handle specific responsibilities, which come in the form of directives, filters, or services. This enables developers to leverage the functionality of the custom modules by passing in the name of the module in the dependencies. Creating a new Angular project Now it is time to build a web application that uses some of Angular's features. The application that we will be creating will be based on the scaffold files created by the Angular generator; we will add functionality that enables CRUD operations on a database. Installing the generator-angular To install the Yeoman Angular generator, execute the following command: $ npm install -g generator-angular For Karma testing, the generator-karma needs to be installed. 
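The article does not show the install command for the Karma generator at this point; assuming it is installed globally through npm like the Angular generator, the command would typically be:

$ npm install -g generator-karma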
Scaffolding the application To scaffold a new AngularJS application, create a new folder named learning-yeoman-ch3 and then open a terminal in that location. Then, execute the following command: $ yo angular --coffee This command will invoke the AngularJS generator to scaffold an AngularJS application, and the output should look similar to the following screenshot: Understanding the directory structure Take a minute to become familiar with the directory structure of an Angular application created by the Yeoman generator: app: This folder contains all of the front-end code, HTML, JS, CSS, images, and dependencies: images: This folder contains images for the application scripts: This folder contains AngularJS codebase and business logic: app.coffee: This contains the application module definition and routing controllers: Custom controllers go here: main.coffee: This is the main controller created by default directives: Custom directives go here filters: Custom filters go here services: Reusable application services go here styles: This contains all CSS/LESS/SASS files: main.css: This is the main style sheet created by default views: This contains the HTML templates used in the application main.html: This is the main view created by default index.html: This is the applications' entry point bower_components: This folder contains client-side dependencies node_modules: This contains all project dependencies as node modules test: This contains all the tests for the application: spec: This contains unit tests mirroring structure of the app/scripts folder karma.conf.coffee: This file contains the Karma runner configuration Gruntfile.js: This file contains all project tasks package.json: This file contains project information and dependencies bower.json: This file contains frontend dependency settings The directories (directives, filters, and services) get created when the subgenerator is invoked. Configuring the application Let's go ahead and create a configuration file that will allow us to store the application wide properties; we will use the Angular value services to reference the configuration object. Open up a terminal and execute the following command: $ yo angular:value Config This command will create a configuration service located in the app/scripts/services directory. This service will store global properties for the application. For more information on Angular services, visit http://goo.gl/Q3f6AZ. Now, let's add some settings to the file that we will use throughout the application. Open the app/scripts/services/config.coffee file and replace with the following code: 'use strict' angular.module('learningYeomanCh3App').value('Config', Config = baseurl: document.location.origin sitetitle: 'learning yeoman' sitedesc: 'The tutorial for Chapter 3' sitecopy: '2014 Copyright' version: '1.0.0' email: 'jonniespratley@gmail.com' debug: true feature: title: 'Chapter 3' body: 'A starting point for a modern angular.js application.' image: 'http://goo.gl/YHBZjc' features: [ title: 'yo' body: 'yo scaffolds out a new application.' image: 'http://goo.gl/g6LO99' , title: 'Bower' body: 'Bower is used for dependency management.' image: 'http://goo.gl/GpxBAx' , title: 'Grunt' body: 'Grunt is used to build, preview and test your project.' 
image: 'http://goo.gl/9M00hx' ] session: authorized: false user: null layout: header: 'views/_header.html' content: 'views/_content.html' footer: 'views/_footer.html' menu: [ title: 'Home', href: '/' , title: 'About', href: '/about' , title: 'Posts', href: '/posts' ] ) The preceding code does the following: It creates a new Config value service on the learningYeomanCh3App module The baseURL property is set to the location where the document originated from The sitetitle, sitedesc, sitecopy, and version attributes are set to default values that will be displayed throughout the application The feature property is an object that contains some defaults for displaying a feature on the main page The features property is an array of feature objects that will display on the main page as well The session property is defined with authorized set to false and user set to null; this value gets set to the current authenticated user The layout property is an object that defines the paths of view templates, which will be used for the corresponding keys The menu property is an array that contains the different pages of the application Usually, a generic configuration file is created at the top level of the scripts folder for easier access. Creating the application definition During the initial scaffold of the application, an app.coffee file is created by Yeoman located in the app/scripts directory. The scripts/app.coffee file is the definition of the application, the first argument is the name of the module, and the second argument is an array of dependencies, which come in the form of angular modules and will be injected into the application upon page load. The app.coffee file is the main entry point of the application and does the following: Initializes the application module with dependencies Configures the applications router Any module dependencies that are declared inside the dependencies array are the Angular modules that were selected during the initial scaffold. Consider the following code: 'use strict' angular.module('learningYeomanCh3App', [ 'ngCookies', 'ngResource', 'ngSanitize', 'ngRoute' ]) .config ($routeProvider) -> $routeProvider .when '/', templateUrl: 'views/main.html' controller: 'MainCtrl' .otherwise redirectTo: '/' The preceding code does the following: It defines an angular module named learningYeomanCh3App with dependencies on the ngCookies, ngSanitize, ngResource, and ngRoute modules The .config function on the module configures the applications' routes by passing route options to the $routeProvider service Bower downloaded and installed these modules during the initial scaffold. Creating the application controller Generally, when creating an Angular application, you should define a top-level controller that uses the $rootScope service to configure some global application wide properties or methods. To create a new controller, use the following command: $ yo angular:controller app This command will create a new AppCtrl controller located in the app/scripts/controllers directory. 
Open the app/scripts/controllers/app.coffee file and replace its contents with the following code:

'use strict'
angular.module('learningYeomanCh3App')
  .controller('AppCtrl', ($rootScope, $cookieStore, Config) ->
    $rootScope.name = 'AppCtrl'
    App = angular.copy(Config)
    App.session = $cookieStore.get('App.session')
    window.App = $rootScope.App = App)

The preceding code does the following:

It creates a new AppCtrl controller with dependencies on the $rootScope, $cookieStore, and Config modules
Inside the controller definition, an App variable is copied from the Config value service
The session property is set to the App.session cookie, if available

Creating the application views

The Angular generator will create the application's index.html view, which acts as the container for the entire application. The index view is used as the shell for the other views of the application; the router handles mapping URLs to views, which then get injected into the element that declares the ng-view directive.

Modifying the application's index.html

Let's modify the default view that was created by the generator. Open the app/index.html file and add the content right below the following HTML comment. The structure of the application will consist of an article element that contains a header, a content section, and a footer:

<article id="app" ng-controller="AppCtrl" class="container">
  <header id="header" ng-include="App.layout.header"></header>
  <section id="content" class="view-animate-container">
    <div class="view-animate" ng-view=""></div>
  </section>
  <footer id="footer" ng-include="App.layout.footer"></footer>
</article>

In the preceding code:

The article element declares the ng-controller directive to the AppCtrl controller
The header element uses an ng-include directive that specifies which template to load, in this case the header property on the App.layout object
The div element has the view-animate-container class that will allow the use of CSS transitions
The ng-view attribute directive will inject the current route's view template into the content
The footer element uses an ng-include directive to load the footer specified on the App.layout.footer property

Use ng-include to load partials, which allows you to easily swap out templates.

Creating Angular partials

Use the yo angular:view command to create view partials that will be included in the application's main layout. So far, we need to create three partials that the index view (app/index.html) will consume from the App.layout property on the $rootScope service, which defines the location of the templates. Names of view partials typically begin with an underscore (_).

Creating the application's header

The header partial will contain the site title and navigation of the application. Open a terminal and execute the following command:

$ yo angular:view _header

This command creates a new view template file in the app/views directory. Open the app/views/_header.html file and add the following contents:

<div class="header">
  <ul class="nav nav-pills pull-right">
    <li ng-repeat="item in App.menu" ng-class="{'active': App.location.path() === item.href}">
      <a ng-href="#{{item.href}}">{{item.title}}</a>
    </li>
  </ul>
  <h3 class="text-muted">{{ App.sitetitle }}</h3>
</div>

The preceding code does the following:

It uses the {{ }} data binding syntax to display App.sitetitle in a heading element
The ng-repeat directive is used to repeat each item in the App.menu array defined on $rootScope

Creating the application's footer

The footer partial will contain the copyright message and current version of the application.
Open the terminal and execute the following command: $ yo angular:view _footer This command creates a view template file in the app/views directory. Open the app/views/_footer.html file and add the following markup: <div class="app-footer container clearfix">     <span class="app-sitecopy pull-left">       {{ App.sitecopy }}     </span>     <span class="app-version pull-right">       {{ App.version }}     </span> </div> The preceding code does the following: It uses a div element to wrap two span elements The first span element contains data binding syntax referencing App.sitecopy to display the application's copyright message The second span element also contains data binding syntax to reference App.version to display the application's version Customizing the main view The Angular generator creates the main view during the initial scaffold. Open the app/views/main.html file and replace with the following markup: <div class="jumbotron">     <h1>{{ App.feature.title }}</h1>     <img ng-src="{{ App.feature.image  }}"/>       <p class="lead">       {{ App.feature.body }}       </p>   </div>     <div class="marketing">   <ul class="media-list">         <li class="media feature" ng-repeat="item in App.features">        <a class="pull-left" href="#">           <img alt="{{ item.title }}"                       src="http://placehold.it/80x80"                       ng-src="{{ item.image }}"            class="media-object"/>        </a>        <div class="media-body">           <h4 class="media-heading">{{item.title}}</h4>           <p>{{ item.body }}</p>        </div>         </li>   </ul> </div> The preceding code does the following: At the top of the view, we use the {{ }} data binding syntax to display the title and body properties declared on the App.feature object Next, inside the div.marketing element, another div element is declared with the ng-repeat directive to loop for each item in the App.features property Then, using the {{ }} data binding syntax wrapped around the title and body properties from the item being repeated, we output the values Previewing the application To preview the application, execute the following command: $ grunt serve Your browser should open displaying something similar to the following screenshot: Download the AngularJS Batarang (http://goo.gl/0b2GhK) developer tool extension for Google Chrome for debugging. Summary In this article, we learned the concepts of AngularJS and how to leverage the framework in a new or existing project. Resources for Article: Further resources on this subject: Best Practices for Modern Web Applications [article] Spring Roo 1.1: Working with Roo-generated Web Applications [article] Understand and Use Microsoft Silverlight with JavaScript [article]
Foundation
Packt
19 Aug 2014
22 min read
In this article by Kevin Horek author of Learning Zurb Foundation, we will be covering the following points: How to move away from showing clients wireframes and how to create responsive prototypes Why these prototypes are better and quicker than doing traditional wireframes The different versions of Foundation What does Foundation include? How to use the documentation How to migrate from an older version Getting support when you can't figure something out What browsers does Foundation support? How to extend Foundation Our demo site (For more resources related to this topic, see here.) Over the last couple of years, showing wireframes to most clients has not really worked well for me. They never seem to quite get it, and if they do, they never seem to fully understand all the functionality through a wireframe. For some people, it is really hard to picture things in their head, they need to see exactly what it will look and function like to truly understand what they are looking at. You should still do a rough wireframe either on paper, on a whiteboard, or on the computer. Then once you and/or your team are happy with these rough wireframes, then jump right into the prototype. Rough wireframing and prototypying You might think prototyping this early on when the client has only seen a sitemap is crazy, but the thing is, once you master Foundation, you can build prototypes in about the same time you would spend doing traditional high quality wireframes in Illustrator or whatever program you currently use. With these prototypes, you can make things clickable, interactive, and super fast to make edits to after you get feedback from the client. With the default Foundation components, you can work out how things will work on a phone, tablet, and desktop/laptop. This way you can work with your team to fully understand how things will function and start seeing where the project's potential issues will be. You can then assign people to start dealing with these potential problems early on in the process. When you are ready to show the client, you can walk them through their project on multiple devices and platforms. You can easily show them what content they are going to need and how that content will flow and reflow based on the medium the user is viewing their project on. You should try to get content as early as possible; a lot of companies are hiring content strategists. These content strategists handle working with the client to get, write, and rework content to fit in the responsive web medium. This allows you to design around a client's content, or at least some of the content. We all know that what a client says they will get you for content is not always what you get, so you may need to tweak the design to fit the actual content you get. Making these theming changes to accommodate these content changes can be a pain, but with Foundation, you can just reflow part of the page and try some ideas out in the prototype before you put them back into the working development site. Once you have built up a bunch of prototypes, you can easily combine and use parts of them to create new designs really fast for current or new projects. When prototyping, you should keep everything grayscale, without custom fonts, or a theme beyond the base Foundation one. These prototypes do not have to look pretty. The less it looks like a full design, the better off you will be. 
You will have to inform your client that an actual design for their project will be coming and that it will be done after they sign off this prototype. When you show the client, you should bring a phone, a tablet, and a laptop to show them how the project will flow on each of these devices. This takes out all the confusion about what happens to the layouts on different screen sizes and on touch and non-touch devices. It also allows your client and your team to fully understand what they are building and how everything works and functions. Trying to take a PDF of wireframes, a Photoshop file, and trying to piece them together to build a responsive web project can be really challenging. With this approach, so many details can get lost in translation, you have to keep going back to talk to the client or your team about how certain things should work or function. Even worse, you have to make huge changes to a section close to the end of the project because something was designed without being really thought through and now your developers have to scramble to make something work within the budget. Prototyping can sort out all the issues or at least the major issues that could arise in the project. With these Foundation prototypes, you keep building on the code for each step of the web building process. Your designer can work with your frontend/backend team to come up with a prototype that everyone is happy with and commit to being able to build it before the client sees anything. If you are familiar with version control, you can use it to keep track of your prototypes and collaborate with another person or a team of people. The two most popular version control software applications are Git (http://git-scm.com/) and Subversion (http://subversion.apache.org/). Git is the more popular of the two right now; however, if you are working on a project that has been around for a number of years, chances are that it will be in Subversion. You can migrate from one to the other, but it might take a bit of work. These prototypes keep your team on the same page right from the beginning of the project and allow the client to sign off on functionality and how the project will work on different mediums. Yes, you are spending more time at the beginning getting everyone on the same page and figuring out functionality early on, but this process should sort out all the confusion later in a project and save you time and money at the end of the project. When the client has changes that are out of scope, it is easy to reference back to the prototype and show them how that change will impact what they signed off on. If the change is major enough then you will need to get them a cost on making that change happen. You should test your prototypes on an iPhone, an Android phone, an iPad, and your desktop or laptop. I would also figure out what browser your client uses and make sure you test on that as well. If they are using an older version of IE, 8 or earlier, you will need to have the conversation with them about how Foundation 4+ does not support IE8. If that support is needed, you will have to come up with a solution to handle this outdated version of IE. Looking at a client's analytics to see what versions of IE their clients are coming to the project with will help you decide how to handle older versions of IE. Analytics might tell you that you can drop the version all together. 
Another great component that is included with Foundation is Modernizr (http://modernizr.com/); this allows you to write conditional JS and/or CSS for a specific situation or browser version. This really can be a lifesaver. Prototyping smaller projects While you are learning Foundation, you might think that using Foundation on a smaller project will eat up your entire budget. However, these are the best projects to learn Foundation. Basically, you take the prototype to a place where you can show a client the rough look and feel using Foundation. Then, you create a theme board in Photoshop with colors, fonts, photos and anything else to show the client. This first version will be a grayscale prototype that will function across multiple screen sizes. Then you can pull up your theme board to show the direction you are thinking of for the look and feel. If you still feel more comfortable doing your designs in Photoshop, there are some really good Photoshop grid templates at http://www.yeedeen.com/downloads/category/30-psd. If you want to create a custom grid that you can take a screenshot of, then paste into Photoshop, and then drag your guidelines over the grid to make your own template, you can refer to http://www.gridlover.net/foundation/. Prototyping wrap-up These methods are not perfect and may not always work for you, but you're going to see my workflow and how Foundation can be used on all of your web projects. You will figure out what will work with your clients, your projects, and your workflow. Also, you might have slightly different workflows based on the type of project, and/or project budget. If the client does not see value in having a responsive site, you should choose if you want to work with these types of clients. The Web is not one standard resolution anymore and it never will be again, so if a client does not understand that, you might want to consider not working with them. These types of clients are usually super hard to work with and your time is better spent on clients that get or are willing to allow you to teach them and trust you that you are building their project for the modern Web. Personally, clients that have fought with me to not be responsive usually come back a few months later wondering why their site does not work great on their new smartphone or tablet and wanting you to fix it. So try and address this up front and it will save you grief later on and make your clients happier and their experience better. Like anything, there are exceptions to this but just make sure you have a contract in place to outline that you are not building this as responsive, and that it could cause the client a lot of grief and costs later to go back and make it responsive. No matter what you do for a client, you should have a contract in place, this will just make sure you both understand what is each party responsible for. Personally, I like to use a modified version of, (https://gist.github.com/malarkey/4031110). This contract does not have any legal mumbo jumbo that people do not understand. It is written in plain English and has a little bit of a less serious tone. Now that we have covered why prototyping with Foundation is faster than doing wireframes or prototypes in Photoshop, let's talk about what comes in the base Foundation framework. Then we will cover which version to install, and then go through each file and folder. Introducing the framework Before we get started, please refer to the http://foundation.zurb.com/develop/download.html webpage. 
You will see that there are four versions of Foundation: complete, essentials, custom, and SCSS. But let's talk about the other versions. The essentials is just a smaller version of Foundation that does not include all the components of the framework; this version is a barebones version. Once you are familiar with Foundation, you will likely only include the components that you need for a specific project. By only including the components you need, you can speed up the load time of your project and you do not make the user download files that are not being used by your project. The custom version allows you to pick the components and basic sizes, colors, radius, and text direction. You will likely use this or the SCSS version of Foundation once you are more comfortable with the framework. The SCSS or Sass version of Foundation is the most powerful version. If you do not know what Sass is, it basically gives you additional features of CSS that can speed up how you theme your projects. There is actually another version of Foundation that is not listed on this page, which can be found by hitting the blue Getting Started option in the top right-corner and then clicking on App Guide under Building and App. You can also visit this version at http://foundation.zurb.com/docs/applications.html. This version is the Ruby Gem version of Foundation, and unless you are building a Ruby on Rails project, you will never use this version of Foundation. Zurb keeps the gem pretty up to date, you will likely get the new version of the gem about a week or two after the other versions come out. Alright, let's get into Foundation. If you have not already, hit the blue Download Everything button below the complete heading on the webpage. We will be building a one page demo site from the base Foundation theme that you just downloaded. This way, you can see how to take what you are given by default and customize this base theme to look anyway you want it to. We will give this base theme a custom look and feel, and make it look like you are not using a responsive framework at all. The only way to tell is if you view the source of the website. The Zurb components have very little theming applied to them. This allows you to not have to worry about really overriding the CSS code and you can just start adding additional CSS to customize these components. We will cover how to use all the major components of the framework, you will have an advanced understanding of the framework and how you can use it on all your projects going forward. Foundation has been used on small-to-large websites, web apps, at startups, with content management systems, and with enterprise-level applications. Going over the base theme The base theme that you download is made up of an HTML index file, a folder of CSS files, JavaScript files, and an empty img folder for images, which are explained in the following points: The index.html file has a few Foundation components to get you started. You have three, 12- column grids at three screen sizes; small, medium, and large. You can also control how many columns are in the grid, and the spacing (also called the gutter) between the columns, and how to use the other grid options. You will soon notice that you have full control over pretty much anything and you can control how things are rendered on any screen size or device, and whether that device is in portrait or landscape. You also have the ability to render different code on different devices and for different screen sizes. 
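As a small illustrative sketch (not part of the downloaded base theme), a Foundation 5 row can combine the small, medium, and large grid classes so that the same markup stacks into a single column on phones, shows two columns on tablets, and three columns on desktops:

<!-- One row of the 12-column grid; class names control layout per screen size -->
<div class="row">
  <div class="small-12 medium-6 large-4 columns">First panel</div>
  <div class="small-12 medium-6 large-4 columns">Second panel</div>
  <div class="small-12 medium-12 large-4 columns">Third panel</div>
</div>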
In the CSS folder, there is the un-minified version of Foundation with the filename foundation.css. There is also a minified version of Foundation with the filename foundation.min.css. If you are not familiar with minification, it has the same code as the foundation.css file, just all the spacing, comments, and code formatting have been taken out. This makes the file really hard to read and edit, but the file size is smaller and will speed up your project's load time. Most of the time, minified files have all the code on one really long line. You should use the foundation.css file as reference but actually include the minified one in your project. The minified version makes debugging and error fixing almost impossible, so we use the un-minified version for development and then the minified version for production. The last file in that folder is normalize.css; this file can be called a reset file, but it is more of an alternative to a reset file. This file is used to try to set defaults on a bunch of CSS elements and tries to get all the browsers to be set to the same defaults. The thinking behind this is that every browser will look and render things the same, and, therefore, there should not be a lot of specific theming fixes for different browsers. These types of files do a pretty good job but are not perfect and you will have to do little fixes for different browsers, even the modern ones. We will also cover how to use some extra CSS to take resetting certain elements a little further than the normalize file does for you. This will mainly include showing you how to render form elements and buttons to be the same across-browser and device. We will also talk about, browser version, platform, OS, and screen resolution detection when we talk about testing. We will also be adding our own CSS file that we will add our customizations to, so if you ever decide to update the framework as a new version comes out, you will not have to worry about overriding your changes. We will never add or modify the core files of the framework; I highly recommend you do not do this either. Once we get into Sass, we will cover how you can really start customizing the framework defaults using the custom variables that are built right into Foundation. These variables are one of the reasons that Foundation is the most advanced responsive framework out there. These variables are super powerful and one of my favorite things about Foundation. Once you understand how to use variables, you can write your own or you can extend your setup of Foundation as much as you like. In the JS folder, you will find a few files and some folders. In the Foundation folder, you will find each of the JavaScript components that you need to make Foundation work properly cross-device, browser, and responsive. These JavaScript components can also be use to extend Foundation's functionality even further. You can only include the components that you need in your project. This allows you to keep the framework lean and can help with load times; this is especially useful on mobile. You can also use CSS to theme each of these components to be rendered differently on each device or at different screen sizes. The foundation.min.js file is a minified version of all the files in the Foundation folder. You can decide based on your needs whether you want to include only the JavaScripts you are using on that project or whether you want to include them all. When you are learning, you should include them all. 
When you are comfortable with the framework and are ready to make your project live, you should include only the JavaScripts you are actually using. This helps with load times and can make troubleshooting easier. Keep in mind that many of the Foundation components will not work without the JavaScript for that component.

The next file you will notice is jquery.js; it will be either in the root of the JS folder or in the vendor folder if you are using a newer version of Foundation 5. If you are not familiar with jQuery, it is a JavaScript library that makes DOM manipulation, event handling, animation, and Ajax a lot easier, and it makes all of this work across browsers and devices. The next file, in the JS folder or in the vendor folder under JS, is modernizr.js; this file helps you write conditional JavaScript and/or CSS so that things work cross-browser and so that you can make progressive enhancements. The vendor folder is also where you put any third-party JavaScript libraries you are using on your project: libraries that you either wrote yourself or found online, which are not part of Foundation and are not required for Foundation to work properly.

Referring to the Foundation documentation

The Foundation documentation is located at http://foundation.zurb.com/docs/. Foundation is really well documented and provides a lot of code samples and examples to use in your own projects. All the components also come with Sass variables that you can use to customize the defaults and even build your own variants, which saves you writing a bunch of override CSS classes. Each part of the framework is listed on the left-hand side; click on what you are looking for and you are taken to a page for that specific part, where you can read the section's overview, view code samples and working examples, and learn how to customize that part of the framework. Each section has a pretty good walk-through of how to use each piece.

Zurb is constantly updating Foundation, so you should check the change log every once in a while at http://foundation.zurb.com/docs/changelog.html. If you need documentation for an older version of Foundation, it is linked at the bottom of the documentation site in the left-hand column. Zurb keeps all the documentation going back to Foundation 2. The only reason you will ever need to use Foundation 2 is if you need to support a really, really old version of IE, such as version 7. Foundation never supported IE6, but you will likely never have to worry about that version of IE anyway.

Migrating to a newer version of Foundation

If you are on an older version of Foundation, each version has a migration guide; the guide for moving from Foundation 4 to 5 can be found at http://foundation.zurb.com/docs/upgrading.html. Personally, I have migrated websites and web apps in multiple languages, and as long as Zurb does not change the grid, as they did from Foundation 3 to 4, you can usually just copy and paste the new Foundation CSS, JavaScript, and images over the old versions. You will likely have to change some JavaScript calls, do some testing, and make minor fixes here and there, but it is usually a pretty smooth process, provided you did not modify the core framework or write a bunch of custom overrides. If you did either of these things, you will be in for a lot of work or a full rebuild of your project, which is another reason you should never modify the core.
For very old versions of Foundation, or if your version has been heavily modified, it might be easier to start with a fresh copy of Foundation and copy and paste in the parts that you still want to use. Personally, I have done both, and it really depends on the project. Before you do any migration, make sure you are using some sort of version control, such as Git. If you do not know what Git is, you should look into it; a good place to start is http://git-scm.com/book/en/Getting-Started. Git has saved me from losing code so many times. If Git is a little overwhelming right now, at the very least duplicate your project folder as a backup and then copy the new version of the framework over your files. If things are really broken, you can at least still use your old version while you work out the kinks in the new one.

Framework support

At some point, you will likely have questions about something in the framework, or you will be trying to get something to work and for some reason cannot figure it out. Foundation offers support through several channels, including:

E-mail
Twitter
GitHub
StackOverflow
Forums

To visit or get in touch with support, go to http://foundation.zurb.com/support/support.html.

Browser support

Foundation 5 supports the majority of modern browsers and devices but, like anything modern, it drops support for older browser versions. If you need IE8 or, cringe, IE7 support, you will need to use an older version of Foundation. You can see the full browser and device compatibility list at http://foundation.zurb.com/docs/compatibility.html.

Extending Foundation

Zurb also builds a bunch of other components that usually make their way into Foundation at some point and that work well with Foundation even though they are not officially part of it. These components range from new JavaScript libraries to fonts, icons, templates, and so on. You can visit their playground at http://zurb.com/playground. The playground also has other great resources and tools that you can use on other projects and in other mediums, and it can make designing with Foundation a lot easier, even if you are not a designer. It can take quite a while to find icons or turn them into SVGs or icon fonts for use in your projects, but Zurb has provided these in the playground.

Overview of our one-page demo website

The best way to learn the Zurb Foundation responsive framework is to actually build a demo site along with me. You can visit the final demo site we will be building at http://www.learningzurbfoundation.com/demo. We will take the base starter theme that we downloaded and turn it into a one-page demo site. The demo site is built to teach you how to use the components and how they work together; you can also add outside components, but you can try those on your own. It will show you how to build a responsive website, and it might not look like an ideal site, because I am trying to use as many components as possible to show you how the framework works. Once you complete this site, you will have a deep understanding of the framework, and you can then use it as a starter theme or, at the very least, as a reference for all your Foundation projects going forward.

Summary

In this article, we covered how to create rough wireframes and quickly moved into prototyping.
We also covered the following points:

What is included in the base Foundation theme
How to use the documentation and how to migrate between Foundation versions
How to get framework support
How to start thinking about browser support
How you can extend Foundation beyond its defaults
A quick overview of our one-page demo site

Resources for Article:

Further resources on this subject:

Quick start – using Foundation 4 components for your first website [Article]
Zurb Foundation – an Overview [Article]
Best Practices for Modern Web Applications [Article]