
How-To Tutorials - Server-Side Web Development

Packt
29 Apr 2011
9 min read

Alice 3: Making Simple Animations with Actors

Alice 3 provides an extensive gallery with hundreds of customizable 3D models that you can easily incorporate as actors. This article provides many tasks that will allow us to start making simple animations with many actors in the 3D environment provided by Alice. We will search for models of specific animals in the diverse galleries, locate and orient the actors in the 3D space, and give some simple orders to the actors to create simple animations.

Browsing galleries to search for a specific class

In this recipe, we will create a new project and set a simple scene. Then we will browse the different packages included in Alice to search for a specific class. We will visualize the thumbnail icons that represent each package and class.

Getting ready

We have to be working on a project in order to be able to browse the galleries. Therefore, we will create a new project and set a simple scene. Follow these steps:

1. Select File | New... in the main menu to start a new project. A dialog box will display the six predefined templates with their thumbnail previews in the Templates tab.
2. Select GrassyProject.a3p as the desired template for the new project and click on OK. Alice will display a grassy ground with a light blue sky.
3. Click on Edit Scene, at the lower-right corner of the scene preview. Alice will show a bigger preview of the scene and will display the Model Gallery at the bottom.
4. Go to the Model Gallery and select Generic Alice Models | Environments | Skies. Use the horizontal scroll bar to find the ForestSky class.
5. Click on the ForestSky thumbnail. Leave the default name, forestSky, for the new instance and click OK to add it to the existing scene. The scene preview will replace the light blue sky with a violet one. Many trees will appear at the horizon, as shown in the next screenshot.

How to do it...
Follow these steps to browse the different packages included in Alice to search for a specific class:

1. Make sure that Alice is displaying the scene editor. If you see the Edit Code label at the lower-right corner of the big preview of the scene, it means that Alice is displaying the scene editor. If you see the Edit Scene label at the lower-right corner of a small scene preview, you should click on this label to switch to the scene editor.
2. You will see the Model Gallery displayed at the bottom of the window. The initial view of the Model Gallery shows the following three packages located in the gallery root folder, as shown in the following screenshot:
   - Looking Glass Characters: This package includes many characters that perform realistic animations. For example, you can make a person walk with a simple call to a procedure.
   - Looking Glass Scenery: This package includes different kinds of scenery elements.
   - Generic Alice Models: This package includes models that provide the basic and generic procedures. For example, you can move a person with a simple procedure call, but there isn't a procedure to make the person walk.
3. If you don't see the previously shown screenshot with the three packages, it means that you are browsing a subfolder of the gallery and you need to go back to the gallery root folder. Click on the gallery button and Alice will display the thumbnails for the three packages. If you don't see the three packages, you should check your Alice installation.
4. Click on the search entire gallery textbox, located at the right-hand side of the gallery button.
5. Enter rab in the search entire gallery textbox. Alice will query for the classes and packages that contain the rab string and will display the thumbnails for the following classes, as shown in the next screenshot: Rabbit, Scarab, WhiteRabbit, and Parabola.
6. Now you know that you have two different rabbits, Rabbit and WhiteRabbit. You can select your favorite rabbit and then add it as an actor in the scene.
7. Select File | Save as... and give a name to the project, for example, MyForest. Then, you can use this new scene as a template for your next Alice project.

How it works...

Alice organizes its gallery in packages with hierarchical folders. The previously mentioned three packages are located in the gallery root folder. We can browse each package by clicking on its thumbnail. Each time we click on a thumbnail, the related sub-folder will open and Alice will display the thumbnails for the new sub-folders and the classes.

The thumbnail that represents a folder, known as a package, displays a folder icon at the upper-left corner and includes the preview of some of the classes that it includes. The next screenshot shows the thumbnails for three packages, amusementpark, animals, and beach. These packages are sub-folders of the Generic Alice Models package.

The thumbnails for classes don't include the previously mentioned folder icon and they show a different background color. The next screenshot shows the thumbnails for three classes, Bird1, BirdBaby, and BlueBird.

The names for packages included within one of the three main packages use lowercase names, such as aquarium, bedroom, and circus. The names for classes always start with an uppercase letter, such as Monitor and Room. When a class name needs more than one word, it doesn't use spaces to separate them; it mixes lowercase with uppercase to mark the difference between words, such as CatClock and OldBed.

The main packages contain hundreds of classes organized in dozens of folders. Therefore, we might spend hours browsing the galleries to find an appropriate rabbit for our scene. We took advantage of Alice's query features to search the entire gallery for all the classes that contain a string. This way, we could find a simple rabbit, Rabbit, and a dressed rabbit, WhiteRabbit.

There's more...
While you type characters in the search entire gallery textbox, Alice will query all the packages and display the results in real time. You will notice that Alice changes the results displayed as you edit the textbox. The results for your search will include both packages and classes that contain the entered string. For example, follow these steps:

1. Click on the search entire gallery textbox, located at the right-hand side of the gallery button.
2. Enter bug in the search entire gallery textbox. Alice will query for the classes and packages that contain the bug string and will display two thumbnails. One thumbnail is the bugs package and the other thumbnail is the Ladybug class, as shown in the following screenshot.

If you think that Ladybug isn't the appropriate bug you want as an actor, you can click on the thumbnail for the bugs package and you will find many other bugs. When you click on the thumbnail, the text you entered in the search entire gallery textbox will disappear because there is no longer a filter being applied to the gallery and you are browsing the contents of the Generic Alice Models | animals | bugs package. You can add a Beetle or a Catepillar, as shown in the following screenshot.

Creating a new instance from a class in a gallery

In this task, we will add a new actor to an existing scene. We will drag and drop a thumbnail of a class from the gallery and then we will learn how Alice adds a new instance to the scene.

Getting ready

We want to add a new actor to an existing scene. Therefore, we will use an existing project that has a simple scene:

1. Open an existing project based on one of the six predefined Alice templates. You can open the MyForest project saved in the Browsing galleries to search for a specific class recipe in this article.
2. Select Starting Camera View in the drop-down list located at the top of the big scene preview.

How to do it...
Follow these steps to add a new instance of the WhiteRabbit class:

1. Search for the WhiteRabbit class in the gallery. You can browse Generic Alice Models | animals or enter rab in the search entire gallery textbox to visualize the WhiteRabbit thumbnail.
2. Drag the WhiteRabbit thumbnail from the gallery to the big scene preview. A bounding box that represents the 3D model in the 3D space will appear, as shown in the next screenshot.
3. Keep the mouse button down and move the mouse to locate the bounding box in the desired initial position for the new element.
4. Once you have located the element in the desired position, release the mouse button and the Declare Property dialog box will appear. Leave the default name, whiteRabbit, for the new instance and click on OK to add it to the existing scene. The scene preview will perform an animation when Alice adds the new instance and then it will go back to the starting camera view to show how the new element appears on the scene. The next screenshot shows the new dressed white rabbit added to the scene, as seen by the starting camera.
5. Select File | Save as... from Alice's main menu and give a new name to the project. Then, you can make changes to the project according to your needs.

How it works...

When we dropped the thumbnail for the WhiteRabbit class, the Declare Property dialog box provided information about what Alice was going to do, as shown in the following screenshot.

Alice defines a new class, MyWhiteRabbit, that extends WhiteRabbit. MyWhiteRabbit is a new value type for the project, a subclass of WhiteRabbit. The name for the new property that represents the new instance of MyWhiteRabbit is whiteRabbit. This means that you can access this new actor with the whiteRabbit name and that this property is available for the scene. Because the starting camera view is looking at the horizon, we see the rabbit looking at the camera in the scene preview.
If you select TOP in the drop-down list located at the top of the big scene preview, you will see the rabbit on the grassy ground and how the camera is looking at the rabbit. The next screenshot shows the scene seen from the top, with a circle drawn around the camera.

There's more...

When you run the project, Alice shows a new window with the rendered scene, as seen by the previously shown camera, the starting camera. The default window size is very small. You can resize the Run window and Alice will use the new size to render the scene at a higher resolution. The next time you run the project, Alice will use the new size, as shown in the next screenshot that displays the dressed white rabbit with a forest in the background.

Packt
05 Feb 2015
19 min read

Transformations Using Map/Reduce

In this article written by Adam Boduch, author of the book Lo-Dash Essentials, we'll be looking at all the interesting things we can do with Lo-Dash and the map/reduce programming model. We'll start off with the basics, getting our feet wet with some basic mappings and basic reductions. As we progress through the article, we'll start introducing more advanced techniques to think in terms of map/reduce with Lo-Dash. The goal, once you've reached the end of this article, is to have a solid understanding of the Lo-Dash functions available that aid in mapping and reducing collections. Additionally, you'll start to notice how disparate Lo-Dash functions work together in the map/reduce domain. Ready?

Plucking values

Consider pluck() your informal introduction to mapping, because that's essentially what it's doing: taking an input collection and mapping it to a new collection, plucking only the properties we're interested in. This is shown in the following example:

```javascript
var collection = [
  { name: 'Virginia', age: 45 },
  { name: 'Debra', age: 34 },
  { name: 'Jerry', age: 55 },
  { name: 'Earl', age: 29 }
];

_.pluck(collection, 'age');
// → [ 45, 34, 55, 29 ]
```

This is about as simple a mapping operation as you'll find. In fact, you can do the same thing with map():

```javascript
var collection = [
  { name: 'Michele', age: 58 },
  { name: 'Lynda', age: 23 },
  { name: 'William', age: 35 },
  { name: 'Thomas', age: 41 }
];

_.map(collection, 'name');
// → [ "Michele", "Lynda", "William", "Thomas" ]
```

As you'd expect, the output here is exactly the same as it would be with pluck(). In fact, pluck() actually uses the map() function under the hood. The callback passed to map() is constructed using property(), which just returns the specified property value. The map() function falls back to this plucking behavior when a string instead of a function is passed to it.
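The relationship described above is easy to model in plain JavaScript. The following is a minimal sketch (not the real Lo-Dash source; the helper names `property` and `pluck` are illustrative) showing how a property-reading callback factory plus a map makes a pluck:

```javascript
// Sketch: property() returns a callback that reads one key,
// and a pluck-style helper is just map() with that callback.
function property(key) {
  return function (obj) { return obj[key]; };
}

function pluck(collection, key) {
  return collection.map(property(key));
}

var collection = [
  { name: 'Virginia', age: 45 },
  { name: 'Debra', age: 34 }
];

console.log(pluck(collection, 'age')); // → [ 45, 34 ]
```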
With that brief introduction to the nature of mapping, let's dig a little deeper and see what's possible in mapping collections.

Mapping collections

In this section, we'll explore mapping collections. Mapping one collection to another ranges from really simple mappings, as we saw in the preceding section, to sophisticated callbacks. The callbacks that map each item in the collection can include or exclude properties and can calculate new values. Besides, we can apply functions to these items. We'll also address the issue of filtering collections and how this can be done in conjunction with mapping.

Including and excluding properties

When applied to an object, the pick() function generates a new object containing only the specified properties. The opposite of this function, omit(), generates an object with every property except those specified. Since these functions work fine for individual object instances, why not use them in a collection? You can use both of these functions to shed properties from collections by mapping them to new ones, as shown in the following code:

```javascript
var collection = [
  { first: 'Ryan', last: 'Coleman', age: 23 },
  { first: 'Ann', last: 'Sutton', age: 31 },
  { first: 'Van', last: 'Holloway', age: 44 },
  { first: 'Francis', last: 'Higgins', age: 38 }
];

_.map(collection, function(item) {
  return _.pick(item, [ 'first', 'last' ]);
});
// →
// [
//   { first: "Ryan", last: "Coleman" },
//   { first: "Ann", last: "Sutton" },
//   { first: "Van", last: "Holloway" },
//   { first: "Francis", last: "Higgins" }
// ]
```

Here, we're creating a new collection using the map() function. The callback function supplied to map() is applied to each item in the collection. The item argument is the original item from the collection. The callback is expected to return the mapped version of that item, and this version could be anything, including the original item itself. Be careful when manipulating the original item in map() callbacks.
If the item is an object and it's referenced elsewhere in your application, it could have unintended consequences.

We're returning a new object as the mapped item in the preceding code. This is done using the pick() function. We only care about the first and the last properties. Our newly mapped collection looks identical to the original, except that no item has an age property. A similar mapping that uses omit() instead is seen in the following code:

```javascript
var collection = [
  { first: 'Clinton', last: 'Park', age: 19 },
  { first: 'Dana', last: 'Hines', age: 36 },
  { first: 'Pete', last: 'Ross', age: 31 },
  { first: 'Annie', last: 'Cross', age: 48 }
];

_.map(collection, function(item) {
  return _.omit(item, 'first');
});
// →
// [
//   { last: "Park", age: 19 },
//   { last: "Hines", age: 36 },
//   { last: "Ross", age: 31 },
//   { last: "Cross", age: 48 }
// ]
```

The preceding code follows the same approach as the pick() code. The only difference is that we're excluding the first property from the newly created collection. You'll also notice that we're passing a string containing a single property name instead of an array of property names.

In addition to passing strings or arrays as the argument to pick() or omit(), we can pass in a function callback. This is suitable when it's not very clear which objects in a collection should have which properties.
Using a callback like this inside a map() callback lets us perform detailed comparisons and transformations on collections while using very little code:

```javascript
function invalidAge(value, key) {
  return key === 'age' && value < 40;
}

var collection = [
  { first: 'Kim', last: 'Lawson', age: 40 },
  { first: 'Marcia', last: 'Butler', age: 31 },
  { first: 'Shawna', last: 'Hamilton', age: 39 },
  { first: 'Leon', last: 'Johnston', age: 67 }
];

_.map(collection, function(item) {
  return _.omit(item, invalidAge);
});
// →
// [
//   { first: "Kim", last: "Lawson", age: 40 },
//   { first: "Marcia", last: "Butler" },
//   { first: "Shawna", last: "Hamilton" },
//   { first: "Leon", last: "Johnston", age: 67 }
// ]
```

The new collection generated by this code excludes the age property for items where the age value is less than 40. The callback supplied to omit() is applied to each key-value pair in the object. This code is a good illustration of the conciseness achievable with Lo-Dash. There's a lot of iterative code running here and there is no for or while statement in sight.

Performing calculations

It's time now to turn our attention to performing calculations in our map() callbacks. This entails looking at the item and, based on its current state, computing a new value that will ultimately be mapped to the new collection. This could mean extending the original item's properties or replacing one with a newly computed value. Whichever the case, it's a lot easier to map these computations than to write your own logic that applies these functions to every item in your collection. This is explained in the following example:

```javascript
var collection = [
  { name: 'Valerie', jqueryYears: 4, cssYears: 3 },
  { name: 'Alonzo', jqueryYears: 1, cssYears: 5 },
  { name: 'Claire', jqueryYears: 3, cssYears: 1 },
  { name: 'Duane', jqueryYears: 2, cssYears: 0 }
];

_.map(collection, function(item) {
  return _.extend({
    experience: item.jqueryYears + item.cssYears,
    specialty: item.jqueryYears >= item.cssYears ?
      'jQuery' : 'CSS'
  }, item);
});
// →
// [
//   {
//     experience: 7,
//     specialty: "jQuery",
//     name: "Valerie",
//     jqueryYears: 4,
//     cssYears: 3
//   },
//   {
//     experience: 6,
//     specialty: "CSS",
//     name: "Alonzo",
//     jqueryYears: 1,
//     cssYears: 5
//   },
//   {
//     experience: 4,
//     specialty: "jQuery",
//     name: "Claire",
//     jqueryYears: 3,
//     cssYears: 1
//   },
//   {
//     experience: 2,
//     specialty: "jQuery",
//     name: "Duane",
//     jqueryYears: 2,
//     cssYears: 0
//   }
// ]
```

Here, we're mapping each item in the original collection to an extended version of it. In particular, we're computing two new values for each item: experience and specialty. The experience property is simply the sum of the jqueryYears and cssYears properties. The specialty property is computed based on the larger of the jqueryYears and cssYears properties.

Earlier, I mentioned the need to be careful when modifying items in map() callbacks. In general, it's a bad idea. It's helpful to remember that map() is used to generate new collections, not to modify existing collections. Here's an illustration of the horrific consequences of not being careful:

```javascript
var app = {},
    collection = [
      { name: 'Cameron', supervisor: false },
      { name: 'Lindsey', supervisor: true },
      { name: 'Kenneth', supervisor: false },
      { name: 'Caroline', supervisor: true }
    ];

app.supervisor = _.find(collection, { supervisor: true });

_.map(collection, function(item) {
  return _.extend(item, { supervisor: false });
});

console.log(app.supervisor);
// → { name: "Lindsey", supervisor: false }
```

The destructive nature of this callback is not obvious at all and next to impossible for programmers to track down and diagnose. It is essentially resetting the supervisor attribute for each item. If these items are used anywhere else in the application, the supervisor property value will be clobbered whenever this map job is executed.
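A non-destructive variant of the reset above copies each item into a fresh object instead of mutating it. This is a sketch in plain JavaScript using the native Object.assign rather than Lo-Dash's extend(), purely for illustration:

```javascript
// Sketch: reset supervisor on a *copy* of each item so the originals survive.
var collection = [
  { name: 'Cameron', supervisor: false },
  { name: 'Lindsey', supervisor: true }
];

var reset = collection.map(function (item) {
  return Object.assign({}, item, { supervisor: false });
});

console.log(collection[1].supervisor); // → true (original untouched)
console.log(reset[1].supervisor);      // → false
```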
If you need to reset values like this, ensure that the change is mapped to the new value and not made to the original.

Mapping also works with primitive values as the item. Often, we'll have an array of primitive values that we'd like transformed into an alternative representation. For example, let's say you have an array of sizes, expressed in bytes. You can map those arrays to a new collection with those sizes expressed as human-readable values, using the following code:

```javascript
function bytes(b) {
  var units = [ 'B', 'K', 'M', 'G', 'T', 'P' ],
      target = 0;
  while (b >= 1024) {
    b = b / 1024;
    target++;
  }
  return (b % 1 === 0 ? b : b.toFixed(1)) +
    units[target] + (target === 0 ? '' : 'B');
}

var collection = [ 1024, 1048576, 345198, 120120120 ];

_.map(collection, bytes);
// → [ "1KB", "1MB", "337.1KB", "114.6MB" ]
```

The bytes() function takes a numerical argument, the number of bytes to be formatted. Bytes is the starting unit, and we keep incrementing the target unit until we have something that is less than 1024. For example, the last item in our collection maps to '114.6MB'. The bytes() function can be passed directly to map() since it expects values as they appear in our collection.

Calling functions

We don't always have to write our own callback functions for map(). Wherever it makes sense, we're free to leverage Lo-Dash functions to map our collection items. For example, let's say we have a collection and we'd like to know the size of each item. There's a size() Lo-Dash function we can use as our map() callback, as follows:

```javascript
var collection = [
  [ 1, 2 ],
  [ 1, 2, 3 ],
  { first: 1, second: 2 },
  { first: 1, second: 2, third: 3 }
];

_.map(collection, _.size);
// → [ 2, 3, 2, 3 ]
```

This code has the added benefit that the size() function returns consistent results, no matter what kind of argument is passed to it. In fact, any function that takes a single argument and returns a new value based on that argument is a valid candidate for a map() callback.
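One caveat worth adding (an aside, not from the book): map() implementations, both Lo-Dash's and the native one, pass the index as a second argument, so a function with an optional extra parameter can misbehave when passed directly. The classic native-JavaScript illustration is parseInt:

```javascript
// parseInt accepts (string, radix); map supplies (value, index, array),
// so the element index silently becomes the radix.
var sizes = ['11', '22', '33'];

console.log(sizes.map(parseInt));
// → [ 11, NaN, NaN ]  (radix 0, then 1, then 2)

// Wrapping parseInt in a single-argument function restores the intent.
console.log(sizes.map(function (s) { return parseInt(s, 10); }));
// → [ 11, 22, 33 ]
```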
For instance, we could also map the minimum and maximum value of each item:

```javascript
var source = _.range(1000),
    collection = [
      _.sample(source, 50),
      _.sample(source, 100),
      _.sample(source, 150)
    ];

_.map(collection, _.min);
// → [ 20, 21, 1 ]

_.map(collection, _.max);
// → [ 931, 985, 991 ]
```

What if we want to map each item of our collection to a sorted version? Since we're not sorting the collection itself, we don't care about the item positions within the collection, but about the items themselves, if they're arrays, for instance. Let's see what happens with the following code:

```javascript
var collection = [
  [ 'Evan', 'Veronica', 'Dana' ],
  [ 'Lila', 'Ronald', 'Dwayne' ],
  [ 'Ivan', 'Alfred', 'Doug' ],
  [ 'Penny', 'Lynne', 'Andy' ]
];

_.map(collection, _.compose(_.first, function(item) {
  return _.sortBy(item);
}));
// → [ "Dana", "Dwayne", "Alfred", "Andy" ]
```

This code uses the compose() function to construct a map() callback. The composed callback first returns the sorted version of the item by passing it to sortBy(). The first() item of this sorted list is then returned as the mapped item. The end result is a new collection containing the alphabetically first item from each array in our collection, in three lines of code. This is not bad.

Filtering and mapping

Filtering and mapping are two closely related collection operations. Filtering extracts only those collection items that are of particular interest in a given context. Mapping transforms collections to produce new collections. But what if you only want to map a certain subset of your collection? Then it would make sense to chain together the filtering and mapping operations, right?
Here's an example of what that might look like:

```javascript
var collection = [
  { name: 'Karl', enabled: true },
  { name: 'Sophie', enabled: true },
  { name: 'Jerald', enabled: false },
  { name: 'Angie', enabled: false }
];

_.compose(
  _.partialRight(_.map, 'name'),
  _.partialRight(_.filter, 'enabled')
)(collection);
// → [ "Karl", "Sophie" ]
```

This map is executed using compose() to build a function that is called right away, with our collection as the argument. The function is composed of two partials. We're using partialRight() on both because we want the collection supplied as the leftmost argument in both cases. The first partial function is filter(), with the enabled argument partially applied, so this function will filter our collection before it's passed to map(). This brings us to the next partial in the composition. The result of filtering the collection is passed to map(), which has the name argument partially applied. The end result is a collection of name strings from the enabled items.

The important thing to note about the preceding code is that the filtering operation takes place before the map() function is run. We could have stored the filtered collection in an intermediate variable instead of streamlining with compose(). Regardless of flavor, it's important that the items in your mapped collection correspond to the items in the source collection. It's conceivable to filter out items in the map() callback by not returning anything, but this is ill-advised as it doesn't map well, both figuratively and literally.

Mapping objects

The previous section focused on collections and how to map them. But wait, objects are collections too, right? That is indeed correct, but it's worth differentiating between the more traditional collections, arrays, and plain objects. The main reason is that there are implications with ordering and keys when performing map/reduce.
At the end of the day, arrays and objects serve different use cases with map/reduce, and this article tries to acknowledge these differences. Now we'll start looking at some techniques Lo-Dash programmers employ when working with objects and mapping them to collections. There are a number of factors to consider, such as the keys within an object and calling methods on objects. We'll take a look at the relationship between key-value pairs and how they can be used in a mapping context.

Working with keys

We can use the keys of a given object in interesting ways to map the object to a new collection. For example, we can use the keys() function to extract the keys of an object and map them to the corresponding property values, as shown in the following example:

```javascript
var object = {
  first: 'Ronald',
  last: 'Walters',
  employer: 'Packt'
};

_.map(_.sortBy(_.keys(object)), function(item) {
  return object[item];
});
// → [ "Packt", "Ronald", "Walters" ]
```

The preceding code builds an array of property values from object. It does so using map(), which is actually mapping the keys() array of object. These keys are sorted using sortBy(). So Packt is the first element of the resulting array because employer is alphabetically first among the object's keys.

Sometimes, it's desirable to perform lookups in other objects and map those values to a target object. For example, not all APIs return everything you need for a given page, packaged in a neat little object. You have to do joins and build the data you need. This is shown in the following code:

```javascript
var users = {},
    preferences = {};

_.each(_.range(100), function() {
  var id = _.uniqueId('user-');
  users[id] = { type: 'user' };
  preferences[id] = { emailme: !!(_.random()) };
});

_.map(users, function(value, key) {
  return _.extend({
    id: key
  }, preferences[key]);
});
// →
// [
//   { id: "user-1", emailme: true },
//   { id: "user-2", emailme: false },
//   ...
// ]
```

This example builds two objects, users and preferences.
In the case of each object, the keys are user identifiers that we're generating with uniqueId(). The user objects just have a dummy attribute in them, while the preferences objects have an emailme attribute, set to a random Boolean value. Now let's say we need quick access to this preference for all users in the users object. As you can see, it's straightforward to implement using map() on the users object. The callback function returns a new object with the user ID. We extend this object with the preference for that particular user by looking it up by key.

Calling methods

Objects aren't limited to storing primitive strings and numbers. Properties can store functions as their values, or methods, as they're commonly called. However, depending on the context where you're using your object, methods aren't always callable, especially if you have little or no control over the context where your objects are used. One technique that's helpful in situations such as these is mapping the result of calling these methods and using this result in the context in question. Let's see how this can be done with the following code:

```javascript
var object = {
  first: 'Roxanne',
  last: 'Elliot',
  name: function() {
    return this.first + ' ' + this.last;
  },
  age: 38,
  retirement: 65,
  working: function() {
    return this.retirement - this.age;
  }
};

_.map(object, function(value, key) {
  var item = {};
  item[key] = _.isFunction(value) ? object[key]() : value;
  return item;
});
// →
// [
//   { first: "Roxanne" },
//   { last: "Elliot" },
//   { name: "Roxanne Elliot" },
//   { age: 38 },
//   { retirement: 65 },
//   { working: 27 }
// ]

_.map(object, function(value, key) {
  var item = {};
  item[key] = _.result(object, key);
  return item;
});
// →
// [
//   { first: "Roxanne" },
//   { last: "Elliot" },
//   { name: "Roxanne Elliot" },
//   { age: 38 },
//   { retirement: 65 },
//   { working: 27 }
// ]
```

Here, we have an object with both primitive property values and methods that use those properties.
Now we'd like to map the results of calling those methods, and we'll experiment with two different approaches. The first approach uses the isFunction() function to determine whether the property value is callable or not. If it is, we call it and return that value. The second approach is a little easier to implement and achieves the same outcome. The result() function is applied to the object using the current key. It tests whether we're working with a function or not, so our code doesn't have to.

In the first approach to mapping method invocations, you might have noticed that we're calling the method using object[key]() instead of value(). The former retains the object variable as the context, but the latter loses the context, since it is invoked as a plain function without any object. So when you're writing mapping callbacks that call methods and not getting the expected results, make sure the method's context is intact.

Perhaps you have an object but you're not sure which properties are methods. You can use functions() to figure this out and then map the results of calling each method to an array, as shown in the following code:

```javascript
var object = {
  firstName: 'Fredrick',
  lastName: 'Townsend',
  first: function() {
    return this.firstName;
  },
  last: function() {
    return this.lastName;
  }
};

var methods = _.map(_.functions(object), function(item) {
  return [ _.bindKey(object, item) ];
});

_.invoke(methods, 0);
// → [ "Fredrick", "Townsend" ]
```

The object variable has two methods, first() and last(). Assuming we didn't know about these methods, we can find them using functions(). Here, we're building a methods array using map(). The input is an array containing the names of all the methods of the given object. The value we're returning is interesting: it's a single-value array, and you'll see why in a moment. The value in this array is a function built by passing the object and the name of the method to bindKey(). This function, when invoked, will always use object as its context.
Lastly, we use invoke() to invoke each method in our methods array, building a new result array. Recall that our map() callback returned an array. This was a simple hack to make invoke() work, since it's a convenient way to call methods. It generally expects a key as the second argument, but a numerical index works just as well, since both are looked up the same way.

Mapping key-value pairs

Just because you're working with an object doesn't mean it's ideal, or even necessary. That's what map() is for: mapping what you're given to what you need. For instance, the property values are sometimes all that matter for what you're doing, and you can dispense with the keys entirely. For that, we have the values() function, and we feed the values to map():

```javascript
var object = {
  first: 'Lindsay',
  last: 'Castillo',
  age: 51
};

_.map(_.filter(_.values(object), _.isString), function(item) {
  return '<strong>' + item + '</strong>';
});
// → [ "<strong>Lindsay</strong>", "<strong>Castillo</strong>" ]
```

All we want from the object variable here is a list of property values that are strings, so that we can format them. In other words, the fact that the keys are first, last, and age is irrelevant. So first, we call values() to build an array of values. Next, we pass that array to filter(), removing anything that's not a string. We then pass the output of this to map(), where we're able to wrap each string in <strong/> tags. The opposite might also be true: the value is completely meaningless without its key.
If that's the case, it may be fitting to map key-value pairs to a new collection, as shown in the following example:

function capitalize(s) {
    return s.charAt(0).toUpperCase() + s.slice(1);
}

function format(label, value) {
    return '<label>' + capitalize(label) + ':</label>' +
        '<strong>' + value + '</strong>';
}

var object = {
    first: 'Julian',
    last: 'Ramos',
    age: 43
};

_.map(_.pairs(object), function(pair) {
    return format.apply(undefined, pair);
});
// →
// [
//   "<label>First:</label><strong>Julian</strong>",
//   "<label>Last:</label><strong>Ramos</strong>",
//   "<label>Age:</label><strong>43</strong>"
// ]

We're passing the result of running our object through the pairs() function to map(). The argument passed to our map callback function is an array, the first element being the key and the second being the value. It so happens that the format() function expects a key and a value to format the given string, so we're able to use format.apply() to call the function, passing it the pair array. This approach is just a matter of taste. There's no need to call pairs() before map(). We could just as easily have called format directly. But sometimes this approach is preferred, and the reasons, not least of which is the style of the programmer, are wide and varied.

Summary

This article introduced you to the map/reduce programming model and how Lo-Dash tools help realize it in your application. First, we examined mapping collections, including how to choose which properties get included and how to perform calculations. We then moved on to mapping objects. Keys can have an important role in how objects get mapped to new objects and collections. There are also methods and functions to consider when mapping.

Resources for Article:

Further resources on this subject:

The First Step [article]
Recursive directives [article]
AngularJS Project [article]
Handling sessions and users

Packt
30 Aug 2013
4 min read
(For more resources related to this topic, see here.)

Getting ready

We will work from the app.py file from the sched directory and the models.py file.

How to do it...

Flask provides a session object, which behaves like a Python dictionary, and persists automatically across requests. You can, in your Flask application code:

from flask import session
# ... in a request ...
session['spam'] = 'eggs'
# ... in another request ...
spam = session.get('spam')  # 'eggs'

Flask-Login provides a simple means to track a user in Flask's session. Update requirements.txt:

Flask
Flask-Login
Flask-Script
Flask-SQLAlchemy
WTForms

Then:

$ pip install -r requirements.txt

We can then load Flask-Login into sched's request handling, in app.py:

from flask.ext.login import LoginManager, current_user
from flask.ext.login import login_user, logout_user
from sched.models import User

# Use Flask-Login to track current user in Flask's session.
login_manager = LoginManager()
login_manager.setup_app(app)
login_manager.login_view = 'login'

@login_manager.user_loader
def load_user(user_id):
    """Flask-Login hook to load a User instance from ID."""
    return db.session.query(User).get(user_id)

Flask-Login requires four methods on the User object, inside class User in models.py:

def get_id(self):
    return str(self.id)

def is_active(self):
    return True

def is_anonymous(self):
    return False

def is_authenticated(self):
    return True

Flask-Login provides a UserMixin (flask.ext.login.UserMixin) if you prefer to use its default implementation. We then provide routes to log the user in when authenticated and log out.
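Under the hood, Flask's session is, by default, a cookie whose contents are cryptographically signed with the application's secret key, so the client can store and read them but cannot tamper with them undetected. The principle can be sketched with the standard library alone; this is an illustration of the idea, not Flask's actual implementation, and SECRET_KEY plus both helper functions are hypothetical names:

```python
import hashlib
import hmac
import json

SECRET_KEY = b'change-me'  # hypothetical; plays the role of Flask's SECRET_KEY

def sign_session(data):
    """Serialize the session dict and append an HMAC signature."""
    payload = json.dumps(data, sort_keys=True).encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest().encode()
    return payload + b'.' + sig

def load_session(cookie):
    """Return the session dict if the signature checks out, else None."""
    payload, _, sig = cookie.rpartition(b'.')
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest().encode()
    if not hmac.compare_digest(sig, expected):
        return None  # the client tampered with the cookie
    return json.loads(payload)

cookie = sign_session({'spam': 'eggs'})
print(load_session(cookie))                           # → {'spam': 'eggs'}
print(load_session(cookie.replace(b'eggs', b'ham')))  # → None
```

A cookie that survives the round trip unmodified loads back into the original dict; any modification invalidates the signature and the session is discarded.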
In app.py:

@app.route('/login/', methods=['GET', 'POST'])
def login():
    if current_user.is_authenticated():
        return redirect(url_for('appointment_list'))
    form = LoginForm(request.form)
    error = None
    if request.method == 'POST' and form.validate():
        email = form.username.data.lower().strip()
        password = form.password.data.lower().strip()
        user, authenticated = User.authenticate(db.session.query,
                                                email, password)
        if authenticated:
            login_user(user)
            return redirect(url_for('appointment_list'))
        else:
            error = 'Incorrect username or password.'
    return render_template('user/login.html', form=form, error=error)

@app.route('/logout/')
def logout():
    logout_user()
    return redirect(url_for('login'))

We then decorate every view function that requires a valid user, in app.py:

from flask.ext.login import login_required

@app.route('/appointments/')
@login_required
def appointment_list():
    # ...

How it works...

On login_user, Flask-Login gets the user object's ID from User.get_id and stores it in Flask's session. Flask-Login then sets a before_request handler to load the user instance into the current_user object, using the load_user hook we provide. The logout_user function then removes the relevant bits from the session. If no user is logged in, then current_user will provide an anonymous user object which results in current_user.is_anonymous() returning True and current_user.is_authenticated() returning False, which allows application and template code to base logic on whether the user is valid. (Flask-Login puts current_user into all template contexts.) You can use User.is_active to make user accounts invalid without actually deleting them, by returning False as appropriate. View functions decorated with login_required will redirect the user to the login view if the current user is not authenticated, without calling the decorated function.

There's more...

Flask's session supports display of messages and protection against request forgery.
Flashing messages

When you want to display a simple message to indicate a successful operation or a failure quickly, you can use Flask's flash messaging, which loads the message into the session until it is retrieved. In application code, inside request handling code:

from flask import flash

flash('Successfully did that thing.', 'success')

In template code, where you can use the 'success' category for conditional display:

{% for cat, m in get_flashed_messages(with_categories=true) %}
<div class="alert">{{ m }}</div>
{% endfor %}

Cross-site request forgery protection

Malicious web code will attempt to forge data-altering requests for other web services. To protect against forgery, you can load a randomized token into the session and into the HTML form, and reject the request when the two do not match. This is provided in the Flask-SeaSurf extension, pythonhosted.org/Flask-SeaSurf/, or the Flask-WTF extension (which integrates WTForms), pythonhosted.org/Flask-WTF/.

Summary

This article explained how to keep users logged in for ongoing requests after authentication. It shed light on how Flask provides a session object, which behaves like a Python dictionary, and persists automatically across requests. It also spoke about coding in a Flask application. We got acquainted with flashing messages and cross-site request forgery protection.

Resources for Article:

Further resources on this subject:

Python Testing: Installing the Robot Framework [Article]
Getting Started with Spring Python [Article]
Creating Skeleton Apps with Coily in Spring Python [Article]
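The token check described under cross-site request forgery protection boils down to two steps: store a random token in the session and render it into the form, then compare the submitted copy against the session copy in constant time. A minimal sketch using only the standard library (the function names are hypothetical, not taken from Flask-SeaSurf or Flask-WTF):

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Generate a random token, store it in the session, return it for the form."""
    token = secrets.token_hex(16)
    session['_csrf_token'] = token
    return token

def validate_csrf(session, submitted):
    """Constant-time comparison of the submitted token against the session copy."""
    stored = session.get('_csrf_token', '')
    return hmac.compare_digest(stored, submitted)

session = {}
token = issue_csrf_token(session)
print(validate_csrf(session, token))           # → True
print(validate_csrf(session, 'forged-token'))  # → False
```

hmac.compare_digest() is used instead of == so that the comparison time does not leak how many leading characters of a forged token were correct.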
Asynchronous Communication between Components

Packt
09 Oct 2015
12 min read
In this article by Andreas Niedermair, the author of the book Mastering ServiceStack, we will see the communication between asynchronous components. The recent release of .NET has added several new ways to further embrace asynchronous and parallel processing by introducing the Task Parallel Library (TPL) and async and await. (For more resources related to this topic, see here.) The need for asynchronous processing has been there since the early days of programming. Its main concept is to offload the processing to another thread or process to release the calling thread from waiting, and it has become a standard model since the rise of GUIs. In such interfaces only one thread is responsible for drawing the GUI; it must not be blocked in order to remain available and also to avoid putting the application in a non-responding state. This paradigm is a core concept in distributed systems: at some point, long-running operations are offloaded to a separate component, either to overcome blocking or to avoid resource bottlenecks by using dedicated machines, which also makes the processing more robust against unexpected application pool recycling and other such issues. A synonym for "fire-and-forget" is "one-way", which is also reflected by the design of static routes of ServiceStack endpoints, where the default is /{format}/oneway/{service}. Asynchronism adds a whole new level of complexity to our processing chain, as some callers might depend on a return value. This problem can be overcome by adding a callback or another event to your design. Messaging, or in general a producer-consumer chain, is a fundamental design pattern, which can be applied within the same process or inter-process, on the same machine or across machines, to decouple components. Consider the following architecture: The client issues a request to the service, which processes the message and returns a response.
The server is known and is directly bound to the client, which makes an on-the-fly addition of servers practically impossible. You'd need to reconfigure the clients to reflect the collection of servers on every change and implement a distribution logic for requests. Therefore, a new component is introduced, which acts as a broker (without any processing of the message, except delivery) between the client and service to decouple the service from the client. This gives us the opportunity to introduce more services for heavy load scenarios by simply registering a new instance with the broker, as shown in the following figure. I left out the clustering (scaling) of brokers and also the routing of messages on purpose at this stage of the introduction. In many cross-process scenarios a database is introduced as a broker, which is constantly polled by services (and clients, if there's a response involved) to check whether there's a message to be processed or not. Adding a database as a broker and implementing your own logic can be absolutely fine for basic systems, but for more advanced scenarios it lacks some essential features that Message Queues ship with.

Scalability: Decoupling is the biggest step towards a robust design, as it introduces the possibility to add more processing nodes to your data flow.

Resilience: Messages are guaranteed to be delivered and processed, as automatic retrying is available for non-acknowledged (processed) messages. If the retry count is exceeded, failed messages are stored in a Dead Letter Queue (DLQ) to be inspected later, and are requeued after fixing the issue that caused the failure. In case of a partial failure of your infrastructure, clients can still produce messages that get delivered and processed as soon as there is even a single consumer back online.
Pushing instead of polling: This is where asynchronism comes into play, as clients do not constantly poll for messages; instead, the broker pushes to them when there's a new message in their subscribed queue. This eliminates the spinning and waiting of a polling timer that ticks, say, only every 10 seconds.

Guaranteed order: Most Message Queues offer a guaranteed order of processing under defined conditions (mostly FIFO).

Load balancing: With multiple services registered for messages, there is an inherent load balancing so that heavy load scenarios can be handled better. In addition to this round-robin routing there are other routing logics, such as smallest-mailbox, tail-chopping, or random routing.

Message persistence: Message Queues can be configured to persist their data to disk and even survive restarts of the host on which they are running. To overcome the downtime of the Message Queue you can even set up a cluster to offload the demand to other brokers while restarting a single node.

Built-in priority: Message Queues usually have separate queues for different messages and even provide a separate in-queue for prioritized messages.

There are many more features, such as time to live, security, and batching modes, which we will not cover as they are outside the scope of ServiceStack. In the following example we will refer to two basic DTOs:

public class Hello : ServiceStack.IReturn<HelloResponse>
{
    public string Name { get; set; }
}

public class HelloResponse
{
    public string Result { get; set; }
}

The Hello class is used to send a Name to a consumer that generates a message, which will be enqueued in the Message Queue as well.
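The push-instead-of-poll behavior described above can be sketched in-process with the standard library: the consumer blocks on a queue and is woken only when a message arrives, rather than waking up on a timer to check. The broker variable is an illustrative stand-in for a real Message Queue, not ServiceStack code:

```python
import queue
import threading

broker = queue.Queue()  # illustrative stand-in for the broker between client and service

def consumer(results):
    """Block on the queue; woken only when the broker delivers a message."""
    while True:
        message = broker.get()   # no polling loop, no timer
        if message is None:      # sentinel used here to shut the worker down
            break
        results.append('Hello {0}'.format(message['Name']))

results = []
worker = threading.Thread(target=consumer, args=(results,))
worker.start()

broker.put({'Name': 'Demo'})  # the producer fires and forgets
broker.put(None)              # tell the worker to stop
worker.join()

print(results)  # → ['Hello Demo']
```

The producer returns immediately after put(); the consumer thread spends its idle time blocked, consuming no CPU, which is the property a pushing broker gives you over a polled database table.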
RabbitMQ

RabbitMQ is a mature broker built on top of the Advanced Message Queuing Protocol (AMQP), which makes it possible to solve even more complex scenarios, as shown here: The messages will survive restarts of the RabbitMQ service, and the additional guarantee of delivery is accomplished by depending upon an acknowledgement of the receipt (and processing) of the message; by default this is done by ServiceStack for typical scenarios. The client of this Message Queue is located in the ServiceStack.RabbitMq NuGet package (it uses the official client in the RabbitMQ.Client package under the hood). You can add additional protocols to RabbitMQ, such as Message Queue Telemetry Transport (MQTT) and Streaming Text Oriented Messaging Protocol (STOMP), with plugins to ease Interop scenarios. Due to its complexity, we will focus on an abstracted interaction with the broker. There are many books and articles available for a deeper understanding of RabbitMQ. A quick overview of the covered scenarios is available at https://www.rabbitmq.com/getstarted.html. The method of publishing a message with RabbitMQ does not differ much from RedisMQ:

using ServiceStack;
using ServiceStack.RabbitMq;

using (var rabbitMqServer = new RabbitMqServer())
{
    using (var messageProducer = rabbitMqServer.CreateMessageProducer())
    {
        var hello = new Hello { Name = "Demo" };
        messageProducer.Publish(hello);
    }
}

This will create a Hello object and publish it to the corresponding queue in RabbitMQ.
To retrieve this message, we need to register a handler, as shown here:

using System;
using ServiceStack;
using ServiceStack.RabbitMq;
using ServiceStack.Text;

var rabbitMqServer = new RabbitMqServer();
rabbitMqServer.RegisterHandler<Hello>(message =>
{
    var hello = message.GetBody();
    var name = hello.Name;
    var result = "Hello {0}".Fmt(name);
    result.Print();
    return null;
});
rabbitMqServer.Start();

"Listening for hello messages".Print();
Console.ReadLine();

rabbitMqServer.Dispose();

This registers a handler for Hello objects and prints a message to the console. In favor of a straightforward example we are omitting all the parameters with default values of the constructor of RabbitMqServer, which will connect us to the local instance at port 5672. To change this, you can either provide a connectionString parameter (and optional credentials) or use a RabbitMqMessageFactory object to customize the connection.

Setup

Setting up RabbitMQ involves a bit of effort. At first you need to install Erlang from http://www.erlang.org/download.html, which is the runtime for RabbitMQ due to its functional and concurrent nature. Then you can grab the installer from https://www.rabbitmq.com/download.html, which will set RabbitMQ up and running as a service with a default configuration.

Processing chain

Due to its complexity, the processing chain with any mature Message Queue is different from what you know from RedisMQ. Exchanges are introduced in front of queues to route the messages to their respective queues according to their routing keys: The default exchange name is mx.servicestack (defined in ServiceStack.Messaging.QueueNames.Exchange) and is used in any Publish call on an IMessageProducer or IMessageQueueClient object. With IMessageQueueClient.Publish you can inject a routing key (queueName parameter) to customize the routing of a queue.
Failed messages are published to the ServiceStack.Messaging.QueueNames.ExchangeDlq (mx.servicestack.dlq) and routed to queues with the name mq:{type}.dlq. Successful messages are published to ServiceStack.Messaging.QueueNames.ExchangeTopic (mx.servicestack.topic) and routed to the queue mq:{type}.outq. Additionally, there's a priority queue alongside the in-queue, with the name mq:{type}.priority. If you interact with RabbitMQ on a lower level, you can directly publish to queues and leave the routing via an exchange out of the picture. Each queue has features to define whether the queue is durable, deletes itself after the last consumer disconnected, or which exchange is to be used to publish dead messages with which routing key. More information on the concepts, different exchange types, queues, and acknowledging messages can be found at https://www.rabbitmq.com/tutorials/amqp-concepts.html.

Replying directly back to the producer

Messages published to a queue are dequeued in FIFO mode, hence there is no guarantee that responses are delivered to the issuer of the initial message. To force a response to the originator you can make use of the ReplyTo property of a message:

using System;
using ServiceStack;
using ServiceStack.Messaging;
using ServiceStack.RabbitMq;
using ServiceStack.Text;

var rabbitMqServer = new RabbitMqServer();
var messageQueueClient = rabbitMqServer.CreateMessageQueueClient();
var queueName = messageQueueClient.GetTempQueueName();

var hello = new Hello { Name = "reply to originator" };
messageQueueClient.Publish(new Message<Hello>(hello)
{
    ReplyTo = queueName
});

var message = messageQueueClient.Get<HelloResponse>(queueName);
var helloResponse = message.GetBody();

This code is more or less identical to the RedisMQ approach, but it does something different under the hood. The call to messageQueueClient.GetTempQueueName creates a temporary queue, whose name is generated by ServiceStack.Messaging.QueueNames.GetTempQueueName.
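The ReplyTo round trip can be sketched with in-memory queues: the producer creates a private reply queue, names it in the message, and the consumer publishes its response there. The queue names and message shapes below are illustrative, not ServiceStack's:

```python
import queue
import threading

inq = queue.Queue()   # stands in for the mq:Hello.inq queue
temp_queues = {}      # named temporary reply queues, as GetTempQueueName would create

def service():
    """Consume one message and publish the response to its ReplyTo queue."""
    message = inq.get()
    response = {'Result': 'Hello {0}'.format(message['Body']['Name'])}
    temp_queues[message['ReplyTo']].put(response)

reply_to = 'mq:tmp:1'              # hypothetical temporary queue name
temp_queues[reply_to] = queue.Queue()

threading.Thread(target=service).start()
inq.put({'Body': {'Name': 'reply to originator'}, 'ReplyTo': reply_to})

response = temp_queues[reply_to].get()  # blocks until the service replies
print(response['Result'])  # → Hello reply to originator
```

Because the reply queue belongs to a single producer, the FIFO behavior of the shared in-queue no longer matters: whatever lands on the reply queue is, by construction, for this originator.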
This temporary queue does not survive a restart of RabbitMQ, and gets deleted as soon as the consumer disconnects. As each queue is a separate Erlang process, you may encounter the process limit of Erlang and the maximum number of file descriptors of your OS.

Broadcasting a message

In many scenarios a broadcast to multiple consumers is required, for example when you need to attach multiple loggers to a system; this calls for a lower-level implementation. The solution to this requirement is to create a fan-out exchange that forwards the message to all bound queues instead of just one, where each queue is consumed exclusively by one consumer, as shown:

using ServiceStack;
using ServiceStack.Messaging;
using ServiceStack.RabbitMq;

var fanoutExchangeName = string.Concat(QueueNames.Exchange, ".", ExchangeType.Fanout);
var rabbitMqServer = new RabbitMqServer();
var messageProducer = (RabbitMqProducer) rabbitMqServer.CreateMessageProducer();
var channel = messageProducer.Channel;
channel.ExchangeDeclare(exchange: fanoutExchangeName,
    type: ExchangeType.Fanout,
    durable: true,
    autoDelete: false,
    arguments: null);

With the cast to RabbitMqProducer we get access to lower-level operations. We declare an exchange with the name mx.servicestack.fanout, which is durable and does not get deleted. Now, we need to bind a temporary and exclusive queue to the exchange:

var messageQueueClient = (RabbitMqQueueClient) rabbitMqServer.CreateMessageQueueClient();
var queueName = messageQueueClient.GetTempQueueName();
channel.QueueBind(queue: queueName,
    exchange: fanoutExchangeName,
    routingKey: QueueNames<Hello>.In);

The call to messageQueueClient.GetTempQueueName() creates a temporary queue, which lives only as long as its single consumer stays connected.
This queue is bound to the fan-out exchange with the routing key mq:Hello.inq, as shown here: To publish the messages we need to use the RabbitMqProducer object (messageProducer):

var hello = new Hello { Name = "Broadcast" };
var message = new Message<Hello>(hello);
messageProducer.Publish(queueName: QueueNames<Hello>.In,
    message: message,
    exchange: fanoutExchangeName);

Even though the first parameter of Publish is named queueName, it is propagated as the routingKey to the underlying PublishMessage method call. This will publish the message on the newly generated exchange with mq:Hello.inq as the routing key: Now, we need to encapsulate the handling of the message:

var messageHandler = new MessageHandler<Hello>(rabbitMqServer, message =>
{
    var hello = message.GetBody();
    var name = hello.Name;
    name.Print();
    return null;
});

The MessageHandler<T> class is used internally in all the messaging solutions and takes care of retries and replies. Now, we need to connect the message handler to the queue:

using System;
using System.IO;
using System.Threading.Tasks;
using RabbitMQ.Client;
using RabbitMQ.Client.Exceptions;
using ServiceStack.Messaging;
using ServiceStack.RabbitMq;

var consumer = new RabbitMqBasicConsumer(channel);
channel.BasicConsume(queue: queueName,
    noAck: false,
    consumer: consumer);

Task.Run(() =>
{
    while (true)
    {
        BasicGetResult basicGetResult;
        try
        {
            basicGetResult = consumer.Queue.Dequeue();
        }
        catch (EndOfStreamException)
        {
            // this is ok
            return;
        }
        catch (OperationInterruptedException)
        {
            // this is ok
            return;
        }
        var message = basicGetResult.ToMessage<Hello>();
        messageHandler.ProcessMessage(messageQueueClient, message);
    }
});

This creates a RabbitMqBasicConsumer object, which is used to consume the temporary queue. To process messages we try to dequeue from the Queue property in a separate task. This example does not handle disconnects and reconnects from the server and does not integrate with the services (however, both can be achieved).
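Stripped of the RabbitMQ plumbing, the fan-out behavior itself is simple: every queue bound to the exchange receives its own copy of each published message. A minimal in-memory sketch (FanoutExchange is an illustrative class, not part of ServiceStack or RabbitMQ.Client):

```python
import queue

class FanoutExchange:
    """Copy every published message to all bound queues, like a fan-out exchange."""
    def __init__(self):
        self.queues = []

    def bind(self):
        q = queue.Queue()        # one exclusive queue per consumer
        self.queues.append(q)
        return q

    def publish(self, message):
        for q in self.queues:    # every bound queue gets its own copy
            q.put(message)

exchange = FanoutExchange()
logger_a = exchange.bind()
logger_b = exchange.bind()

exchange.publish({'Name': 'Broadcast'})

print(logger_a.get()['Name'], logger_b.get()['Name'])  # → Broadcast Broadcast
```

Contrast this with a direct exchange, where a message with a given routing key lands on exactly one queue and competing consumers share the load rather than each receiving a copy.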
Integrate RabbitMQ in your service

The integration of RabbitMQ in a ServiceStack service does not differ much from RedisMQ. All you have to do is adapt the Configure method of your host:

using Funq;
using ServiceStack;
using ServiceStack.Messaging;
using ServiceStack.RabbitMq;

public override void Configure(Container container)
{
    container.Register<IMessageService>(arg => new RabbitMqServer());
    container.Register<IMessageFactory>(arg => new RabbitMqMessageFactory());
    var messageService = container.Resolve<IMessageService>();
    messageService.RegisterHandler<Hello>(this.ServiceController.ExecuteMessage);
    messageService.Start();
}

The registration of an IMessageService is needed for the rerouting of the handlers to your service; the registration of an IMessageFactory is relevant if you want to publish a message in your service with PublishMessage.

Summary

In this article the messaging pattern was introduced, along with all the available clients of existing Message Queues.

Resources for Article:

Further resources on this subject:

ServiceStack applications [article]
Web API and Client Integration [article]
Building a Web Application with PHP and MariaDB – Introduction to caching [article]
Introduction to nginx

Packt
31 Jul 2013
8 min read
(For more resources related to this topic, see here.)

So, what is nginx?

The best way to describe nginx (pronounced engine-x) is as an event-based multi-protocol reverse proxy. This sounds fancy, but it's not just buzzwords; it actually affects how we approach configuring nginx. It also highlights some of the flexibility that nginx offers. While it is often used as a web server and an HTTP reverse proxy, it can also be used as an IMAP reverse proxy or even a raw TCP reverse proxy. Thanks to the plug-in ready code structure, we can utilize a large number of first and third party modules to implement a diverse amount of features to make nginx an ideal fit for many typical use cases. A more accurate description would be to say that nginx is a reverse proxy first, and a web server second. I say this because it can help us visualize the request flow through the configuration file and rationalize how to achieve the desired configuration of nginx. The core difference this creates is that nginx works with URIs instead of files and directories, and based on that determines how to process the request. This means that when we configure nginx, we tell it what should happen for a certain URI rather than what should happen for a certain file on the disk. A beneficial part of nginx being a reverse proxy is that it fits into a large number of server setups, and can handle many things that other web servers simply aren't designed for. A popular question is "Why even bother with nginx when Apache httpd is available?" The answer lies in the way the two programs are designed. The majority of Apache setups are done using prefork mode, where we spawn a certain number of processes and then embed our dynamic language in each process. This setup is synchronous, meaning that each process can handle one request at a time, whether that connection is for a PHP script or an image file.
In contrast, nginx uses an asynchronous event-based design where each spawned process can handle thousands of concurrent connections. The downside here is that nginx will, for security and technical reasons, not embed programming languages into its own process - this means that to handle those we will need to reverse proxy to a backend, such as Apache, PHP-FPM, and so on. Thankfully, as nginx is a reverse proxy first and foremost, this is extremely easy to do and still allows us major benefits, even when keeping Apache in use. Let's take a look at a use case where Apache is used as an application server described earlier rather than just a web server. We have embedded PHP, Perl, or Python into Apache, which has the primary disadvantage of each request becoming costly. This is because the Apache process is kept busy until the request has been fully served, even if it's a request for a static file. Our online service has gotten popular and we now find that our server cannot keep up with the increased demand. In this scenario introducing nginx as a spoon-feeding layer would be ideal. With nginx sitting between our end user and Apache as a spoon-feeding layer, when a request comes in, nginx will reverse proxy it to Apache if it is for a dynamic file, while it will handle any static file requests itself. This means that we offload a lot of the request handling from the expensive Apache processes to the more lightweight nginx processes, and increase the number of end users we can serve before having to spend money on more powerful hardware. Another example scenario is where we have an application being used from all over the world. We don't have any static files so we can't easily offload a number of requests from Apache. In this use case, our PHP process is busy from the time the request comes in until the user has finished downloading the response.
Sadly, not everyone in the world has fast internet and, as a result, the sending process could be busy for a relatively significant period of time. Let's assume our visitor is on an old 56k modem and has a maximum download speed of 5 KB per second; it will take them five seconds to download a 25 KB gzipped HTML file generated by PHP. That's five seconds where our process cannot handle any other request. When we introduce nginx into this setup, we have PHP spending only microseconds generating the response but have nginx spend five seconds transferring it to the end user. Because nginx is asynchronous, it will happily handle other connections in the meantime, and thus we significantly increase the number of concurrent requests we can handle. In the previous two examples I used scenarios where nginx was used in front of Apache, but naturally this is not a requirement. nginx is capable of reverse proxying via, for instance, FastCGI, UWSGI, SCGI, HTTP, or even TCP (through a plugin), enabling backends such as PHP-FPM, Gunicorn, Thin, and Passenger.

Quick start – Creating your first virtual host

It's finally time to get nginx up and running. To start out, let's quickly review the configuration file. If you installed via a system package, the default configuration file location is most likely /etc/nginx/nginx.conf. If you installed via source and didn't change the path prefix, nginx installs itself into /usr/local/nginx and places nginx.conf in a /conf subdirectory. Keep this file open as a reference to help visualize many of the things described in this article.

Step 1 – Directives and contexts

To understand what we'll be covering in this section, let me first introduce a bit of terminology that the nginx community at large uses. Two central concepts to the nginx configuration file are those of directives and contexts. A directive is basically just an identifier for the various configuration options.
Contexts refer to the different sections of the nginx configuration file. This term is important because the documentation often states which context a directive is allowed within. A glance at the standard configuration file should reveal that nginx uses a layered configuration format where blocks are denoted by curly brackets {}. These blocks are what are referred to as contexts. The topmost context is called main, and is not denoted as a block but is rather the configuration file itself. The main context has only a few directives we're really interested in, the two major ones being worker_processes and user. These directives handle how many worker processes nginx should run and which user/group nginx should run them under. Within the main context there are two possible subcontexts, the first one being called events. This block handles directives that deal with the event-polling nature of nginx. Mostly we can ignore every directive in here, as nginx can automatically configure this to be the most optimal; however, there's one directive which is interesting, namely worker_connections. This directive controls the number of connections each worker can handle. It's important to note here that nginx is a terminating proxy, so if you HTTP proxy to a backend, such as Apache httpd, that will use up two connections. The second subcontext is the interesting one called http. This context deals with everything related to HTTP, and this is what we will be working with almost all of the time. While there are directives that are configured in the http context, for now we'll focus on a subcontext within http called server. The server context is the nginx equivalent of a virtual host. This context is used to handle configuration directives based on the host name your sites are under. Within the server context, we have another subcontext called location. The location context is what we use to match the URI.
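The nesting just described can be visualized in a skeleton configuration; the values here are illustrative, not recommendations:

```nginx
# main context: the configuration file itself
user             www-data;
worker_processes 4;

events {
    # directives dealing with nginx's event polling
    worker_connections 1024;
}

http {
    # everything HTTP-related lives in here
    server {
        # one virtual host, matched by hostname
        listen      80;
        server_name example.com;

        location / {
            # matched against the request URI
            root /var/www/website;
        }
    }
}
```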
Basically, a request to nginx will flow through each of our contexts, matching first the server block with the hostname provided by the client, and secondly the location context with the URI provided by the client. Depending on the installation method, there might not be any server blocks in the nginx.conf file. Typically, system package managers take advantage of the include directive, which allows us to do an in-place inclusion into our configuration file. This allows us to separate out each virtual host and keep our configuration file more organized. If there aren't any server blocks, check the bottom of the file for an include directive and check the directory from which it includes; it should have a file which contains a server block.

Step 2 – Define your first virtual host

Finally, let us define our first server block!

server {
    listen 80;
    server_name example.com;
    root /var/www/website;
}

That is basically all we need, and strictly speaking, we don't even need to define which port to listen on, as port 80 is the default. However, it's generally a good practice to keep it in there should we want to search for all virtual hosts on port 80 later on.

Summary

This article provided the details about the important aspects of nginx. It also briefly covered the configuration of our virtual host using nginx in two simple steps, along with a configuration example.

Resources for Article:

Further resources on this subject:

Nginx HTTP Server FAQs [Article]
Nginx Web Services: Configuration and Implementation [Article]
Using Nginx as a Reverse Proxy [Article]

Packt
15 Nov 2013
5 min read

RESTful Web Services – Server-Sent Events (SSE)

Getting started

Generally, the flow of a web service is initiated by the client sending a request for a resource to the server. This is the traditional way of consuming web services.

Traditional Flow

Here, the browser or Jersey client initiates the request for data from the server, and the server provides a response along with the data. Every time a client initiates a request for the resource, the server may not have new data to return. This becomes difficult in an application where real-time data needs to be shown: even though there is no new data on the server, the client needs to check for it every time.

Nowadays, there is a requirement for the server to send data without the client's request. For this to happen, the client and server need to stay connected, so the server can push data to the client. This is why these are termed Server-Sent Events. In these events, the connection created initially between the client and server is not released after the request. The server maintains the connection and pushes data to the respective client when required.

Server-Sent Event Flow

In the Server-Sent Event Flow diagram, when a browser or a Jersey client initiates a request to establish a connection with the server using EventSource, the server, which is always listening for new connections, opens a new connection and maintains it in a queue. How connections are maintained depends upon the implementation of the business logic. SSE creates a single unidirectional connection, so only a single connection is established between the client and server. After the connection is successfully established, the client listens for new events from the server. Whenever a new event occurs on the server side, the server broadcasts the event, along with its data, to the specific open HTTP connections.
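What the server actually pushes over that open connection is the text/event-stream wire format defined for SSE. As a hedged illustration, the helper below is not part of Jersey or of this article's code; it is a few lines of plain Java showing how `data:` fields are framed (one "field: value" per line, events separated by a blank line) and how a client-side parser might extract them:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the SSE wire format; class and method names are
// our own, not Jersey API.
public class SseWireFormat {

    // Extracts the "data" payloads from a raw event-stream chunk.
    // Events are separated by a blank line; each line is "field: value".
    public static List<String> dataPayloads(String stream) {
        List<String> payloads = new ArrayList<>();
        for (String event : stream.split("\n\n")) {
            StringBuilder data = new StringBuilder();
            for (String line : event.split("\n")) {
                if (line.startsWith("data:")) {
                    if (data.length() > 0) data.append('\n');
                    data.append(line.substring(5).trim());
                }
            }
            if (data.length() > 0) payloads.add(data.toString());
        }
        return payloads;
    }

    public static void main(String[] args) {
        String raw = "event: message\ndata: hello\n\ndata: world\n\n";
        System.out.println(dataPayloads(raw)); // [hello, world]
    }
}
```

Lines starting with fields other than data (such as event or id) carry metadata and are ignored by this sketch.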
In modern browsers that support HTML5, the onmessage method of EventSource is responsible for handling new events received from the server; in the case of Jersey clients, the onEvent method of EventSource handles new events from the server.

Implementing Server-Sent Events (SSE)

To use SSE, we need to register SseFeature on both the client and server sides. By doing so, the client/server gets connected through SseFeature, which is used while data traverses the network.

SSE: Internal Working

In the SSE: Internal Working diagram, we assume that the client and server are connected. When a new event is generated, the server creates an OutboundEvent instance that holds the chunked output, which in turn contains the data in a serialized format. OutboundEventWriter is responsible for serializing the data on the server side. We need to specify the media type of the data in OutboundEvent; there is no restriction to specific media types. On the client side, InboundEvent is responsible for handling the incoming data from the server. InboundEvent receives the chunked input that contains the serialized data, which is deserialized using InboundEventReader. Using SseBroadcaster, we are able to broadcast events to multiple clients that are connected to the server.

Let's look at the example, which shows how to create SSE web services and broadcast the events:

@ApplicationPath("services")
public class SSEApplication extends ResourceConfig {
    public SSEApplication() {
        super(SSEResource.class, SseFeature.class);
    }
}

Here, we registered the SseFeature module and the SSEResource root-resource class with the server.
private static final SseBroadcaster BROADCASTER = new SseBroadcaster();
……
@GET
@Path("sseEvents")
@Produces(SseFeature.SERVER_SENT_EVENTS)
public EventOutput getConnection() {
    final EventOutput eventOutput = new EventOutput();
    BROADCASTER.add(eventOutput);
    return eventOutput;
}
……

In the SSEResource root class, we need to create a resource method that allows clients to establish a connection and persist it. Here, we keep the connection in the BROADCASTER instance of the SseBroadcaster class. EventOutput manages a specific client connection; SseBroadcaster is simply responsible for accommodating a group of EventOutput objects, that is, the client connections.

……
@POST
@Consumes(MediaType.APPLICATION_FORM_URLENCODED)
public void post(@FormParam("name") String name) {
    BROADCASTER.broadcast(new OutboundEvent.Builder()
        .data(String.class, name)
        .build());
}
……

When the post method is consumed, we create a new event and broadcast it to the clients registered in the BROADCASTER instance. The OutboundEvent instance contains the data(MediaType, Object) method that is initialized with a specific media type and the actual data; we can provide any media type. With the build() method, the data is serialized internally by the OutboundEventWriter class. When broadcast(OutboundEvent) is called, SseBroadcaster internally pushes the data to all registered EventOutputs, that is, to the clients connected to SseBroadcaster.

At times, there is a scenario where the client and server have been connected and, after some time, the client gets disconnected. In this case, SseBroadcaster automatically handles the client connection; that is, it determines whether the connection needs to be maintained. When any client connection is closed, the broadcaster detects the EventOutput, then frees the connection and the resources held by that EventOutput connection.
We also covered how to create SSE web services and implement the Jersey client in order to consume SSE using different programmatic models.

Useful Links:
Setting up the most Popular Journal Articles in your Personalized Community in Liferay Portal
Understanding WebSockets and Server-sent Events in Detail
RESS - The idea and the Controversies
Peter Zignego
13 Oct 2016
5 min read

Server-side Swift: Building a Slack Bot, Part 2

In Part 1 of this series, I introduced you to SlackKit and Zewo, which allow us to build and deploy a Slack bot written in Swift to a Linux server. Here in Part 2, we will finish the app, showing all of the Swift code. We will also show how to get an API token, how to test the app and deploy it on Heroku, and finally how to launch it.

Show Me the Swift Code!

Finally, some Swift code! To create our bot, we need to edit our main.swift file to contain our bot logic:

import String
import SlackKit

class Leaderboard: MessageEventsDelegate {
    // A dictionary to hold our leaderboard
    var leaderboard: [String: Int] = [String: Int]()
    let atSet = CharacterSet(characters: ["@"])
    // A SlackKit client instance
    let client: SlackClient

    // Initialize the leaderboard with a valid Slack API token
    init(token: String) {
        client = SlackClient(apiToken: token)
        client.messageEventsDelegate = self
    }

    // Enum to hold commands the bot knows
    enum Command: String {
        case Leaderboard = "leaderboard"
    }

    // Enum to hold logic that triggers certain bot behaviors
    enum Trigger: String {
        case PlusPlus = "++"
        case MinusMinus = "--"
    }

    // MARK: MessageEventsDelegate
    // Listen to the messages that are coming in over the Slack RTM connection
    func messageReceived(message: Message) {
        listen(message: message)
    }

    func messageSent(message: Message) {}
    func messageChanged(message: Message) {}
    func messageDeleted(message: Message?) {}

    // MARK: Leaderboard Internal Logic
    private func listen(message: Message) {
        // If a message contains our bot's user ID and a recognized command, handle that command
        if let id = client.authenticatedUser?.id, text = message.text {
            if text.lowercased().contains(query: Command.Leaderboard.rawValue) && text.contains(query: id) {
                handleCommand(command: .Leaderboard, channel: message.channel)
            }
        }
        // If a message contains a trigger value, handle that trigger
        if message.text?.contains(query: Trigger.PlusPlus.rawValue) == true {
            handleMessageWithTrigger(message: message, trigger: .PlusPlus)
        }
        if message.text?.contains(query: Trigger.MinusMinus.rawValue) == true {
            handleMessageWithTrigger(message: message, trigger: .MinusMinus)
        }
    }

    // Text parsing can be messy when you don't have Foundation...
    private func handleMessageWithTrigger(message: Message, trigger: Trigger) {
        if let text = message.text, start = text.index(of: "@"), end = text.index(of: trigger.rawValue) {
            let string = String(text.characters[start...end].dropLast().dropFirst())
            let users = client.users.values.filter { $0.id == self.userID(string: string) }
            // If the receiver of the trigger is a user, use their user ID
            if users.count > 0 {
                let idString = userID(string: string)
                initalizationForValue(dictionary: &leaderboard, value: idString)
                scoringForValue(dictionary: &leaderboard, value: idString, trigger: trigger)
            // Otherwise just store the receiver value as is
            } else {
                initalizationForValue(dictionary: &leaderboard, value: string)
                scoringForValue(dictionary: &leaderboard, value: string, trigger: trigger)
            }
        }
    }

    // Handle recognized commands
    private func handleCommand(command: Command, channel: String?) {
        switch command {
        case .Leaderboard:
            // Send a message to the channel with the leaderboard attached
            if let id = channel {
                client.webAPI.sendMessage(channel: id, text: "Leaderboard", linkNames: true,
                    attachments: [constructLeaderboardAttachment()],
                    success: { (response) in },
                    failure: { (error) in
                        print("Leaderboard failed to post due to error: \(error)")
                    })
            }
        }
    }

    private func initalizationForValue(dictionary: inout [String: Int], value: String) {
        if dictionary[value] == nil {
            dictionary[value] = 0
        }
    }

    private func scoringForValue(dictionary: inout [String: Int], value: String, trigger: Trigger) {
        switch trigger {
        case .PlusPlus:
            dictionary[value]? += 1
        case .MinusMinus:
            dictionary[value]? -= 1
        }
    }

    // MARK: Leaderboard Interface
    private func constructLeaderboardAttachment() -> Attachment? {
        let

Great! But we'll need to replace the dummy API token with the real deal before anything will work.
Getting an API Token

We need to create a bot integration in Slack. You'll need a Slack instance that you have administrator access to. If you don't already have one of those to play with, go sign up; Slack is free for small teams:

1. Create a new bot here.
2. Enter a name for your bot. I'm going to use "leaderbot".
3. Click on "Add Bot Integration".
4. Copy the API token that Slack generates and replace the placeholder token at the bottom of main.swift with it.

Testing 1,2,3…

Now that we have our API token, we're ready to do some local testing. Back in Xcode, select the leaderbot command-line application target and run your bot (⌘+R). When we go and look at Slack, our leaderbot's activity indicator should show that it's online. It's alive!

To ensure that it's working, we should give our helpful little bot some karma points:

@leaderbot++

And ask it to see the leaderboard:

@leaderbot leaderboard

Head in the Clouds

Now that we've verified that our leaderboard bot works locally, it's time to deploy it. We are deploying on Heroku, so if you don't have an account, go and sign up for a free one.

First, we need to add a Procfile for Heroku. Back in the terminal, run:

echo slackbot: .build/debug/leaderbot > Procfile

Next, let's check in our code:

git init
git add .
git commit -am 'leaderbot powering up'

Finally, we'll set up Heroku:

1. Install the Heroku toolbelt.
2. Log in to Heroku in your terminal: heroku login
3. Create our application on Heroku and set our buildpack: heroku create --buildpack https://github.com/pvzig/heroku-buildpack-swift.git leaderbot
4. Set up our Heroku remote: heroku git:remote -a leaderbot
5. Push to master: git push heroku master

Once you push to master, you'll see Heroku going through the process of building your application.

Launch!

When the build is complete, all that's left to do is to run our bot:

heroku run:detached slackbot

Like when we tested locally, our bot should become active and respond to our commands!

You're Done!
Congratulations, you've successfully built and deployed a Slack bot written in Swift onto a Linux server!

Built With:

Jay: Pure-Swift JSON parser and formatter
kylef's Heroku buildpack for Swift
Open Swift: Open source cross-project standards for Swift
SlackKit: A Slack client library
Zewo: Open source libraries for modern server software

Disclaimer

The Linux version of SlackKit should be considered an alpha release. It's a fun tech demo to show what's possible with Swift on the server, not something to be relied upon. Feel free to report issues you come across.

About the author

Peter Zignego is an iOS developer in Durham, North Carolina, USA. He writes at bytesized.co, tweets at @pvzig, and freelances at Launch Software.

Packt
08 Oct 2009
6 min read

Schema Validation using SAX and DOM Parser with Oracle JDeveloper - XDK 11g

The choice of validation method depends on the additional functionality required in the validation application. SAXParser is recommended if SAX parsing event notification is required in addition to validation with a schema. DOMParser is recommended if the DOM tree structure of an XML document is required for random access and modification of the XML document.

Schema validation with a SAX parser

In this section we shall validate the example XML document catalog.xml with the XML schema document catalog.xsd, using the SAXParser class. Import the oracle.xml.parser.schema and oracle.xml.parser.v2 packages.

Creating a SAX parser

Create a SAXParser object and set the validation mode of the SAXParser object to SCHEMA_VALIDATION, as shown in the following listing:

SAXParser saxParser = new SAXParser();
saxParser.setValidationMode(XMLParser.SCHEMA_VALIDATION);

The different validation modes that may be set on a SAXParser are discussed in the following table, but we only need the SCHEMA-based validation modes:

Validation Mode - Description
NONVALIDATING - The parser does not validate the XML document.
PARTIAL_VALIDATION - The parser validates the complete or a partial XML document with a DTD or an XML schema, if specified.
DTD_VALIDATION - The parser validates the XML document with a DTD, if any.
SCHEMA_VALIDATION - The parser validates the XML document with an XML schema, if any is specified.
SCHEMA_LAX_VALIDATION - Validates the complete or partial XML document with an XML schema if the parser is able to locate a schema. The parser does not raise an error if a schema is not found.
SCHEMA_STRICT_VALIDATION - Validates the complete XML document with an XML schema if the parser is able to find a schema. If the parser is not able to find a schema, or if the XML document does not conform to the schema, an error is raised.

Next, create an XMLSchema object from the schema document with which the XML document is to be validated.
An XMLSchema object represents the DOM structure of an XML schema document and is created with an XSDBuilder class object. Create an XSDBuilder object and invoke the build(InputSource) method of the XSDBuilder object to obtain an XMLSchema object. The InputSource object is created with an InputStream object created from the example XML schema document, catalog.xsd. As discussed before, we have used an InputSource object because most SAX implementations are InputSource based. The procedure to obtain an XMLSchema object is shown in the following listing:

XSDBuilder builder = new XSDBuilder();
InputStream inputStream = new FileInputStream(new File("catalog.xsd"));
InputSource inputSource = new InputSource(inputStream);
XMLSchema schema = builder.build(inputSource);

Set the XMLSchema object on the SAXParser object with the setXMLSchema(XMLSchema) method:

saxParser.setXMLSchema(schema);

Setting the error handler

As in the previous section, define an error handling class, CustomErrorHandler, that extends the DefaultHandler class. Create an object of type CustomErrorHandler, and register the ErrorHandler object with the SAXParser as shown here:

CustomErrorHandler errorHandler = new CustomErrorHandler();
saxParser.setErrorHandler(errorHandler);

Validating the XML document

The SAXParser class extends the XMLParser class, which provides the overloaded parse methods discussed in the following table to parse an XML document:

Method - Description
parse(InputSource in) - Parses an XML document from an org.xml.sax.InputSource object. The InputSource-based parse method is the preferred method because SAX parsers convert the input to InputSource no matter what the input type is.
parse(java.io.InputStream in) - Parses an XML document from an InputStream.
parse(java.io.Reader r) - Parses an XML document from a Reader.
parse(java.lang.String in) - Parses an XML document from a String URL for the XML document.
parse(java.net.URL url) - Parses an XML document from the specified URL object for the XML document.
Create an InputSource object from the XML document to be validated, and parse the XML document with the parse(InputSource) method:

InputStream inputStream = new FileInputStream(new File("catalog.xml"));
InputSource inputSource = new InputSource(inputStream);
saxParser.parse(inputSource);

Running the Java application

The validation application SAXValidator.java is listed in the following listing with additional explanations. First we declare the import statements for the classes that we need:

import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import oracle.xml.parser.schema.*;
import oracle.xml.parser.v2.*;
import java.io.IOException;
import java.io.InputStream;
import org.xml.sax.SAXException;
import org.xml.sax.SAXParseException;
import org.xml.sax.helpers.DefaultHandler;
import org.xml.sax.InputSource;

We define the Java class SAXValidator for SAX validation:

public class SAXValidator {

In the Java class we define a method validateXMLDocument:

public void validateXMLDocument(InputSource input) {
    try {

In the method we create a SAXParser and set the XML schema on the SAXParser:

SAXParser saxParser = new SAXParser();
saxParser.setValidationMode(XMLParser.SCHEMA_VALIDATION);
XMLSchema schema = getXMLSchema();
saxParser.setXMLSchema(schema);

To handle errors we create a custom error handler. We set the error handler on the SAXParser object, parse the XML document to be validated, and output any validation errors.
CustomErrorHandler errorHandler = new CustomErrorHandler();
saxParser.setErrorHandler(errorHandler);
saxParser.parse(input);
if (errorHandler.hasValidationError == true) {
    System.err.println("XML Document has Validation Error:" + errorHandler.saxParseException.getMessage());
} else {
    System.out.println("XML Document validates with XML schema");
}
} catch (IOException e) {
    System.err.println("IOException " + e.getMessage());
} catch (SAXException e) {
    System.err.println("SAXException " + e.getMessage());
}
}

We add the Java method getXMLSchema to create an XMLSchema object:

XMLSchema getXMLSchema() {
try {
    XSDBuilder builder = new XSDBuilder();
    InputStream inputStream = new FileInputStream(new File("catalog.xsd"));
    InputSource inputSource = new InputSource(inputStream);
    XMLSchema schema = builder.build(inputSource);
    return schema;
} catch (XSDException e) {
    System.err.println("XSDException " + e.getMessage());
} catch (FileNotFoundException e) {
    System.err.println("FileNotFoundException " + e.getMessage());
}
return null;
}

We define the main method, in which we create an instance of the SAXValidator class and invoke the validateXMLDocument method:

public static void main(String[] argv) {
    try {
        InputStream inputStream = new FileInputStream(new File("catalog.xml"));
        InputSource inputSource = new InputSource(inputStream);
        SAXValidator validator = new SAXValidator();
        validator.validateXMLDocument(inputSource);
    } catch (FileNotFoundException e) {
        System.err.println("FileNotFoundException " + e.getMessage());
    }
}

Finally, we define the custom error handler class as an inner class, CustomErrorHandler, to handle validation errors.
private class CustomErrorHandler extends DefaultHandler {
    protected boolean hasValidationError = false;
    protected SAXParseException saxParseException = null;

    public void error(SAXParseException exception) {
        hasValidationError = true;
        saxParseException = exception;
    }

    public void fatalError(SAXParseException exception) {
        hasValidationError = true;
        saxParseException = exception;
    }

    public void warning(SAXParseException exception) {
    }
}
}

Copy the SAXValidator.java application to the SchemaValidation project. To demonstrate error handling, add a title element to the journal element. To run the SAXValidator.java application, right-click on SAXValidator.java in the Application Navigator and select Run. A validation error gets output, indicating that the title element is not expected.
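If you do not have the Oracle XDK on your classpath, the same schema-validation flow can be sketched with the JDK's built-in JAXP API (javax.xml.validation). This is a substitute illustration, not the article's XDK code; the inlined schema and documents are trivial placeholders:

```java
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import java.io.StringReader;

// Hedged sketch: schema validation via JAXP instead of Oracle XDK.
public class JaxpValidationSketch {

    // Returns true if the XML string conforms to the XSD string.
    public static boolean validate(String xml, String xsd) {
        try {
            SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new StreamSource(new StringReader(xsd)));
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new StringReader(xml)));
            return true;   // document conforms to the schema
        } catch (Exception e) {
            return false;  // validation (or parse) error
        }
    }

    public static void main(String[] args) {
        String xsd = "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>"
                   + "<xs:element name='catalog' type='xs:string'/></xs:schema>";
        System.out.println(validate("<catalog>Oracle</catalog>", xsd)); // true
        System.out.println(validate("<journal/>", xsd));                // false
    }
}
```

Unlike the XDK's callback-based error handler, this sketch simply folds any SAXException into a boolean; a production version would report the exception message as the article's CustomErrorHandler does.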

Packt
25 Apr 2011
12 min read

Getting Started with Moodle 2.0 for Business

Moodle 2.0 for Business Beginner's Guide: Implement Moodle in your business to streamline your interview, training, and internal communication processes.

(For more resources on Moodle, see here.)

So let's get on with it...

Why Moodle?

Moodle is an open source Learning Management System (LMS) used by universities, K-12 schools, and both small and large businesses to deliver training over the Web. The Moodle project was created by Martin Dougiamas, a computer scientist and educator, who started as an LMS administrator at a university in Perth, Australia. He grew frustrated with the system's limitations as well as the closed nature of the software, which made it difficult to extend. Martin started Moodle with the idea of building the LMS based on learning theory, not software design. Moodle is based on five learning ideas:

All of us are potential teachers as well as learners; in a true collaborative environment we are both
We learn particularly well from the act of creating or expressing something for others to see
We learn a lot by just observing the activity of our peers
By understanding the contexts of others, we can teach in a more transformational way
A learning environment needs to be flexible and adaptable, so that it can quickly respond to the needs of the participants within it

With these five points as reference, the Moodle developer community has developed an LMS with the flexibility to address a wider range of business issues than most closed source systems. Throughout this article we will explore new ways to use the social features of Moodle to create a learning platform that delivers real business value.

Moodle has seen explosive growth over the past five years. In 2005, as Moodle began to gain traction in higher education, there were under 3,000 Moodle sites around the world. As of this writing in July 2010, there were 51,000 Moodle sites registered with Moodle.org.
These sites hosted 36 million users in 214 countries. The latest statistics on Moodle use are always available at the Moodle.org site (http://moodle.org/stats).

As Moodle has matured as a learning platform, many corporations have found they can save money and provide critical training services with Moodle. According to the eLearning Guild 2008 Learning Management System survey, Moodle's initial cost to acquire, install, and customize was $16.77 per learner. The initial cost per learner for SAP was $274.36, while Saba was $79.20, and Blackboard $39.06. Moodle's open source licensing provides a considerable cost advantage over traditional closed source learning management systems.

For the learning function, these savings can be translated into increased course development, more training opportunities, or other innovation. Or they can be passed back to the organization's bottom line. As Jim Whitehurst, CEO of Red Hat, states: "What's sold to customers better than saying 'We can save you money' is to show them how we can give you more functionality within your budget." With training budgets among the first to be cut during a downturn, using Moodle can enable your organization to move costs from software licensing to training development, support, and performance management; activities that impact the bottom line.

Moodle's open source licensing also makes customization and integration easier and cheaper than with proprietary systems. Moodle has built-in tools for integrating with backend authentication tools, such as Active Directory or OpenLDAP, enrollment plugins to take a data feed from your HR system to enroll people in courses, and a web services library to integrate with your organization's other systems. Some organizations choose to go further, customizing individual modules to meet their unique needs. Others have added components for unique tracking and reporting, including development of a full data warehouse.
Moodle's low cost and flexibility have encouraged widespread adoption in the corporate sector. According to the eLearning Guild LMS survey, Moodle went from a 6.8% corporate LMS market share in 2007 to a 19.8% market share in 2008. While many of these adopters are smaller companies, a number of very large organizations, including AA Ireland, OpenText, and other Fortune 500 companies, use Moodle in a variety of ways. According to the survey, the industries with the greatest adoption of Moodle include aerospace and defense companies, consulting companies, e-learning tool and service providers, and the hospitality industry.

Why open source?

Moodle is freely available under the General Public License (GPL). Anyone can go to Moodle.org and download Moodle, run it on any server for as many users as they want, and never pay a penny in licensing costs. The GPL also ensures that you will be able to get the source code for Moodle with every download, and have the right to share that code with others. This is the heart of the open source value proposition: when you adopt a GPL product, you have the right to use that product in any way you see fit, and the right to redistribute that product as long as you let others do the same.

Moodle's open source license has other benefits beyond cost. Forrester recently conducted a survey of 132 senior business and IT executives from large companies using open source software. Of the respondents, 92% said open source software met or exceeded their quality expectations, while meeting or exceeding their expectations for lower costs.

Many organizations go through a period of adjustment when making a conscious decision to adopt an open source product. Most organizations start using open source solutions for simple applications, or deep in their network infrastructure. Common open source applications in the data center include file serving, e-mail, and web servers.
Once the organization develops a level of comfort with open source, it begins to move open source into mission-critical and customer-facing applications. Many organizations use an open source content management system like Drupal or Alfresco to manage their web presence. Open source databases and middleware, like MySQL and JBoss, are common in application development and have proven themselves reliable and robust solutions.

Companies adopt open source software for many reasons. The Forrester survey suggests open standards, no usage restrictions, lack of vendor lock-in, and the ability to use the software without a license fee are the most important reasons organizations adopt open source software. On the other side of the coin, many CTOs worry about commercial support for their software. Fortunately, there is an emerging ecosystem of vendors who support a wide variety of open source products and provide critical services, and there seem to be as many models of open source business as there are open source projects; a number of different support models have sprung up in the last few years.

Moodle is supported by the Moodle Partners, a group of 50 companies around the world who provide a range of Moodle services, from hosting and support to training, instructional design, and custom code development. Each of the partners provides a portion of its Moodle revenue back to the Moodle project to ensure the continued development of the shared platform. In the same way, Linux is developed by a range of commercial companies, including Red Hat and IBM, who share some development and compete with each other for business.

While many of the larger packages, like Linux and JBoss, have large companies behind them, there is a range of products without clear avenues for support. However, the lack of licensing fees makes them easy to pilot. As we will explore in a moment, you can have a full Moodle server up and running on your laptop in under 20 minutes.
You can use this to pilot your solutions, develop your content, and even host a small number of users. Once you are done with the pilot, you can move the same Moodle setup to its own server and roll it out to the whole organization.

If you decide to find a vendor to support your Moodle implementation, there are a few key questions to ask:

How long have they been in business?
How experienced is the staff with the products they are supporting?
Are they an official Moodle partner?
What is the organization's track record? How good are their references?
What is their business model for generating revenue? What are their long-term prospects?
Do they provide a wide range of services, including application development, integration, consulting, and software life-cycle management?

Installing Moodle for experimentation

As Kenneth Grahame's character the Water Rat said in The Wind in the Willows, "Believe me, my young friend, there is nothing—absolutely nothing—half so much worth doing as simply messing about in boats." One of the best tools for learning about Moodle is an installation where you can "mess about" without worrying about the impact on other people. Learning theory tells us we need to spend many hours practicing in a safe environment to become proficient. The authors of this book have collectively spent more than 5,000 hours experimenting, building, and messing about with Moodle.

There is much to be said for having the ability to play around with Moodle without worrying about other people seeing what you are doing, even after you go live with your Moodle solution. When dealing with some of the more advanced features, like permissions and conditional activities, you will need to be able to log in with multiple roles to ensure you have the options configured properly. If you make a mistake on a production server, you could create a support headache. Having your own sandbox provides that safe place.
So we are going to start your Moodle exploration by installing Moodle on your personal computer. If your corporate policy prohibits you from installing software on your machine, discuss getting a small area on a server set up for Moodle. The installation instructions below will work on either your laptop, personal computer, or a server.

Time for action — download and run the Moodle installer

If you have Windows or a Mac, you can download a full Moodle installer, including the web server, database, and PHP. All of these components are needed to run Moodle, and installing them individually on your computer can be tedious. Fortunately, the Moodle community has created full installers based on the XAMPP package. A single double-click on the install package will install everything you need.

To install Moodle on Windows:

1. Point your browser to http://download.moodle.org/windows and download the package to your desktop. Make sure you download the latest stable version of Moodle 2, to take advantage of the features we discuss in this article.
2. Unpack the archive by double-clicking on the ZIP file. It may take a few minutes to finish unpacking the archive.
3. Double-click the Start Moodle.exe file to start up the server manager.
4. Open your web browser and go to http://localhost. You will then need to configure Moodle on your system. Follow the prompts for the next three steps.

After successfully configuring Moodle, you will have a fully functioning Moodle site on your machine. Use the stop and start applications to control when Moodle runs on your site.

To install Moodle on Mac:

1. Point your browser to http://download.moodle.org/macosx and find the packages for the latest version of Moodle 2. You have two choices of installers: XAMPP is a smaller download, but the control interface is not as refined as MAMP's.
2. Download either package to your computer (the directions here are for the MAMP package).
3. Open the .dmg file and drag the Moodle application to your Applications folder.
Open the MAMP application folder in your Applications folder.
Double click the MAMP application to start the web server and database server.
Once MAMP is up and running, double click the Link To Moodle icon in the MAMP folder.

You now have a fully functioning Moodle site on your machine. To shut down the site, quit the MAMP application. To run your Moodle site in the future, open the MAMP application and point your browser to http://localhost:8888/moodle.

Once you have downloaded and installed Moodle, for both systems, follow these steps:

Once you have the base system configured, you will need to set up your administrator account. The Moodle admin account has permissions to do anything on the site, and you will need this account to get started. Enter a username, password, and fill in the other required information to create an account.
An XAMPP installation on Mac or Windows also requires you to set up the site's front page. Give your site a name and hit Save changes. You can come back later and finish configuring the site.

What just happened?

You now have a functioning Moodle site on your laptop for experimentation. To start your Moodle server, double click on StartMoodle.exe and point your browser at http://localhost. Now we can look at a Moodle course and begin to explore Moodle functionality. Don't worry about how we will use this functionality now; just spend some time getting to know the system.

Reflection

You have just installed Moodle on a server or a personal computer, for free. You can use Moodle with as many people as you want for whatever purpose you choose without licensing fees. Some points for reflection:

What collaboration / learning challenges do you have in your organization?
How can you use the money you save on licensing fees to innovate to meet those challenges?
Are there other ways you can use Moodle to help your organization meet its goals which would not have been cost effective if you had to pay a license fee for the software?
Fronting an external API with Ruby on Rails: Part 1

Mike Ball
09 Feb 2015
6 min read
Historically, a conventional Ruby on Rails application leverages server-side business logic, a relational database, and a RESTful architecture to serve dynamically-generated HTML. JavaScript-intensive applications and the widespread use of external web APIs, however, somewhat challenge this architecture. In many cases, Rails is tasked with performing as an orchestration layer, collecting data from various backend services and serving re-formatted JSON or XML to clients. In such instances, how is Rails' model-view-controller architecture still relevant? In this two part post series, we'll create a simple Rails backend that makes requests to an external XML-based web service and serves JSON. We'll use RSpec for tests and Jbuilder for view rendering.

What are we building?

We'll create Noterizer, a simple Rails application that requests XML from externally hosted endpoints and re-renders the XML data as JSON at a single URL. To assist in this post, I've created NotesXmlService, a basic web application that serves two XML-based endpoints:

http://NotesXmlService.herokuapp.com/note-one
http://NotesXmlService.herokuapp.com/note-two

Why is this necessary in a real-world scenario? Fronting external endpoints with an application like Noterizer opens up a few opportunities:

Noterizer's endpoint could serve JavaScript clients who can't perform HTTP requests across domain names to the original, external API.
Noterizer's endpoint could reformat the externally hosted data to better serve its own clients' data formatting preferences.
Noterizer's endpoint is a single interface to the data; multiple requests are abstracted away by its backend.
Noterizer provides caching opportunities. While it's beyond the scope of this series, Rails can cache external request data, thus offloading traffic to the external API and avoiding any terms of service or rate limit violations imposed by the external service.

Setup

For this series, I'm using Mac OS 10.9.4, Ruby 2.1.2, and Rails 4.1.4.
I'm assuming some basic familiarity with Git and the command line.

Clone and set up the repo

I've created a basic Rails 4 Noterizer app. Clone its repo, enter the project directory, and check out its tutorial branch:

$ git clone http://github.com/mdb/noterizer && cd noterizer && git checkout tutorial

Install its dependencies:

$ bundle install

Set up the test framework

Let's install RSpec for testing. Add the following to the project's Gemfile:

gem 'rspec-rails', '3.0.1'

Install rspec-rails:

$ bundle install

There's now an rspec generator available for the rails command. Let's generate a basic RSpec installation:

$ rails generate rspec:install

This creates a few new files in a spec directory:

├── spec
│   ├── rails_helper.rb
│   └── spec_helper.rb

We're going to make a few adjustments to our RSpec installation. First, because Noterizer does not use a relational database, delete the following ActiveRecord reference in spec/rails_helper.rb:

# Checks for pending migrations before tests are run.
# If you are not using ActiveRecord, you can remove this line.
ActiveRecord::Migration.maintain_test_schema!

Next, configure RSpec to be less verbose in its warning output; such verbose warnings are beyond the scope of this series. Remove the following line from .rspec:

--warnings

The RSpec installation also provides a spec rake task. Test this by running the following:

$ rake spec

You should see the following output, as there aren't yet any RSpec tests:

No examples found.
Finished in 0.00021 seconds (files took 0.0422 seconds to load)
0 examples, 0 failures

Note that a default Rails installation assumes tests live in a test directory. RSpec uses a spec directory. For clarity's sake, you're free to delete the test directory from Noterizer.

Building a basic route and controller

Currently, Noterizer does not have any URLs; we'll create a single /notes URL route.
Creating the controller

First, generate a controller:

$ rails g controller notes

Note that this created quite a few files, including JavaScript files, stylesheet files, and a helpers module. These are not relevant to our NotesController, so let's undo our controller generation by removing all untracked files from the project. Note that you'll want to commit any changes you do want to preserve.

$ git clean -f

Now, open config/application.rb and add the following generator configuration:

config.generators do |g|
  g.helper false
  g.assets false
end

Re-running the generate command will now create only the desired files:

$ rails g controller notes

Testing the controller

Let's add a basic NotesController#index test to spec/controllers/notes_controller_spec.rb. The test looks like this:

require 'rails_helper'

describe NotesController, :type => :controller do
  describe '#index' do
    before :each do
      get :index
    end

    it 'successfully responds to requests' do
      expect(response).to be_success
    end
  end
end

This test currently fails when running rake spec, as we haven't yet created a corresponding route. Add the following route to config/routes.rb:

get 'notes' => 'notes#index'

The test still fails when running rake spec, because there isn't a proper #index controller action. Create an empty index method in app/controllers/notes_controller.rb:

class NotesController < ApplicationController
  def index
  end
end

rake spec still yields failing tests, this time because we haven't yet created a corresponding view. Let's create a view:

$ touch app/views/notes/index.json.jbuilder

To use this view, we'll need to tweak the NotesController a bit. Let's ensure that requests to the /notes route always return JSON via a before_filter run before each controller action:

class NotesController < ApplicationController
  before_filter :force_json

  def index
  end

  private

  def force_json
    request.format = :json
  end
end

Now, rake spec yields passing tests:

$ rake spec
.
Finished in 0.0107 seconds (files took 1.09 seconds to load)
1 example, 0 failures

Let's write one more test, asserting that the response returns the correct content type. Add the following to spec/controllers/notes_controller_spec.rb:

it 'returns JSON' do
  expect(response.content_type).to eq 'application/json'
end

Assuming rake spec confirms that the second test passes, you can also run the Rails server via the rails server command and visit the currently empty Noterizer http://localhost:3000/notes URL in your web browser.

Conclusion

In this first part of the series, we have created the basic route and controller for Noterizer, a basic example of a Rails application that fronts an external API. In the next blog post (Part 2), you will learn how to build out the backend, test the model, build up and test the controller, and also test the app with Jbuilder.

About this Author

Mike Ball is a Philadelphia-based software developer specializing in Ruby on Rails and JavaScript. He works for Comcast Interactive Media where he helps build web-based TV and video consumption applications.
Introducing a feature of IntroJs

Packt
07 Oct 2013
5 min read
(For more resources related to this topic, see here.)

API

IntroJs includes functions that let the user control and change the execution of the introduction. For example, it is possible to react to an unexpected event that happens during execution, or to change the introduction routine according to user interactions. All the APIs available in IntroJs are explained below; these functions will be extended and developed further in the future. IntroJs includes these API functions:

start
goToStep
exit
setOption
setOptions
oncomplete
onexit
onchange
onbeforechange

introJs.start()

As mentioned before, introJs.start() is the main function of IntroJs; it starts the introduction for the specified elements and returns an instance of the introJS class. The introduction starts from the first step in the specified elements. This function has no arguments.

introJs.goToStep(stepNo)

Jump to a specific step of the introduction by using this function. Introductions always start from the first step by default; however, it is possible to change this behavior by using this function. The goToStep function has one integer argument that accepts the number of the step in the introduction.

introJs().goToStep(2).start(); //starts introduction from step 2

As the example indicates, the starting step is first changed from 1 to 2 using the goToStep function, and then the start() function is called. Hence, the introduction starts from the second step. Finally, this function returns the introJS class's instance.

introJs.exit()

The introJS.exit() function lets the user exit and close the running introduction. By default, the introduction ends when the user clicks on the Done button or goes to the last step of the introduction.

introJs().exit()

As shown, the exit() function doesn't have any arguments and returns an instance of introJS.
introJs.setOption(option, value)

As mentioned before, IntroJs has some default options that can be changed by using the setOption method. This function has two arguments: the first specifies the option name and the second sets its value.

introJs().setOption("nextLabel", "Go Next");

In the preceding example, nextLabel is set to Go Next. Other options can be changed by using the setOption method in the same way.

introJs.setOptions(options)

It is possible to change a single option using the setOption method. However, to change more than one option at once, use setOptions instead. The setOptions method accepts different options and values in the JSON format.

introJs().setOptions({
  skipLabel: "Exit",
  tooltipPosition: "right"
});

In the preceding example, two options are set at the same time by using JSON and the setOptions method.

introJs.oncomplete(providedCallback)

The oncomplete event is raised when the introduction ends. If a function is passed to the oncomplete method, it will be called by the library after the introduction ends.

introJs().oncomplete(function() {
  alert("end of introduction");
});

In this example, after the introduction ends, the anonymous function passed to the oncomplete method will be called, showing an alert with the end of introduction message.

introJs.onexit(providedCallback)

As mentioned before, the user can exit the running introduction using the Esc key or by clicking on the dark area in the introduction. The onexit event fires when the user exits the introduction. This function accepts one argument and returns the instance of the running introJS.

introJs().onexit(function() {
  alert("exit of introduction");
});

In the preceding example, we passed an anonymous function containing an alert() statement to the onexit method. If the user exits the introduction, the anonymous function will be called and an alert with the message exit of introduction will appear.
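The chainable style used in these examples, where every setter returns the instance so calls can be strung together, is easy to reproduce in plain JavaScript. The following is a minimal sketch of that pattern; fakeIntro and its internals are hypothetical names written for illustration, not part of IntroJs:

```javascript
'use strict';

// Sketch of a fluent, callback-registering API in the style of IntroJs:
// each setter stores state and returns the api object so calls chain.
function fakeIntro() {
  var state = { options: {}, callbacks: {} };
  var api = {
    setOption: function (name, value) { state.options[name] = value; return api; },
    setOptions: function (opts) { Object.assign(state.options, opts); return api; },
    oncomplete: function (fn) { state.callbacks.complete = fn; return api; },
    start: function () {
      // A real library would walk the steps here; this sketch just
      // fires the completion callback immediately.
      if (state.callbacks.complete) state.callbacks.complete();
      return api;
    }
  };
  return api;
}

var done = false;
fakeIntro()
  .setOptions({ skipLabel: 'Exit', tooltipPosition: 'right' })
  .oncomplete(function () { done = true; })
  .start();
console.log(done); // true
```

Returning the instance from every method is what makes one-liners like introJs().goToStep(2).start() possible.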
introJs.onchange(providedCallback)

The onchange event is raised in each step of the introduction. This method is useful for being notified when each step of the introduction is completed.

introJs().onchange(function(targetElement) {
  alert("new step");
});

You can define an argument for the anonymous function (targetElement in the preceding example), and when the function is called, you can use that argument to access the current target element highlighted in the introduction. In the preceding example, when each introduction step ends, an alert with the new step message will appear.

introJs.onbeforechange(providedCallback)

Sometimes, you may need to do something before each step of the introduction. Consider that you need to make an Ajax call before the user goes to a step of the introduction; you can do this with the onbeforechange event.

introJs().onbeforechange(function(targetElement) {
  alert("before new step");
});

We can also define an argument for the anonymous function (targetElement in the preceding example); when this function is called, the argument carries information about the currently highlighted element in the introduction. Using that argument, you can know which step of the introduction will be highlighted, what the type of the target element is, and more. In the preceding example, an alert with the message before new step will appear before each step of the introduction is highlighted.

Summary

In this article we learned about the API functions, their syntaxes, and how they are used.

Resources for Article:

Further resources on this subject:

ASP.Net Site Performance: Improving JavaScript Loading [Article]
Trapping Errors by Using Built-In Objects in JavaScript Testing [Article]
Making a Better Form using JavaScript [Article]
Managing Data in MySQL

Packt
01 Apr 2010
8 min read
Exporting data to a simple CSV file

While databases are a great tool to store and manage your data, you sometimes need to extract some of the data from your database to use it in another tool (a spreadsheet application being the most prominent example for this). In this recipe, we will show you how to utilize the respective MySQL commands for exporting data from a given table into a file that can easily be imported by other programs.

Getting ready

To step through this recipe, you will need a running MySQL database server and a working installation of a SQL client (like MySQL Query Browser or the mysql command line tool). You will also need to identify a suitable export target, which has to meet the following requirements:

The MySQL server process must have write access to the target file
The target file must not exist

The export target file is located on the machine that runs your MySQL server, not on the client side! If you do not have file access to the MySQL server, you could instead use the export functions of MySQL clients like MySQL Query Browser. In addition, a user with the FILE privilege is needed (we will use an account named sample_install for the following steps; see also Chapter 8, Creating an installation user). Finally, we need some data to export. Throughout this recipe, we will assume that the data to export is stored in a table named table1 inside the database sample. As the export target, we will use the file C:/target.csv (MySQL accepts slashes instead of backslashes in Windows path expressions). This is a file on the machine that runs the MySQL server instance, so in this example MySQL is assumed to be running on a Windows machine. To access the results from the client, you have to have access to the file (for example, using a file share or executing the MySQL client on the same machine as the server).

How to do it...

Connect to the database using the sample_install account.
Issue the following SQL command:

mysql> SELECT * FROM sample.table1 INTO OUTFILE 'C:/target.csv' FIELDS ENCLOSED BY '"' TERMINATED BY ';' ESCAPED BY '"' LINES TERMINATED BY '\r\n';

Please note that when using backslashes instead of slashes in the target file's path, you have to use C:\\target.csv (double backslash for escaping) instead. If you do not give a path, but only a file name, the target file will be placed in the data directory of the currently selected schema of your MySQL server.

How it works...

In the previous SQL statement, a file C:/target.csv was created, which contains the content of the table sample.table1. The file contains a separate line for each row of the table, and each line is terminated by a sequence of a carriage return and a line feed character. This line ending was defined by the LINES TERMINATED BY '\r\n' portion of the command. Each line contains the values of each column of the row. The values are separated by semicolons, as stated in the TERMINATED BY ';' clause. Every value is enclosed by a double quotation mark ("), which results from the FIELDS ENCLOSED BY '"' option. When writing the data to the target file, no character conversion takes place; the data is exported using the binary character set. This should be kept in mind especially when exporting tables with different character sets for some of their values. You might wonder why we chose the semicolon instead of a comma as the field separator. This is simply because of greatly improved Microsoft Excel compatibility (you can simply open the resulting files), without the need to import external data from the files. You can, however, open these files in a different spreadsheet program (like OpenOffice.org Calc) as well. If you think the usage of semicolons is in contradiction to the notion of a CSV file, think of it as a Character Separated File.
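On the consuming side, a reader can rely on that quoting to split a line back into its values. Here is a small plain-JavaScript sketch (not part of the original recipe) that does so; it assumes the simple case of quoted fields that contain no embedded double quotes:

```javascript
'use strict';

// Parse one line of the exported format: fields enclosed by double
// quotes and separated by semicolons. Because we match the quoted
// fields themselves, a semicolon inside a value is not mistaken
// for a separator. Embedded double quotes are not handled here.
function parseExportedLine(line) {
  var matches = line.match(/"([^"]*)"/g) || [];
  return matches.map(function (field) {
    return field.slice(1, -1); // strip the enclosing quotes
  });
}

console.log(parseExportedLine('"1";"foo;bar";"NULL"'));
// → [ '1', 'foo;bar', 'NULL' ]
```

Note how the middle value survives intact even though it contains the field separator, which is exactly the problem the FIELDS ENCLOSED BY '"' option solves.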
The use of double quotes to enclose single values prevents problems when field values contain semicolons (or, generally, the field separator character). These are not recognized as field separators if they are enclosed in double quotes.

There's more...

While the previous SELECT … INTO OUTFILE statement will work well in most cases, there are some circumstances in which you still might encounter problems. The following topics will show you how to handle some of those.

Handling errors if the target file already exists

If you try to execute the SELECT … INTO OUTFILE statement twice, an error File 'C:/target.csv' already exists occurs. This is due to a security feature in MySQL that makes sure that you cannot overwrite existing files using the SELECT … INTO OUTFILE statement. This makes perfect sense if you think about the consequences. If this were not the case, you could overwrite the MySQL data files using a simple SELECT because the MySQL server needs write access to its data directories. As a result, you have to choose different target files for each export (or remove old files in advance). Unfortunately, it is not possible to use a non-constant file name (like a variable) in the SELECT … INTO OUTFILE export statement. If you wish to use different file names, for example, with a time stamp as part of the file name, you have to construct the statement inside a variable value before executing it:

mysql> SET @selInOutfileCmd := concat("SELECT * FROM sample.table1 INTO OUTFILE 'C:/target-", DATE_FORMAT(now(),'%Y-%m-%d_%H%i%s'), ".csv' FIELDS ENCLOSED BY '\"' TERMINATED BY ';' ESCAPED BY '\"' LINES TERMINATED BY '\r\n';");
mysql> PREPARE statement FROM @selInOutfileCmd;
mysql> EXECUTE statement;

The first SET statement constructs a string, which contains a SELECT statement. While it is not allowed to use variables for statements directly, you can construct a string that contains a statement and use variables for this.
With the next two lines, you prepare a statement from the string and execute it.

Handling NULL values

Without further handling, NULL values in the data you export using the previous statement would show up as \N in the resulting file. This combination is not recognized, for example, by Microsoft Excel, which breaks the file (for typical usage). To prevent this, you need to replace NULL entries by appropriate values. Assuming that the table sample.table1 consists of a numeric column a and a character column b, you should use the following statement:

mysql> SELECT IFNULL(a, 0), IFNULL(b, "NULL") FROM sample.table1 INTO OUTFILE 'C:/target.csv' FIELDS ENCLOSED BY '"' TERMINATED BY ';' ESCAPED BY '"' LINES TERMINATED BY '\r\n';

The downside to this approach is that you have to list all fields in which a NULL value might occur.

Handling line breaks

If you try to export values that contain the same character combination used for line termination in the SELECT … INTO OUTFILE statement, MySQL will try to escape the character combination with the characters defined by the ESCAPED BY clause. However, this will not always work the way it is intended. You will typically define \r\n as the line separator. With this constellation, values that contain a simple line break \n will not cause problems, as they are exported without any conversion and can be imported to Microsoft Excel flawlessly. If your values happen to contain a combination of carriage return and line feed, the \r\n characters will be prepended with an escape character ("\r\n), but the target file still cannot be imported correctly. Therefore, you need to convert the full line breaks to simple line breaks:

mysql> SELECT a, REPLACE(b, '\r\n', '\n') FROM sample.table1 INTO OUTFILE 'C:/target.csv' FIELDS ENCLOSED BY '"' TERMINATED BY ';' ESCAPED BY '"' LINES TERMINATED BY '\r\n';

With this statement, you will export only line breaks \n, which are typically accepted for import by other programs.
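The same conversion that the REPLACE call performs on the server can also be applied on the client after reading the values, if you cannot change the export statement. A plain-JavaScript sketch (normalizeLineBreaks is an illustrative helper, not part of the recipe):

```javascript
'use strict';

// Normalize Windows-style line breaks inside a field value to plain \n,
// mirroring what REPLACE(b, '\r\n', '\n') does in the export statement.
function normalizeLineBreaks(value) {
  return value.split('\r\n').join('\n');
}

console.log(JSON.stringify(normalizeLineBreaks('line1\r\nline2')));
// "line1\nline2"
```

Either way, the goal is the same: values should only ever contain the simple line break that importing programs accept.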
Including headers

For better understanding, you might want to include headers in your target file. You can do so by using a UNION construct:

mysql> (SELECT 'Column a', 'Column b') UNION ALL (SELECT * FROM sample.table1 INTO OUTFILE 'C:/target.csv' FIELDS ENCLOSED BY '"' TERMINATED BY ';' ESCAPED BY '"' LINES TERMINATED BY '\r\n');

The resulting file will contain an additional first line with the given headers from the first SELECT clause.
Node.js Fundamentals

Packt
22 May 2015
17 min read
This article is written by Krasimir Tsonev, the author of Node.js By Example. Node.js is one of the most popular JavaScript-driven technologies nowadays. It was created in 2009 by Ryan Dahl and since then, the framework has evolved into a well-developed ecosystem. Its package manager is full of useful modules and developers around the world have started using Node.js in their production environments. In this article, we will learn about the following:

Node.js building blocks
The main capabilities of the environment
The package management of Node.js

(For more resources related to this topic, see here.)

Understanding the Node.js architecture

Back in the day, Ryan was interested in developing network applications. He found out that most high performance servers followed similar concepts. Their architecture was similar to that of an event loop and they worked with nonblocking input/output operations. These operations permit other processing activities to continue before an ongoing task is finished. This characteristic is very important if we want to handle thousands of simultaneous requests. Most of the servers written in Java or C use multithreading: they process every request in a new thread. Ryan decided to try something different—a single-threaded architecture. In other words, all the requests that come to the server are processed by a single thread. This may sound like a nonscalable solution, but Node.js is definitely scalable. We just have to run different Node.js processes and use a load balancer that distributes the requests between them. Ryan needed something that is event-loop-based and which works fast. As he pointed out in one of his presentations, big companies such as Google, Apple, and Microsoft invest a lot of time in developing high performance JavaScript engines. They have become faster and faster every year, and the event-loop architecture is implemented in them. JavaScript has become really popular in recent years.
The community and the hundreds of thousands of developers who are ready to contribute made Ryan think about using JavaScript. Here is a diagram of the Node.js architecture:

In general, Node.js is made up of three things:

V8 is Google's JavaScript engine that is used in the Chrome web browser (https://developers.google.com/v8/)
A thread pool is the part that handles the file input/output operations. All the blocking system calls are executed here (http://software.schmorp.de/pkg/libeio.html)
The event loop library (http://software.schmorp.de/pkg/libev.html)

On top of these three blocks, we have several bindings that expose low-level interfaces. The rest of Node.js is written in JavaScript. Almost all the APIs that we see as built-in modules, and which are present in the documentation, are written in JavaScript.

Installing Node.js

A fast and easy way to install Node.js is to visit the official Node.js site and download the appropriate installer for your operating system. For OS X and Windows users, the installer provides a nice, easy-to-use interface. For developers that use Linux as an operating system, Node.js is available in the APT package manager. The following commands will set up Node.js and Node Package Manager (NPM):

sudo apt-get update
sudo apt-get install nodejs
sudo apt-get install npm

Running Node.js server

Node.js is a command-line tool. After installing it, the node command will be available on our terminal. The node command accepts several arguments, but the most important one is the file that contains our JavaScript. Let's create a file called server.js and put the following code inside:

var http = require('http');
http.createServer(function (req, res) {
   res.writeHead(200, {'Content-Type': 'text/plain'});
   res.end('Hello World\n');
}).listen(9000, '127.0.0.1');
console.log('Server running at http://127.0.0.1:9000/');

If you run node ./server.js in your console, you will have the Node.js server running. It listens for incoming requests at localhost (127.0.0.1) on port 9000.
The very first line of the preceding code requires the built-in http module. In Node.js, we have the require global function that provides the mechanism to use external modules. We will see how to define our own modules in a bit. After that, the script continues with the createServer and listen methods on the http module. In this case, the API of the module is designed in such a way that we can chain these two methods like in jQuery. The first one (createServer) accepts a function that is also known as a callback, which is called every time a new request comes to the server. The second one makes the server listen. The result that we will get in a browser is as follows:

Defining and using modules

JavaScript as a language does not have mechanisms to define real classes. In fact, everything in JavaScript is an object. We normally inherit properties and functions from one object to another. Thankfully, Node.js adopts the concepts defined by CommonJS—a project that specifies an ecosystem for JavaScript. We encapsulate logic in modules. Every module is defined in its own file. Let's illustrate how everything works with a simple example. Let's say that we have a module that represents this book and we save it in a file called book.js:

// book.js
exports.name = 'Node.js by example';
exports.read = function() {
   console.log('I am reading ' + exports.name);
}

We defined a public property and a public function. Now, we will create another file named script.js and use require to access them:

// script.js
var book = require('./book.js');
console.log('Name: ' + book.name);
book.read();

To test our code, we will run node ./script.js. The result in the terminal looks like this:

Along with exports, we also have module.exports available. There is a difference between the two. Look at the following pseudocode.
It illustrates how Node.js constructs our modules:

var module = { exports: {} };
var exports = module.exports;
// our code
return module.exports;

So, in the end, module.exports is returned and this is what require produces. We should be careful because if at some point we apply a value directly to exports or module.exports, we may not receive what we need. Like at the end of the following snippet, we set a function as a value and that function is exposed to the outside world:

exports.name = 'Node.js by example';
exports.read = function() {
   console.log('I am reading ' + exports.name);
}
module.exports = function() { ... }

In this case, we do not have access to .name and .read. If we try to execute node ./script.js again, we will get the following output:

To avoid such issues, we should stick to one of the two options—exports or module.exports—but make sure that we do not have both. We should also keep in mind that by default, require caches the object that is returned. So, if we need two different instances, we should export a function. Here is a version of the book class that provides API methods to rate the books and that does not work properly:

// book.js
var ratePoints = 0;
exports.rate = function(points) {
   ratePoints = points;
}
exports.getPoints = function() {
   return ratePoints;
}

Let's create two instances and rate the books with different points values:

// script.js
var bookA = require('./book.js');
var bookB = require('./book.js');
bookA.rate(10);
bookB.rate(20);
console.log(bookA.getPoints(), bookB.getPoints());

The logical response should be 10 20, but we got 20 20.
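The 20 20 result can be reproduced without separate files by simulating what require does internally: the module code runs once and its exports object is cached and shared. Note that fakeRequire is an illustrative stand-in written for this sketch, not a real Node.js API:

```javascript
'use strict';

// Simulate require()'s module cache: the factory runs only once,
// so every caller receives the same exports object.
var cache = {};
function fakeRequire(name, factory) {
  if (!cache[name]) {
    cache[name] = factory();
  }
  return cache[name];
}

// The same module body as book.js, wrapped in a factory.
function bookModule() {
  var ratePoints = 0;
  return {
    rate: function (points) { ratePoints = points; },
    getPoints: function () { return ratePoints; }
  };
}

var bookA = fakeRequire('book', bookModule);
var bookB = fakeRequire('book', bookModule);
bookA.rate(10);
bookB.rate(20); // overwrites the state shared with bookA
console.log(bookA.getPoints(), bookB.getPoints()); // 20 20
```

Because bookA and bookB are literally the same cached object, the second rate call overwrites the first, which is exactly what happens with the real require.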
This is why it is a common practice to export a function that produces a different object every time:

// book.js
module.exports = function() {
   var ratePoints = 0;
   return {
     rate: function(points) {
         ratePoints = points;
     },
     getPoints: function() {
         return ratePoints;
     }
   }
}

Now, we should also write require('./book.js')() because require returns a function and not an object anymore.

Managing and distributing packages

Once we understand the idea of require and exports, we should start thinking about grouping our logic into building blocks. In the Node.js world, these blocks are called modules (or packages). One of the reasons behind the popularity of Node.js is its package management. Node.js normally comes with two executables—node and npm. NPM is a command-line tool that downloads and uploads Node.js packages. The official NPM site acts as a central registry. When we create a package via the npm command, we store it there so that every other developer may use it.

Creating a module

Every module should live in its own directory, which also contains a metadata file called package.json. In this file, we have set at least two properties—name and version:

{
   "name": "my-awesome-nodejs-module",
   "version": "0.0.1"
}

We can place whatever code we like in the same directory. Once we publish the module to the NPM registry and someone installs it, he/she will get the same files. For example, let's add an index.js file so that we have two files in the package:

// index.js
console.log('Hello, this is my awesome Node.js module!');

Our module does only one thing—it displays a simple message to the console. Now, to upload the module, we need to navigate to the directory containing the package.json file and execute npm publish. This is the result that we should see:

We are ready. Now our little module is listed in the Node.js package manager's site and everyone is able to download it.
Using modules

In general, there are three ways to use the modules that are already created. All three ways involve the package manager:

We may install a specific module manually. Let's say that we have a folder called project. We open the folder and run the following:

npm install my-awesome-nodejs-module

The manager automatically downloads the latest version of the module and puts it in a folder called node_modules. If we want to use it, we do not need to reference the exact path. By default, Node.js checks the node_modules folder before requiring something. So, just require('my-awesome-nodejs-module') will be enough.

The installation of modules globally is a common practice, especially if we talk about command-line tools made with Node.js. It has become an easy-to-use technology to develop such tools. The little module that we created is not made as a command-line program, but we can still install it globally by running the following code:

npm install my-awesome-nodejs-module -g

Note the -g flag at the end. This is how we tell the manager that we want this module to be a global one. When the process finishes, we do not have a node_modules directory. The my-awesome-nodejs-module folder is stored in another place on our system. To be able to use it, we have to add another property to package.json, but we'll talk more about this in the next section.

The resolving of dependencies is one of the key features of the package manager of Node.js. Every module can have as many dependencies as you want. These dependencies are nothing but other Node.js modules that were uploaded to the registry. All we have to do is list the needed packages in the package.json file:

{
   "name": "another-module",
   "version": "0.0.1",
   "dependencies": {
       "my-awesome-nodejs-module": "0.0.1"
   }
}

Now we don't have to specify the module explicitly and we can simply execute npm install to install our dependencies.
The manager reads the package.json file and saves our module again in the node_modules directory. It is good to use this technique because we may add several dependencies and install them at once. It also makes our module transferable and self-documented. There is no need to explain to other programmers what our module is made up of.

Updating our module

Let's transform our module into a command-line tool. Once we do this, users will have a my-awesome-nodejs-module command available in their terminals. There are two changes in the package.json file that we have to make:

{
   "name": "my-awesome-nodejs-module",
   "version": "0.0.2",
   "bin": "index.js"
}

A new bin property is added. It points to the entry point of our application. We have a really simple example and only one file: index.js. The other change that we have to make is to update the version property. In Node.js, the version of the module plays an important role. If we look back, we will see that while describing dependencies in the package.json file, we pointed out the exact version. This ensures that in the future, we will get the same module with the same APIs. Every number in the version property means something. The package manager uses Semantic Versioning 2.0.0 (http://semver.org/). Its format is MAJOR.MINOR.PATCH. So, we as developers should increment the following:

MAJOR number if we make incompatible API changes
MINOR number if we add new functions/features in a backwards-compatible manner
PATCH number if we have bug fixes

Sometimes, we may see a version like 2.12.*. This means that the developer is interested in using the exact MAJOR and MINOR versions, but agrees that there may be bug fixes in the future. It's also possible to use values like >=1.2.7 to match any equal-or-greater version, for example, 1.2.7, 1.2.8, or 2.5.3. We updated our package.json file. The next step is to send the changes to the registry.
This could be done again with npm publish in the directory that holds the JSON file. The result will be similar. We will see the new 0.0.2 version number on the screen:

Just after this, we may run npm install my-awesome-nodejs-module -g and the new version of the module will be installed on our machine. The difference is that now we have the my-awesome-nodejs-module command available and if you run it, it displays the message written in the index.js file:

Introducing built-in modules

Node.js is considered a technology that you can use to write backend applications. As such, we need to perform various tasks. Thankfully, we have a bunch of helpful built-in modules at our disposal.

Creating a server with the HTTP module

We already used the HTTP module. It's perhaps the most important one for web development because it starts a server that listens on a particular port:

var http = require('http');
http.createServer(function (req, res) {
   res.writeHead(200, {'Content-Type': 'text/plain'});
   res.end('Hello World\n');
}).listen(9000, '127.0.0.1');
console.log('Server running at http://127.0.0.1:9000/');

We have a createServer method that returns a new web server object. In most cases, we run the listen method. If needed, there is close, which stops the server from accepting new connections. The callback function that we pass always accepts the request (req) and response (res) objects. We can use the first one to retrieve information about the incoming request, such as GET or POST parameters.

Reading and writing to files

The module that is responsible for the read and write processes is called fs (it is derived from filesystem). Here is a simple example that illustrates how to write data to a file:

var fs = require('fs');
fs.writeFile('data.txt', 'Hello world!', function (err) {
   if (err) { throw err; }
   console.log('It is saved!');
});

Most of the API functions have synchronous versions.
The preceding script could be written with writeFileSync, as follows:

fs.writeFileSync('data.txt', 'Hello world!');

However, the usage of the synchronous versions of the functions in this module blocks the event loop. This means that while operating with the filesystem, our JavaScript code is paused. Therefore, it is a best practice with Node to use asynchronous versions of methods wherever possible. The reading of the file is almost the same. We should use the readFile method in the following way:

fs.readFile('data.txt', function(err, data) {
   if (err) throw err;
   console.log(data.toString());
});

Working with events

The observer design pattern is widely used in the world of JavaScript. This is where the objects in our system subscribe to the changes happening in other objects. Node.js has a built-in module to manage events. Here is a simple example:

var events = require('events');
var eventEmitter = new events.EventEmitter();
var somethingHappen = function() {
   console.log('Something happen!');
}
eventEmitter
  .on('something-happen', somethingHappen)
  .emit('something-happen');

The eventEmitter object is the object that we subscribe to. We did this with the help of the on method. The emit function fires the event and the somethingHappen handler is executed. The events module provides the necessary functionality, but we need to use it in our own classes. Let's take the book idea from the previous section and make it work with events. Once someone rates the book, we will dispatch an event in the following manner:

// book.js
var util = require("util");
var events = require("events");
var Class = function() { };
util.inherits(Class, events.EventEmitter);
Class.prototype.ratePoints = 0;
Class.prototype.rate = function(points) {
   this.ratePoints = points;
   this.emit('rated');
};
Class.prototype.getPoints = function() {
   return this.ratePoints;
}
module.exports = Class;

We want to inherit the behavior of the EventEmitter object.
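As an aside, the on/emit mechanics that EventEmitter provides can be hand-rolled in a few lines. MiniEmitter below is an illustrative toy, not Node's implementation, and real code should keep using the events module:

```javascript
// A stripped-down observer: just enough to show what on/emit do internally.
function MiniEmitter() {
  this.listeners = {};
}
MiniEmitter.prototype.on = function (event, handler) {
  // Store handlers per event name.
  (this.listeners[event] = this.listeners[event] || []).push(handler);
  return this; // allow chaining, like EventEmitter
};
MiniEmitter.prototype.emit = function (event) {
  // Forward any extra arguments to every subscribed handler.
  var args = Array.prototype.slice.call(arguments, 1);
  (this.listeners[event] || []).forEach(function (handler) {
    handler.apply(null, args);
  });
  return this;
};

var emitter = new MiniEmitter();
var log = [];
emitter
  .on('something-happen', function (msg) { log.push(msg); })
  .emit('something-happen', 'Something happen!');
console.log(log); // [ 'Something happen!' ]
```

Chaining works because both methods return this, mirroring the eventEmitter.on(...).emit(...) call shown earlier.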
The easiest way to achieve this in Node.js is by using the utility module (util) and its inherits method. The defined class could be used like this:

var BookClass = require('./book.js');
var book = new BookClass();
book.on('rated', function() {
   console.log('Rated with ' + book.getPoints());
});
book.rate(10);

We again used the on method to subscribe to the rated event. The book class dispatches that event once we set the points. The terminal then shows the Rated with 10 text.

Managing child processes

There are some things that we can't do with Node.js. We need to use external programs for these tasks. The good news is that we can execute shell commands from within a Node.js script. For example, let's say that we want to list the files in the current directory. The filesystem APIs do provide methods for that, but it would be nice if we could get the output of the ls command:

// exec.js
var exec = require('child_process').exec;
exec('ls -l', function(error, stdout, stderr) {
   console.log('stdout: ' + stdout);
   console.log('stderr: ' + stderr);
   if (error !== null) {
       console.log('exec error: ' + error);
   }
});

The module that we used is called child_process. Its exec method accepts the desired command as a string and a callback. The stdout item is the output of the command. If we want to process the errors (if any), we may use the error object or the stderr buffer data. The preceding code produces the following screenshot:

Along with the exec method, we have spawn. It's a bit different and really interesting. Imagine that we have a command that not only does its job, but also outputs the result as it goes. For example, git push may take a few seconds and it may send messages to the console continuously.
In such cases, spawn is a good variant because we get access to a stream:

var spawn = require('child_process').spawn;
var command = spawn('git', ['push', 'origin', 'master']);
command.stdout.on('data', function (data) {
   console.log('stdout: ' + data);
});
command.stderr.on('data', function (data) {
   console.log('stderr: ' + data);
});
command.on('close', function (code) {
   console.log('child process exited with code ' + code);
});

Here, stdout and stderr are streams. They dispatch events and if we subscribe to these events, we will get the exact output of the command as it was produced. In the preceding example, we run git push origin master and send the full command responses to the console.

Summary

Node.js is used by many companies nowadays. This proves that it is mature enough to work in a production environment. In this article, we saw what the fundamentals of this technology are. We covered some of the commonly used cases.

Resources for Article:

Further resources on this subject:

AngularJS Project [article]
Exploring streams [article]
Getting Started with NW.js [article]
Packt
18 Jul 2011
11 min read

Alice 3: Controlling the Behavior of Animations

Alice 3 Cookbook: 79 recipes to harness the power of Alice 3 for teaching students to build attractive and interactive 3D scenes and videos

Introduction

You need to organize the statements that request the different actors to perform actions. Alice 3 provides blocks that allow us to configure the order in which many statements should be executed. This article provides many tasks that will allow us to start controlling the behavior of animations with many actors performing different actions. We will execute many actions in a specific order. We will use counters to run one or more statements many times. We will execute actions for many actors of the same class. We will run code for different actors at the same time to render complex animations.

Performing many statements in order

In this recipe, we will execute many statements for an actor in a specific order. We will add eight statements to control a sequence of movements for a bee.

Getting ready

We have to be working on a project with at least one actor. Therefore, we will create a new project and set a simple scene with a few actors:

Select File | New... in the main menu to start a new project. A dialog box will display the six predefined templates with their thumbnail previews in the Templates tab.

Select GrassyProject.a3p as the desired template for the new project and click on OK. Alice will display a grassy ground with a light blue sky.

Click on Edit Scene, at the lower-right corner of the scene preview. Alice will show a bigger preview of the scene and will display the Model Gallery at the bottom.

Add an instance of the Bee class to the scene, and enter bee for the name of this new instance. First, Alice will create the MyBee class to extend Bee. Then, Alice will create an instance of MyBee named bee.
Follow the steps explained in the Creating a new instance from a class in a gallery recipe, in the article Alice 3: Making Simple Animations with Actors. Add an instance of the PurpleFlower class, and enter purpleFlower for the name of this new instance.

Add another instance of the PurpleFlower class, and enter purpleFlower2 for the name of this new instance. The additional flower may be placed on top of the previously added flower.

Add an instance of the ForestSky class to the scene.

Place the bee and the two flowers as shown in the next screenshot:

How to do it...

Follow these steps to execute many statements for the bee with a specific order:

Open an existing project with one actor added to the scene.

Click on Edit Code, at the lower-right corner of the big scene preview. Alice will show a smaller preview of the scene and will display the Code Editor on a panel located at the right-hand side of the main window.

Click on the class: MyScene drop-down list and the list of classes that are part of the scene will appear. Select MyScene | Edit run.

Select the desired actor in the instance drop-down list located at the left-hand side of the main window, below the small scene preview. For example, you can select bee.

Make sure that part: none is selected in the drop-down list located at the right-hand side of the chosen instance.

Activate the Procedures tab. Alice will display the procedures for the previously selected actor.

Drag the pointAt procedure and drop it in the drop statement here area located below the do in order label, inside the run tab. Because the instance name is bee, the pointAt statement contains the this.bee and pointAt labels followed by the target parameter and its question marks ???. A list with all the possible instances to pass to the first parameter will appear. Click on this.purpleFlower.
The following code will be displayed, as shown in the next screenshot:

this.bee.pointAt(this.purpleFlower)

Drag the moveTo procedure and drop it below the previously dropped procedure call. A list with all the possible instances to pass to the first parameter will appear. Select this.purpleFlower getPart ??? and then IStemMiddle_IStemTop_IHPistil_IHPetal01, as shown in the following screenshot:

Click on the more... drop-down menu button that appears at the right-hand side of the recently dropped statement. Click on duration and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears. Click on style and then on BEGIN_AND_END_ABRUPTLY. The following code will be displayed as the second statement:

this.bee.moveTo(this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY)

Drag the delay procedure and drop it below the previously dropped procedure call. A list with all the predefined direction values to pass to the first parameter will appear. Select 2.0 and the following code will be displayed as the third statement:

this.bee.delay(2.0)

Drag the moveAwayFrom procedure and drop it below the previously dropped procedure call. Select 0.25 for the first parameter. Click on the more... drop-down menu button that appears and select this.purpleFlower getPart ??? and then IStemMiddle_IStemTop_IHPistil_IHPetal01. Click on the additional more... drop-down menu button, on duration and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears, on style and then on BEGIN_ABRUPTLY_AND_END_GENTLY. The following code will be displayed as the fourth statement:

this.bee.moveAwayFrom(0.25, this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_ABRUPTLY_AND_END_GENTLY)

Drag the turnToFace procedure and drop it below the previously dropped procedure call. Select this.purpleFlower2 getPart ???
and then IStemMiddle_IStemTop_IHPistil_IHPetal05. Click on the additional more... drop-down menu button, on duration and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears, on style and then on BEGIN_ABRUPTLY_AND_END_GENTLY. The following code will be displayed as the fifth statement:

this.bee.turnToFace(this.purpleFlower2.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal05), duration: 1.0, style: BEGIN_ABRUPTLY_AND_END_GENTLY)

Drag the moveTo procedure and drop it below the previously dropped procedure call. Select this.purpleFlower2 getPart ??? and then IStemMiddle_IStemTop_IHPistil_IHPetal05. Click on the additional more... drop-down menu button, on duration and then on 1.0 in the cascade menu that appears. Click on the new more... drop-down menu that appears, on style and then on BEGIN_AND_END_ABRUPTLY. The following code will be displayed as the sixth statement:

this.bee.moveTo(this.purpleFlower2.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal05), duration: 1.0, style: BEGIN_AND_END_GENTLY)

Drag the delay procedure and drop it below the previously dropped procedure call. A list with all the predefined direction values to pass to the first parameter will appear. Select 2.0 and the following code will be displayed as the seventh statement:

this.bee.delay(2.0)

Drag the move procedure and drop it below the previously dropped procedure call. Select FORWARD and then 10.0. Click on the more... drop-down menu button, on duration and then on 10.0 in the cascade menu that appears. Click on the additional more... drop-down menu that appears, on asSeenBy and then on this.bee. Click on the new more... drop-down menu that appears, on style and then on BEGIN_AND_END_ABRUPTLY. The following code will be displayed as the eighth and final statement.
The following screenshot shows the eight statements that compose the run procedure:

this.bee.move(FORWARD, duration: 10.0, asSeenBy: this.bee, style: BEGIN_ABRUPTLY_AND_END_GENTLY)

Select File | Save as... from Alice's main menu and give a new name to the project. Then you can make changes to the project according to your needs.

How it works...

When we run a project, Alice creates the scene instance, creates and initializes all the instances that compose the scene, and finally executes the run method defined in the MyScene class. By default, the statements we add to a procedure are included within the do in order block. We added eight statements to the do in order block, and therefore Alice will begin with the first statement:

this.bee.pointAt(this.purpleFlower)

Once the bee finishes executing the pointAt procedure, the execution flow goes on with the next statement specified in the do in order block. Thus, Alice will execute the following second statement after the first one finishes:

this.bee.moveTo(this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01), duration: 1.0, style: BEGIN_AND_END_ABRUPTLY)

The do in order statement encapsulates a group of statements with a synchronous execution. Thus, when we add many statements within a do in order block, these statements will run one after the other. Each statement requires its previous statement to finish before starting its execution, and therefore we can use the do in order block to group statements that must run with a specific order. The moveTo procedure moves the 3D model that represents the actor until it reaches the position of the other actor. The value for the target parameter is the instance of the other actor.
We want the bee to move to one of the petals of the first flower, purpleFlower, and therefore we passed this value to the target parameter:

this.purpleFlower.getPart(IStemMiddle_IStemTop_IHPistil_IHPetal01)

We called the getPart function for purpleFlower with IStemMiddle_IStemTop_IHPistil_IHPetal01 as the name of the part to return. This function allows us to retrieve one petal from the flower as an instance. We used the resulting instance as the target parameter for the moveTo procedure and we could make the bee move to the specific petal of the flower. Once the bee finishes executing the moveTo procedure, the execution flow goes on with the next statement specified in the do in order block. Thus, Alice will execute the following third statement after the second one finishes:

this.bee.delay(2.0)

The delay procedure puts the actor to sleep in its current position for the specified number of seconds. The next statement specified in the do in order block will run after waiting for two seconds. The statements added to the run procedure will perform the following visible actions in the specified order:

Point the bee at purpleFlower.

Begin and end abruptly a movement for the bee from its position to the petal named IStemMiddle_IStemTop_IHPistil_IHPetal01 of purpleFlower. The total duration for the animation must be 1 second.

Make the bee stay in its position for 2 seconds.

Move the bee away 0.25 units from the position of the petal named IStemMiddle_IStemTop_IHPistil_IHPetal01 of purpleFlower. Begin the movement abruptly but end it gently. The total duration for the animation must be 1 second.

Turn the bee to the face of the petal named IStemMiddle_IStemTop_IHPistil_IHPetal05 of purpleFlower2. Begin the movement abruptly but end it gently. The total duration for the animation must be 1 second.

Begin and end abruptly a movement for the bee from its position to the petal named IStemMiddle_IStemTop_IHPistil_IHPetal05 of purpleFlower2.
The total duration for the animation must be 1 second.

Make the bee stay in its position for 2 seconds.

Move the bee forward 10 units. Begin the movement abruptly but end it gently. The total duration for the animation must be 10 seconds. The bee will disappear from the scene.

The following image shows six screenshots of the rendered frames:

There's more...

When you work with the Alice code editor, you can temporarily disable statements. Alice doesn't execute the disabled statements. However, you can enable them again later. It is useful to disable one or more statements when you want to test the results of running the project without these statements, but you might want to enable them back to compare the results. To disable a statement, right-click on it and deactivate the IsEnabled option, as shown in the following screenshot:

The disabled statements will appear with diagonal lines, as shown in the next screenshot, and won't be considered at run-time:

To enable a disabled statement, right-click on it and activate the IsEnabled option.

Packt
20 Oct 2009
10 min read

Themes and Templates with Apache Struts 2

Extracting the templates

The first step to modifying an existing theme or creating our own is to extract the templates from the Struts 2 distribution. This actually has the advantageous performance side effect of keeping the templates in the file system (as opposed to in the library file), which allows FreeMarker to cache the templates properly. Caching the templates provides a performance boost and involves no work other than extracting the templates. The issue with caching templates contained in library files, however, will be fixed. If we examine the Struts 2 core JAR file, we'll see a /template folder. We just need to put that in our application's classpath. The best way to do this depends on your build and deploy environment. For example, if we're using Eclipse, the easiest thing to do is put the /template folder in our source folder; Eclipse should deploy them automatically.

A maze of twisty little passages

Right now, consider a form having only text fields and a submit button. We'll start by looking at the template for the text field tag. For the most part, Struts 2 custom tags are named similarly to the template files that define them. As we're using the "xhtml" theme, we'll look in our newly-created /template/xhtml folder. Templates are found in a folder with the same name as the theme. We find the <s:textfield> template in the /template/xhtml/text.ftl file. However, when we open it, we are disappointed to find it implemented by the following files: the controlheader.ftl file retrieved from the current theme's folder, text.ftl from the simple theme, and the controlfooter.ftl file from the "xhtml" theme. This is curious, but satisfactory for now. We'll assume what we need is in the controlheader.ftl file. However, upon opening that, we discover we actually need to look in the controlheader-core.ftl file. Opening that file shows us the table rows that we're looking for. Going walkabout through source code, both Java and FreeMarker, can be frustrating, but ultimately educational.
Developing the habit of looking at framework source can lead to a greater mastery of that framework. It can be frustrating at times, but is a critical skill. Even without a strong understanding of the FreeMarker template language, we can get a pretty good idea of what needs to be done by looking at the controlheader-core.ftl file. We notice that the template sets a convenience variable (hasFieldErrors) when the field being rendered has an error. We'll use that variable to control the style of the table row and cells of our text fields. This is how the class of the text field label is being set.

Creating our theme

To keep the template clean for the purpose of education, we'll go ahead and create a new theme. (Most of the things will be the same, but we'll strip out some unused code in the templates we modify.) While we have the possibility of extending an existing theme (see the Struts 2 documentation for details), we'll just create a new theme called s2wad by copying the xhtml templates into a folder called s2wad. We can now use the new theme in our <s:form> tag by specifying a theme attribute:

<s:form theme="s2wad" ... etc ...>

Subsequent form tags will now use our new s2wad theme. We decided not to extend the existing "xhtml" theme because we have a lot of tags with the "xhtml" string hard coded inside. In theory, it probably wasn't necessary to hard code the theme into the templates. However, we're going to modify only a few tags for the time being, while the remaining tags will remain hard coded (although incorrectly). In an actual project, we'd either extend an existing theme or spend more time cleaning up the theme we've created (along with the "xhtml" theme, and provide corrective patches back to the Struts 2 project). First, we'll modify controlheader.ftl to use the theme parameter to load the appropriate controlheader-core.ftl file.
Arguably, this is how the template should be implemented anyway, even though we could hard code in the new s2wad theme. Next, we'll start on controlheader-core.ftl. As our site will never use the top label position, we'll remove that. Doing this isn't necessary, but will keep it cleaner for our use. The controlheader-core.ftl template creates a table row for each field error for the field being rendered, and creates the table row containing the field label and input field itself. We want to add a class to both the table row and the table cells containing the field label and input field. By adding a class to both the row itself and each of the two table cells, we maximize our ability to apply CSS styles. Even if we end up styling only one or the other, it's convenient to have the option. We'll also strip out the FreeMarker code that puts the required indicator to the left of the label, once again, largely to keep things clean.

Projects will normally have a unified look and feel. It's reasonable to remove unused functionality, and if we're already going through the trouble to create a new theme, then we might as well do that. We're also going to clean up the template a little bit by consolidating how we handle the presence of field errors. Instead of putting several FreeMarker <#if> directives throughout the template, we'll create some HTML attributes at the top of the template and use them in the table row and table cells later on. Finally, we'll indent the template file to make it easier to read. This may not always be a viable technique in production, as the extra spaces may be rendered improperly (particularly across browsers), possibly depending on what we end up putting in the tag. For now, imagine that we're using the default "required" indicator, an asterisk, but it's conceivable we might want to use something like an image. Whitespace is something to be aware of when dealing with HTML.
Our modified controlheader-core.ftl file now looks like this:

<#assign hasFieldErrors = parameters.name?exists && fieldErrors?exists && fieldErrors[parameters.name]?exists/>
<#if hasFieldErrors>
  <#assign labelClass = "class='errorLabel'"/>
  <#assign trClass = "class='hasErrors'"/>
  <#assign tdClass = "class='tdLabel hasErrors'"/>
<#else>
  <#assign labelClass = "class='label'"/>
  <#assign trClass = ""/>
  <#assign tdClass = "class='tdLabel'"/>
</#if>
<#if hasFieldErrors>
  <#list fieldErrors[parameters.name] as error>
    <tr errorFor="${parameters.id}" class="hasErrors">
      <td>&nbsp;</td>
      <td class="hasErrors"><#rt/>
        <span class="errorMessage">${error?html}</span><#t/>
      </td><#lt/>
    </tr>
  </#list>
</#if>
<tr ${trClass}>
  <td ${tdClass}>
    <#if parameters.label?exists>
      <label <#t/>
        <#if parameters.id?exists>
          for="${parameters.id?html}" <#t/>
        </#if>
        ${labelClass} ><#t/>
        ${parameters.label?html}<#t/>
        <#if parameters.required?default(false)>
          <span class="required">*</span><#t/>
        </#if>
        :<#t/>
        <#include "/${parameters.templateDir}/s2e2e/tooltip.ftl" />
      </label><#t/>
    </#if>
  </td><#lt/>

It's significantly different when compared to the controlheader-core.ftl file of the "xhtml" theme. However, it has the same functionality for our application, with the addition of the new hasErrors class applied to both the table row and cells for the recipe's name and description fields. We've also slightly modified where the field errors are displayed (it is no longer centered around the entire input field row, but directly above the field itself). We'll also modify the controlheader.ftl template to apply the hasErrors style to the table cell containing the input field. This template is much simpler and includes only our new hasErrors class and the original align code. Note that we can use the variable hasFieldErrors, which is defined in controlheader-core.ftl. This is a valuable technique, but has the potential to lead to spaghetti code.
It would probably be better to define it in the controlheader.ftl template.

<#include "/${parameters.templateDir}/${parameters.theme}/controlheader-core.ftl" />
<td<#if hasFieldErrors>
    class="hasErrors"<#t/>
</#if>
<#if parameters.align?exists>
    align="${parameters.align?html}"<#t/>
</#if>
><#t/>

We'll create a style for the table cells with the hasErrors class, setting the background to be just a little red. Our new template sets the hasErrors class on both the label and the input field table cells, and we've collapsed our table borders, so this will create a table row with a light red background.

.hasErrors td { background: #fdd; }

Now, a missing Name or Description will give us a more noticeable error, as shown in the following screenshot:

This is a fairly simple example. However, it does show that it's pretty straightforward to begin customizing our own templates to match the requirements of the application. By encapsulating some of the view layer inside the form tags, our JSP files are kept significantly cleaner.

Other uses of templates

Anything we can do in a typical JSP page can be done in our templates. We don't have to use Struts 2's template support. We can do many similar things in a JSP custom tag file (or a Java-based tag), but we'd lose some of the functionality that's already been built. Some potential uses of templates might include the addition of accessibility features across an entire site, allowing them to be encapsulated within concise JSP notation. Enhanced JavaScript functionality could be added to all fields, or only specific fields of a form, including things such as detailed help or informational pop-ups. This overlaps somewhat with the existing tooltip support, but we might have custom usage requirements or our own framework that we need to support. Struts 2 now also ships with a Java-based theme that avoids the use of FreeMarker tags. These tags provide a noticeable speed benefit. However, only a few basic tags are supported at this time.
It's bundled as a plug-in, which can be used as a launching point for our own Java-based tags.

Summary

Themes and templates provide another means of encapsulating functionality and/or appearance across an entire application. The use of existing themes can be a great benefit, particularly when doing early prototyping of a site, and they are often sufficient for the finished product. Dealing effectively with templates is largely a matter of digging through the existing template source. It also includes determining what our particular needs are, and modifying or creating our own themes, adding and removing functionality as appropriate. While this article only takes a brief look at templates, it covers the basics and opens the door to implementing any enhancements we may require.

If you have read this article, you may be interested to view:

Exceptions and Logging in Apache Struts 2
Documenting our Application in Apache Struts 2 (part 1)
Documenting our Application in Apache Struts 2 (part 2)