How-To Tutorials - Web Development


Using Google Maps APIs with Knockout.js

Packt
22 Sep 2015
7 min read
In this article, Adnan Jaswal, the author of the book KnockoutJS by Example, shows how to render a map in the application and allow users to place markers on it. The users will also be able to get directions between two addresses, both as a description and as a route on the map. (For more resources related to this topic, see here.)

Placing markers on the map

This feature is about placing markers on the map for the selected addresses. To implement this feature, we will:

Update the address model to hold the marker
Create a method to place a marker on the map
Create a method to remove an existing marker
Register subscribers to trigger the removal of the existing markers when an address changes
Update the module to add a marker to the map

Let's get started by updating the address model. Open the MapsApplication module and locate the AddressModel variable. Add an observable to this model to hold the marker, like this:

/* generic model for address */
var AddressModel = function() {
  this.marker = ko.observable();
  this.location = ko.observable();
  this.streetNumber = ko.observable();
  this.streetName = ko.observable();
  this.city = ko.observable();
  this.state = ko.observable();
  this.postCode = ko.observable();
  this.country = ko.observable();
};

Next, we create a method that will create and place the marker on the map. This method should take the location and the address model as parameters. The method will also store the marker in the address model. Use the google.maps.Marker class to create and place the marker. Our implementation of this method looks similar to this:

/* method to place a marker on the map */
var placeMarker = function (location, value) {
  // create and place marker on the map
  var marker = new google.maps.Marker({
    position: location,
    map: map
  });
  //store the newly created marker in the address model
  value().marker(marker);
};

Now, create a method that checks for an existing marker in the address model and removes it from the map. Name this method removeMarker. It should look similar to this:

/* method to remove old marker from the map */
var removeMarker = function(address) {
  if(address != null) {
    address.marker().setMap(null);
  }
};

The next step is to register subscribers that will trigger when an address changes. We will use these subscribers to trigger the removal of the existing markers. We will use the beforeChange event of the subscribers so that we have access to the existing markers in the model. Add subscribers to the fromAddress and toAddress observables to trigger on the beforeChange event, and remove the existing markers on the trigger. To achieve this, I created a method called registerSubscribers. This method is called from the init method of the module and registers the two subscribers that trigger calls to removeMarker. Our implementation looks similar to this:

/* method to register subscriber */
var registerSubscribers = function () {
  //fire before from address is changed
  mapsModel.fromAddress.subscribe(function(oldValue) {
    removeMarker(oldValue);
  }, null, "beforeChange");
  //fire before to address is changed
  mapsModel.toAddress.subscribe(function(oldValue) {
    removeMarker(oldValue);
  }, null, "beforeChange");
};

We are now ready to bring the methods we created together and place a marker on the map. Create a method called updateAddress. This method should take two parameters: the place object and the value binding. The method should call populateAddress to extract and populate the address model, and placeMarker to place a new marker on the map.
Our implementation looks similar to this:

/* method to update the address model */
var updateAddress = function(place, value) {
  populateAddress(place, value);
  placeMarker(place.geometry.location, value);
};

Call the updateAddress method from the event listener in the addressAutoComplete custom binding:

google.maps.event.addListener(autocomplete, 'place_changed', function() {
  var place = autocomplete.getPlace();
  console.log(place);
  updateAddress(place, value);
});

Open the application in your browser and select from and to addresses. You should now see markers appear for the two selected addresses. In our browser, the application looks similar to the following screenshot:

Displaying a route between the markers

The last feature of the application is to draw a route between the two address markers. To implement this feature, we will:

Create and initialize the directions service
Request routing information from the directions service and draw the route
Update the view to add a button to get directions

Let's get started by creating and initializing the directions service. We will use the google.maps.DirectionsService class to get the routing information and the google.maps.DirectionsRenderer class to draw the route on the map. Create two attributes in the MapsApplication module: one for the directions service and the other for the directions renderer:

/* the directions service */
var directionsService;

/* the directions renderer */
var directionsRenderer;

Next, create a method to create and initialize the preceding attributes:

/* initialise the direction service and display */
var initDirectionService = function () {
  directionsService = new google.maps.DirectionsService();
  directionsRenderer = new google.maps.DirectionsRenderer({suppressMarkers: true});
  directionsRenderer.setMap(map);
};

Call this method from the mapPanel custom binding handler after the map has been created and centered. The updated mapPanel custom binding should look similar to this:

/* custom binding handler for maps panel */
ko.bindingHandlers.mapPanel = {
  init: function(element, valueAccessor){
    map = new google.maps.Map(element, {
      zoom: 10
    });
    centerMap(localLocation);
    initDirectionService();
  }
};

The next step is to create a method that will build and fire a request to the directions service to fetch the direction information. The direction information will then be used by the directions renderer to draw the route on the map. Our implementation of this method looks similar to this:

/* method to get directions and display route */
var getDirections = function () {
  //create request for directions
  var routeRequest = {
    origin: mapsModel.fromAddress().location(),
    destination: mapsModel.toAddress().location(),
    travelMode: google.maps.TravelMode.DRIVING
  };
  //fire request to route based on request
  directionsService.route(routeRequest, function(response, status) {
    if (status == google.maps.DirectionsStatus.OK) {
      directionsRenderer.setDirections(response);
    } else {
      console.log("No directions returned ...");
    }
  });
};

We create a routing request in the first part of the method. The request object consists of origin, destination, and travelMode. The origin and destination values are set to the locations of the from and to addresses. The travelMode is set to google.maps.TravelMode.DRIVING, which, as the name suggests, specifies that we require a driving route. Add the getDirections method to the return statement of the module, as we will bind it to a button in the view.
One last step before we can work on the view is to clear the route on the map when the user selects a new address. This can be achieved by adding an instruction to clear the route information in the subscribers we registered earlier. Update the subscribers in the registerSubscribers method to clear the routes on the map:

/* method to register subscriber */
var registerSubscribers = function () {
  //fire before from address is changed
  mapsModel.fromAddress.subscribe(function(oldValue) {
    removeMarker(oldValue);
    directionsRenderer.set('directions', null);
  }, null, "beforeChange");
  //fire before to address is changed
  mapsModel.toAddress.subscribe(function(oldValue) {
    removeMarker(oldValue);
    directionsRenderer.set('directions', null);
  }, null, "beforeChange");
};

The last step is to update the view. Open the view and add a button under the address input components. Add a click binding to the button and bind it to the getDirections method of the module. Add an enable binding to make the button clickable only after the user has selected the two addresses. The button should look similar to this:

<button type="button" class="btn btn-default"
  data-bind="enable: MapsApplication.mapsModel.fromAddress && MapsApplication.mapsModel.toAddress,
             click: MapsApplication.getDirections">
  Get Directions
</button>

Open the application in your browser and select the From address and To address. The address details and markers should appear for the two selected addresses. Click on the Get Directions button. You should see the route drawn on the map between the two markers. In our browser, the application looks similar to the following screenshot:

Summary

In this article, we walked through placing markers on the map and displaying the route between the markers.


Designing and Creating Database Tables in Ruby on Rails

Packt
23 Oct 2009
9 min read
Background Information The User Management Module is created for a website called 'TaleWiki'. TaleWiki is a website about user submitted tales and stories, which can be added, modified, deleted, and published by the user, depending on the Role or Privileges the user has. Taking into consideration this small piece of information, we will design and create tables that will become the back-end for the User Management functionality. Designing the Tables To Design and to create tables, we need to understand the entities and their relationship, the schema corresponding to the entities, and then the table creation queries. If we go step-by-step, we can say that following are the steps in designing the tables for the User Management module: Designing the E-R model Deriving the Schema from the E-R model Creating the Tables from the Schema So, let us follow the steps. Designing the E-R Model To design the E-R model, let us first look at what we have understood about the data required by the functionalities, which we just discussed. It tells us that 'only the Users with a particular Role can access TaleWiki'. Now we can consider this as our 'problem statement' for our E-R model design. If you observe closely, the statement is vague. It doesn't tell about the particular Roles. However, for the E-R design, this will suffice as it clearly mentions the two main entities, if we use the E-R terminology. They are: User Role Let us look at the User entity. Now this entity represents a real-world user. It is not difficult to describe its attributes. Keeping a real-world user in mind and the functionalities discussed for managing a user, we can say that the User entity should have the following attributes: Id: It will identify the different users, and it will be unique. User name: The name which will be displayed with the submitted story. Password: The pass key with which the user will be authenticated. First name: The first name of the user. Last name: The last name of the user. The combination of the first and last name will be the real name of the user. Age: The age of the user. This will help in deciding whether or not the user is of required age which is 15. E-mail id: The email id of the user in which he/she would like to get emails from the administrator regarding the submissions. Country: To keep track of the 'geographic distribution' of users. Role: To know what privileges are granted for the user. The Role is required because the problem statement mentions "User with a particular Role". The entity diagram will be as follows: Next, let us look at the Role entity. Role, as already discussed, will represent the privileges a user can have. And as these privileges are static, the Role entity won't need to have the attribute to store the privileges. The important point about the static privileges that you have to keep in mind is that they will have to be programmatically checked against a user. In other words, the privileges are not present in the database and there can only be a small number of Roles with predefined privileges. Keeping this in mind, we can say that the Role entity will have the following attributes: Id: The unique identification number for the Role. Name: The name with which the id will be known and that will be displayed along with the user name. The entity diagram for Role entity will be as follows: We have completed two out of three steps in designing the E-R model. Next, we have to define how the User entity is related with the Role entity. 
From the problem statement we can say that a user will definitely have a Role. And the functionality for assigning the Role tells us that a user can have only one Role. So if we combine these two, we can say that 'A user will have only one Role but different users can have the same Role'. In simple terms, a Role—let us say normal user—can be applied to different users such as John or Jane. However, the users John or Jane cannot be both a normal user and an administrator. In technical terms, we can say that a Role has a one-to-many relationship with the User entity and a User has a many-to-one relationship with a Role. Diagrammatically, it will be as follows:

One piece of the puzzle is still left. If you remember, there is one more entity called Story. We had found that each story has a submitter, and the submitter is a user. So that means there is a relationship between the User and the Story entity. Now, a user, let us say John or Jane, can submit many stories. However, the same story cannot be submitted by more than one user. On the basis of this, we can say that a User has a one-to-many relationship with the Story entity and a Story has a many-to-one relationship with a User. In the E-R diagram it will be as follows:

The final E-R design including all the entities and the attributes will be as follows:

That completes our E-R design step. Next, we will derive the schema from the E-R model.

Deriving the Schema

We have all we need to derive the schema for our purpose. While deriving a schema from an E-R model, it is always a good choice to start with the entities at the 'one' end of a 'one-to-many' relationship. In our case, it is the Role entity. As we did in the previous chapter, let us start by providing the details for each attribute of the Role entity. The following is the schema for the Role entity:

Attribute        Data type of the attribute    Length of the acceptable value
Id               Integer                        10
Name             Varchar                        25

Next, let us look at the schema of the User entity. As it is at the 'many' end of the 'one-to-many' relationship, the Role attribute will be replaced by the Id of the Role. The schema will be as follows:

Attribute        Data type of the attribute    Length of the acceptable value
Id               Integer                        10
User name        Varchar                        50
First name       Varchar                        50
Last name        Varchar                        50
Password         Varchar                        15
Age              Integer                        3
e-mail id        Varchar                        25
Country          Varchar                        20
Id of the Role   Integer                        10

Now, let us visit the Story entity. The attributes of the entity were:

Id: This is the primary key attribute as it can uniquely identify a story.
Heading: The title of the story.
Body text: The body of the story.
Date of Submission: The day the user submitted the story.
Source: The source from where the story was found. If it is written by the user himself/herself, the source will be the user's id.
Genre: The category of the story.
User: The user who submitted the story.

Name of the attribute    Data type of the attribute    Length of the acceptable value
Id                       Integer                        10
Title                    Varchar                        100
Body Text                Varchar                        1000
Date of Submission       Date
Source                   Varchar                        50
Status                   Varchar                        15
Id of Genre              Integer                        10
Id of the User           Integer                        10

The schema has been derived and now we can move to the last part of the database design—creation of the tables.
Creating the Tables

Looking at the schema required for tables in Ruby on Rails, here is the table creation statement for the Role schema:

CREATE TABLE `roles` (
  `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `name` VARCHAR( 25 ) NOT NULL,
  `description` VARCHAR( 100 ) NOT NULL
) ENGINE = innodb;

Next comes the table creation statement for the User schema. Note that here also we are following the one-to-many path, that is, the table at the 'one' end is created first. Whenever there is a one-to-many relationship between entities, you will have to create the table for the entity at the 'one' end first. Otherwise you will not be able to create a foreign key reference in the table for the entity at the 'many' end, and if you try to create one, you will get an error (obviously). So here is the create table statement for the User schema:

CREATE TABLE `users` (
  `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `user_name` VARCHAR( 50 ) NOT NULL,
  `password` VARCHAR( 15 ) NOT NULL,
  `first_name` VARCHAR( 50 ) NOT NULL,
  `last_name` VARCHAR( 50 ) NOT NULL,
  `age` INT( 3 ) NOT NULL,
  `email` VARCHAR( 25 ) NOT NULL,
  `country` VARCHAR( 20 ) NOT NULL,
  `role_id` INT NOT NULL,
  CONSTRAINT `fk_users_roles` FOREIGN KEY (`role_id`) REFERENCES `roles`( `id` ) ON DELETE CASCADE
) ENGINE = innodb;

Next, let us create the table for Story; we will call it the 'tales' table, and we will also add a foreign key reference to the users table in it. Here is the query for creating the table:

CREATE TABLE `tales` (
  `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `title` VARCHAR( 100 ) NOT NULL,
  `body_text` TEXT NOT NULL,
  `submission_date` DATE NOT NULL,
  `source` VARCHAR( 50 ) NOT NULL,
  `status` VARCHAR( 15 ) NOT NULL,
  `genre_id` INT NOT NULL,
  `user_id` INT NOT NULL,
  CONSTRAINT `fk_tales_genres` FOREIGN KEY (`genre_id`) REFERENCES genres( `id` )
) ENGINE = innodb;

Next, we will make a reference to the users table after executing the above query, with the following query:

ALTER TABLE `tales` ADD FOREIGN KEY ( `user_id` ) REFERENCES `users` (`id`) ON DELETE CASCADE;

That completes our task of creating the required tables and making the necessary changes to the tales table. The effect of this change will be visible to you when we implement session management in the next chapter. And incidentally, it completes the 'designing the tables' section. Let us move on to the development of the user management functionality.

Summary

In this article, we learned how to design and create tables for a User Management Module in Ruby on Rails. We looked at designing the E-R model, deriving the schema from the E-R model, and creating the tables from the schema.
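The statements above can also be exercised outside of Rails to confirm that the foreign keys behave as intended. The following is a minimal JDBC sketch, not part of the book's Rails code: the connection URL, database name, and credentials are hypothetical, and it assumes the MySQL Connector/J driver is on the classpath. It inserts a role, inserts a user that references it, and then deletes the role so that the ON DELETE CASCADE constraint removes the dependent user as well.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class FkSmokeTest {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; adjust the host, database, user, and password.
        String url = "jdbc:mysql://localhost:3306/talewiki";
        try (Connection con = DriverManager.getConnection(url, "root", "secret")) {
            // Insert a role and capture its generated id.
            PreparedStatement insRole = con.prepareStatement(
                "INSERT INTO roles (name, description) VALUES (?, ?)",
                Statement.RETURN_GENERATED_KEYS);
            insRole.setString(1, "Normal User");
            insRole.setString(2, "Can submit and edit own tales");
            insRole.executeUpdate();
            ResultSet keys = insRole.getGeneratedKeys();
            keys.next();
            int roleId = keys.getInt(1);

            // Insert a user that references the role through role_id.
            PreparedStatement insUser = con.prepareStatement(
                "INSERT INTO users (user_name, password, first_name, last_name, age, email, country, role_id) "
                + "VALUES (?, ?, ?, ?, ?, ?, ?, ?)");
            insUser.setString(1, "jane");
            insUser.setString(2, "not-a-real-hash");
            insUser.setString(3, "Jane");
            insUser.setString(4, "Doe");
            insUser.setInt(5, 25);
            insUser.setString(6, "jane@example.com");
            insUser.setString(7, "Australia");
            insUser.setInt(8, roleId);
            insUser.executeUpdate();

            // Deleting the role removes the dependent user as well, because of ON DELETE CASCADE.
            PreparedStatement delRole = con.prepareStatement("DELETE FROM roles WHERE id = ?");
            delRole.setInt(1, roleId);
            delRole.executeUpdate();
        }
    }
}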


The Kendo MVVM Framework

Packt
06 Sep 2013
19 min read
(For more resources related to this topic, see here.) Understanding MVVM – basics MVVM stands for Model ( M ), View ( V ), and View-Model ( VM ). It is part of a family of design patterns related to system architecture that separate responsibilities into distinct units. Some other related patterns are Model-View-Controller ( MVC ) and Model-View-Presenter ( MVP ). These differ on what each portion of the framework is responsible for, but they all attempt to manage complexity through the same underlying design principles. Without going into unnecessary details here, suffice it to say that these patterns are good for developing reliable and reusable code and they are something that you will undoubtedly benefit from if you have implemented them properly. Fortunately, the good JavaScript MVVM frameworks make it easy by wiring up the components for you and letting you focus on the code instead of the "plumbing". In the MVVM pattern for JavaScript through Kendo UI, you will need to create a definition for the data that you want to display and manipulate (the Model), the HTML markup that structures your overall web page (the View), and the JavaScript code that handles user input, reacts to events, and transforms the static markup into dynamic elements (the View-Model). Another way to put it is that you will have data (Model), presentation (View), and logic (View-Model). In practice, the Model is the most loosely-defined portion of the MVVM pattern and is not always even present as a unique entity in the implementation. The View-Model can assume the role of both Model and View-Model by directly containing the Model data properties within itself, instead of referencing them as a separate unit. This is acceptable and is also seen within ASP.NET MVC when a View uses the ViewBag or the ViewData collections instead of referencing a strongly-typed Model class. Don't let it bother you if the Model isn't as well defined as the View-Model and the View. The implementation of any pattern should be filtered down to what actually makes sense for your application. Simple data binding As an introductory example, consider that you have a web page that needs to display a table of data, and also provide the users with the ability to interact with that data, by clicking specifically on a single row or element. The data is dynamic, so you do not know beforehand how many records will be displayed. Also, any change should be reflected immediately on the page instead of waiting for a full page refresh from the server. How do you make this happen? A traditional approach would involve using special server-side controls that can dynamically create tables from a data source and can even wire-up some JavaScript interactivity. The problem with this approach is that it usually requires some complicated extra communication between the server and the web browser either through "view state", hidden fields, or long and ugly query strings. Also, the output from these special controls is rarely easy to customize or manipulate in significant ways and reduces the options for how your site should look and behave. Another choice would be to create special JavaScript functions to asynchronously retrieve data from an endpoint, generate HTML markup within a table and then wire up events for buttons and links. This is a good solution, but requires a lot of coding and complexity which means that it will likely take longer to debug and refine. It may also be beyond the skill set of a given developer without significant research. 
The third option, available through a JavaScript MVVM like Kendo UI, strikes a balance between these two positions by reducing the complexity of the JavaScript but still providing powerful and simple data binding features inside of the page. Creating the view Here is a simple HTML page to show how a view basically works: <!DOCTYPE html> <html > <head> <title>MVVM Demo 1</title> <script src ="/Scripts/kendo/jquery.js"></script> <script src ="/Scripts/kendo/kendo.all.js"></script> <link href="/Content/kendo/kendo.common.css" rel="stylesheet" /> <link href="/Content/kendo/kendo.default.css" rel="stylesheet" /> <style type="text/css"> th { width: 135px; } </style> </head> <body> <table> <caption>People Data</caption> <thead> <tr> <th>Name</th> <th>Hair Color</th> <th>Favorite Food</th> </tr> </thead> <tbody data-template="row-template" data-bind="source: people"></tbody> </table> </body> </html> Here we have a simple table element with three columns but instead of the body containing any tr elements, there are some special HTML5 data-* attributes indicating that something special is going on here. These data-* attributes do nothing by themselves, but Kendo UI reads them (as you will see below) and interprets their values in order to link the View with the View-Model. The data-bind attribute indicates to Kendo UI that this element should be bound to a collection of objects called people. The data-template attribute tells Kendo UI that the people objects should be formatted using a Kendo UI template. Here is the code for the template: <script id="row-template" type="text/x-kendo-template"> <tr> <td data-bind="text: name"></td> <td data-bind="text: hairColor"></td> <td data-bind="text: favoriteFood"></td> </tr> </script> This is a simple template that defines a tr structure for each row within the table. The td elements also have a data-bind attribute on them so that Kendo UI knows to insert the value of a certain property as the "text" of the HTML element, which in this case means placing the value in between <td> and </td> as simple text on the page. Creating the Model and View-Model In order to wire this up, we need a View-Model that performs the data binding. Here is the View-Model code for this View: <script type="text/javascript"> var viewModel = kendo.observable({ people: [ {name: "John", hairColor: "Blonde", favoriteFood: "Burger"}, {name: "Bryan", hairColor: "Brown", favoriteFood: "Steak"}, {name: "Jennifer", hairColor: "Brown", favoriteFood: "Salad"} ] }); kendo.bind($("body"), viewModel); </script> A Kendo UI View-Model is declared through a call to kendo.observable() which creates an observable object that is then used for the data-binding within the View. An observable object is a special object that wraps a normal JavaScript variable with events that fire any time the value of that variable changes. These events notify the MVVM framework to update any data bindings that are using that variable's value, so that they can update immediately and reflect the change. These data bindings also work both ways so that if a field bound to an observable object variable is changed, the variable bound to that field is also changed in real time. In this case, I created an array called people that contains three objects with properties about some people. This array, then, operates as the Model in this example since it contains the data and the definition of how the data is structured. 
At the end of this code sample, you can see the call to kendo.bind($("body"), viewModel) which is how Kendo UI actually performs its MVVM wiring. I passed a jQuery selector for the body tag to the first parameter since this viewModel object applies to the full body of my HTML page, not just a portion of it. With everything combined, here is the full source for this simplified example: <!DOCTYPE html> <html > <head> <title>MVVM Demo 1</title> <scriptsrc ="/Scripts/kendo/jquery.js"></script> <scriptsrc ="/Scripts/kendo/kendo.all.js"></script> <link href="/Content/kendo/kendo.common.css" rel="stylesheet" /> <link href="/Content/kendo/kendo.default.css" rel="stylesheet" /> <style type="text/css"> th { width: 135px; } </style> </head> <body> <table> <caption>People Data</caption> <thead> <tr> <th>Name</th> <th>Hair Color</th> <th>Favorite Food</th> </tr> </thead> <tbody data-template="row-template" data-bind="source: people"></tbody> </table> <script id="row-template" type="text/x-kendo-template"> <tr> <td data-bind="text: name"></td> <td data-bind="text: hairColor"></td> <td data-bind="text: favoriteFood"></td> </tr> </script> <script type="text/javascript"> var viewModel = kendo.observable({ people: [ {name: "John", hairColor: "Blonde", favoriteFood: "Burger"}, {name: "Bryan", hairColor: "Brown", favoriteFood: "Steak"}, { name: "Jennifer", hairColor: "Brown", favoriteFood: "Salad" } ] }); kendo.bind($("body"), viewModel); </script> </body> </html> Here is a screenshot of the page in action. Note how the data from the JavaScript people array is populated into the table automatically: Even though this example contains a Model, a View, and a View-Model, all three units appear in the same HTML file. You could separate the JavaScript into other files, of course, but it is also acceptable to keep them together like this. Hopefully you are already seeing what sort of things this MVVM framework can do for you. Observable data binding Binding data into your HTML web page (View) using declarative attributes is great, and very useful, but the MVVM framework offers some much more significant functionality that we didn't see in the last example. Instead of simply attaching data to the View and leaving it at that, the MVVM framework maintains a running copy of all of the View-Model's properties, and keeps references to those properties up to date in real time. This is why the View-Model is created with a function called "observable". The properties inside, being observable, report changes back up the chain so that the data-bound fields always reflect the latest data. Let's see some examples. Adding data dynamically Building on the example we just saw, add this horizontal rule and form just below the table in the HTML page: <hr /> <form> <header>Add a Person</header> <input type="text" name="personName" placeholder="Name" data-bind="value: personName" /><br /> <input type="text" name="personHairColor" placeholder="Hair Color" data-bind="value: personHairColor" /><br /> <input type="text" name="personFavFood" placeholder="Favorite Food" data-bind="value: personFavFood" /><br /> <button type="button" data-bind="click: addPerson">Add</button> </form> This adds a form to the page so that a user can enter data for a new person that should appear in the table. Note that we have added some data-bind attributes, but this time we are binding the value of the input fields not the text. 
Note also that we have added a data-bind attribute to the button at the bottom of the form that binds the click event of that button with a function inside our View-Model. By binding the click event to the addPerson JavaScript method, the addPerson method will be fired every time this button is clicked. These bindings keep the value of those input fields linked with the View-Model object at all times. If the value in one of these input fields changes, such as when a user types something in the box, the View-Model object will immediately see that change and update its properties to match; it will also update any areas of the page that are bound to the value of that property so that they match the new data as well. The binding for the button is special because it allows the View-Model object to attach its own event handler to the click event for this element. Binding an event handler to an event is nothing special by itself, but it is important to do it this way (through the data-bind attribute) so that the specific running View-Model instance inside of the page has attached one of its functions to this event so that the code inside the event handler has access to this specific View-Model's data properties and values. It also allows for a very specific context to be passed to the event that would be very hard to access otherwise. Here is the code I added to the View-Model just below the people array. The first three properties that we have in this example are what make up the Model. They contain that data that is observed and bound to the rest of the page: personName: "", // Model property personHairColor: "", // Model property personFavFood: "", // Model property addPerson: function () { this.get("people").push({ name: this.get("personName"), hairColor: this.get("personHairColor"), favoriteFood: this.get("personFavFood") }); this.set("personName", ""); this.set("personHairColor", ""); this.set("personFavFood", ""); } The first several properties you see are the same properties that we are binding to in the input form above. They start with an empty value because the form should not have any values when the page is first loaded. It is still important to declare these empty properties inside the View-Model in order that their value can be tracked when it changes. The function after the data properties, addPerson , is what we have bound to the click event of the button in the input form. Here in this function we are accessing the people array and adding a new record to it based on what the user has supplied in the form fields. Notice that we have to use the this.get() and this.set() functions to access the data inside of our View-Model. This is important because the properties in this View-Model are special observable properties so accessing their values directly may not give you the results you would expect. The most significant thing that you should notice about the addPerson function is that it is interacting with the data on the page through the View-Model properties. It is not using jQuery, document.querySelector, or any other DOM interaction to read the value of the elements! Since we declared a data-bind attribute on the values of the input elements to the properties of our View-Model, we can always get the value from those elements by accessing the View-Model itself. The values are tracked at all times. This allows us to both retrieve and then change those View-Model properties inside the addPerson function and the HTML page will show the changes right as it happens. 
By calling this.set() on the properties and changing their values to an empty string, the HTML page will clear the values that the user just typed and added to the table. Once again, we change the View-Model properties without needing access to the HTML ourselves. Here is the complete source of this example: <!DOCTYPE html> <html > <head> <title>MVVM Demo 2</title> <scriptsrc ="/Scripts/kendo/jquery.js"></script> <scriptsrc ="/Scripts/kendo/kendo.all.js"></script> <link href="/Content/kendo/kendo.common.css" rel="stylesheet" /> <link href="/Content/kendo/kendo.default.css" rel="stylesheet" /> <style type="text/css"> th { width: 135px; } </style> </head> <body> <table> <caption>People Data</caption> <thead> <tr> <th>Name</th> <th>Hair Color</th> <th>Favorite Food</th> </tr> </thead> <tbody data-template="row-template" data-bind="source: people"></tbody> </table> <hr /> <form> <header>Add a Person</header> <input type="text" name="personName" placeholder="Name"data-bind="value: personName" /><br /> <input type="text" name="personHairColor" placeholder="Hair Color"data-bind="value: personHairColor" /><br /> <input type="text" name="personFavFood" placeholder="Favorite Food"data-bind="value: personFavFood" /><br /> <button type="button" data-bind="click: addPerson">Add</button> </form> <script id="row-template" type="text/x-kendo-template"> <tr> <td data-bind="text: name"></td> <td data-bind="text: hairColor"></td> <td data-bind="text: favoriteFood"></td> </tr> </script> <script type="text/javascript"> var viewModel = kendo.observable({ people: [ {name: "John", hairColor: "Blonde", favoriteFood: "Burger"}, {name: "Bryan", hairColor: "Brown", favoriteFood: "Steak"}, {name: "Jennifer", hairColor: "Brown", favoriteFood: "Salad"} ], personName: "", personHairColor: "", personFavFood: "", addPerson: function () { this.get("people").push({ name: this.get("personName"), hairColor: this.get("personHairColor"), favoriteFood: this.get("personFavFood") }); this.set("personName", ""); this.set("personHairColor", ""); this.set("personFavFood", ""); } }); kendo.bind($("body"), viewModel); </script> </body> </html> And here is a screenshot of the page in action. You will see that one additional person has been added to the table by filling out the form. Try it out yourself to see the immediate interaction that you get with this code: Using observable properties in the View We just saw how simple it is to add new data to observable collections in the View-Model, and how this causes any data-bound elements to immediately show that new data. Let's add some more functionality to illustrate working with individual elements and see how their observable values can update content on the page. To demonstrate this new functionality, I have added some columns to the table: <table> <caption>People Data</caption> <thead> <tr> <th>Name</th> <th>Hair Color</th> <th>Favorite Food</th> <th></th> <th>Live Data</th> </tr> </thead> <tbody data-template="row-template" data-bind="source: people"></tbody> </table> The first new column has no heading text but will contain a button on the page for each of the table rows. The second new column will be displaying the value of the "live data" in the View-Model for each of the objects displayed in the table. 
Here is the updated row template:

<script id="row-template" type="text/x-kendo-template">
  <tr>
    <td><input type="text" data-bind="value: name" /></td>
    <td><input type="text" data-bind="value: hairColor" /></td>
    <td><input type="text" data-bind="value: favoriteFood" /></td>
    <td><button type="button" data-bind="click: deletePerson">Delete</button></td>
    <td><span data-bind="text: name"></span>&nbsp;-&nbsp;
        <span data-bind="text: hairColor"></span>&nbsp;-&nbsp;
        <span data-bind="text: favoriteFood"></span></td>
  </tr>
</script>

Notice that I have replaced all of the simple text data-bind attributes with input elements and value data-bind attributes. I also added a button with a click data-bind attribute and a column that displays the text of the three properties so that you can see the observable behavior in real time. The View-Model gets a new method for the delete button:

deletePerson: function (e) {
  var person = e.data;
  var people = this.get("people");
  var index = people.indexOf(person);
  people.splice(index, 1);
}

When this function is called through the binding that Kendo UI has created, it passes an event argument, here called e, into the function that contains a data property. This data property is a reference to the model object that was used to render the specific row of data. In this function, I created a person variable as a reference to the person in this row and a people variable as a reference to the people array; we then use the index of this person to splice it out of the array. When you click on the Delete button, you can observe the table reacting immediately to the change. Here is the full source code of the updated row template and View-Model:

<script id="row-template" type="text/x-kendo-template">
  <tr>
    <td><input type="text" data-bind="value: name" /></td>
    <td><input type="text" data-bind="value: hairColor" /></td>
    <td><input type="text" data-bind="value: favoriteFood" /></td>
    <td><button type="button" data-bind="click: deletePerson">Delete</button></td>
    <td><span data-bind="text: name"></span>&nbsp;-&nbsp;
        <span data-bind="text: hairColor"></span>&nbsp;-&nbsp;
        <span data-bind="text: favoriteFood"></span></td>
  </tr>
</script>

<script type="text/javascript">
  var viewModel = kendo.observable({
    people: [
      {name: "John", hairColor: "Blonde", favoriteFood: "Burger"},
      {name: "Bryan", hairColor: "Brown", favoriteFood: "Steak"},
      {name: "Jennifer", hairColor: "Brown", favoriteFood: "Salad"}
    ],
    personName: "",
    personHairColor: "",
    personFavFood: "",
    addPerson: function () {
      this.get("people").push({
        name: this.get("personName"),
        hairColor: this.get("personHairColor"),
        favoriteFood: this.get("personFavFood")
      });
      this.set("personName", "");
      this.set("personHairColor", "");
      this.set("personFavFood", "");
    },
    deletePerson: function (e) {
      var person = e.data;
      var people = this.get("people");
      var index = people.indexOf(person);
      people.splice(index, 1);
    }
  });
  kendo.bind($("body"), viewModel);
</script>
</body>
</html>

Here is a screenshot of the new page:

Click on the Delete button to see an entry disappear. You can also see that I have added a new person to the table, and that I have made changes in the input boxes of the table and those changes immediately show up on the right-hand side. This indicates that the View-Model is keeping track of the live data and updating its bindings accordingly.


Schema Validation with Oracle JDeveloper - XDK 11g

Packt
15 Oct 2009
7 min read
JDeveloper built-in schema validation Oracle JDeveloper 11g has built-in support for XML schema validation. If an XML document includes a reference to an XML schema, the XML document may be validated with the XML schema using the built-in feature. An XML schema may be specified in an XML document using the xsi:noNamespaceSchemaLocation attribute or the xsi:namespaceSchemaLocation attribute. Before we discuss when to use which attribute, we need to define the target namespace. A schema is a collection of type definitions and element declarations whose names belong to a particular namespace called a target namespace. Thus, a target namespace distinguishes between type definitions and element declarations from different collections. An XML schema doesn't need to have a target namespace. If the XML schema has a target namespace, specify the schema's location in an XML document using the xsi:namespaceSchemaLocation attribute. If the XML schema does not have a target namespace, specify the schema location using the xsi:noNamespaceSchemaLocation attribute. The xsi:noNamespaceSchemaLocation and xsi:namespaceSchemaLocation attributes are a hint to the processor about the location of an XML schema document. The example XML schema document that we shall create is catalog.xsd and is listed here: <?xml version="1.0" encoding="utf-8"?><xsd:schema > <xsd:element name="catalog"type="catalogType"/> <xsd:complexType name="catalogType"> <xsd:sequence> <xsd:element ref="journal" minOccurs="0"maxOccurs="unbounded"/> </xsd:sequence> </xsd:complexType> <xsd:element name="journal" type="journalType"/> <xsd:complexType name="journalType"> <xsd:sequence> <xsd:element ref="article" minOccurs="0" maxOccurs="unbounded"/> </xsd:sequence> <xsd:attribute name="title" type="xsd:string"/> <xsd:attribute name="publisher" type="xsd:string"/> <xsd:attribute name="edition" type="xsd:string"/> </xsd:complexType> <xsd:element name="article" type="articleType"/> <xsd:complexType name="articleType"> <xsd:sequence> <xsd:element name="title" type="xsd:string"/> <xsd:element name="author" type="xsd:string"/> </xsd:sequence> <xsd:attribute name="section" type="xsd:string"/> </xsd:complexType></xsd:schema> The XML document instance that we shall generate from the schema is catalog.xml and is listed as follows: <?xml version="1.0" encoding="utf-8"?><catalog><journal title="Oracle Magazine" publisher="OraclePublishing" edition="September-October 2008"> <article section="Features"> <title>Share 2.0</title> <author>Alan Joch</author> </article></journal><journal title="Oracle Magazine" publisher="OraclePublishing" edition="March-April 2008"> <article section="Oracle Developer"> <title>Declarative Data Filtering</title> <author>Steve Muench</author> </article></journal></catalog> Specify the XML schema location in the XML document using the following attribute declaration: xsi:noNamespaceSchemaLocation="catalog.xsd" The XML schema may be in any directory. The example XML document does not include any namespace elements. Therefore, the schema is specified with the xsi:noNamespaceSchemaLocation attribute in the root element catalog. The XML schema may be specified with a relative URL, or a file, or an HTTP URL. The xsi:noNamespaceSchemaLocation attribute we added specifies the relative path to the XML schema document catalog.xsd. To validate the XML document with the XML schema, right-click on the XML document and select Validate XML . 
The XML document gets validated with the XML schema and the output indicates that the XML document does not have any validation errors. To demonstrate validation errors, add a non-valid element to the XML document. As an example, add the following element to the catalog element after the first journal element: <article></article> To validate the modified XML document, right-click on the XML document and select Validate XML. The output indicates validation errors. All the elements after the non-valid element become non-valid. For example, the journal element is valid as a subelement of the catalog element, but because the second journal element is after the non-valid article element, the journal element also becomes non-valid as indicated in the validation output. XDK 11g also provides a schema validation-specific API known as XSDValidator to validate an XML document with an XML schema. The choice of validation method depends on the additional functionality required in the validation application. XSDValidator is suitable for validation if all that is required is schema validation. Setting the environment Create an application (SchemaValidation, for example) and a project (SchemaValidation, for example) in JDeveloper. To create an application and a project select File | New. In the New Gallery window, select Categories | General and Items | Generic Application. Click on OK. In the Create Generic Application window, specify an Application Name and click on Next. In the Name your Generic project window, specify a Project Name and click on Finish. An application and a project get created. Next, add some XDK 11g JAR files to the project classpath. Select the project node in Application Navigator, and select Tools | Project Properties. In the Project Properties window, select Libraries and Classpath. Click on the Add Library button to add a library. In the Add Library window, select the Oracle XML Parser v2 library and click on the OK button. The Oracle XML Parser v2 library gets added to the project Libraries. Select the Add JAR/Directory button to add JAR file xml.jar from the C:OracleMiddlewarejdevelopermodulesoracle.xdk_11.1.1 directory. First, create an XML document and an XML schema in JDeveloper. To create an XML document, select File | New. In the New Gallery window select Categories | General | XML. In the Items listed select XML Document, and click on the OK button. In the Create XML File wizard, specify the XML file name, catalog.xml, and click on the OK button. An XML document gets added to the SchemaValidation project in Application Navigator. To add an XML schema, select File | New, and General | XML in the New Gallery window. Select XML schema in the Items listed. Click on the OK button. An XML schema document gets added to SchemaValidation project. The example XML document, catalog.xml, consists of a journal catalog. Copy the XML document to the catalog.xml file in the JDeveloper project. The example XML document does not specify the location of the XML schema document to which the XML document must conform to, because we will be setting the XML schema document in the schema validation application. If the XML schema document is specified in the XML document and the schema validation application, the schema document set in the schema validation application is used. Next, copy the example XML schema document, catalog.xsd to catalog.xsd in the JDeveloper project Schema Validation. Each XML schema is required to be in the XML schema namespace http://www.w3.org/2001/XMLSchema. 
The XML schema namespace is specified with a namespace declaration in the root element, schema, of the XML schema. A namespace declaration is of the format xmlns:xsd="http://www.w3.org/2001/XMLSchema".

Next, we will create Java classes for schema validation. Select File | New and subsequently Categories | General and Items | Java Class in the New Gallery window to create a Java class for schema validation. Click on the OK button. In the Create Java Class window, specify a Class Name, XMLSchemaValidator, and a package name, schemavalidation, and click on the OK button. A Java class gets added to the SchemaValidation project. Similarly, add the Java classes DOMValidator and SAXValidator. The schema validation applications are shown in the Application Navigator.
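The article goes on to implement these validator classes with the XDK-specific APIs. As a point of reference only, here is a minimal sketch of validating catalog.xml against catalog.xsd using the standard javax.xml.validation API that ships with the JDK; this is not the Oracle XDK XSDValidator API discussed above, and the relative file paths are assumptions.

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import org.xml.sax.SAXException;

public class SimpleSchemaCheck {
    public static void main(String[] args) {
        try {
            // Load catalog.xsd and compile it into a Schema object.
            SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new File("catalog.xsd"));

            // Validate catalog.xml against the compiled schema.
            Validator validator = schema.newValidator();
            validator.validate(new StreamSource(new File("catalog.xml")));
            System.out.println("catalog.xml is valid against catalog.xsd");
        } catch (SAXException e) {
            // Thrown when the document does not conform to the schema (or the schema itself is invalid).
            System.out.println("Validation error: " + e.getMessage());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

With the stray <article></article> element added to catalog.xml as described earlier, this program should report a validation error much like the one JDeveloper displays.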


IBM WebSphere Application Server: Administration Tools

Packt
23 Sep 2011
9 min read
  (For more resources on IBM, see here.) Dumping namespaces To diagnose a problem, you might need to collect WAS JNDI information. WebSphere Application Server provides a utility that dumps the JNDI namespace. The dumpNamespace.sh script dumps information about the WAS namespace and is very useful when debugging applications when JNDI errors are seen in WAS logs. You can use this utility to dump the namespace to see the JNDI tree that the WAS name server (WAS JNDI lookup service provider) is providing for applications. This tool is very useful in JNDI problem determination, for example, when debugging incorrect JNDI resource mappings in the case where an application resource is not mapped correctly to a WAS-configured resource or the application is using direct JNDI lookups when really it should be using indirect lookups. For this tool to work, WAS must be running when this utility is run. To run the utility, use the following syntax: ./dumpNameSpace.sh -<command_option> There are many options for this tool and the following table lists the command-line options available by typing the command <was_root>/dumpsnameSpace.sh -help: Command option Description -host <host> Bootstrap host, that is, the WebSphere host whose namespace you want to dump. Defaults to localhost. -port <port> Bootstrap port. Defaults to 2809. -user <name> Username for authentication if security is enabled on the server. Acts the same way as the -username keyword. -username <name> Username for authentication if security is enabled on the server. Acts the same way as the -user keyword. -password <password> Password for authentication, if security is enabled in the server. -factory <factory> The initial context factory to be used to get the JNDI initial context. Defaults to com.ibm.websphere.naming. WsnInitialContextFactory and normally does not need to be changed. -root [ cell | server | node | host | legacy | tree | default ] Scope of the namespace to dump. For WS 5.0 or later: cell: DumpNameSpace default. Dump the tree starting at the cell root context. server: Dump the tree starting at the server root context. node: Dump the tree starting at the node root context. (Synonymous with host) For WS 4.0 or later: legacy: DumpNameSpace default. Dump the tree starting at the legacy root context. host: Dump the tree starting at the bootstrap host root context. (Synonymous with node) tree: Dump the tree starting at the tree root context. For all WebSphere and other name servers: default: Dump the tree starting at the initial context, which JNDI returns by default for that server type. This is the only -root choice that is compatible with WebSphere servers prior to 4.0 and with non-WebSphere name servers. The WebSphere initial JNDI context factory (default) obtains the desired root by specifying a key specific to the server type when requesting an initial CosNaming NamingContext reference. The default roots and the corresponding keys used for the various server types are listed as follows: WebSphere 5.0: Server root context. This is the initial reference registered under the key of NameServiceServerRoot on the server. WebSphere 4.0: Legacy root context. This context is bound under the name domain/legacyRoot, in the initial context registered on the server, under the key NameService. WebSphere 3.5: Initial reference registered under the key of NameService, on the server. Non-WebSphere: Initial reference registered under the key of NameService, on the server. 
-url <url> The value for the java.naming.provider.url property used to get the initial JNDI context. This option can be used in place of the -host, -port, and -root options. If the -url option is specified, the -host,-port, and -root options are ignored. -startAt <some/subcontext/ in/the/tree> The path from the requested root context to the top-level context, where the dump should begin. Recursively dumps (displays a tree-like structure) the sub-contexts of each namespace context. Defaults to empty string, that is, root context requested with the -root option. -format [ jndi | ins ] jndi: Display name components as atomic strings. ins: Display name components parsed per INS rules (id.kind) The default format is jndi. -report [ short | long ] short: Dumps the binding name and bound object type, which is essentially what JNDI Context. list() provides. long: Dumps the binding name, bound object type, local object type, and string representation of the local object, that is, Interoperable Object References (IORs) string values, and so on, are printed). The default report option is short. -traceString <some.package. to.trace.*=all> Trace string of the same format used with servers, with output going to the DumpNameSpaceTrace.out file. Example name space dump To see the result of using the namespace tool, navigate to the <was_root>/bin directory on your Linux server and type the following command: For Linux: ./dumpNameSpace.sh -root cell -report short -username wasadmin -password wasadmin >> /tmp/jnditree.txt For Windows: ./dumpNameSpace.bat -root cell -report short -username wasadmin -password wasadmin > c:tempjnditree.txt The following screenshot shows a few segments of the contents of an example jnditree.txt file which would contain the output of the previous command. EAR expander Sometimes during application debugging or automated application deployment, you may need to enquire about the contents of an Enterprise Archive (EAR) file. An EAR file is made up of one or more WAR files (web applications), one or more Enterprise JavaBeans (EJBs), and there can be shared JAR files as well. Also, within each WAR file, there may be JAR files as well. The EARExpander.sh utility allows all artifacts to be fully decompressed much as expanding a TAR file. Usage syntax: EARExpander -ear (name of the input EAR file for the expand operation or name of the output EAR file for the collapse operation) -operationDir (directory to which the EAR file is expanded or directory from which the EAR file is collapsed) -operation (expand | collapse) [-expansionFlags (all | war)] [-verbose] To demonstrate the utility, we will expand the HRListerEAR.ear file. Ensure that you have uploaded the HRListerEAR.ear file to a new folder called /tmp/EARExpander on your Linux server or an appropriate alternative location and run the following command: For Linux: <was_root>/bin/EARExpander.sh -ear /tmp/HRListerEAR.ear -operationDir /tmp/expanded -operation expand -expansionFlags all -verbose For Windows: <was_root>binEARExpander.bat -ear c:tempHRListerEAR.ear -operationDir c:tempexpanded -operation expand -expansionFlags all -verbose The result will be an expanded on-disk structure of the contents of the entire EAR file, as shown in the following screenshot: An example of everyday use could be that EARExpander.sh is used as part of a deployment script where an EAR file is expanded and hardcoded properties files are searched and replaced. 
The EAR is then re-packaged using the EARExpander -operation collapse option to recreate the EAR file once the find-and-replace routine has completed. An example of how to collapse an expanded EAR file is as follows: For Linux: <was_root>/bin/EARExpander.sh -ear /tmp/collapsed/HRListerEAR.ear -operationDir /tmp/expanded -operation collapse -expansionFlags all -verbose For Windows: <was_root>binEARExpander.bat -ear c:tempcollapsedHRListerEAR. ear -operationDir c:tempexpanded -operation collapse -expansionFlags all -verbose In the previous command line examples, the folder called EARExpander contains an expanded HRListerEAR.ear file, which was created when we used the -expand command example previously. To collapse the files back into an EAR file, use the -collapse option, as shown previously in the command line example. Collapsing the EAR folders results in a file called HRListerEAR.ear, which is created by collapsing the expanded folder contents back into a single EAR file. IBM Support Assistant IBM Support Assistant can help you locate technical documents and fixes, and discover the latest and most useful support resources available. IBM Support Assistant can be customized for over 350 products and over 20 tools, not just WebSphere Application Server. The following is a list of the current features in IBM Support Assistant: Search Information Search and filter results from a number of different websites and IBM Information Centers with just one click. Product Information Provides you with a page full of related resources specific to the IBM software you are looking to support. It also lists the latest support news and information, such as the latest fixes, APARs, Technotes, and other support data for your IBM product. Find product education and training materials Using this feature, you can search for online educational materials on how to use your IBM product. Media Viewer The media viewer allows you search and find free education and training materials available on the IBM Education Assistant sites. You can also watch Flash-based videos, read documentation, view slide presentations, or download for offline access. Automate data collection and analysis Support Assistant can help you gather the relevant diagnostic information automatically so you do not have to manually locate the resources that can explain the cause of the issue. With its automated data collection capabilities, ISA allows you to specify the troublesome symptom and have the relevant information automatically gathered in an archive. You can then look through this data, analyze it with the IBM Support Assistant tool, and even forward data to IBM support. Generate IBM Support Assistant Lite packages for any product addon that has data collection scripts. You can then export a lightweight Java application that can easily be transferred to remote systems for remote data connection. Analysis and troubleshooting tools for IBM products ISA contains tools that enable you to troubleshoot system problems. These include: analyzing JVM core dumps and garbage collector data, analyzing system ports, and also getting remote assistance from IBM support. Guided Troubleshooter This feature provides a step-by-step troubleshooting wizard that can be used to help you look for logs, suggest tools, or recommend steps on fixing the problems you are experiencing. Remote Agent technology Remote agent capabilities through the feature pack provide the ability to perform data collection and file transfer through the workbench from remote systems. 
Note that the Remote agents must be installed and configured with appropriate 'root-level' access. ISA is a very detailed tool and we cannot cover every feature in this article. However, for a demonstration, we will install ISA and then update ISA with an add-on called the Log Analyzer. We will use the Log Analyzer to analyze a WAS SystemOut.log file. Downloading the ISA workbench To download ISA you will require your IBM user ID. The download can be found at the following URL: http://www-01.ibm.com/software/support/isa/download.html It is possible to download both Windows and Linux versions.
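Returning to the EARExpander deployment scenario described before the IBM Support Assistant overview: the following is a minimal shell sketch of that expand, edit, and collapse workflow. The WebSphere installation path, the property being replaced, and the directory locations are illustrative assumptions, not values prescribed by this article; only the EARExpander options themselves come from the usage shown earlier.

#!/bin/sh
# Sketch only: expand an EAR, patch a hardcoded property, and re-collapse it.
# WAS_ROOT, the property names, and the directories are assumptions.
WAS_ROOT=/opt/IBM/WebSphere/AppServer
WORK_DIR=/tmp/expanded
SRC_EAR=/tmp/HRListerEAR.ear
OUT_EAR=/tmp/collapsed/HRListerEAR.ear

mkdir -p "$WORK_DIR" /tmp/collapsed

# 1. Expand the EAR (all modules, including nested WARs)
"$WAS_ROOT/bin/EARExpander.sh" -ear "$SRC_EAR" -operationDir "$WORK_DIR" \
  -operation expand -expansionFlags all

# 2. Replace a hardcoded value in any properties files in the expanded tree
#    (GNU sed syntax; adjust for your platform)
find "$WORK_DIR" -name "*.properties" -exec \
  sed -i 's/db.host=devdb01/db.host=proddb01/g' {} \;

# 3. Collapse the patched tree back into a deployable EAR
"$WAS_ROOT/bin/EARExpander.sh" -ear "$OUT_EAR" -operationDir "$WORK_DIR" \
  -operation collapse -expansionFlags all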

Handler and Phase in Apache Axis2

Packt
21 Oct 2009
5 min read
(For more resources on Axis2, see here.)

Handler

In any messaging system, an interceptor does exactly what its name suggests: it intercepts the flow of messages and performs whatever task it has been assigned. An interceptor is the smallest execution unit in a messaging system, and an Axis2 handler is such an interceptor. Handlers in Axis2 are stateless; that is, they do not keep their past execution states in memory. A handler can be considered a logic invoker that takes the input for its logic from the MessageContext. A handler has both read and write access to the MessageContext (MC), and therefore to an incoming SOAP message. We can think of the MessageContext as a property bag that holds the incoming or outgoing messages (possibly both) and other required parameters. It may also include properties that carry the message through the execution chain. In addition, we can access the whole system, including the system runtime, global parameters, properties, and service operations, via the MC.

In most cases, a handler only touches the header block of the SOAP message: it reads a header (or headers), adds headers, or removes headers. (This does not mean that a handler cannot touch the SOAP body; it simply is not the common case.) While reading, if a header is targeted at a handler and cannot be processed properly (the message might be faulty), the handler should throw an exception, and the next driver in the chain (in Axis2, the Axis engine) will take the necessary action. A typical SOAP message with a few headers is shown in the figure given below:

Any handler in Axis2 has the capability to pause the message execution, which means that the handler can hold up the message flow if it cannot continue. Reliable Messaging (RM) is a good use case for this scenario: it needs to pause the flow depending on certain preconditions and postconditions, and it works on a message sequence. If a service invocation consists of more than one message, and the second message arrives before the first one, then the RM handler will stop (or rather pause) the execution of the invocation corresponding to the second message until it receives the first one. When the first message arrives, it is invoked, and the second message is then resumed and invoked afterwards.

Writing a Simple Handler

Just learning the concepts will not help us remember what we have discussed; for that, we need to write a handler and see how it works. Writing a handler in Axis2 is very simple. You either extend the AbstractHandler class or implement the Handler interface. A simple handler that extends the AbstractHandler class looks as follows:

public class SimpleHandler extends AbstractHandler {

    public SimpleHandler() {
    }

    public InvocationResponse invoke(MessageContext msgContext) throws AxisFault {
        // Write the processing logic here
        // Do something
        return InvocationResponse.CONTINUE;
    }
}

Note the return value of the invoke method. It can be one of the following three values:

Continue: The handler thinks that the message is ready to go forward.

Suspend: The handler thinks that the message cannot be sent forward because some conditions are not yet satisfied, so the execution is suspended.

Abort: The handler thinks that there is something wrong with the message and therefore cannot allow it to go forward.

A short sketch illustrating these three outcomes follows.
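As a minimal sketch of how these outcomes might be used in practice, the following handler checks for a hypothetical custom SOAP header and decides whether to continue, suspend, or abort. The header name and the suspend condition are illustrative assumptions, not part of this article; only the AbstractHandler, MessageContext, and InvocationResponse types come from the text above.

import javax.xml.namespace.QName;

import org.apache.axiom.om.OMElement;
import org.apache.axiom.soap.SOAPHeader;
import org.apache.axis2.AxisFault;
import org.apache.axis2.context.MessageContext;
import org.apache.axis2.handlers.AbstractHandler;

public class SequenceCheckHandler extends AbstractHandler {

    // Hypothetical header used only for this sketch
    private static final QName SEQ_HEADER =
            new QName("http://example.com/ns", "SequenceNumber");

    public InvocationResponse invoke(MessageContext msgContext) throws AxisFault {
        SOAPHeader header = msgContext.getEnvelope().getHeader();
        if (header == null) {
            // Something is wrong with the message: do not let it go forward
            return InvocationResponse.ABORT;
        }

        OMElement seq = header.getFirstChildWithName(SEQ_HEADER);
        if (seq == null) {
            // Precondition not met yet: hold the message back for now.
            // A real handler would also store the MessageContext and
            // resume the flow later once its conditions are satisfied.
            return InvocationResponse.SUSPEND;
        }

        // Header present: the message is ready to go forward
        return InvocationResponse.CONTINUE;
    }
}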
In most cases, handlers will return InvocationResponse.CONTINUE as the return value. When a message is received by the Axis engine, it calls the invoke method of each of the handlers by passing the argument to the corresponding MessageContext. As a result of this, we can implement all the processing logic inside that method. A handler author has full access to the SOAP message, and also has the required properties to process the message via the MessageContext. In addition, if the handler is not satisfied with the invocation of some precondition, the invocation can be paused as we have discussed earlier (Suspend). If some handler suspends the execution, then it is its responsibility to store the message context, and to forward the message when the conditions are satisfied. For example, the RM handler performs in a similar manner. Phase The concept of phase is introduced by Axis2, mainly to support the dynamic ordering of handlers. A phase can be defined in a number of ways: It can be considered a logical collection of handlers. It can be considered a specific time interval in the message execution. It can be considered a bucket into which to put a handler. One can consider a phase as a handler too. A flow or an execution chain can be considered as a collection of phases. Even though it was mentioned earlier that an Axis engine calls the invoke method of a handler, that is not totally correct. In fact, what the engine really does is call the invoke method of each phase in a given flow, and then the phase will sequentially invoke all the handlers in it (refer to the following figure). As we know, we can extend AbstractHandler and create a new handler; in the same way one can extend the Phase class and then create a new phase. But remember that we need not always extend the Phase class to create a new phase. We can do it by just adding an entry into axis2.xml (All the configuration that requires starting axis2 is obtained from axis2.xml). A phase has two important methods—precondition checking and postcondition checking. Therefore, if we are writing a custom phase, we need to consider the methods that have been mentioned. However, writing a phase is not a common case; you need to know how to write a handler.
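To make the axis2.xml option concrete, a user-defined phase is normally declared inside the relevant phaseOrder element. The following is a sketch only: the surrounding phase names and their order vary between Axis2 versions, so treat it as an illustration of where a custom phase entry would go rather than as a drop-in configuration.

<!-- Excerpt from axis2.xml (sketch): the InFlow phase order -->
<phaseOrder type="InFlow">
    <!-- System-defined phases come first (names vary by Axis2 version) -->
    <phase name="Transport"/>
    <phase name="Security"/>
    <phase name="PreDispatch"/>
    <phase name="Dispatch" class="org.apache.axis2.engine.DispatchPhase"/>
    <!-- A custom, user-defined phase added after dispatching.
         A module's module.xml would place a handler into it with
         an <order phase="MyLoggingPhase"/> element. -->
    <phase name="MyLoggingPhase"/>
</phaseOrder>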

Creating an Administration Interface with Django 1.0

Packt
14 Oct 2009
5 min read
Activating the administration interface

The administration interface comes as a Django application. To activate it, we will follow a simple procedure that is similar to enabling the user authentication system. The administration application is located in the django.contrib.admin package, so the first step is adding the path of this package to the INSTALLED_APPS variable. Open the settings.py file, locate INSTALLED_APPS, and edit it as follows:

INSTALLED_APPS = (
    'django.contrib.auth',
    'django.contrib.admin',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.sites',
    'django.contrib.comments',
    'django_bookmarks.bookmarks',
)

Next, run the following command to create the necessary tables for the administration application:

$ python manage.py syncdb

Now we need to make the administration interface accessible from within our site by adding URL entries for it. The admin application defines many views (as we will see later), so manually adding a separate entry for each view can become a tedious task. Therefore, the admin interface provides a shortcut for this: a single object that encapsulates all the admin views. To use it, open the urls.py file and edit it as follows:

from django.contrib import admin

admin.autodiscover()

urlpatterns = patterns('',
    [...]
    # Admin interface
    (r'^admin/(.*)', admin.site.root),
)

Here, we are importing the admin module, calling a method in it, and mapping all the URLs under the path ^admin/ to a view called admin.site.root. This will make the views of the administration interface accessible from within our project.

One last thing remains before we see the administration page in action. We need to tell Django which models can be managed in the administration interface. This is done by creating a new file called admin.py in the bookmarks directory. Create the bookmarks/admin.py file and add the following code to it:

from django.contrib import admin
from bookmarks.models import *

class LinkAdmin(admin.ModelAdmin):
    pass

admin.site.register(Link, LinkAdmin)

We created a class derived from the admin.ModelAdmin class and mapped it to the Link model using the admin.site.register method. This effectively tells Django to enable the Link model in the administration interface. The keyword pass means that the class is empty; later, we will use this class to customize the administration page, so it won't remain empty. Do the same for the Bookmark, Tag, and SharedBookmark models: create an empty admin class for each of them in bookmarks/admin.py and register it. The User model is provided by Django and, therefore, we don't have control over it; fortunately, it already has an Admin class, so it's available in the administration interface by default.

Next, launch the development server and direct your browser to http://127.0.0.1:8000/admin/. You will be greeted by a login page. Log in with the superuser account that you created after writing the database models:

Next, you will see a list of the models that are available to the administration interface. As discussed earlier, only models that have admin classes in the bookmarks/admin.py file will appear on this page. If you click on a model name, you will get a list of the objects that are stored in the database under this model. You can use this page to view or edit a particular object, or to add a new one. The following figure shows the listing page for the Link model:

The edit form is generated according to the fields that exist in the model.
The Link form, for example, contains a single text field called Url. You can use this form to view and change the URL of a Link object. In addition, the form performs proper validation of fields before saving the object. So if you try to save a Link object with an invalid URL, you will receive an error message asking you to correct the field. The following figure shows a validation error when trying to save an invalid link: Fields are mapped to form widgets according to their type. For example, date fields are edited using a calendar widget, whereas foreign key fields are edited using a list widget, and so on. The following figure shows a calendar widget from the user edit page. Django uses it for date and time fields. As you may have noticed, the administration interface represents models by using the string returned by the __unicode__ method. It was indeed a good idea to replace the generic strings returned by the default __unicode__ method with more helpful ones. This greatly helps when working with the administration page, as well as with debugging. Experiment with the administration pages. Try to create, edit, and delete objects. Notice how changes made in the administration interface are immediately reflected on the live site. Also, the administration interface keeps a track of the actions that you make and lets you review the history of changes for each object. This section has covered most of what you need to know in order to use the administration interface provided by Django. This feature is actually one of the main advantages of using Django. You get a fully featured administration interface from writing only a few lines of code! Next, we will see how to tweak and customize the administration pages. As a bonus, we will learn more about the permissions system offered by Django.
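Tying the registration step back together: below is a minimal sketch of what the finished bookmarks/admin.py might look like once the remaining models are registered. The model names come from the text above; the sketch assumes they are all defined in bookmarks/models.py.

from django.contrib import admin
from bookmarks.models import Link, Bookmark, Tag, SharedBookmark

class LinkAdmin(admin.ModelAdmin):
    pass

class BookmarkAdmin(admin.ModelAdmin):
    pass

class TagAdmin(admin.ModelAdmin):
    pass

class SharedBookmarkAdmin(admin.ModelAdmin):
    pass

admin.site.register(Link, LinkAdmin)
admin.site.register(Bookmark, BookmarkAdmin)
admin.site.register(Tag, TagAdmin)
admin.site.register(SharedBookmark, SharedBookmarkAdmin)

And since the admin pages label each object with the string returned by __unicode__, a model method along these lines keeps the listings readable (the url field name is taken from the Link form described above; the exact field definition is assumed):

from django.db import models

class Link(models.Model):
    url = models.URLField(unique=True)

    def __unicode__(self):
        return self.url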

Improving Plone 3 Product Performance

Packt
11 Jun 2010
7 min read
(For more resources on Plone, see here.) Introduction CMS Plone provides: A Means of adding, editing, and managing content A database to store content A mechanism to serve content in HTML or other formats Fortunately, it also supplies the tools to do all these things in an incredibly easy and powerful way. For example, content producers can create a new article without worrying how it will look or what other information will be surrounding the main information. To do this Plone must compose a single HTML output file (if we are talking from a web browser viewpoint) by joining and rendering several sources of data according to the place, importance, and target they are meant for. As it is built upon the Zope application server, all these jobs are easy for Plone. However, they have a tremendous impact as far as work and performance goes. If enough care is not taken, then a whole website could be stuck due to a couple of user requests. In this article, we'll look at the various performance improvements and how to measure these enhancements. We are not going to make a comprehensive review of all the options to tweak or set up a Zope-based web application, like configuring a like configuring a proxy cache or a load balancer. There are lots of places, maybe too many, where you can find information about these topics. We invite you to read these articles and tutorials and subscribe or visit Zope and Plone mailing lists: http://projects.zestsoftware.nl/guidelines/guidelines/caching/ caching1_background.html http://plone.org/documentation/tutorial/buildout/a-deployment-configuration/ http://plone.org/documentation/tutorial/optimizing-plone Installing CacheFu with a policy product When a user requests HTML pages from a website, many things can be expressed about the downloading files by setting special headers in the HTTP response. If managed cautiously, the server can save lots of time and, consequently, work by telling the browser how to store and reuse many of the resources it has got. CacheFu is the Plone add-on product that streamlines HTTP header handling in order to obtain the required performance. We could add a couple of lines to the buildout.cfg file to download and install CacheFu. Then we could add some code in our end user content type products (pox.video and Products.poxContentTypes) to configure CacheFu properly to deliver them in an efficient way. However, if we do so, we would be forcing these products to automatically install CacheFu, even if we were testing them in a development environment. To prevent this, we are going to create a policy product and add some code to install and configure CacheFu. A policy product is a regular package that will take care of general customizations to meet customer requirements. For information on how to create a policy product see Creating a policy product. Getting ready To achieve this we'll use pox.policy, the policy product created in Creating a policy product. How to do it... Automatically fetch dependencies of the policy product: Open setup.py in the root pox.policy folder and modify the install_requires variable of the setup call: setup(name='pox.policy', ... install_requires=['setuptools', # -*- Extra requirements: -*- 'Products.CacheSetup', ], Install dependencies during policy product installation. 
In the profiles/default folder, modify the metadata.xml file: <?xml version="1.0"?><metadata> <version>1</version> <dependencies> <dependency>profile-Products.CacheSetup:default</dependency> </dependencies></metadata You could also add here all the other products you plan to install as dependencies, instead of adding them individually in the buildout.cfg file. Configure products during the policy product installation. Our policy product already has a <genericsetup:importStep /> directive in its main component configuration file (configure.zcml). This import step tells GenericSetup to process a method in the setuphandlers module (we could have several steps, each of them with a matching method). Then modify the setupVarious method to do what we want, that is, to apply some settings to CacheFu. from zope.app.component.hooks import getSitefrom Products.CMFCore.utils import getToolByNamefrom config import * def setupVarious(context): if context.readDataFile('pox.policy_various.txt') is None: return portal = getSite() # perform custom operations # Get portal_cache_settings (from CacheFu) and # update plone-content-types rule pcs = getToolByName(portal, 'portal_cache_settings') rules = pcs.getRules() rule = getattr(rules, 'plone-content-types') rule.setContentTypes(list(rule.getContentTypes()) + CACHED_CONTENT) The above code has been shortened for clarity's sake. Check the accompanying code bundle for the full version. Add or update a config.py file in your package with all configuration options: # Content types that should be cached in plone-content-types# rule of CacheFuCACHED_CONTENT = ['XNewsItem', 'Video',] Build your instance up again and launch it: ./bin/buildout./bin/instance fg After installing the pox.policy product (it's automatically installed during buildout as explained in Creating a policy product) we should see our content types—Video and XNewsItem—listed within the cached content types. The next screenshot corresponds to the following URL: http://localhost:8080/plone/portal_cache_settings/with-caching-proxy/rules/plone-content-types. The with-caching-proxy part of the URL matches the Cache Policy field; and the plone-content-types part matches the Short Name field. As we added Python code, we must test it. Create this doctest in the README.txt file in the pox.policy package folder: Check that our content types are properly configured >>> pcs = getToolByName(self.portal, 'portal_cache_settings')>>> rules = pcs.getRules()>>> rule = getattr(rules, 'plone-content-types')>>> 'Video' in rule.getContentTypes()True>>> 'XNewsItem' in rule.getContentTypes()True Modify the tests module by replacing the ptc.setupPloneSite() line with these ones: # We first tell Zope there's a CacheSetup product availableztc.installProduct('CacheSetup') # And then we install pox.policy product in Plone.# This should take care of installing CacheSetup in Plone alsoptc.setupPloneSite(products=['pox.policy']) And then uncomment the ZopeDocFileSuite: # Integration tests that use PloneTestCaseztc.ZopeDocFileSuite( 'README.txt', package='pox.policy', test_class=TestCase), Run this test suite with the following command: ./bin/instance test -s pox.policy How it works... In the preceding steps, we have created a specific procedure to install and configure other products (CacheFu in our case). This will help us in the final production environment startup as well as on installation of other development environments we could need (when a new member joins the development team, for instance). 
In Step 1 of the How to do it... section, we modified setup.py to download and install a dependency package during the installation process, which is done on instance buildout. Getting dependencies in this way is possible when products are delivered in egg format thanks to Python eggs repositories and distribution services. If you need to get an old-style product, you'll have to add it to the [productdistros] part in buildout.cfg. Products.CacheSetup is the package name for CacheFu and contains these dependencies: CMFSquidTool, PageCacheManager, and PolicyHTTPCacheManager. There's more... For more information about CacheFu visit the project home page at http://plone.org/products/cachefu. You can also check for its latest version and release notes at Python Package Index (PyPI, a.k.a. The Cheese Shop): http://pypi.python.org/pypi/Products.CacheSetup. The first link that we recommended in the Introduction is a great help in understanding how CacheFu works: http://projects.zestsoftware.nl/guidelines/guidelines/caching/caching1_background.html. See also Creating a policy product Installing and configuring an egg repository
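If you want to double-check the CacheFu configuration outside the test suite, the same API used in the doctest above can be exercised from a Zope debug prompt. This is a sketch only: the portal id ('plone', as in the URL shown earlier) and the instance script path depend on your buildout, and only getToolByName, getRules, and getContentTypes are taken from this recipe.

# Run with: ./bin/instance debug   (then paste the lines below)
from Products.CMFCore.utils import getToolByName

portal = app.plone  # portal id assumed to be 'plone'
pcs = getToolByName(portal, 'portal_cache_settings')
rule = getattr(pcs.getRules(), 'plone-content-types')

# Should list 'Video' and 'XNewsItem' among the cached content types
print rule.getContentTypes()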

Using JavaScript Effects with Joomla!

Packt
06 Nov 2009
7 min read
Customizing Google Maps Google Maps has a comprehensive API for interacting with maps on your website. MooTools can be used to load the Google Maps engine at the correct time. It can also act as a bridge between the map and other HTML elements on your site. To get started, you will first need to get an API key to use Google Maps on your domain. You can sign up for a free key at http://code.google.com/apis/maps/signup.html. Even if you are working on your local computer, you still need the key. For instance, if the base URL of your Joomla installation is http://localhost/joomla, you will enter localhost as the domain for your API key. Once you have an API key ready, create the file basicmap.js in the /components/com_js folder, and fill it with the following code: window.addEvent('domready', function() { if (GBrowserIsCompatible()) { var map = new GMap2($('map_canvas')); map.setCenter(new GLatLng(38.89, -77.04), 12); window.onunload=function() { GUnload(); }; }}); The entire script is wrapped within a call to the MooTools-specific addEvent() member function of window. Because we want this code to execute once the DOM is ready, the first parameter is the event name 'domready'. The second parameter is an anonymous function containing our code. What does the call to function() do?Using function() in JavaScript is a way of creating an anonymous function. This way, you can create functions that are used in only one place (such as event handlers) without cluttering the namespace with a needless function name. Also, the code within the anonymous function operates within its own scope; this is referred to as a closure. Closures are very frequently used in modern JavaScript frameworks, for event handling and other distinct tasks. Once inside of the function, GBrowserIsCompatible() is used to determine if the browser is capable of running Google Maps. If it is, a new instance of GMap2() is declared and bound to the HTML element that has an id of 'map_canvas' and is stored into map. The call to $('map_canvas') is a MooTools shortcut for document.GetElementById(). Next, the setCenter() member function of map is called to tell Google Maps where to center the map and how far to zoom in. The first parameter is a GLatLng() object, which is used to set the specific latitude and longitude of the map's center. The other parameter determines the zoom level, which is set to 12 in this case. Finally, the window.onunload event is set to a function that calls GUnload(). When the user navigates away from the page, this function removes Google Maps from memory, to prevent memory leaks. With our JavaScript in place, it is now time to add a function to the controller in /components/com_js/js.php that will load it along with some HTML. Add the following basicMap() function to this file: function basicMap(){ $key = 'DoNotUseThisKeyGetOneFromCodeDotGoogleDotCom'; JHTML::_('behavior.mootools'); $document =& JFactory::getDocument(); $document->addScript('http://maps.google.com/maps?file=api&v= 2&key=' . $key); $document->addScript( JURI::base() . 'components/com_js/basicmap.js'); ?> <div id="map_canvas" style="width: 500px; height: 300px"></div> <?php} The basicMap() function starts off by setting $key to the API key received from Google. You should replace this value with the one you receive at http://code.google.com/apis/maps/signup.html. Next, JHTML::_('behavior.mootools'); is called to load MooTools into the <head> tag of the HTML document. 
This is followed by getting a reference to the current document object through the getDocument() member function of JFactory. The addScript() member function is called twice: once to load the Google Maps API (using our key), and again to load our basicmap.js script. (The Google Maps API provides all of the functions and class definitions beginning with a capital 'G'.) Finally, a <div> with an id of 'map_canvas' is sent to the browser. Once this function is in place and js.php has been saved, load index.php?option=com_js&task=basicMap in the browser. Your map should look like this:

We can make this map slightly more interesting by adding a marker to a specific address. To do so, add the highlighted code below to the basicmap.js file:

window.addEvent('domready', function() {
  if (GBrowserIsCompatible()) {
    var map = new GMap2($('map_canvas'));
    map.setCenter(new GLatLng(38.89, -77.04), 12);
    var whitehouse = new GClientGeocoder();
    whitehouse.getLatLng('1600 Pennsylvania Ave NW', function(latlng) {
      marker = new GMarker( latlng );
      marker.bindInfoWindowHtml('<strong>The White House</strong>');
      map.addOverlay(marker);
    });
    window.onunload=function(){
      GUnload();
    };
  }
});

This code sets whitehouse as an instance of the GClientGeocoder class. Next, the getLatLng() member function of GClientGeocoder is called. The first parameter is the street address to be looked up. The second parameter is an anonymous function to which the GLatLng object is passed once the address lookup is complete. Within this function, marker is set as a new GMarker object, which takes the passed-in latlng object as a parameter. The bindInfoWindowHtml() member function of GMarker is called to add an HTML message that appears in a balloon above the marker. Finally, the marker is passed into the addOverlay() member function of GMap2 to place it on the map. Save basicmap.js and then reload index.php?option=com_js&task=basicMap. You should now see the same map, only with a red pin. When you click on the red pin, your map should look like this:

Interactive Maps

These two maps show the basic functionality of getting Google Maps onto your own website. They are very basic; you could easily create them at maps.google.com and then embed them in a standard Joomla! article with the HTML code Google provides. However, you would not have the opportunity to add functions that interact with the other elements on your page. To do that, we will create some more HTML code and then write some MooTools-powered JavaScript to bridge our content with Google Maps. Open the /components/com_js/js.php file and add the following selectMap() function to the controller:

function selectMap()
{
  $key = 'DoNotUseThisKeyGetOneFromCodeDotGoogleDotCom';
  JHTML::_('behavior.mootools');
  $document =& JFactory::getDocument();
  $document->addScript('http://maps.google.com/maps?file=api&v=2&key=' . $key);
  $document->addScript( JURI::base() . 'components/com_js/selectmap.js');
  ?>
  <div id="map_canvas" style="width: 500px; height: 300px"></div>
  <select id="map_selections">
    <option value="">(select...)</option>
    <option value="1200 K Street NW">Salad Surprises</option>
    <option value="1221 Connecticut Avenue NW">The Daily Dish</option>
    <option value="701 H Street NW">Sushi and Sashimi</option>
  </select>
<?php
}

This function is almost identical to basicMap() except for two things: selectmap.js is being added instead of basicmap.js, and a <select> element has been added beneath the <div>. The <select> element has an id that will be used in the JavaScript.
The options of the <select> element are restaurants, with different addresses as values. The JavaScript code will bind a function to the onChange event so that the marker will move as different restaurants are selected.
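The article ends before showing selectmap.js itself, but based on the APIs it has already introduced (MooTools' addEvent and $(), plus the GMap2, GClientGeocoder, and GMarker classes), a sketch of what that bridge script might look like is given below. Treat it as an illustration, not the book's actual listing; the zoom levels and the removeOverlay handling are assumptions.

// selectmap.js (sketch): move a marker when a restaurant is chosen
window.addEvent('domready', function() {
  if (GBrowserIsCompatible()) {
    var map = new GMap2($('map_canvas'));
    map.setCenter(new GLatLng(38.89, -77.04), 12);

    var geocoder = new GClientGeocoder();
    var marker = null;

    // Bind to the <select> element rendered by selectMap() in js.php
    $('map_selections').addEvent('change', function() {
      var address = this.value;
      if (address == '') { return; }

      geocoder.getLatLng(address, function(latlng) {
        if (!latlng) { return; }          // address could not be resolved
        if (marker) {
          map.removeOverlay(marker);      // drop the previous marker
        }
        marker = new GMarker(latlng);
        map.addOverlay(marker);
        map.setCenter(latlng, 15);        // zoom in on the selection
      });
    });

    window.onunload = function() { GUnload(); };
  }
});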

Custom Data Readers in Ext JS

Packt
24 Oct 2009
9 min read
When writing Chapter 12, "It's All about the Data," of Learning Ext JS, I switched things up a bit and switched the server-side processes to utilizing Adobe's ColdFusion application server, instead of the PHP we had been using in the rest of the book. There were a few reasons we decided to do this. To show that Ext JS can work with any server-side technology. ColdFusion 8 includes Ext JS 1.1 for it's new Ajax form components. Adobe uses a custom format for the serialized JSON return of query data, making it perfect for our example needs. I'm a ColdFusion programmer. Some time ago, before writing Chapter 12, I had begun to use a Custom Data Reader that I had found on the Ext JS forums. Another Ext user and ColdFusion programmer, John Wilson, had written the custom reader to consume Adobe's custom JSON return for queries. First, let me show you why Adobe's format differs from the generally expected serialized JSON return of a query. Here's an example of a typical query response. { 'results': 2, 'rows': [ { 'id': 1, 'firstname': 'Bill', occupation: 'Gardener' }, // a row object { 'id': 2, 'firstname': 'Ben' , occupation: 'Horticulturalist' } // another row object ] } And here's an example of how ColdFusion returns a query response.     {        "COLUMNS":["INTPROPERTIESID","STRDEVELOPMENT","INTADDRESSID", "STRSTREET","STRSTREET2", "STRCITY","CHSTATEID","INTZIP"],        "DATA":[            [2,"Abbey Road",6,"456 Abbey Road","Villa 5","New York","NY",12345],            [6,"Splash",39,"566 aroundthe bend dr",null,"Nashville","TN",37221]        ]    } You can see, when examining the two formats that they are very divergent. The typical format returns an array of row objects of the query's results, whereas ColdFusion's format is an array (DATA) of arrays (each row of the query result), with each row array only containing the data. The ColdFusion format has extracted the column names into it's own array (COLUMNS), as opposed to the name/value pairing found in the object notation of the typical return. It's actually very smart, on Adobe's part, to return the data in this fashion, as it would ultimately mean smaller data sets returned from a remote call, especially with large recordsets. John's CFJsonReader, a custom data reader and an extended component of Ext's base DataReader, was able to translate ColdFusion's data returns by properly parsing the JSON return into Records of an Ext Store. It worked fairly well, with a few minor exceptions. it didn't handle the column aliasing you could do with any other Ext JS data reader (name:'development',mapping:'STRDEVELOPMENT') it didn't allow data type association with a value, as other Ext JS data readers (INTZIP is of type 'int', STRDEVELOPMENT is of type 'string', etc) So, it worked, but ultimately was limited. When I was writing Chapter 13, "Code for Reuse: Extending Ext JS", I really dove into extending existing Ext JS components. This helped me gain a better understanding of what John had done, when writing CFJsonReader. But, after really reviewing the code, I saw there was a better way of handling ColdFusion's JSON return. What it basically came down to was that John was extending Ext's base DataReader object, and then hand parsing almost the entire return. Looking at the above examples, you'll notice that Adobe's implementation is an array of arrays, rather than an array of objects. Ext JS already comes with an ArrayReader object, so I knew that by writing a custom data reader that extended it I would be able to get the desired results. 
Half an hour later, I had "built a better mousetrap" and we now have a Custom Data Reader for properly parsing ColdFusion's JSON return, without the previous limitations. /* * Ext JS Library 2.0 * Copyright(c) 2006-2007, Ext JS, LLC. * licensing@extjs.com * * http://extjs.com/license * ******************************************* * Steve 'Cutter' Blades (CutterBl) no.junkATcutterscrossingDOTcom * http://blog.cutterscrossing.com * * Inspired by the CFJsonReader, originally writtin by John Wilson (Daemach) * http://extjs.com/forum/showthread.php?t=21408&highlight=cfjsonreader * * This Custom Data Reader will take the JSON return of a ColdFusion * Query object, rather returned straight up, or via the ColdFusion * QueryForGrid() method. * * The CFQueryReader constructor takes two arguments * @meta : object containing single key/value pair for the 'id' of each record * @recordType : field mapping object * * The recordType object allows you to alias the returned ColdFusion column * name (which is always passed in upper case) to any 'name' you wish, as * well as assign a data type, which your ExtJS app will attempt to cast * whenever the value is referenced. * * ColdFusion's JSON return, for a ColdFusion Query object, will appear in the * following format: * * {"COLUMNS":["INTVENDORTYPEID","STRVENDORTYPE","INTEXPENSECATEGORIESID", * "STREXPENSECATEGORIES"],"DATA" :[[2,"Carpet Cleaning",1,"Cleaining"], * [1,"Cleaning Service",1,"Cleaining"]]} * * The ColdFusion JSON return on any query that is first passed through * ColdFusion's QueryForGrid() method will return the object in the * following format: * * {"TOTALROWCOUNT":3, "QUERY":{"COLUMNS":["MYIDFIELD","DATA1","DATA2"], * "DATA":[[1,"Bob","Smith"],[6,"Jim","Brown"]]}} * * The Ext.data.CFQueryReader is designed to accomodate either format * automatically. You would create your reader instance in much the same * way as the CFJsonReader was created: * * var myDataModel = [ * {name: 'myIdField', mapping: 'MYIDFIELD'}, * {name: 'data1', mapping: 'DATA1'}, * {name: 'data2', mapping: 'DATA2'} * ]; * * var myCFReader = new Ext.data.CFJsonReader({id:'myIdField'},myDataModel); * * Notice that the 'id' value mirrors the alias 'name' of the record's field. */ Ext.data.CFQueryReader = function(meta, recordType){ this.meta = meta || {}; Ext.data.CFQueryReader.superclass.constructor.call(this, meta, recordType || meta.fields); }; Ext.extend(Ext.data.CFQueryReader, Ext.data.ArrayReader, { read : function(response){ var json = response.responseText; var o = eval("("+json+")"); if(!o) { throw {message: "JsonReader.read: Json object not found"}; } if(o.TOTALROWCOUNT){ this.totalRowCount = o.TOTALROWCOUNT; } return this.readRecords(((o.QUERY)? o.QUERY : o)); }, readRecords : function(o){ var sid = this.meta ? this.meta.id : null; var recordType = this.recordType, fields = recordType.prototype.fields; var records = []; var root = o.DATA; // give sid an integer value that equates to it's mapping sid = fields.indexOfKey(sid); // re-assign the mappings to line up with the column position // in the returned json response for(var a = 0; a < o.COLUMNS.length; a++){ for(var b = 0; b < fields.length; b++){ if(fields.items[b].mapping == o.COLUMNS[a]){ fields.items[b].mapping = a; } } } for(var i = 0; i < root.length; i++){ var n = root[i]; var values = {}; var id = ((sid || sid === 0) && n[sid] !== undefined && n[sid] !== "" ? n[sid] : null); for(var j = 0, jlen = fields.length; j < jlen; j++){ var f = fields.items[j]; var k = f.mapping !== undefined && f.mapping !== null ? 
f.mapping : j; var v = n[k] !== undefined ? n[k] : f.defaultValue; v = f.convert(v, n); values[f.name] = v; } var record = new recordType(values, id); record.json = n; records[records.length] = record; } if(!this.totalRowCount){ this.totalRowCount = records.length; } return { records : records, totalRecords : this.totalRowCount }; } }); So, this changes our examples for Chapter 12 just a little bit. First of all, we'll need to have the CFQueryReader included, in place of the CFJsonReader. You can change the script tags in the samples for Examples 3 and 4. ... <script language="javascript" type="text/javascript" src="/scripts/custom-ext/CFQueryReader.js"></script> ... Next, we'll change the scripts for these two examples. We'll remove our configuration references for CFJsonReader, and replace them with the updated configuration for the CFQueryReader. /* * Chapter 12 Example 3 * Data Store from custom reader * * Revised: SGB (Cutter): 12.17.08 * Replaced CFJsonReader with CFQueryReader */ // Save all processing until the // DOM is completely loaded Ext.onReady(function(){ var ourStore = new Ext.data.Store({ url:'Chapter12Example.cfc', baseParams:{ method: 'getFileInfoByPath', returnFormat: 'JSON', queryFormat: 'column', startPath: '/images/' }, reader: new Ext.data.CFQueryReader({ id: 'NAME', // This is supposed to match the 'mapping' fields:[ {name:'file_name',mapping:'NAME'}, {name:'file_size',mapping:'SIZE'}, {name:'type',mapping:'TYPE'}, {name:'lastmod',mapping:'DATELASTMODIFIED'}, {name:'file_attributes',mapping:'ATTRIBUTES'}, {name:'mode',mapping:'MODE'}, {name:'directory',mapping:'DIRECTORY'} ] }), fields: recordModel, listeners:{ beforeload:{ fn: function(store, options){ if (options.startPath && (options.startPath.length > 0)){ store.baseParams.startPath = options.startPath; } }, scope:this }, load: { fn: function(store,records,options){ console.log(records); } }, scope:this } }); ourStore.load(); }); /* * Chapter 12 Example 4 * Data Store from custom reader - Filtering * * Revised: SGB (Cutter): 12.17.08 * Replaced CFJsonReader with CFQueryReader */ // Simple function/object to 'clone' objects cloneConfig = function (config) { for (i in config) { if (typeof config[i] == 'object') { this[i] = new cloneConfig(config[i]); } else this[i] = config[i]; } } // Save all processing until the // DOM is completely loaded Ext.onReady(function(){ var initialBaseParams = { method: 'getDirectoryContents', returnFormat: 'JSON', queryFormat: 'column', startPath: '/testdocs/' }; var ourStore = new Ext.data.Store({ url:'Chapter12Example.cfc', baseParams: new cloneConfig(initialBaseParams), reader: new Ext.data.CFQueryReader({ id: 'NAME', // This is supposed to match the 'mapping' fields:[ {name:'file_name',mapping:'NAME'}, {name:'file_size',mapping:'SIZE'}, {name:'type',mapping:'TYPE'}, {name:'lastmod',mapping:'DATELASTMODIFIED'}, {name:'file_attributes',mapping:'ATTRIBUTES'}, {name:'mode',mapping:'MODE'}, {name:'directory',mapping:'DIRECTORY'} ] }), listeners:{ beforeload:{ fn: function(store, options){ for(var i in options){ if(options[i].length > 0){ store.baseParams[i] = options[i]; } } }, scope:this }, load: { fn: function(store, records, options){ console.log(records); }, scope: this }, update: { fn: function(store, record, operation){ switch (operation){ case Ext.record.EDIT: // Do something with the edited record break; case Ext.record.REJECT: // Do something with the rejected record break; case Ext.record.COMMIT: // Do something with the committed record break; } }, scope:this } } }); 
ourStore.load({recurse:true}); filterStoreByType = function (type){ ourStore.load({dirFilter:type}); } filterStoreByFileType = function (fileType){ ourStore.load({fileFilter:fileType}); } clearFilters = function (){ ourStore.baseParams = new cloneConfig(initialBaseParams); ourStore.load(); } }); Summary These very basic changes have no overall effect on our examples. They function exactly as they did before. The new Custom Data Reader loads the data, returned from ColdFusion, exactly as it should. Now, we can also work with these data stores in the same manor as we would with any other data store set up through Ext JS, having the ability to alias columns, define field data types, and more.
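As a quick usage note (not part of the original article): once the store is built with the CFQueryReader, it plugs into the rest of Ext JS 2.x like any other store. The following is a minimal, hypothetical grid wiring; the columns simply mirror the aliased field names defined in the examples above, and the render target div is an assumption.

// Sketch: showing the CFQueryReader-backed store in a grid (Ext JS 2.x).
// Place this inside the same Ext.onReady(function(){ ... }) block that
// builds ourStore, so the variable is in scope.
var fileGrid = new Ext.grid.GridPanel({
    store: ourStore,                 // the store created in the examples above
    columns: [
        {header: 'File Name', dataIndex: 'file_name', sortable: true},
        {header: 'Size',      dataIndex: 'file_size', width: 80},
        {header: 'Type',      dataIndex: 'type',      width: 80},
        {header: 'Modified',  dataIndex: 'lastmod',   width: 140},
        {header: 'Directory', dataIndex: 'directory', width: 70}
    ],
    renderTo: 'grid-container',      // assumed <div id="grid-container"> in the page
    width: 620,
    height: 300
});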

CSS3 – Selectors, Typography, Color Modes, and New Features

Packt
24 Aug 2015
9 min read
In this article by Ben Frain, the author of Responsive Web Design with HTML5 and CSS3 Second Edition, we'll cover the following topics: What are pseudo classes The :last-child selector The nth-child selectors The nth rules nth-based selection in responsive web design (For more resources related to this topic, see here.) CSS3 gives us more power to select elements based upon where they sit in the structure of the DOM. Let's consider a common design treatment; we're working on the navigation bar for a larger viewport and we want to have all but the last link over on the left. Historically, we would have needed to solve this problem by adding a class name to the last link so that we could select it, like this: <nav class="nav-Wrapper"> <a href="/home" class="nav-Link">Home</a> <a href="/About" class="nav-Link">About</a> <a href="/Films" class="nav-Link">Films</a> <a href="/Forum" class="nav-Link">Forum</a> <a href="/Contact-Us" class="nav-Link nav-LinkLast">Contact Us</a> </nav> This in itself can be problematic. For example, sometimes, just getting a content management system to add a class to a final list item can be frustratingly difficult. Thankfully, in those eventualities, it's no longer a concern. We can solve this problem and many more with CSS3 structural pseudo-classes. The :last-child selector CSS 2.1 already had a selector applicable for the first item in a list: div:first-child { /* Styles */ } However, CSS3 adds a selector that can also match the last: div:last-child { /* Styles */ } Let's look how that selector could fix our prior problem: @media (min-width: 60rem) { .nav-Wrapper { display: flex; } .nav-Link:last-child { margin-left: auto; } } There are also useful selectors for when something is the only item: :only-child and the only item of a type: :only-of-type. The nth-child selectors The nth-child selectors let us solve even more difficult problems. With the same markup as before, let's consider how nth-child selectors allow us to select any link(s) within the list. Firstly, what about selecting every other list item? We could select the odd ones like this: .nav-Link:nth-child(odd) { /* Styles */ } Or, if you wanted to select the even ones: .nav-Link:nth-child(even) { /* Styles */ } Understanding what nth rules do For the uninitiated, nth-based selectors can look pretty intimidating. However, once you've mastered the logic and syntax you'll be amazed what you can do with them. Let's take a look. CSS3 gives us incredible flexibility with a few nth-based rules: nth-child(n) nth-last-child(n) nth-of-type(n) nth-last-of-type(n) We've seen that we can use (odd) or (even) values already in an nth-based expression but the (n) parameter can be used in another couple of ways: As an integer; for example, :nth-child(2) would select the 
second item As a numeric expression; for example, :nth-child(3n+1) would start at 1 and then select every third element The integer based property is easy enough to understand, just enter the element number you want to select. The numeric expression version of the selector is the part that can be a little baffling for mere mortals. If math is easy for you, I apologize for this next section. For everyone else, let's break it down. Breaking down the math Let's consider 10 spans on a page (you can play about with these by looking at example_05-05): <span></span> <span></span> <span></span> <span></span> <span></span> <span></span> <span></span> <span></span> <span></span> <span></span> By default they will be styled like this: span { height: 2rem; width: 2rem; background-color: blue; display: inline-block; } As you might imagine, this gives us 10 squares in a line: OK, let's look at how we can select different ones with nth-based selections. For practicality, when considering the expression within the parenthesis, I start from the right. So, for example, if I want to figure out what (2n+3) will select, I start with the right-most number (the three here indicates the third item from the left) and know it will select every second element from that point on. So adding this rule: span:nth-child(2n+3) { color: #f90; border-radius: 50%; } Results in this in the browser: As you can see, our nth selector targets the third list item and then every subsequent second one after that too (if there were 100 list items, it would continue selecting every second one). How about selecting everything from the second item onwards? Well, although you could write :nth-child(1n+2), you don't actually need the first number 1 as unless otherwise stated, n is equal to 1. We can therefore just write :nth-child(n+2). Likewise, if we wanted to select every third element, rather than write :nth-child(3n+3), we can just write :nth-child(3n) as every third item would begin at the third item anyway, without needing to explicitly state it. The expression can also use negative numbers, for example, :nth-child(3n-2) starts at -2 and then selects every third item. You can also change the direction. By default, once the first part of the selection is found, the subsequent ones go down the elements in the DOM (and therefore from left to right in our example). However, you can reverse that with a minus. For example: span:nth-child(-2n+3) { background-color: #f90; border-radius: 50%; } This example finds the third item again, but then goes in the opposite direction to select every two elements (up the DOM tree and therefore from right to left in our example): Hopefully, the nth-based expressions are making perfect sense now? The nth-child and nth-last-child differ in that the nth-last-child variant works from the opposite end of the document tree. For example, :nth-last-child(-n+3) starts at 3 from the end and then selects all the items after it. Here's what that rule gives us in the browser: Finally, let's consider :nth-of-type and :nth-last-of-type. While the previous examples count any children regardless of type (always remember the nth-child selector targets all children at the same DOM level, regardless of classes), :nth-of-type and :nth-last-of-type let you be specific about the type of item you want to select. 
Consider the following markup (example_05-06): <span class="span-class"></span> <span class="span-class"></span> <span class="span-class"></span> <span class="span-class"></span> <span class="span-class"></span> <div class="span-class"></div> <div class="span-class"></div> <div class="span-class"></div> <div class="span-class"></div> <div class="span-class"></div> If we used the selector: .span-class:nth-of-type(-2n+3) { background-color: #f90; border-radius: 50%; } Even though all the elements have the same span-class, we will only actually be targeting the span elements (as they are the first type selected). Here is what gets selected: We will see how CSS4 selectors can solve this issue shortly. CSS3 doesn't count like JavaScript and jQuery! If you're used to using JavaScript and jQuery you'll know that it counts from 0 upwards (zero index based). For example, if selecting an element in JavaScript or jQuery, an integer value of 1 would actually be the second element. CSS3 however, starts at 1 so that a value of 1 is the first item it matches. nth-based selection in responsive web designs Just to close out this little section I want to illustrate a real life responsive web design problem and how we can use nth-based selection to solve it. Remember the horizontal scrolling panel from example_05-02? Let's consider how that might look in a situation where horizontal scrolling isn't possible. So, using the same markup, let's turn the top 10 grossing films of 2014 into a grid. For some viewports the grid will only be two items wide, as the viewport increases we show three items and at larger sizes still we show four. Here is the problem though. Regardless of the viewport size, we want to prevent any items on the bottom row having a border on the bottom. You can view this code at example_05-09. Here is how it looks with four items wide: See that pesky border below the bottom two items? That's what we need to remove. However, I want a robust solution so that if there were another item on the bottom row, the border would also be removed on that too. Now, because there are a different number of items on each row at different viewports, we will also need to change the nth-based selection at different viewports. For the sake of brevity, I'll show you the selection that matches four items per row (the larger of the viewports). You can view the code sample to see the amended selection at the different viewports. @media (min-width: 55rem) { .Item { width: 25%; } /* Get me every fourth item and of those, only ones that are in the last four items */ .Item:nth-child(4n+1):nth-last-child(-n+4), /* Now get me every one after that same collection too. */ .Item:nth-child(4n+1):nth-last-child(-n+4) ~ .Item { border-bottom: 0; } } You'll notice here that we are chaining the nth-based pseudo-class selectors. It's important to understand that the first doesn't filter the selection for the next, rather the element has to match each of the selections. For our preceding example, the first element has to be the first item of four and also be one of the last four. Nice! Thanks to nth-based selections we have a defensive set of rules to remove the bottom border regardless of the viewport size or number of items we are showing. Summary In this article, we've learned what are structural pseudo-classes. We've also learned what nth rules do. We have also showed the nth-based selection in responsive web design. 
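For completeness, here is a hedged sketch of the equivalent rules for the two- and three-items-per-row breakpoints mentioned just before the summary. The .Item class name and the 55rem upper breakpoint come from the example above; the narrower breakpoint widths are assumptions, so check example_05-09 for the values actually used.

/* Two items per row at narrow viewports (breakpoint assumed) */
@media (max-width: 39.999rem) {
  .Item {
    width: 50%;
  }
  .Item:nth-child(2n+1):nth-last-child(-n+2),
  .Item:nth-child(2n+1):nth-last-child(-n+2) ~ .Item {
    border-bottom: 0;
  }
}

/* Three items per row at mid-sized viewports (lower breakpoint assumed) */
@media (min-width: 40rem) and (max-width: 54.999rem) {
  .Item {
    width: 33.333%;
  }
  .Item:nth-child(3n+1):nth-last-child(-n+3),
  .Item:nth-child(3n+1):nth-last-child(-n+3) ~ .Item {
    border-bottom: 0;
  }
}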
Resources for Article:   Further resources on this subject: CSS3 Animation[article] A look into responsive design frameworks[article] Linking Dynamic Content from External Websites[article]

Oracle ADF Essentials – Adding Business Logic

Packt
03 Sep 2013
19 min read
(For more resources related to this topic, see here.) Adding logic to business components by default, a business component does not have an explicit Java class. When you want to add Java logic, however, you generate the relevant Java class from the Java tab of the business component. On the Java tab, you also decide which of your methods are to be made available to other objects by choosing to implement a Client Interface . Methods that implement a client interface show up in the Data Control palette and can be called from outside the object. Logic in entity objects Remember that entity objects are closest to your database tables –– most often, you will have one entity object for every table in the database. This makes the entity object a good place to put data logic that must be always executed. If you place, for example, validation logic in an entity object, it will be applied no matter which view object attempts to change data. In the database or in an entity object? Much of the business logic you can place in an entity object can also be placed in the database using database triggers. If other systems are accessing your database tables, business logic should go into the database as much as possible. Overriding accessors To use Java in entity objects, you open an entity object and select the Java tab. When you click on the pencil icon, the Select Java Options dialog opens as shown in the following screenshot: In this dialog, you can select to generate Accessors (the setXxx() and getXxx() methods for all the attributes) as well as Data Manipulation Methods (the doDML() method; there is more on this later). When you click on OK , the entity object class is generated for you. You can open it by clicking on the hyperlink or you can find it in the Application Navigator panel as a new node under the entity object. If you look inside this file, you will find: Your class should start with an import section that contains a statement that imports your EntityImpl class. If you have set up your framework extension classes correctly this could be import com.adfessentials.adf.framework.EntityImpl. You will have to click on the plus sign in the left margin to expand the import section. The Structure panel in the bottom-left shows an overview of the class including all the methods it contains. You will see a lot of setter and getter methods like getFirstName() and setFirstName() as shown in the following screenshot: There is a doDML() method described later. If you were to decide, for example, that last name should always be stored in upper case, you could change the setLastName() method to: public void setLastName(String value) { setAttributeInternal(LASTNAME, value.toUpperCase()); } Working with database triggers If you decide to keep some of your business logic in database triggers, your triggers might change the values that get passed from the entity object. Because the entity object caches values to save database work, you need to make sure that the entity object stays in sync with the database even if a trigger changes a value. You do this by using the Refresh on Update property. To find this property, select the Attributes subtab on the left and then select the attribute that might get changed. At the bottom of the screen, you see various settings for the attribute with the Refresh settings in the top-right of the Details tab as shown in the following screenshot: Check the Refresh on Update property checkbox if a database trigger might change the attribute value. 
This makes the ADF framework requery the database after an update has been issued.

Refresh on Insert doesn't work if you are using MySQL and your primary key is generated with AUTO_INCREMENT or set by a trigger. ADF doesn't know the primary key and therefore cannot find the newly inserted row after inserting it. It does work if you are running against an Oracle database, because Oracle SQL syntax has a special RETURNING construct that allows the entity object to get the newly created primary key back.

Overriding doDML()

Next, after the setters and getters, the doDML() method is the one that most often gets overridden. This method is called whenever an entity object wants to execute a Data Manipulation Language (DML) statement like INSERT, UPDATE, or DELETE. This offers you a way to add additional processing; for example, checking that the account balance is zero before allowing a customer to be deleted. In this case, you would add logic to check the account balance, and if the deletion is allowed, call super.doDML() to invoke normal processing.

Another example would be to implement logical delete (records only change state and are not actually deleted from the table). In this case, you would override doDML() as follows:

@Override
protected void doDML(int operation, TransactionEvent e) {
    if (operation == DML_DELETE) {
        operation = DML_UPDATE;
    }
    super.doDML(operation, e);
}

As is probably obvious from the code, this simply replaces a DELETE operation with an UPDATE before it calls the doDML() method of its superclass (your framework extension EntityImpl, which passes the task on to the Oracle-supplied EntityImpl class). Of course, you also need to change the state of the entity object row, for example, in the remove() method. You can find fully functional examples of this approach on various blogs, for example at http://myadfnotebook.blogspot.dk/2012/02/updating-flag-when-deleting-entity-in.html.

You also have the option of completely replacing normal doDML() processing by simply not calling super.doDML(). This could be the case if you want all your data modifications to go via a database procedure; for example, to insert an actor, you would have to call insertActor with first name and last name. In this case, you would write something like:

@Override
protected void doDML(int operation, TransactionEvent e) {
    CallableStatement cstmt = null;
    if (operation == DML_INSERT) {
        String insStmt = "{call insertActor (?,?)}";
        cstmt = getDBTransaction().createCallableStatement(insStmt, 0);
        try {
            cstmt.setString(1, getFirstName());
            cstmt.setString(2, getLastName());
            cstmt.execute();
        } catch (Exception ex) {
            …
        } finally {
            …
        }
    }
}

If the operation is an insert, the above code uses the current transaction (via the getDBTransaction() method) to create a CallableStatement with the string insertActor(?,?). Next, it binds the two parameters (indicated by the question marks in the statement string) to the values for first name and last name (by calling the getter methods for these two attributes). Finally, the code block finishes with a normal catch clause to handle SQL errors and a finally clause to close open objects. Again, fully working examples are available in the documentation and on the Internet in various blog posts. Normally, you would implement this kind of override in the framework extension EntityImpl class, with additional logic that allows the framework extension class to recognize which specific entity object the operation applies to and which database procedure to call.
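Returning to the logical-delete example above: the blog post referenced there pairs the doDML() override with an overridden remove() that flags the row instead of physically deleting it. A minimal sketch of that pairing is shown below; the entity class name and the Status attribute are assumptions for illustration, not part of this article's schema, and in a real project the class would extend your framework extension EntityImpl.

import oracle.jbo.server.EntityImpl;
import oracle.jbo.server.TransactionEvent;

// Sketch: logical delete in an entity object implementation class.
public class RentalImpl extends EntityImpl {

    @Override
    public void remove() {
        // Mark the row as logically deleted before the framework
        // processes the removal (attribute name assumed).
        setAttribute("Status", "DELETED");
        super.remove();
    }

    @Override
    protected void doDML(int operation, TransactionEvent e) {
        if (operation == DML_DELETE) {
            // Turn the physical DELETE into an UPDATE of the flagged row
            operation = DML_UPDATE;
        }
        super.doDML(operation, e);
    }
}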
Data validation With the techniques you have just seen, you can implement every kind of business logic your requirements call for. One requirement, however, is so common that it has been built right into the ADF framework: data validation . Declarative validation The simplest kind of validation is where you compare one individual attribute to a limit, a range, or a number of fixed values. For this kind of validation, no code is necessary at all. You simply select the Business Rules subtab in the entity object, select an attribute, and click on the green plus sign to add a validation rule. The Add Validation Rule dialog appears as shown in the following screenshot: You have a number of options for Rule Type –– depending on your choice here, the Rule Definition tab changes to allow you to define the parameters for the rule. On the Failure Handling tab, you can define whether the validation is an error (that must be corrected) or a warning (that the user can override), and you define a message text as shown in the following screenshot: You can even define variable message tokens by using curly brackets { } in your message text. If you do so, a token will automatically be added to the Token Message Expressions section of the dialog, where you can assign it any value using Expression Language. Click on the Help button in the dialog for more information on this. If your application might ever conceivably be needed in a different language, use the looking glass icon to define a resource string stored in a separate resource bundle. This allows your application to have multiple resource bundles, one for each different user interface language. There is also a Validation Execution tab that allows you to specify under which condition your rule should be applied. This can be useful if your logic is complex and resource intensive. If you do not enter anything here, your rule is always executed. Regular expression validation One of the especially powerful declarative validations is the Regular Expression validation. A regular expression is a very compact notation that can define the format of a string –– this is very useful for checking e-mail addresses, phone numbers, and so on. To use this, set Rule Type to Regular Expression as shown in the following screenshot: JDeveloper offers you a few predefined regular expressions, for example, the validation for e-mails as shown in the preceding screenshot. Even though you can find lots of predefined regular expressions on the Internet, someone from your team should understand the basics of regular expression syntax so you can create the exact expression you need. Groovy scripts You can also set Rule Type to Script to get a free-format box where you can write a Groovy expression. Groovy is a scripting language for the Java platform that works well together with Java –– see http://groovy.codehaus.org/ for more information on Groovy. Oracle has published a white paper on Groovy in ADF (http://www.oracle.com/technetwork/developer-tools/jdev/introduction-to-groovy-128837.pdf), and there is also information on Groovy in the JDeveloper help. Method validation If none of these methods for data validation fit your need, you can of course always revert to writing code. To do this, set Rule Type to Method and provide an error message. If you leave the Create a Select Method checkbox checked when you click on OK , JDeveloper will automatically create a method with the right signature and add it to the Java class for the entity object. 
The autogenerated validation method for Length (in the Film entity object) would look as follows: /** * Validation method for Length. */ public boolean validateLength (Integer length) { return true; } It is your task to fill in the logic and return either true (if validation is OK) or false (if the data value does not meet the requirements). If validation fails, ADF will automatically display the message you defined for this validation rule. Logic in view objects View objects represent the dataset you need for a specific part of the application — typically a specific screen or part of a screen. You can create Java objects for either an entire view object (an XxxImpl.java class, where Xxx is the name of your view object) or for a specific row (an XxxRowImpl.java class). A view object class contains methods to work with the entire data-set that the view object represents –– for example, methods to apply view criteria or re-execute the underlying database query. The view row class contains methods to work with an individual record of data –– mainly methods to set and get attribute values for one specific record. Overriding accessors Like for entity objects, you can override the accessors (setters and getters) for view objects. To do this, you use the Java subtab in the view object and click on the pencil icon next to Java Classes to generate Java. You can select to generate a view row class including accessors to ask JDeveloper to create a view row implementation class as shown in the following screenshot: This will create an XxxRowImpl class (for example, RentalVORowImpl) with setter and getter methods for all attributes. The code will look something like the following code snippet: … public class RentalVORowImpl extends ViewRowImpl { … /** * This is the default constructor (do not remove). */ public RentalVORowImpl() { } … /** * Gets the attribute value for title using the alias name * Title. * @return the title */ public String getTitle() { return (String) getAttributeInternal(TITLE); } /** * Sets <code>value</code> as attribute value for title using * the alias name Title. * @param value value to set the title */ public void setTitle(String value) { setAttributeInternal(TITLE, value); } … } You can change all of these to manipulate data before it is delivered to the entity object or to return a processed version of an attribute value. To use such attributes, you can write code in the implementation class to determine which value to return. You can also use Groovy expressions to determine values for transient attributes. This is done on the Value subtab for the attribute by setting Value Type to Expression and filling in the Value field with a Groovy expression. See the Oracle white paper on Groovy in ADF (http://www.oracle.com/technetwork/developer-tools/jdev/introduction-to-groovy-128837.pdf) or the JDeveloper help. Change view criteria Another example of coding in a view object is to dynamically change which view criteria are applied to the view object.It is possible to define many view criteria on a view object –– when you add a view object instance to an application module, you decide which of the available view criteria to apply to that specific view object instance. However, you can also programmatically change which view criteria are applied to a view object. 
This can be useful if you want to have buttons to control which subset of data to display –– in the example application, you could imagine a button to "show only overdue rentals" that would apply an extra view criterion to a rental view object. Because the view criteria apply to the whole dataset, view criteria methods go into the view object, not the view row object. You generate a Java class for the view object from the Java Options dialog in the same way as you generate Java for the view row object. In the Java Options dialog, select the option to generate the view object class as shown in the following screenshot:

A simple example of programmatically applying a view criterion would be a method that applies an already defined view criterion called OverdueCriterion to a view object. This would look like this in the view object class:

public void showOnlyOverdue() {
  ViewCriteria vc = getViewCriteria("OverdueCriterion");
  applyViewCriteria(vc);
  executeQuery();
}

View criteria often have bind variables –– for example, you could have a view criterion called OverdueByDaysCriterion that uses a bind variable OverdueDayLimit. When you generate Java for the view object, the default option of Include bind variable accessors (shown in the preceding screenshot) will create a setOverdueDayLimit() method if you have an OverdueDayLimit bind variable. A method in the view object to which we apply this criterion might look like the following code snippet:

public void showOnlyOverdueByDays(int days) {
  ViewCriteria vc = getViewCriteria("OverdueByDaysCriterion");
  setOverdueDayLimit(days);
  applyViewCriteria(vc);
  executeQuery();
}

If you want to call these methods from the user interface, you must add them to the client interface (on the Java subtab in the view object). This will make your method available in the Data Control palette, ready to be dragged onto a page and dropped as a button.

When you change the view criteria and execute the query, only the content of the view object changes –– the screen does not automatically repaint itself. In order to ensure that the screen refreshes, you need to set the PartialTriggers property of the data table to point to the ID of the button that changes the view criteria. For more on partial page rendering, see the Oracle Fusion Middleware Web User Interface Developer's Guide for Oracle Application Development Framework (http://docs.oracle.com/cd/E37975_01/web.111240/e16181/af_ppr.htm).

Logic in application modules

You've now seen how to add logic to both entity objects and view objects. However, you can also add custom logic to application modules. An application module is the place for logic that does not belong to a specific view object –– for example, calls to stored procedures that involve data from multiple view objects. To generate a Java class for an application module, you navigate to the Java subtab in the application module and select the pencil icon next to the Java Classes heading. Typically, you create Java only for the application module class and not for the application module definition. You can also add your own logic here that gets called from the user interface, or you can override the existing methods in the application module.
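As an illustration of the first option, adding your own logic, the following sketch shows what a custom application module method might look like. The class name, the procedure name, and the view object instance names are invented for this example; they are not part of the book's rental application.

import java.sql.CallableStatement;
import java.sql.SQLException;
import oracle.jbo.JboException;
import oracle.jbo.server.ApplicationModuleImpl;

// Hypothetical application module class -- all names are illustrative.
public class RentalServiceImpl extends ApplicationModuleImpl {

  // Archives returned rentals via a database procedure and then refreshes
  // the view object instances that display the affected data.
  public void archiveReturnedRentals() {
    CallableStatement cstmt = null;
    try {
      cstmt = getDBTransaction().createCallableStatement("{call archive_returned_rentals}", 0);
      cstmt.execute();
    } catch (SQLException ex) {
      throw new JboException(ex);
    } finally {
      if (cstmt != null) {
        try { cstmt.close(); } catch (SQLException ex) { /* ignore close errors */ }
      }
    }
    // Re-execute the queries so that the view objects reflect what the procedure changed
    findViewObject("RentalVO1").executeQuery();
    findViewObject("ReturnedRentalVO1").executeQuery();
  }
}

Such a method only becomes callable from the user interface once it is exposed as a client method, as the following paragraph points out.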
A typical method to override is prepareSession(), which gets called before the application module establishes a connection to the database –– if you need to, for example, call stored procedures or do other kinds of initialization before accessing the database, an application module method is a good place to do so. Remember that you need to define your own methods as client methods on the Java tab of the application module for the method to be available to be called from elsewhere in the application. Because the application module handles the transaction, it also contains methods, such as beforeCommit(), beforeRollback(), afterCommit(), afterRollback(), and so on. The doDML() method on any entity object that is part of the transaction is executed before any of the application modules' methods. Adding logic to the user interface Logic in the user interface is implemented in the form of managed beans. These are Java classes that are registered with the task flow and automatically instantiated by the ADF framework.ADF operates with various memory scopes –– you have to decide on a scope when you define a managed bean. Adding a bean method to a button The simplest way to add logic to the user interface is to drop a button (af:commandButton) onto a page or page fragment and then double-click on it. This brings up the Bind Action Property dialog as shown in the following screenshot: If you leave Method Binding selected and click on New , the Create Managed Bean dialog appears as shown in the following screenshot: In this dialog, you can give your bean a name, provide a class name (typically the same as the bean name), and select a scope. The backingBean scope is a good scope for logic that is only used for one action when the user clicks on the button and which does not need to store any state for later. Leaving the Generate Class If It Does Not Exist checkbox checked asks JDeveloper to create the class for you. When you click on OK , JDeveloper will automatically suggest a method for you in the Method dropdown (based on the ID of the button you double-clicked on). In the Method field, provide a more useful name and click on OK to add the new class and open it in the editor. You will see a method with your chosen name, as shown in the following code snippet: Public String rentDvd() { // Add event code here... return null; } Obviously, you place your code inside this method. If you accidentally left the default method name and ended up with something like cb5_action(), you can right-click on the method name and navigate to Refactor | Rename to give it a more descriptive name. Note that JDeveloper automatically sets the Action property for your button matching the scope, bean name, and method name. This might be something like #{backingBeanScope.RentalBean.rentDvd}. Adding a bean to a task flow Your beans should always be part of a task flow. If you're not adding logic to a button, or you just want more control over the process, you can also create a backing bean class first and then add it to the task flow. A bean class is a regular Java class created by navigating to File | New | Java Class . When you have created the class, you open the task flow where you want to use it and select the Overview tab. On the Managed Beans subtab, you can use the green plus to add your bean. Simply give it a name, point to the class you created, and select a memory scope. Accessing UI components from beans In a managed bean, you often want to refer to various user interface elements. 
This is done by mapping each element to a property in the bean. For example, if you have an af:inputText component that you want to refer to in a bean, you create a private variable of type RichInputText in the bean (with setter and getter methods) and set the Binding property (under the Advanced heading) to point to that bean variable using Expression Language. When creating a page or page fragment, you have the option (on the Managed Bean tab) to automatically have JDeveloper create corresponding attributes for you. The Managed Bean tab is shown in the following screenshot: Leave it on the default setting of Do Not Automatically Expose UI Components in a Managed Bean . If you select one of the options to automatically expose UI elements, your bean will acquire a lot of attributes that you don't need, which will make your code unnecessarily complex and slow. However, while learning ADF, you might want to try this out to see how the bean attributes and the Binding property work together. If you do activate this setting, it applies to every page and fragment you create until you explicitly deselect this option. Summary In this article, you have seen some examples of how to add Java code to your application to implement the specific business logic your application needs. There are many, many more places and ways to add logic –– as you work with ADF, you will continually come across new business requirements that force you to figure out how to add code to your application in new ways. Fortunately, there are other books, websites, online tutorials and training that you can use to add to your ADF skill set –– refer to http://www.adfessentials.com for a starting point. Resources for Article : Further resources on this subject: Oracle Tools and Products [Article] Managing Oracle Business Intelligence [Article] Oracle Integration and Consolidation Products [Article]


The NGINX HTTP Server

Packt
18 Apr 2013
28 min read
(For more resources related to this topic, see here.) NGINX's architecture NGINX consists of a single master process and multiple worker processes. Each of these is single-threaded and designed to handle thousands of connections simultaneously. The worker process is where most of the action takes place, as this is the component that handles client requests. NGINX makes use of the operating system's event mechanism to respond quickly to these requests. The NGINX master process is responsible for reading the configuration, handling sockets, spawning workers, opening log files, and compiling embedded Perl scripts. The master process is the one that responds to administrative requests via signals. The NGINX worker process runs in a tight event loop to handle incoming connections. Each NGINX module is built into the worker, so that any request processing, filtering, handling of proxy connections, and much more is done within the worker process. Due to this worker model, the operating system can handle each process separately and schedule the processes to run optimally on each processor core. If there are any processes that would block a worker, such as disk I/O, more workers than cores can be configured to handle the load. There are also a small number of helper processes that the NGINX master process spawns to handle dedicated tasks. Among these are the cache loader and cache manager processes. The cache loader is responsible for preparing the metadata for worker processes to use the cache. The cache manager process is responsible for checking cache items and expiring invalid ones. NGINX is built in a modular fashion. The master process provides the foundation upon which each module may perform its function. Each protocol and handler is implemented as its own module. The individual modules are chained together into a pipeline to handle connections and process requests. After a request is handled, it is then passed on to a series of filters, in which the response is processed. One of these filters is responsible for processing subrequests, one of NGINX's most powerful features. Subrequests are how NGINX can return the results of a request that differs from the URI that the client sent. Depending on the configuration, they may be multiply nested and call other subrequests. Filters can collect the responses from multiple subrequests and combine them into one response to the client. The response is then finalized and sent to the client. Along the way, multiple modules come into play. See http://www.aosabook.org/en/nginx.html for a detailed explanation of NGINX internals. We will be exploring the http module and a few helper modules in the remainder of this article. The HTTP core module The http module is NGINX's central module, which handles all interactions with clients over HTTP. We will have a look at the directives in the rest of this section, again divided by type. The server The server directive starts a new context. We have already seen examples of its usage throughout the book so far. One aspect that has not yet been examined in-depth is the concept of a default server. A default server in NGINX means that it is the first server defined in a particular configuration with the same listen IP address and port as another server. A default server may also be denoted by the default_server parameter to the listen directive. 
The default server is useful to define a set of common directives that will then be reused for subsequent servers listening on the same IP address and port: server { listen 127.0.0.1:80; server_name default.example.com; server_name_in_redirect on; } server { listen 127.0.0.1:80; server_name www.example.com; } In this example, the www.example.com server will have the server_name_in_redirect directive set to on as well as the default.example.com server. Note that this would also work if both servers had no listen directive, since they would still both match the same IP address and port number (that of the default value for listen, which is *:80). Inheritance, though, is not guaranteed. There are only a few directives that are inherited, and which ones are changes over time. A better use for the default server is to handle any request that comes in on that IP address and port, and does not have a Host header. If you do not want the default server to handle requests without a Host header, it is possible to define an empty server_name directive. This server will then match those requests. server { server_name ""; } The following table summarizes the directives relating to server: Table: HTTP server directives Directive Explanation port_in_redirect Determines whether or not the port will be specified in a redirect issued by NGINX. server Creates a new configuration context, defining a virtual host. The listen directive specifies the IP address(es) and port(s); the server_name directive lists the Host header values that this context matches. server_name Configures the names that a virtual host may respond to. server_name_in_redirect Activates using the first value of the server_name directive in any redirect issued by NGINX within this context. server_tokens Disables sending the NGINX version string in error messages and the Server response header (default value is on). Logging NGINX has a very flexible logging model . Each level of configuration may have an access log. In addition, more than one access log may be specified per level, each with a different log_format. The log_format directive allows you to specify exactly what will be logged, and needs to be defined within the http section. The path to the log file itself may contain variables, so that you can build a dynamic configuration. The following example describes how this can be put into practice: http { log_format vhost '$host $remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent"'; log_format downloads '$time_iso8601 $host $remote_addr ' '"$request" $status $body_bytes_sent $request_ time'; open_log_file_cache max=1000 inactive=60s; access_log logs/access.log; server { server_name ~^(www.)?(.+)$; access_log logs/combined.log vhost; access_log logs/$2/access.log; location /downloads { access_log logs/downloads.log downloads; } } } The following table describes the directives used in the preceding code: Table: HTTP logging directives Directive Explanation access_log Describes where and how access logs are to be written. The first parameter is a path to the file where the logs are to be stored. Variables may be used in constructing the path. The special value off disables the access log. An optional second parameter indicates log_format that will be used to write the logs. If no second parameter is configured, the predefined combined format is used. An optional third parameter indicates the size of the buffer if write buffering should be used to record the logs. 
If write buffering is used, this size cannot exceed the size of the atomic disk write for that filesystem. If this third parameter is gzip, then the buffered logs will be compressed on-the-fly, provided that the nginx binary was built with the zlib library. A final flush parameter indicates the maximum length of time buffered log data may remain in memory before being flushed to disk. log_format Specifies which fields should appear in the log file and what format they should take. See the next table for a description of the log-specific variables. log_not_found Disables reporting of 404 errors in the error log (default value is on). log_subrequest Enables logging of subrequests in the access log (default value is off ). open_log_file_cache Stores a cache of open file descriptors used in access_logs with a variable in the path. The parameters used are: max: The maximum number of file descriptors present in the cache inactive: NGINX will wait this amount of time for something to be written to this log before its file descriptor is closed min_uses: The file descriptor has to be used this amount of times within the inactive period in order to remain open valid: NGINX will check this often to see if the file descriptor still matches a file with the same name off: Disables the cache In the following example, log entries will be compressed at a gzip level of 4. The buffer size is the default of 64 KB and will be flushed to disk at least every minute. access_log /var/log/nginx/access.log.gz combined gzip=4 flush=1m; Note that when specifying gzip the log_format parameter is not optional.The default combined log_format is constructed like this: log_format combined '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent ' '"$http_referer" "$http_user_agent"'; As you can see, line breaks may be used to improve readability. They do not affect the log_format itself. Any variables may be used in the log_format directive. The variables in the following table which are marked with an asterisk ( *) are specific to logging and may only be used in the log_format directive. The others may be used elsewhere in the configuration, as well. Table: Log format variables Variable Name Value $body_bytes_sent The number of bytes sent to the client, excluding the response header. $bytes_sent The number of bytes sent to the client. $connection A serial number, used to identify unique connections. $connection_requests The number of requests made through a particular connection. $msec The time in seconds, with millisecond resolution. $pipe * Indicates if the request was pipelined (p) or not (.). $request_length * The length of the request, including the HTTP method, URI, HTTP protocol, header, and request body. $request_time The request processing time, with millisecond resolution, from the first byte received from the client to the last byte sent to the client. $status The response status. $time_iso8601 * Local time in ISO8601 format. $time_local * Local time in common log format (%d/%b/%Y:%H:%M:%S %z). In this section, we have focused solely on access_log and how that can be configured. You can also configure NGINX to log errors. Finding files In order for NGINX to respond to a request, it passes it to a content handler, determined by the configuration of the location directive. The unconditional content handlers are tried first: perl, proxy_pass, flv, mp4, and so on. 
If none of these is a match, the request is passed to one of the following, in order: random index, index, autoindex, gzip_static, static. Requests with a trailing slash are handled by one of the index handlers. If gzip is not activated, then the static module handles the request. How these modules find the appropriate file or directory on the filesystem is determined by a combination of certain directives. The root directive is best defined in a default server directive, or at least outside of a specific location directive, so that it will be valid for the whole server: server { root /home/customer/html; location / { index index.html index.htm; } location /downloads { autoindex on; } } In the preceding example any files to be served are found under the root /home/customer/html. If the client entered just the domain name, NGINX will try to serve index.html. If that file does not exist, then NGINX will serve index.htm. When a user enters the /downloads URI in their browser, they will be presented with a directory listing in HTML format. This makes it easy for users to access sites hosting software that they would like to download. NGINX will automatically rewrite the URI of a directory so that the trailing slash is present, and then issue an HTTP redirect. NGINX appends the URI to the root to find the file to deliver to the client. If this file does not exist, the client receives a 404 Not Found error message. If you don't want the error message to be returned to the client, one alternative is to try to deliver a file from different filesystem locations, falling back to a generic page, if none of those options are available. The try_files directive can be used as follows: location / { try_files $uri $uri/ backups/$uri /generic-not-found.html; } As a security precaution, NGINX can check the path to a file it's about to deliver, and if part of the path to the file contains a symbolic link, it returns an error message to the client: server { root /home/customer/html; disable_symlinks if_not_owner from=$document_root; } In the preceding example, NGINX will return a "Permission Denied" error if a symlink is found after /home/customer/html, and that symlink and the file it points to do not both belong to the same user ID. The following table summarizes these directives: Table: HTTP file-path directives Directive Explanation disable_symlinks Determines if NGINX should perform a symbolic link check on the path to a file before delivering it to the client. The following parameters are recognized: off : Disables checking for symlinks (default) on: If any part of a path is a symlink, access is denied if_not_owner: If any part of a path contains a symlink in which the link and the referent have different owners, access to the file is denied from=part: When specified, the path up to part is not checked for symlinks, everything afterward is according to either the on or if_not_owner parameter root Sets the path to the document root. Files are found by appending the URI to the value of this directive. try_files Tests the existence of files given as parameters. If none of the previous files are found, the last entry is used as a fallback, so ensure that this path or named location exists, or is set to return a status code indicated by  =<status code>. Name resolution If logical names instead of IP addresses are used in an upstream or *_pass directive, NGINX will by default use the operating system's resolver to get the IP address, which is what it really needs to connect to that server. 
This will happen only once, the first time upstream is requested, and won't work at all if a variable is used in the *_pass directive. It is possible, though, to configure a separate resolver for NGINX to use. By doing this, you can override the TTL returned by DNS, as well as use variables in the *_pass directives. server { resolver 192.168.100.2 valid=300s; } Table: Name resolution directives Directive Explanation resolver   Configures one or more name servers to be used to resolve upstream server names into IP addresses. An optional  valid parameter overrides the TTL of the domain name record. In order to get NGINX to resolve an IP address anew, place the logical name into a variable. When NGINX resolves that variable, it implicitly makes a DNS look-up to find the IP address. For this to work, a resolver directive must be configured: server { resolver 192.168.100.2; location / { set $backend upstream.example.com; proxy_pass http://$backend; } } Of course, by relying on DNS to find an upstream, you are dependent on the resolver always being available. When the resolver is not reachable, a gateway error occurs. In order to make the client wait time as short as possible, the resolver_timeout parameter should be set low. The gateway error can then be handled by an error_ page designed for that purpose. server { resolver 192.168.100.2; resolver_timeout 3s; error_page 504 /gateway-timeout.html; location / { proxy_pass http://upstream.example.com; } } Client interaction There are a number of ways in which NGINX can interact with clients. This can range from attributes of the connection itself (IP address, timeouts, keepalive, and so on) to content negotiation headers. The directives listed in the following table describe how to set various headers and response codes to get the clients to request the correct page or serve up that page from its own cache: Table: HTTP client interaction directives Directive Explanation default_type Sets the default MIME type of a response. This comes into play if the MIME type of the file cannot be matched to one of those specified by the types directive. error_page Defines a URI to be served when an error level response code is encountered. Adding an = parameter allows the response code to be changed. If the argument to this parameter is left empty, the response code will be taken from the URI, which must in this case be served by an upstream server of some sort. etag Disables automatically generating the ETag response header for static resources (default is on). if_modified_since Controls how the modification time of a response is compared to the value of the If-Modified-Since request header: off: The If-Modified-Since header is ignored exact: An exact match is made (default) before: The modification time of the response is less than or equal to the value of the If-Modified-Since header ignore_invalid_headers Disables ignoring headers with invalid names (default is on). A valid name is composed of ASCII letters, numbers, the hyphen, and possibly the underscore (controlled by the underscores_in_headers directive). merge_slashes Disables the removal of multiple slashes. The default value of on means that NGINX will compress two or more / characters into one. recursive_error_pages Enables doing more than one redirect using the error_page directive (default is off). types Sets up a map of MIME types to file name extensions. NGINX ships with a conf/mime.types file that contains most MIME type mappings. 
Using include to load this file should be sufficient for most purposes. underscores_in_headers Enables the use of the underscore character in client request headers. If left at the default value off , evaluation of such headers is subject to the value of the ignore_invalid_headers directive. The error_page directive is one of NGINX's most flexible. Using this directive, we may serve any page when an error condition presents. This page could be on the local machine, but could also be a dynamic page produced by an application server, and could even be a page on a completely different site. http { # a generic error page to handle any server-level errors error_page 500 501 502 503 504 share/examples/nginx/50x.html; server { server_name www.example.com; root /home/customer/html; # for any files not found, the page located at # /home/customer/html/404.html will be delivered error_page 404 /404.html; location / { # any server-level errors for this host will be directed # to a custom application handler error_page 500 501 502 503 504 = @error_handler; } location /microsite { # for any non-existent files under the /microsite URI, # the client will be shown a foreign page error_page 404 http://microsite.example.com/404.html; } # the named location containing the custom error handler location @error_handler { # we set the default type here to ensure the browser # displays the error page correctly default_type text/html; proxy_pass http://127.0.0.1:8080; } } } Using limits to prevent abuse We build and host websites because we want users to visit them. We want our websites to always be available for legitimate access. This means that we may have to take measures to limit access to abusive users. We may define "abusive" to mean anything from one request per second to a number of connections from the same IP address. Abuse can also take the form of a DDOS (distributed denial-of-service) attack, where bots running on multiple machines around the world all try to access the site as many times as possible at the same time. In this section, we will explore methods to counter each type of abuse to ensure that our websites are available. First, let's take a look at the different configuration directives that will help us achieve our goal: Table: HTTP limits directives Directive Explanation limit_conn Specifies a shared memory zone (configured with limit_conn_zone) and the maximum number of connections that are allowed per key value. limit_conn_log_level When NGINX limits a connection due to the limit_conn directive, this directive specifies at which log level that limitation is reported. limit_conn_zone Specifies the key to be limited in limit_conn as the first parameter. The second parameter, zone, indicates the name of the shared memory zone used to store the key and current number of connections per key and the size of that zone (name:size). limit_rate Limits the rate (in bytes per second) at which clients can download content. The rate limit works on a connection level, meaning that a single client could increase their throughput by opening multiple connections. limit_rate_after Starts the limit_rate after this number of bytes have been transferred. limit_req Sets a limit with bursting capability on the number of requests for a specific key in a shared memory store (configured with limit_req_zone). The burst can be specified with the second parameter. If there shouldn't be a delay in between requests up to the burst, a third parameter nodelay needs to be configured. 
limit_req_log_level When NGINX limits the number of requests due to the limit_req directive, this directive specifies at which log level that limitation is reported. A delay is logged at a level one less than the one indicated here. limit_req_zone Specifies the key to be limited in limit_req as the first parameter. The second parameter, zone, indicates the name of the shared memory zone used to store the key and current number of requests per key and the size of that zone ( name:size). The third parameter, rate, configures the number of requests per second (r/s) or per minute (r/m) before the limit is imposed. max_ranges Sets the maximum number of ranges allowed in a byte-range request. Specifying 0 disables byte-range support. Here we limit access to 10 connections per unique IP address. This should be enough for normal browsing, as modern browsers open two to three connections per host. Keep in mind, though, that any users behind a proxy will all appear to come from the same address. So observe the logs for error code 503 (Service Unavailable), meaning that this limit has come into effect: http { limit_conn_zone $binary_remote_addr zone=connections:10m; limit_conn_log_level notice; server { limit_conn connections 10; } } Limiting access based on a rate looks almost the same, but works a bit differently. When limiting how many pages per unit of time a user may request, NGINX will insert a delay after the first page request, up to a burst. This may or may not be what you want, so NGINX offers the possibility to remove this delay with the nodelay parameter: http { limit_req_zone $binary_remote_addr zone=requests:10m rate=1r/s; limit_req_log_level warn; server { limit_req zone=requests burst=10 nodelay; } } Using $binary_remote_addr We use the $binary_remote_addr variable in the preceding example to know exactly how much space storing an IP address will take. This variable takes 32 bytes on 32-bit platforms and 64 bytes on 64-bit platforms. So the 10m zone we configured previously is capable of holding up to 320,000 states on 32-bit platforms or 160,000 states on 64-bit platforms. We can also limit the bandwidth per client. This way we can ensure that a few clients don't take up all the available bandwidth. One caveat, though: the limit_rate directive works on a connection basis. 
A single client that is allowed to open multiple connections will still be able to get around this limit: location /downloads { limit_rate 500k; } Alternatively, we can allow a kind of bursting to freely download smaller files, but make sure that larger ones are limited: location /downloads { limit_rate_after 1m; limit_rate 500k; } Combining these different rate limitations enables us to create a configuration that is very flexible as to how and where clients are limited: http { limit_conn_zone $binary_remote_addr zone=ips:10m; limit_conn_zone $server_name zone=servers:10m; limit_req_zone $binary_remote_addr zone=requests:10m rate=1r/s; limit_conn_log_level notice; limit_req_log_level warn; reset_timedout_connection on; server { # these limits apply to the whole virtual server limit_conn ips 10; # only 1000 simultaneous connections to the same server_name limit_conn servers 1000; location /search { # here we want only the /search URL to be rate-limited limit_req zone=requests burst=3 nodelay; } location /downloads { # using limit_conn to ensure that each client is # bandwidth-limited # with no getting around it limit_conn connections 1; limit_rate_after 1m; limit_rate 500k; } } } Restricting access In the previous section, we explored ways to limit abusive access to websites running under NGINX. Now we will take a look at ways to restrict access to a whole website or certain parts of it. Access restriction can take two forms here: restricting to a certain set of IP addresses, or restricting to a certain set of users. These two methods can also be combined to satisfy requirements that some users can access the website either from a certain set of IP addresses or if they are able to authenticate with a valid username and password. The following directives will help us achieve these goals: Table: HTTP access module directives Directive Explanation allow Allows access from this IP address, network, or all. auth_basic Enables authentication using HTTP Basic Authentication. The parameter string is used as the realm name. If the special value off is used, this indicates that the auth_basic value of the parent configuration level is negated. auth_basic_user_file Indicates the location of a file of username:password:comment tuples used to authenticate users. The password field needs to be encrypted with the crypt algorithm. The comment field is optional. deny Denies access from this IP address, network, or all. satisfy Allows access if all or any of the preceding directives grant access. The default value all indicates that a user must come from a specific network address and enter the correct password. To restrict access to clients coming from a certain set of IP addresses, the allow and deny directives can be used as follows: location /stats { allow 127.0.0.1; deny all; } This configuration will allow access to the /stats URI from the localhost only. To restrict access to authenticated users, the auth_basic and auth_basic_user_file directives are used as follows: server { server_name restricted.example.com; auth_basic "restricted"; auth_basic_user_file conf/htpasswd; } Any user wanting to access restricted.example.com would need to provide credentials matching those in the htpasswd file located in the conf directory of NGINX's root. The entries in the htpasswd file can be generated using any available tool that uses the standard UNIX crypt() function. 
For example, the following Ruby script will generate a file of the appropriate format: #!/usr/bin/env ruby # setup the command-line options require 'optparse' OptionParser.new do |o| o.on('-f FILE') { |file| $file = file } o.on('-u', "--username USER") { |u| $user = u } o.on('-p', "--password PASS") { |p| $pass = p } o.on('-c', "--comment COMM (optional)") { |c| $comm = c } o.on('-h') { puts o; exit } o.parse! if $user.nil? or $pass.nil? puts o; exit end end # initialize an array of ASCII characters to be used for the salt ascii = ('a'..'z').to_a + ('A'..'Z').to_a + ('0'..'9').to_a + [ ".", "/" ] $lines = [] begin # read in the current http auth file File.open($file) do |f| f.lines.each { |l| $lines << l } end rescue Errno::ENOENT # if the file doesn't exist (first use), initialize the array $lines = ["#{$user}:#{$pass}n"] end # remove the user from the current list, since this is the one we're editing $lines.map! do |line| unless line =~ /#{$user}:/ line end end # generate a crypt()ed password pass = $pass.crypt(ascii[rand(64)] + ascii[rand(64)]) # if there's a comment, insert it if $comm $lines << "#{$user}:#{pass}:#{$comm}n" else $lines << "#{$user}:#{pass}n" end # write out the new file, creating it if necessary File.open($file, File::RDWR|File::CREAT) do |f| $lines.each { |l| f << l} end Save this file as http_auth_basic.rb and give it a filename (-f), a user (-u), and a password (-p), and it will generate entries appropriate to use in NGINX's auth_ basic_user_file directive: $ ./http_auth_basic.rb -f htpasswd -u testuser -p 123456 To handle scenarios where a username and password should only be entered if not coming from a certain set of IP addresses, NGINX has the satisfy directive. The any parameter is used here for this either/or scenario: server { server_name intranet.example.com; location / { auth_basic "intranet: please login"; auth_basic_user_file conf/htpasswd-intranet; allow 192.168.40.0/24; allow 192.168.50.0/24; deny all; satisfy any; } If, instead, the requirements are for a configuration in which the user must come from a certain IP address and provide authentication, the all parameter is the default. So, we omit the satisfy directive itself and include only allow, deny, auth_basic, and auth_basic_user_file: server { server_name stage.example.com; location / { auth_basic "staging server"; auth_basic_user_file conf/htpasswd-stage; allow 192.168.40.0/24; allow 192.168.50.0/24; deny all; } Streaming media files NGINX is capable of serving certain video media types. The flv and mp4 modules, included in the base distribution, can perform what is called pseudo-streaming. This means that NGINX will seek to a certain location in the video file, as indicated by the start request parameter. In order to use the pseudo-streaming capabilities, the corresponding module needs to be included at compile time: --with-http_flv_module for Flash Video (FLV) files and/or --with-http_mp4_module for H.264/AAC files. The following directives will then become available for configuration: Table: HTTP streaming directives Directive Explanation flv Activates the flv  module for this location. mp4 Activates the mp4  module for this location. mp4_buffer_size Sets the initial buffer size for delivering MP4 files. mp4_max_buffer_size Sets the maximum size of the buffer used to process MP4 metadata. 
Activating FLV pseudo-streaming for a location is as simple as just including the flv keyword: location /videos { flv; } There are more options for MP4 pseudo-streaming, as the H.264 format includes metadata that needs to be parsed. Seeking is available once the "moov atom" has been parsed by the player. So to optimize performance, ensure that the metadata is at the beginning of the file. If an error message such as the following shows up in the logs, the mp4_max_buffer_size needs to be increased: mp4 moov atom is too large mp4_max_buffer_size can be increased as follows: location /videos { mp4; mp4_buffer_size 1m; mp4_max_buffer_size 20m; } Predefined variables NGINX makes constructing configurations based on the values of variables easy. Not only can you instantiate your own variables by using the set or map directives, but there are also predefined variables used within NGINX. They are optimized for quick evaluation and the values are cached for the lifetime of a request. You can use any of them as a key in an if statement, or pass them on to a proxy. A number of them may prove useful if you define your own log file format. If you try to redefine any of them, though, you will get an error message as follows: <timestamp> [emerg] <master pid>#0: the duplicate "<variable_name>" variable in <path-to-configuration-file>:<line-number> They are also not made for macro expansion in the configuration—they are mostly used at run time. Summary In this article, we have explored a number of directives used to make NGINX serve files over HTTP. Not only does the http module provide this functionality, but there are also a number of helper modules that are essential to the normal operation of NGINX. These helper modules are enabled by default. Combining the directives of these various modules enables us to build a configuration that meets our needs. We explored how NGINX finds files based on the URI requested. We examined how different directives control how the HTTP server interacts with the client, and how the error_page directive can be used to serve a number of needs. Limiting access based on bandwidth usage, request rate, and number of connections is all possible. We saw, too, how we can restrict access based on either IP address or through requiring authentication. We explored how to use NGINX's logging capabilities to capture just the information we want. Pseudo-streaming was examined briefly, as well. NGINX provides us with a number of variables that we can use to construct our configurations. Resources for Article : Further resources on this subject: Nginx HTTP Server FAQs [Article] Nginx Web Services: Configuration and Implementation [Article] Using Nginx as a Reverse Proxy [Article]

Server-side Swift: Building a Slack Bot, Part 1

Peter Zignego
12 Oct 2016
5 min read
As a remote iOS developer, I love Slack. It’s my meeting room and my water cooler over the course of a work day. If you’re not familiar with Slack, it is a group communication tool popular in Silicon Valley and beyond. What makes Slack valuable beyond replacing email as the go-to communication method for buisnesses is that it is more than chat; it is a platform. Thanks to Slack’s open attitude toward developers with its API, hundreds of developers have been building what have become known as Slack bots. There are many different libraries available to help you start writing your Slack bot, covering a wide range of programming languages. I wrote a library in Apple’s new programming language (Swift) for this very purpose, called SlackKit. SlackKit wasn’t very practical initially—it only ran on iOS and OS X. On the modern web, you need to support Linux to deploy on Amazon Web Servies, Heroku, or hosted server companies such as Linode and Digital Ocean. But last June, Apple open sourced Swift, including official support for Linux (Ubuntu 14 and 15 specifically). This made it possible to deploy Swift code on Linux servers, and developers hit the ground running to build out the infrastructure needed to make Swift a viable language for server applications. Even with this huge developer effort, it is still early days for server-side Swift. Apple’s Linux Foundation port is a huge undertaking, as is the work to get libdispatch, a concurrency framework that provides much of the underpinning for Foundation. In addition to rough official tooling, writing code for server-side Swift can be a bit like hitting a moving target, with biweekly snapshot releases and multiple, ABI-incompatible versions to target. Zewo to Sixty on Linux Fortunately, there are some good options for deploying Swift code on servers right now, even with Apple’s libraries in flux. I’m going to focus in on one in particular: Zewo. Zewo is modular by design, allowing us to use the Swift Package Manager to pull in only what we need instead of a monolithic framework. It’s open source and is a great community of developers that spans the globe. If you’re interested in the world of server-side Swift, you should get involved! Oh, and of course they have a Slack. Using Zewo and a few other open source libraries, I was able to build a version of SlackKit that runs on Linux. A Swift Tutorial In this two-part post series I have detailed a step-by-step guide to writing a Slack bot in Swift and deploying it to Heroku. I’m going to be using OS X but this is also achievable on Linux using the editor of your choice. Prerequisites Install Homebrew: /usr/bin/ruby -e “$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" Install swiftenv: brew install kylef/formulae/swiftenv Configure your shell: echo ‘if which swiftenv > /dev/null; then eval “$(swiftenv init -)”; fi’ >> ~/.bash_profile Download and install the latest Zewo-compatible snapshot: swiftenv install DEVELOPMENT-SNAPSHOT-2016-05-09-a swiftenv local DEVELOPMENT-SNAPSHOT-2016-05-09-a Install and Link OpenSSL: brew install openssl brew link openssl --force Let’s Keep Score The sample application we’ll be building is a leaderboard for Slack, like PlusPlus++ by Betaworks. It works like this: add a point for every @thing++, subtract a point for every @thing--, and show a leaderboard when asked @botname leaderboard. First, we need to create the directory for our application and initialize the basic project structure. 
mkdir leaderbot && cd leaderbot
swift build --init

Next, we need to edit Package.swift to add our dependency, SlackKit:

import PackageDescription

let package = Package(
  name: "Leaderbot",
  targets: [],
  dependencies: [
    .Package(url: "https://github.com/pvzig/SlackKit.git", majorVersion: 0, minor: 0),
  ]
)

SlackKit is dependent on several Zewo libraries, but thanks to the Swift Package Manager, we don't have to worry about importing them explicitly. Then we need to build our dependencies:

swift build

And our development environment (we need to pass in some linker flags so that swift build knows where to find the version of OpenSSL we installed via Homebrew and the C modules that some of our Zewo libraries depend on):

swift build -Xlinker -L$(pwd)/.build/debug/ -Xswiftc -I/usr/local/include -Xlinker -L/usr/local/lib -X

In Part 2, I will show all of the Swift code, how to get an API token, how to test the app and deploy it on Heroku, and finally how to launch it.

Disclaimer

The Linux version of SlackKit should be considered an alpha release. It's a fun tech demo to show what's possible with Swift on the server, not something to be relied upon. Feel free to report issues you come across.

About the author

Peter Zignego is an iOS developer in Durham, North Carolina. He writes at bytesized.co, tweets @pvzig, and freelances at Launch Software.


Debugging REST Web Services

Packt
20 Oct 2009
13 min read
(For more resources on this subject, see here.) Message tracing The first symptom that you will notice when you are running into problems is that the client would not behave the way you want it to behave. As an example, there would be no output, or the wrong output. Since the outcome of running a REST client depends on the request that you send over the wire and the response that you receive over the wire, one of the first things is to capture the messages and verify that those are in the correct expected format. REST Services and clients interact using messages, usually in pairs of request and response. So if there are problems, they are caused by errors in the messages being exchanged. Sometimes the user only has control over a REST client and does not have access to the implementation details of the service. Sometimes the user will implement the REST service for others to consume the service. Sometimes the Web browser can act as a client. Sometimes a PHP application on a server can act as a REST client. Irrespective of where the client is and where the service is, you can use message capturing tools to capture messages and try to figure out the problem. Thanks to the fact that the service and client use messages to interact with each other, we can always use a message capturing tool in the middle to capture messages. It is not that we must run the message capturing tool on the same machine where the client is running or the service is running; the message capturing tool can be run on either machine, or it can be run on a third machine. The following figure illustrates how the message interaction would look with a message capturing tool in place. If the REST client is a Web browser and we want to capture the request and response involved in a message interaction, we would have to point the Web browser to message capturing tool and let the tool send the request to the service on behalf of the Web browser. Then, since it is the tool that sent the request to the service, the service would respond to the tool. The message capturing tool in turn would send the response it received from the service to the Web browser. In this scenario, the tool in the middle would gain access to both the request and response. Hence it can reveal those messages for us to have a look. When you are not seeing the client to work, here is the list of things that you might need to look for: If the client sends a message If you are able to receive a response from a service If the request message sent by the client is in the correct format, including HTTP headers If the response sent by the server is in the correct format, including the HTTP headers In order to check for the above, you would require a message-capturing tool to trace the messages. There are multiple tools that you can use to capture the messages that are sent from the client to the service and vice versa. Wireshark (http://www.wireshark.org/) is one such tool that can be used to capture any network traffic. It is an open-source tool and is available under the GNU General Public License version 2. However this tool can be a bit complicated if you are looking for a simple tool. Apache TCPMon (http://ws.apache.org/commons/tcpmon/) is another tool that is designed to trace web services messages. This is a Java based tool that can be used with web services to capture the messages. 
Because TCPMon is a message capturing tool, it can be used to intercept messages sent between client and service, and as explained earlier, can be run on the client machine, the server machine or on a third independent machine. The only catch is that you need Java installed in your system to run this tool. You can also find a C-based implementation of a similar tool with Apache Axis2/C (http://ws.apache.org/axis2/c). However, that tool does not have a graphical user interface. There is a set of steps that you need to follow, which are more or less the same across all of these tools, in order to prepare the tool for capturing messages. Define the target host name Define the target port number Define the listen port number Target host name is the name of the host machine on which the service is running. As an example, if we want to debug the request sent to the Yahoo spelling suggestion service, hosted at http://search.yahooapis.com/WebSearchService/V1/spellingSuggestion, the host name would be search.yahooapis.com. We can either use the name of the host or we can use the IP address of the host because the tools are capable of dealing with both formats in place of the host name. As an example, if the service is hosted on the local machine, we could either use localhost or 127.0.0.1 in place of the host name. Target port number is the port number on which the service hosting web server is listening; usually this is 80. As an example, for the Yahoo spelling suggestion service, hosted at http://search.yahooapis.com/WebSearchService/V1/spellingSuggestion, the target port number is 80. Note that, when the service URL does not mention any number, we can always use the default number. If it was running on a port other than port 80, we can find the port number followed by the host name and preceded with character ':'. As an example, if we have our web server running on port 8080 on the local machine, we would have service URL similar to http://localhost:8080/rest/04/library/book.php. Here, the host name is localhost and the target port is 8080. Listen port is the port on which the tool will be listening to capture the messages from the client before sending it to the service. For an example, say that we want to use port 9090 as our listen port to capture the messages while using the Yahoo spelling suggestion service. Under normal circumstances, we will be using a URL similar to the following with the web browser to send the request to the service. http://search.yahooapis.com/WebSearchService/V1/spellingSuggestion?appid=YahooDemo&query=apocalipto When we want to send this request through the message capturing tool and since we decided to make the tools listen port to be 9090 with the tool in the middle and assuming that the tool is running on the local machine, we would now use the following URL with the web browser in place of the original URL. http://localhost:9090/WebSearchService/V1/spellingSuggestion?appid=YahooDemo&query=apocalipto Note that we are not sending this request directly to search.yahooapis.com, but rather to the tool listening on port 9090 on local host. Once the tool receives the request, it will capture the request, forward that to the target host, receive the response and forward that response to the web browser. The following figure shows the Apache TCPMon tool. You can see localhost being used as the target host, 80 being the target port number and 9090 being the listening port number. 
Once you fill in these fields, a new tab is added to the tool showing the messages being captured. When you click on the Add button, you see a new pane, as shown in the next figure, where the tool shows the messages and passes them to and from the client and the service.
Before you can capture the messages, there is one more step: change the client code to point to port 9090, since that is where the monitoring tool is now listening. Originally, we were using port 80,
$url = 'http://localhost:80/rest/04/library/book.php';
or just
$url = 'http://localhost/rest/04/library/book.php';
because port 80 is the default port used by a web server and the client was talking directly to the service. With the tool in place, we make the client talk to the tool listening on port 9090, and the tool in turn talks to the service. Note that in this sample all three parties, the client, the service, and the tool, run on the same machine, so we keep using localhost as our host name. We now change the service endpoint address used by the client to contain port 9090, which makes sure that the client talks to the tool.
$url = 'http://localhost:9090/rest/04/library/book.php';
As you can see, the tool has captured the request and the response. The request appears at the top and the response at the bottom. The request is a GET request to the resource located at /rest/04/library/book.php. The response is a success response with the HTTP 200 OK code, and after the HTTP headers comes the response body, which is in XML.
As mentioned earlier, the first step in debugging is to verify that the client has sent a request and that the service has responded. In the above example, we have both the request and the response in place. If either is missing, we need to check what is wrong on that side.
If the client request is missing, check the following in the code:
Are you using the correct URL in the client?
Have you written the request to the wire in the client? When using cURL, this is usually done by the curl_exec function (see the sketch at the end of this section).
If the response is missing, check the following:
Are you connected to the network? Your service could be hosted on a remote machine.
Have you written a response from the service? That is, have you returned the correct string value from the service? In PHP, writing the required response to the wire is usually done with the echo function. If you are using a PHP framework, you may have to use framework-specific mechanisms; for example, if you are using the Zend_Rest_Server class, you have to call the handle() method to make sure that the response is sent to the client.
Here is a sample error scenario. As you can see, the response is 404 Not Found, and if you look at the request you can spot a typo: we have missed the 'k' in the resource URL and sent the request to /rest/04/library/boo.php, which does not exist, whereas the correct resource URL is /rest/04/library/book.php.
Next, let us look at the Yahoo search example discussed earlier to identify some more advanced concepts. We want to capture the request sent by the web browser and the response sent by the server for the following request:
http://search.yahooapis.com/WebSearchService/V1/spellingSuggestion?appid=YahooDemo&query=apocalipto
As discussed earlier, the target host name is search.yahooapis.com and the target port number is 80. Let's use 9091 as the listen port.
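Here is the sketch promised above: a minimal cURL-based client check, assuming the library endpoint, that confirms the request actually went out and inspects the HTTP status code of the response before we continue with the browser example. The functions used (curl_errno, curl_error, curl_getinfo) are standard cURL calls.
<?php
// Sketch: verify from the client side that a request was sent and a response received.
$url = 'http://localhost:9090/rest/04/library/book.php'; // via the capturing tool

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);

if ($response === false) {
    // The request never completed: wrong host/port, network down, tool not running, etc.
    echo 'Request failed (' . curl_errno($ch) . '): ' . curl_error($ch) . "\n";
} else {
    // A response came back; check the status code before trusting the body.
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    echo "HTTP status: $status\n";
    if ($status == 200) {
        echo $response; // expected XML payload
    }
}
curl_close($ch);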
Let us use the web browser to send the request through the tool so that we can capture the request and response. Since the tool is listening on port 9091, we use the following URL in the web browser:
http://localhost:9091/WebSearchService/V1/spellingSuggestion?appid=YahooDemo&query=apocalipto
When you use this URL, the web browser sends the request to the tool, and the tool gets the response from the service and forwards it to the web browser. We can see that the web browser receives the response. However, if we look at the TCPMon tool's captured messages, we see that the service has sent some binary data instead of XML data, even though the web browser displays the response in XML format. So what went wrong? In fact, nothing is wrong. The service sent the data in binary format because the web browser requested that format. If you look closely at the request sent, you will see the following:
GET /WebSearchService/V1/spellingSuggestion?appid=YahooDemo&query=apocalipto HTTP/1.1
Host: search.yahooapis.com:9091
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.9) Gecko/20071025 Firefox/2.0.0.9
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.7,zh-cn;q=0.3
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
In the request, the web browser has used the HTTP header
Accept-Encoding: gzip,deflate
This tells the service that the web browser can handle data in gzip compressed format, so the service sends the data compressed. Obviously, it is not possible to look into the XML messages and debug them if the response is compressed, so we should ideally capture the messages in plain XML format. To do this, we can modify the request message in the TCPMon pane itself and resend it. First remove the line
Accept-Encoding: gzip,deflate
and then click on the Resend button. Once we click on Resend, we get the response in XML format.
Errors in building XML
While forming XML as a request or response payload, we can run into errors through simple mistakes. Some are easy to spot, some are not. Most XML errors can be avoided by following a simple rule of thumb: each opening XML tag should have a matching closing tag. Missing a closing tag is the most common mistake made while building XML payloads. In the above diagram, if you look carefully at the circled area, the closing tag for the book element is missing: a new starting tag for the next book begins before the first book is closed. This causes the XML parsing on the client side to fail. In this case I am using the library system sample, and here is the PHP source code causing the problem:
echo "<books>";
while ($line = mysql_fetch_array($result, MYSQL_ASSOC)) {
    echo "<book>";
    foreach ($line as $key => $col_value) {
        echo "<$key>$col_value</$key>";
    }
    //echo "</book>";
}
echo "</books>";
Here I have intentionally commented out printing the closing tag to demonstrate the error scenario; however, while writing this code I could just as easily have missed it by accident, leaving the system buggy. When looking for XML-related errors, you can use the manual technique we just used: look for missing tags.
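One way to avoid this class of mistake altogether is to build the payload with an XML API instead of string concatenation, since the API always emits a matching closing tag. Below is a minimal sketch using PHP's DOMDocument; the $rows array stands in for the database result rows of the snippet above and its values are made up purely for illustration.
<?php
// Sketch: build the same <books> payload with DOMDocument so that every
// element is closed automatically and the output is always well formed.
$rows = array(
    array('id' => '1', 'name' => 'Sample Book One', 'author' => 'Author A'),
    array('id' => '2', 'name' => 'Sample Book Two', 'author' => 'Author B'),
);

$doc   = new DOMDocument('1.0', 'utf-8');
$books = $doc->createElement('books');
$doc->appendChild($books);

foreach ($rows as $row) {
    $book = $doc->createElement('book');
    foreach ($row as $key => $value) {
        $book->appendChild($doc->createElement($key, $value));
    }
    $books->appendChild($book); // no closing tag to forget
}

echo $doc->saveXML();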
If the process looks complicated and you cannot seem to find any XML errors in the request or response you are trying to debug, you can copy the XML captured with the tool and run it through an XML validator. For example, you can use an online tool such as http://www.w3schools.com/XML/xml_validator.asp. You can also check whether the XML is well formed using an XML parser.
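For example, a few lines of PHP can report whether a captured payload is well formed, along with the line of the first problem. A minimal sketch using simplexml_load_string and the libxml error functions:
<?php
// Sketch: check whether a captured XML payload is well formed and report
// the parser errors if it is not.
function isWellFormed($xml) {
    libxml_use_internal_errors(true);      // collect errors instead of emitting warnings
    $doc = simplexml_load_string($xml);
    if ($doc !== false) {
        return true;                       // parsed cleanly: well formed
    }
    foreach (libxml_get_errors() as $error) {
        echo "Line {$error->line}: {$error->message}";
    }
    libxml_clear_errors();
    return false;
}

// The broken payload from the earlier example: the first <book> is never closed.
var_dump(isWellFormed('<books><book><id>1</id><book><id>2</id></book></books>'));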