
How-To Tutorials - Programming

1083 Articles

Build an Advanced Contact Manager using JBoss RichFaces 3.3: Part 2

Packt
18 Nov 2009
10 min read
The contact detail

For the third column, we would like to show three different states:

- the "No contact selected" message when no contact is selected (so the selectedContact property is null)
- a view-only box when we are not in edit mode (the selectedContactEditing property is set to false)
- an edit box when we are in edit mode (the selectedContactEditing property is set to true)

So, let's open the home.xhtml page and insert the third column, with the three states, inside the panel grid:

<a:outputPanel id="contactDetail">
  <a:outputPanel rendered="#{homeSelectedContactHelper.selectedContact==null}">
    <rich:panel>
      <h:outputText value="#{messages['noContactSelected']}"/>
    </rich:panel>
  </a:outputPanel>
  <a:outputPanel rendered="#{homeSelectedContactHelper.selectedContact!=null and homeSelectedContactHelper.selectedContactEditing==false}">
    <ui:include src="main/contactView.xhtml"/>
  </a:outputPanel>
  <a:outputPanel rendered="#{homeSelectedContactHelper.selectedContact!=null and homeSelectedContactHelper.selectedContactEditing==true}">
    <ui:include src="main/contactEdit.xhtml"/>
  </a:outputPanel>
</a:outputPanel>

Here, we use the outer a:outputPanel as the main placeholder, and inside it we put three more a:outputPanel instances (one for each state) whose rendered attributes decide which one to show. The first simply shows a message when homeSelectedContactHelper.selectedContact is null. The second includes the main/contactView.xhtml file only if homeSelectedContactHelper.selectedContact is not null and we are not in editing mode (homeSelectedContactHelper.selectedContactEditing is false); the third is shown only if homeSelectedContactHelper.selectedContact is not null and we are in edit mode (homeSelectedContactHelper.selectedContactEditing is true).

Before writing the include sections, let's see how the main bean for the selected contact looks, and connect it with the data table used to select a contact.

The support bean

Let's create a new class called HomeSelectedContactHelper inside the book.richfaces.advcm.modules.main package; the class might look like this:

@Name("homeSelectedContactHelper")
@Scope(ScopeType.CONVERSATION)
public class HomeSelectedContactHelper {

    @In(create = true)
    EntityManager entityManager;

    @In(required = true)
    Contact loggedUser;

    @In
    FacesMessages facesMessages;

    // My code here
}

This is a standard JBoss Seam component; now let's add the properties. The bean we are going to use for the view and edit features is very simple to understand: it just contains two properties (selectedContact and selectedContactEditing) and some action methods to manage them. Let's add the properties to our class:

private Contact selectedContact;
private Boolean selectedContactEditing;

public Contact getSelectedContact() {
    return selectedContact;
}

public void setSelectedContact(Contact selectedContact) {
    this.selectedContact = selectedContact;
}

public Boolean getSelectedContactEditing() {
    return selectedContactEditing;
}

public void setSelectedContactEditing(Boolean selectedContactEditing) {
    this.selectedContactEditing = selectedContactEditing;
}

As you can see, we just added two properties with standard getters and setters.
Let's now look at the action methods:

public void createNewEmptyContactInstance() {
    setSelectedContact(new Contact());
}

public void insertNewContact() {
    // Attaching the owner of the contact
    getSelectedContact().setContact(loggedUser);
    entityManager.persist(getSelectedContact());
    facesMessages.addFromResourceBundle(StatusMessage.Severity.INFO, "contactAdded");
}

public void saveContactData() {
    entityManager.merge(getSelectedContact());
    facesMessages.addFromResourceBundle(StatusMessage.Severity.INFO, "contactSaved");
}

public void deleteSelectedContact() {
    entityManager.remove(getSelectedContact());
    // De-selecting the current contact
    setSelectedContact(null);
    setSelectedContactEditing(null);
    facesMessages.addFromResourceBundle(StatusMessage.Severity.INFO, "contactDeleted");
}

public boolean isSelectedContactManaged() {
    return getSelectedContact() != null && entityManager.contains(getSelectedContact());
}

It's not difficult to understand what they do; still, to be clear, here is what each method does. The createNewEmptyContactInstance() method simply sets the selectedContact property to a new instance of the Contact class; it will be called by the "add contact" button. After the user has clicked the "add contact" button and entered the contact data, the new instance must be persisted to the database. This is done by the insertNewContact() method, called when the user clicks the Insert button. If the user edits a contact and clicks the "Save" button, the saveContactData() method is called to store the modifications in the database. Likewise, the deleteSelectedContact() method is called by the "Delete" button to remove the instance from the database. The isSelectedContactManaged() method deserves a special mention: it determines whether the selectedContact property holds a bean that already exists in the database (so we are editing it) or a new instance not yet persisted to the database. We use it especially in rendered attributes to decide which components to show (you will see this in the next section).

Selecting the contact from the contacts list

We will use the contacts list to decide which contact must be shown in the detail view. The simplest way is to add a new column to the dataTable and put a command button (or link) in it that selects the bean for the detail view. Let's open the contactsList.xhtml file and add another column as follows:

<rich:column width="10%" style="text-align: center">
  <a:commandButton image="/img/view.png" reRender="contactDetail">
    <f:setPropertyActionListener value="#{contact}" target="#{homeSelectedContactHelper.selectedContact}"/>
    <f:setPropertyActionListener value="#{false}" target="#{homeSelectedContactHelper.selectedContactEditing}"/>
  </a:commandButton>
</rich:column>

Inside the column, we added an a:commandButton component (which shows an image instead of the standard text) that doesn't call any action: it uses f:setPropertyActionListener to set homeSelectedContactHelper.selectedContact to contact (the row value of the dataTable) and to request the view box rather than the edit one (setting homeSelectedContactHelper.selectedContactEditing to false). After the Ajax call, it re-renders the contactDetail box to reflect the change. The table header must also be changed to reflect the added column:
<rich:dataTable ...>
  <f:facet name="header">
    <rich:columnGroup>
      <rich:column colspan="3">
        <h:outputText value="Contacts"/>
      </rich:column>
      <rich:column breakBefore="true">
        <h:outputText value="Name"/>
      </rich:column>
      <rich:column>
        <h:outputText value="Surname"/>
      </rich:column>
      <rich:column>
        <rich:spacer/>
      </rich:column>
    </rich:columnGroup>
  </f:facet>
  ...

We incremented the colspan attribute value and added a new (empty) column header.

Adding a new contact

Another feature we would like to add to the contacts list is the "Add contact" button. To do that, we are going to use the empty toolbar. Let's add a new action button to the rich:toolbar component:

<a:commandButton image="/img/addcontact.png" reRender="contactDetail" action="#{homeSelectedContactHelper.createNewEmptyContactInstance}">
  <f:setPropertyActionListener value="#{true}" target="#{homeSelectedContactHelper.selectedContactEditing}"/>
</a:commandButton>

This button calls the homeSelectedContactHelper.createNewEmptyContactInstance() action method to create and select an empty instance, and sets homeSelectedContactHelper.selectedContactEditing to true to start editing; after the Ajax call, it re-renders the contactDetail box to reflect the changes.

Viewing contact detail

We are ready to implement the view contact detail box; open the /view/main/contactView.xhtml file and add the following code:

<h:form>
  <rich:panel>
    <f:facet name="header">
      <h:outputText value="#{homeSelectedContactHelper.selectedContact.name} #{homeSelectedContactHelper.selectedContact.surname}"/>
    </f:facet>
    <h:panelGrid columns="2" rowClasses="prop" columnClasses="name,value">
      <h:outputText value="#{messages['name']}:"/>
      <h:outputText value="#{homeSelectedContactHelper.selectedContact.name}"/>
      <h:outputText value="#{messages['surname']}:"/>
      <h:outputText value="#{homeSelectedContactHelper.selectedContact.surname}"/>
      <h:outputText value="#{messages['company']}:"/>
      <h:outputText value="#{homeSelectedContactHelper.selectedContact.company}"/>
      <h:outputText value="#{messages['email']}:"/>
      <h:outputText value="#{homeSelectedContactHelper.selectedContact.email}"/>
    </h:panelGrid>
  </rich:panel>
  <rich:toolBar>
    <rich:toolBarGroup>
      <a:commandLink ajaxSingle="true" reRender="contactDetail" styleClass="image-command-link">
        <f:setPropertyActionListener value="#{true}" target="#{homeSelectedContactHelper.selectedContactEditing}"/>
        <h:graphicImage value="/img/edit.png"/>
        <h:outputText value="#{messages['edit']}"/>
      </a:commandLink>
    </rich:toolBarGroup>
  </rich:toolBar>
</h:form>

The first part is just a rich:panel containing an h:panelGrid with the field details. In the second part of the code, we put a rich:toolBar containing a command link (with an image and a text) that activates the edit mode: in fact, it just sets the homeSelectedContactHelper.selectedContactEditing property to true and re-renders contactDetail so the edit box appears. We also added a new CSS class to the /view/stylesheet/theme.css file to manage the layout of command links with images:

.image-command-link {
    text-decoration: none;
}
.image-command-link img {
    vertical-align: middle;
    padding-right: 3px;
}

We are now ready to develop the edit box.

Editing contact detail

When in edit mode, the content of the /view/main/contactEdit.xhtml file is shown in the contact detail box; let's open it for editing.
Let's add the code for creating the main panel:

<h:form>
  <rich:panel>
    <f:facet name="header">
      <h:panelGroup>
        <h:outputText value="#{homeSelectedContactHelper.selectedContact.name} #{homeSelectedContactHelper.selectedContact.surname}" rendered="#{homeSelectedContactHelper.selectedContactManaged}"/>
        <h:outputText value="#{messages['newContact']}" rendered="#{!homeSelectedContactHelper.selectedContactManaged}"/>
      </h:panelGroup>
    </f:facet>
    <!-- my code here -->
  </rich:panel>
  <!-- my code here -->
</h:form>

This is a standard rich:panel with a customized header: it contains two h:outputText components that are shown or hidden via the rendered attribute, depending on whether it is a new contact or not.

More than one component inside f:facet: remember that f:facet must have only one child, so to put more than one component inside it, you have to use a surrounding component such as h:panelGroup.

Inside the panel, we are going to put an h:panelGrid containing the components for data editing:

<rich:graphValidator>
  <h:panelGrid columns="3" rowClasses="prop" columnClasses="name,value,validatormsg">
    <h:outputLabel for="scName" value="#{messages['name']}:"/>
    <h:inputText id="scName" value="#{homeSelectedContactHelper.selectedContact.name}"/>
    <rich:message for="scName" styleClass="messagesingle" errorClass="errormsg" infoClass="infomsg" warnClass="warnmsg"/>
    <h:outputLabel for="scSurname" value="#{messages['surname']}:"/>
    <h:inputText id="scSurname" value="#{homeSelectedContactHelper.selectedContact.surname}"/>
    <rich:message for="scSurname" styleClass="messagesingle" errorClass="errormsg" infoClass="infomsg" warnClass="warnmsg"/>
    <h:outputLabel for="scCompany" value="#{messages['company']}:"/>
    <h:inputText id="scCompany" value="#{homeSelectedContactHelper.selectedContact.company}"/>
    <rich:message for="scCompany" styleClass="messagesingle" errorClass="errormsg" infoClass="infomsg" warnClass="warnmsg"/>
    <h:outputLabel for="scEmail" value="#{messages['email']}:"/>
    <h:inputText id="scEmail" value="#{homeSelectedContactHelper.selectedContact.email}"/>
    <rich:message for="scEmail" styleClass="messagesingle" errorClass="errormsg" infoClass="infomsg" warnClass="warnmsg"/>
  </h:panelGrid>
</rich:graphValidator>

Nothing complicated here: we've just used h:outputLabel, h:inputText, and rich:message for every Contact property to be edited.
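As noted above, isSelectedContactManaged() is meant to be used in rendered attributes to decide which buttons to show. The excerpt stops before the edit box's toolbar, so here is a minimal hedged sketch of how such a toolbar might wire the Insert, Save, and Delete actions to the helper methods defined earlier; the button labels, reRender targets, and layout are illustrative assumptions, not the article's actual code:

<rich:toolBar>
  <rich:toolBarGroup>
    <!-- Assumption: "Insert" is shown only for a contact not yet persisted -->
    <a:commandLink action="#{homeSelectedContactHelper.insertNewContact}"
                   reRender="contactsList, contactDetail"
                   rendered="#{!homeSelectedContactHelper.selectedContactManaged}">
      <h:outputText value="#{messages['insert']}"/>
    </a:commandLink>
    <!-- Assumption: "Save" and "Delete" apply only to a managed (already persisted) contact -->
    <a:commandLink action="#{homeSelectedContactHelper.saveContactData}"
                   reRender="contactsList, contactDetail"
                   rendered="#{homeSelectedContactHelper.selectedContactManaged}">
      <h:outputText value="#{messages['save']}"/>
    </a:commandLink>
    <a:commandLink action="#{homeSelectedContactHelper.deleteSelectedContact}"
                   reRender="contactsList, contactDetail"
                   rendered="#{homeSelectedContactHelper.selectedContactManaged}">
      <h:outputText value="#{messages['delete']}"/>
    </a:commandLink>
  </rich:toolBarGroup>
</rich:toolBar>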


Create a Quick Application in CakePHP: Part 2

Packt
18 Nov 2009
7 min read
Editing a Task

Now that we can add tasks to CakeTooDoo, the next thing we will do is add the ability to edit tasks. This is necessary because users should be able to tick a task when it has been completed. Also, if users are not happy with the title of a task, they can change it. To add these features to CakeTooDoo, we will need to add another action to our Tasks controller, along with a view for that action.

Time for Action: Creating the Edit Task Form

1. Open the file tasks_controller.php and add a new action named edit, as shown in the following code:

function edit($id = null) {
    if (!$id) {
        $this->Session->setFlash('Invalid Task');
        $this->redirect(array('action' => 'index'), null, true);
    }
    if (empty($this->data)) {
        $this->data = $this->Task->find(array('id' => $id));
    } else {
        if ($this->Task->save($this->data)) {
            $this->Session->setFlash('The Task has been saved');
            $this->redirect(array('action' => 'index'), null, true);
        } else {
            $this->Session->setFlash('The Task could not be saved. Please, try again.');
        }
    }
}

2. Inside the directory /CakeTooDoo/app/views/tasks, create a new file named edit.ctp and add the following code to it:

<?php echo $form->create('Task');?>
<fieldset>
    <legend>Edit Task</legend>
    <?php
        echo $form->hidden('id');
        echo $form->input('title');
        echo $form->input('done');
    ?>
</fieldset>
<?php echo $form->end('Save');?>

3. We will access the Edit Task form from the List All Tasks page, so let's add a link from the List All Tasks page to the Edit Task page. Open the index.ctp file in the /CakeTooDoo/app/views/tasks directory, and replace the HTML comment <!-- different actions on tasks will be added here later --> with the following code:

<?php echo $html->link('Edit', array('action'=>'edit', $task['Task']['id'])); ?>

4. Now open the List All Tasks page in the browser by pointing it to http://localhost/CakeTooDoo/tasks/index, and we will see an edit link beside each task. Click on the edit link of the task you want to edit; this will take you to the Edit Task form.

5. Now let us add links from the Edit Task form page to the List All Tasks and Add New Task pages. Add the following code to the end of edit.ctp in /CakeTooDoo/app/views/tasks:

<?php echo $html->link('List All Tasks', array('action'=>'index')); ?><br />
<?php echo $html->link('Add Task', array('action'=>'add')); ?>

What Just Happened?

We added a new action named edit in the Tasks controller. Then we added the view file edit.ctp for this action. Lastly, we linked the other pages to the Edit Task page using the HTML helper.

When accessing this page, we need to tell the action which task we want to edit. This is done by passing the task id in the URL. So, if we want to edit the task with the id of 2, we point our browser to http://localhost/CakeTooDoo/tasks/edit/2. When such a request is made, Cake forwards it to the Tasks controller's edit action and passes the value of the id to the first parameter of the edit action. If we check the edit action, we will notice that it accepts a parameter named $id; the task id passed in the URL is stored in this parameter.

When a request is made to the edit action, the first thing it does is check whether an id has been supplied. To let users edit a task, it needs to know which task the user wants to edit; it cannot continue if no id is supplied.
So, if $id is undefined, the action stores an error message in the session and redirects to the index action, which will show the list of current tasks along with the error message. If $id is defined, the edit action then checks whether there is any data stored in $this->data. If no data is stored in $this->data, it means that the user has not yet edited the task. So, the desired task is fetched from the Task model and stored in $this->data in this line:

$this->data = $this->Task->find(array('id' => $id));

Once that is done, the view of the edit action is rendered, displaying the task information. The view fetches the task information to be displayed from $this->data. The view of the edit action is very similar to that of the add action, with a single difference: it has an extra line, echo $form->hidden('id');. This creates an HTML hidden input with the value of the task id that is being edited.

Once the user edits the task and clicks on the Save button, the edited data is resent to the edit action and saved in $this->data. Having data in $this->data confirms that the user has edited and submitted the changed data. Thus, if $this->data is not empty, the edit action tries to save the data by calling the Task model's save() function: $this->Task->save($this->data). This is the same function that we used to add a new task in the add action. You may ask: how does the model's save() function know when to add a new record and when to edit an existing one? If the form data has a hidden id field, the function knows that it needs to edit an existing record with that id. If no id field is found, the function adds a new record. Once the data has been successfully updated, a success message is stored in the session and the action redirects to the index action; the index page will show the success message.
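To make the add-versus-edit behavior of save() concrete, here is a minimal hedged sketch; the task titles and ids are made up for illustration, and in the real application $this->data is populated by the submitted form rather than by hand:

// No 'id' key in the form data: save() INSERTs a new row.
$this->data = array('Task' => array('title' => 'Buy milk', 'done' => 0));
$this->Task->save($this->data);

// An 'id' key is present: save() UPDATEs the existing row with that id.
$this->data = array('Task' => array('id' => 2, 'title' => 'Buy milk', 'done' => 1));
$this->Task->save($this->data);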
Adding Data Validation

If you have come this far, by now you should have a working CakeTooDoo. It can add a task, list all the tasks with their statuses, and edit a task to change its status and title. But we are still not happy with it: we want CakeTooDoo to be a quality application, and making a quality application with CakePHP is as easy as eating a cake.

A very important aspect of any web application (or software in general) is to make sure that users do not enter invalid input. For example, suppose a user mistakenly adds a task with an empty title; this is not desirable, because without a title we cannot identify a task. We want our application to check whether the user entered a title. If they do not enter a title, CakeTooDoo should not allow the user to add or edit the task, and should show a message stating the problem. Adding these checks is what we call data validation. No matter how big or small our applications are, it is very important to have proper data validation in place. But adding data validation can be a painful and time-consuming task, especially in a complex application with lots of forms. Thankfully, CakePHP comes with a built-in data validation feature that can really make our lives much easier.

Time for Action: Adding Data Validation to Check for an Empty Title

In the Task model that we created in /CakeTooDoo/app/models, add the following code inside the Task model class. The Task model will then look like this:

<?php
class Task extends AppModel {
    var $name = 'Task';
    var $validate = array(
        'title' => array(
            'rule' => VALID_NOT_EMPTY,
            'message' => 'Title of a task cannot be empty'
        )
    );
}
?>

Now open the Add Task form in the browser by pointing it to http://localhost/CakeTooDoo/tasks/add, and try to add a task with an empty title. It will show an error message stating that the title of a task cannot be empty.
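If we later wanted more than one check on the title, CakePHP 1.2's $validate array also accepts multiple named rules per field. A hedged sketch of what that might look like (the maxLength rule name comes from CakePHP 1.2's built-in validation set; the specific rules and messages are our illustration, not the article's code):

var $validate = array(
    'title' => array(
        'notEmpty' => array(
            'rule' => VALID_NOT_EMPTY,
            'message' => 'Title of a task cannot be empty'
        ),
        'maxLength' => array(
            // 255 matches the varchar(255) column defined for title
            'rule' => array('maxLength', 255),
            'message' => 'Title cannot be longer than 255 characters'
        )
    )
);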


Plotting Geographical Data using Basemap

Packt
18 Nov 2009
3 min read
Basemap is a Matplotlib toolkit, a collection of application-specific functions that extends Matplotlib's functionality; its complete documentation is available at http://matplotlib.sourceforge.net/basemap/doc/html/index.html. Toolkits are not present in the default Matplotlib installation (in fact, they also have a different namespace, mpl_toolkits), so we have to install Basemap separately. We can download it from http://sourceforge.net/projects/matplotlib/, under the matplotlib-toolkits menu of the download section, and then install it following the instructions in the documentation link mentioned previously.

Basemap is useful for scientists such as oceanographers and meteorologists, but other users may also find it interesting. For example, we could parse an Apache log and, using GeoIP localization, draw a point on a map for each connection. We use version 0.99.3 of Basemap for our examples.

First example

Let's start playing with the library. It contains a lot of very specific features, so we're going to give just an introduction to the basic functions of Basemap.

# pyplot module import
import matplotlib.pyplot as plt
# basemap import
from mpl_toolkits.basemap import Basemap
# NumPy import
import numpy as np

These are the usual imports, along with the basemap module.

# Lambert Conformal map of USA lower 48 states
m = Basemap(llcrnrlon=-119, llcrnrlat=22,
            urcrnrlon=-64, urcrnrlat=49,
            projection='lcc', lat_1=33, lat_2=45,
            lon_0=-95, resolution='h', area_thresh=10000)

Here, we initialize a Basemap object; as we can see, it takes several parameters depending on the projection chosen. Let's see what a projection is: in order to represent the curved surface of the Earth on a two-dimensional map, a map projection is needed, and this conversion cannot be done without distortion. Therefore, there are many map projections available in Basemap, each with its own advantages and disadvantages. Specifically, a projection can be:

- equal-area (the area of features is preserved)
- conformal (the shape of features is preserved)

No projection can be both equal-area and conformal at the same time. In this example, we have used a Lambert Conformal map. This projection requires additional parameters to work with; in this case, they are lat_1, lat_2, and lon_0.

Along with the projection, we have to provide information about the portion of the Earth's surface that the map will describe. This is done with the help of the following arguments:

- llcrnrlon: longitude of the lower-left corner of the desired map domain
- llcrnrlat: latitude of the lower-left corner of the desired map domain
- urcrnrlon: longitude of the upper-right corner of the desired map domain
- urcrnrlat: latitude of the upper-right corner of the desired map domain

The last two arguments are:

- resolution: the resolution of the features added to the map (such as coastlines, borders, and so on); here we have chosen high resolution (h), but crude, low, and intermediate are also available
- area_thresh: the minimum size for a feature to be plotted; in this case, only features bigger than 10,000 square kilometers are plotted
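The excerpt stops before anything is drawn. A minimal hedged continuation of the example, using standard Basemap drawing methods (the original article may have proceeded differently):

# Draw the map features; all of these are standard Basemap methods.
m.drawcoastlines()    # coastlines at the chosen resolution
m.drawcountries()     # country borders
m.drawstates()        # US state borders
m.drawmapboundary()   # the boundary of the map projection region

plt.title('Lambert Conformal map of the USA lower 48 states')
plt.show()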


Build an Advanced Contact Manager using JBoss RichFaces 3.3: Part 1

Packt
18 Nov 2009
11 min read
The main layout

Let's start preparing the space for the core features of the application. We want a three-column layout for groups, contacts list, and contact detail. Let's open the home.xhtml file and add a three-column panel grid inside the body:

<h:panelGrid columns="3" width="100%" columnClasses="main-group-column, main-contacts-list-column, main-contact-detail-column">
</h:panelGrid>

We are using three new CSS classes (one for each column). Let's open the /view/stylesheet/theme.css file and add the following code:

.main-group-column {
    width: 20%;
    vertical-align: top;
}
.main-contacts-list-column {
    width: 40%;
    vertical-align: top;
}
.main-contact-detail-column {
    width: 40%;
    vertical-align: top;
}

The main columns are ready; now we want to split the content of every column into a separate file (so we don't end up with one large, hard-to-read file) using the Facelets templating capabilities. Let's create a new folder inside the /view folder called main, and create the following empty files inside it:

- contactsGroups.xhtml
- contactsList.xhtml
- contactEdit.xhtml
- contactView.xhtml

Now let's open them and put in the standard code for an empty (included) file:

<!DOCTYPE composition PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<ui:composition>
    <!-- my code here -->
</ui:composition>

Now we have all of the pieces ready to be included in the home.xhtml file; let's open it and start adding the first column inside h:panelGrid:

<a:outputPanel id="contactsGroups">
    <ui:include src="main/contactsGroups.xhtml"/>
</a:outputPanel>

As you can see, we surrounded the Facelets include tag (ui:include), which includes the page at that point, with an a:outputPanel that will be used as a placeholder for re-rendering purposes.

Ajax placeholders

A very important concept to keep in mind while developing is that the Ajax framework can't add or delete elements, but can only replace existing elements in the page. For this reason, if you want to append some code, you need to use a placeholder. RichFaces has a component that can be used as a placeholder: a4j:outputPanel. Inside a4j:outputPanel, you can put other components that use the rendered attribute to decide whether they are visible or not. When you want to re-render all of the included components, just re-render the outputPanel, and everything will work without any problem. Here is a non-working code snippet:

<h:form>
    <h:inputText value="#{aBean.myText}">
        <a4j:support event="onkeyup" reRender="out1"/>
    </h:inputText>
</h:form>
<h:outputText id="out1" value="#{aBean.myText}" rendered="#{not empty aBean.myText}"/>

This code looks the same as the a4j:support example, but it won't work. The problem is that we added the rendered attribute to the outputText, so initially out1 will not be rendered (because the text property is initially empty and rendered evaluates to false). After the Ajax response, the JavaScript engine will not find the out1 element (it is not in the page because of rendered="false"), and it will not be able to update it (remember that you can't add or delete elements, only replace them).
It is very simple to make the code work:

<h:form>
    <h:inputText value="#{aBean.myText}">
        <a4j:support event="onkeyup" reRender="out2"/>
    </h:inputText>
</h:form>
<a4j:outputPanel id="out2">
    <h:outputText id="out1" rendered="#{not empty aBean.myText}" value="#{aBean.myText}"/>
</a4j:outputPanel>

As you can see, you just have to put the out1 component inside an a4j:outputPanel (called out2) and tell a4j:support to re-render out2 instead of out1. Initially, out2 will be rendered but empty (because out1 is not rendered). After the Ajax response, the empty out2 is replaced with markup that also contains the out1 component, which is now visible because the myText property is no longer empty after the Ajax update and the rendered property evaluates to true.

The groups box

This box will contain all of the contact groups, so the user will be able to better organize contacts into different groups. We will not implement the group box features in this article; for now, the group column is just a rich:panel with a link to refresh the contact list. Let's open the contactsGroups.xhtml file and insert the following code:

<h:form>
    <rich:panel>
        <f:facet name="header">
            <h:outputText value="#{messages['groups']}"/>
        </f:facet>
        <h:panelGrid columns="1">
            <a:commandLink value="#{messages['allContacts']}" ajaxSingle="true" reRender="contactsList">
                <f:setPropertyActionListener value="#{null}" target="#{homeContactsListHelper.contactsList}"/>
            </a:commandLink>
        </h:panelGrid>
    </rich:panel>
</h:form>

As you can see, we've put in an h:panelGrid (to be used in the future) and an a:commandLink, which just sets the contactsList property of the homeContactsListHelper bean (which we will see in the next section) to null, so that the list is read again. At the end of the Ajax interaction, it re-renders the contactsList column in order to show the new data. Also, notice that we are still supporting i18n for every text using the messages property; the task of filling the messages_XX.properties file is left as an exercise for the reader.

The contacts list

The second column inside the h:panelGrid of home.xhtml looks like this:

<a:outputPanel id="contactsList">
    <ui:include src="main/contactsList.xhtml"/>
</a:outputPanel>

As with the groups, we used a placeholder surrounding the ui:include tag. Now let's focus on creating the data table. Open the /view/main/contactsList.xhtml file and add the first snippet of code for the dataTable:

<h:form>
    <rich:dataTable id="contactsTable" reRender="contactsTableDS" rows="20" value="#{homeContactsListHelper.contactsList}" var="contact">
        <rich:column width="45%">
            <h:outputText value="#{contact.name}"/>
        </rich:column>
        <rich:column width="45%">
            <h:outputText value="#{contact.surname}"/>
        </rich:column>
        <f:facet name="footer">
            <rich:datascroller id="contactsTableDS" for="contactsTable" renderIfSinglePage="false"/>
        </f:facet>
    </rich:dataTable>
    <h:outputText value="#{messages['noContactsInList']}" rendered="#{homeContactsListHelper.contactsList.size()==0}"/>
</h:form>

We just added the rich:dataTable component with some columns and an Ajax data scroller at the end.

Differences between h:dataTable and rich:dataTable

RichFaces provides its own version of h:dataTable, which contains more features and is better integrated with the RichFaces framework.
The first important additional feature is skinnability support following the RichFaces standards. Other features are row and column span support (discussed in the Columns and column groups section), out-of-the-box filtering and sorting (discussed in the Filtering and sorting section), more JavaScript event handlers (such as onRowClick, onRowContextMenu, onRowDblClick, and so on), and the reRender attribute. Like other data iteration components of the RichFaces framework, it also supports partial-row updates.

Data pagination

Implementing Ajax data pagination using RichFaces is really simple: just decide how many rows must be shown on every page by setting the rows attribute of the dataTable (in our case, we've chosen 20 rows per page), and then "attach" the rich:datascroller component to it by filling the for attribute with the dataTable id:

<rich:datascroller id="contactsTableDS" for="contactsTable" renderIfSinglePage="false"/>

Here you can see another very useful attribute (renderIfSinglePage) that hides the component when there is just a single page in the list (that is, the list contains a number of items less than or equal to the value of the rows attribute). Keep in mind that the rich:datascroller component must stay inside a form component (h:form or a:form) in order to work.

Customizing rich:datascroller is possible not only by using CSS classes (as usual), but also by personalizing its parts using the following facets:

- pages
- controlsSeparator
- first, first_disabled
- last, last_disabled
- next, next_disabled
- previous, previous_disabled
- fastforward, fastforward_disabled
- fastrewind, fastrewind_disabled

Here is an example with some customized facets (using strings):

<rich:datascroller id="contactsTableDS" for="contactsTable" renderIfSinglePage="false">
    <f:facet name="first">
        <h:outputText value="First"/>
    </f:facet>
    <f:facet name="last">
        <h:outputText value="Last"/>
    </f:facet>
</rich:datascroller>

You can use an image (or another component) instead of text, in order to create your own customized scroller. Another interesting example is:

<rich:datascroller id="contactsTableDS" for="contactsTable" renderIfSinglePage="false">
    <f:facet name="first">
        <h:outputText value="First"/>
    </f:facet>
    <f:facet name="last">
        <h:outputText value="Last"/>
    </f:facet>
    <f:attribute name="pageIndexVar" value="pageIndexVar"/>
    <f:attribute name="pagesVar" value="pagesVar"/>
    <f:facet name="pages">
        <h:panelGroup>
            <h:outputText value="Page #{pageIndexVar} / #{pagesVar}"/>
        </h:panelGroup>
    </f:facet>
</rich:datascroller>

By setting the pageIndexVar and pagesVar attributes, we are able to use them in an outputText component, as we've done in the example. A useful attribute of the component is maxPages, which sets the maximum number of page links (the numbers in the middle) that the scroller shows; with it, we can control the size of the scroller. The page attribute can be bound to a property of a bean in order to switch to a page by number; a simple use case could be an inputText and a commandButton that let the user enter the page number that he/she wants to go to.
Here is the code that shows how to implement it:

<rich:datascroller for="contactsList" maxPages="20" fastControls="hide" page="#{customDataScrollerExampleHelper.scrollerPage}" pagesVar="pages" id="ds">
    <f:facet name="first">
        <h:outputText value="First"/>
    </f:facet>
    <f:facet name="first_disabled">
        <h:outputText value="First"/>
    </f:facet>
    <f:facet name="last">
        <h:outputText value="Last"/>
    </f:facet>
    <f:facet name="last_disabled">
        <h:outputText value="Last"/>
    </f:facet>
    <f:facet name="previous">
        <h:outputText value="Previous"/>
    </f:facet>
    <f:facet name="previous_disabled">
        <h:outputText value="Previous"/>
    </f:facet>
    <f:facet name="next">
        <h:outputText value="Next"/>
    </f:facet>
    <f:facet name="next_disabled">
        <h:outputText value="Next"/>
    </f:facet>
    <f:facet name="pages">
        <h:panelGroup>
            <h:outputText value="Page "/>
            <h:inputText value="#{customDataScrollerExampleHelper.scrollerPage}" size="4">
                <f:validateLongRange minimum="0"/>
                <a:support event="onkeyup" timeout="500" oncomplete="#{rich:component('ds')}.switchToPage(this.value)"/>
            </h:inputText>
            <h:outputText value=" of #{pages}"/>
        </h:panelGroup>
    </f:facet>
</rich:datascroller>

As you can see, besides customizing the text of the First, Last, Previous, and Next sections, we defined a pages facet containing an h:inputText bound to an integer value inside a backing bean. We also added the a:support tag, in order to trigger the page change after the keyup event completes, and we set the timeout attribute so that the server is called at most every 500 ms instead of on every keystroke the user types.
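The backing bean behind the customDataScrollerExampleHelper expression is not shown in the excerpt. A minimal hedged sketch of what it might hold (the class name matches the EL expression; everything else is an assumption):

public class CustomDataScrollerExampleHelper {

    // Bound both to the datascroller's "page" attribute and to the
    // h:inputText in the "pages" facet shown above.
    private int scrollerPage = 1;

    public int getScrollerPage() {
        return scrollerPage;
    }

    public void setScrollerPage(int scrollerPage) {
        this.scrollerPage = scrollerPage;
    }
}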


Create a Quick Application in CakePHP: Part 1

Packt
17 Nov 2009
9 min read
The ingredients are fresh, sliced up, and in place. The oven is switched on, heated, and burning red. It is time for us to put on the cooking hat and start making some delicious cake recipes. So, are you ready, baker?

In this article, we are going to develop a small application that we'll call "CakeTooDoo". It will be a simple to-do-list application that keeps a record of the things we need to do. A shopping list, chapters to study for an exam, a list of people you hate, and a list of girls you had a crush on are all examples of lists. CakeTooDoo will allow us to keep an updated list: we will be able to view all the tasks, add new tasks, tick off the tasks that are done, and much more. Here's another example of a to-do list, the things that we are going to cover in this article:

- Make sure Cake is properly installed for CakeTooDoo
- Understand the features of CakeTooDoo
- Create and configure the CakeTooDoo database
- Write our first Cake model
- Write our first Cake controller
- Build a list that shows all the tasks in CakeTooDoo
- Create a form to add new tasks to CakeTooDoo
- Create another form to edit tasks in the to-do list
- Add a data validation rule to make sure users do not enter an empty task title
- Add functionality to delete a task from the list
- Make separate lists for completed and pending tasks
- Make the creation and modification times of a task look nicer
- Create a homepage for CakeTooDoo

Making Sure the Oven is Ready

Before we start with CakeTooDoo, let's make sure that our oven is ready. Just to make sure that we do not run into any problems later, here is a checklist of things that should already be in place:

- Apache is properly installed and running on the local machine
- The MySQL database server is installed and running on the local machine
- PHP, version 4.3.2 or higher, is installed and working with Apache
- The latest 1.2 version of CakePHP is being used
- The Apache mod_rewrite module is switched on
- AllowOverride is set to All for the web root directory in the Apache configuration file httpd.conf
- CakePHP is extracted and placed in the web root directory of Apache
- Apache has write access to the tmp directory of CakePHP
- In this case, we are going to rename the Cake directory to CakeTooDoo

CakeTooDoo: a Simple To-do List Application

As we already know, CakeTooDoo will be a simple to-do list. The list will consist of many tasks that we want to do. Each task will consist of a title and a status. The title will indicate the thing that we need to do, and the status will keep a record of whether the task has been completed or not. Along with the title and the status, each task will also record the time when it was created and last modified. Using CakeTooDoo, we will be able to add new tasks, change the status of a task, delete a task, and view all the tasks. Specifically, CakeTooDoo will allow us to do the following things:

- View all tasks in the list
- Add a new task to the list
- Edit a task to change its status
- View all completed tasks
- View all pending tasks
- Delete a task
- Access all of the features from a homepage

You may think that there is a huge gap between knowing what to make and actually making it. But wait! With Cake, that's not true at all! We are just 10 minutes away from a fully functional and working CakeTooDoo. Don't believe me? Just keep reading, and you will find out for yourself.

Configuring Cake to Work with a Database

The first thing we need to do is to create the database that our application will use.
Creating a database for a Cake application is no different from creating any other database you may have made before; we just need to follow a few simple naming rules, or conventions, while creating the tables. Once the database is in place, the next step is to tell Cake to use it.

Time for Action: Creating and Configuring the Database

1. Create a database named caketoodoo on the local machine's MySQL server. In your favourite MySQL client, execute the following code:

CREATE DATABASE caketoodoo;

2. In our newly created database, create a table named tasks by running the following code in your MySQL client:

USE caketoodoo;

CREATE TABLE tasks (
    id int(10) unsigned NOT NULL auto_increment,
    title varchar(255) NOT NULL,
    done tinyint(1) default NULL,
    created datetime default NULL,
    modified datetime default NULL,
    PRIMARY KEY (id)
);

3. Rename the main cake directory to CakeTooDoo, if you haven't done so yet.

4. Move inside the directory CakeTooDoo/app/config. In the config directory, there is a file named database.php.default. Rename this file to database.php.

5. Open the database.php file with your favourite editor, and move to line number 73, where you will find an array named $default. This array contains the database connection options. Assign login to the database user you will be using and password to that user's password. Assign database to caketoodoo. If we are using the database user ahsan with the password sims, the configuration will look like this:

var $default = array(
    'driver' => 'mysql',
    'persistent' => false,
    'host' => 'localhost',
    'port' => '',
    'login' => 'ahsan',
    'password' => 'sims',
    'database' => 'caketoodoo',
    'schema' => '',
    'prefix' => '',
    'encoding' => ''
);

6. Now, let us check whether Cake is able to connect to the database. Fire up a browser and point it to http://localhost/CakeTooDoo/. We should get the default Cake page stating that the database configuration file is present and that Cake is able to connect to the database. If you get those lines, we have successfully configured Cake to use the caketoodoo database.

What Just Happened?

We just created our first database, following Cake conventions, and configured Cake to use that database. Our database, which we named caketoodoo, has only one table, named tasks. It is a convention in Cake to use plural words for table names; tasks, users, posts, and comments are all valid names for database tables in Cake. Our tasks table has a primary key named id. All tables in a Cake application's database must have id as their primary key.

Conventions in CakePHP: database tables used with CakePHP should have plural names, and all database tables should have a field named id as the primary key of the table.

We then configured Cake to use the caketoodoo database. This was achieved by having a file named database.php in the configuration directory of the application. In database.php, we set the default database to caketoodoo. We also set the database username and password that Cake will use to connect to the database server. Lastly, we made sure that Cake was able to connect to our database by checking the default Cake page.

Conventions in Cake are what make the magic happen. By favoring convention over configuration, Cake increases productivity to a scary level without any loss of flexibility. We do not need to spend hours setting configuration values just to make the application run.
Setting the database name is the only configuration we need; everything else is figured out "automagically" by Cake. Throughout this article, we will get to know more conventions that Cake follows.

Writing our First Model

Now that Cake is configured to work with the caketoodoo database, it's time to write our first model. In Cake, each database table should have a corresponding model, which is responsible for accessing and modifying the data in that table. As we know, our database has only one table, named tasks, so we need to define only one model. Here is how we will do it:

Time for Action: Creating the Task Model

1. Move into the directory CakeTooDoo/app/models and create a file named task.php.

2. In the file task.php, write the following code:

<?php
class Task extends AppModel {
    var $name = 'Task';
}
?>

3. Make sure there are no white spaces or tabs before the <?php tag or after the ?> tag, and then save the file.

What Just Happened?

We just created our first Cake model, for the database table tasks. All the models in a CakePHP application are placed in the directory named models in the app directory.

Conventions in CakePHP: all model files are kept in the models directory under the app directory; normally, each database table has a corresponding file (model) in this directory.

The filename for a model is the singular of the corresponding database table name, followed by the .php extension. The model file for the tasks database table is therefore named task.php.

Conventions in CakePHP: a model filename is the singular of the corresponding database table name.

A model basically contains a PHP class. The name of the class is also the singular of the database table name, but this time it is CamelCased. The name of our model is therefore Task.

Conventions in CakePHP: a model class name is also the singular of the name of the database table that it represents.

You will notice that this class inherits another class named AppModel. All models in CakePHP must inherit this class. The AppModel class inherits another class called Model. Model is a core CakePHP class that has all the basic functions to add, modify, delete, and access data from the database. By inheriting this class, all models are also able to call these functions, so we do not need to define them separately each time we create a new model; all we need to do is inherit the AppModel class in all our models. We then defined a variable named $name in the Task model and assigned the name of the model to it. This is not mandatory, as Cake can figure out the name of the model automatically, but it is a good practice to name it manually.
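To illustrate what inheriting AppModel buys us, here is a hedged sketch of calls the Task model now supports without any code of its own. These are standard CakePHP 1.2 Model methods (the article itself uses find() with a conditions array); the data values are made up for illustration:

// From within a controller, $this->Task is our model instance.
$this->Task->save(array('Task' => array('title' => 'Write chapter')));  // add a row
$all = $this->Task->find('all');                  // read every task
$one = $this->Task->find(array('id' => 1));       // read one task by condition
$this->Task->del(1);                              // delete the task with id 1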


Apache Geronimo Logging

Packt
16 Nov 2009
8 min read
We will start by briefly looking at each of the logging frameworks mentioned above, and will then go into how the server logs events and errors and where it logs them to. After examining them, we will look into the different ways in which we can configure application logging.

Apache log4j: Log4j is an open source logging framework developed by the Apache Software Foundation. It provides a set of loggers, appenders, and layouts to control which messages should be logged at runtime, where they should be logged to, and in what format they should be logged. The loggers are organized in a tree hierarchy, starting with the root logger at the top. All loggers except the root logger are named entities and can be retrieved by their names. The root logger can be accessed by using the Logger.getRootLogger() API, while all other loggers can be accessed by using the Logger.getLogger() API. The names of the loggers follow the rule that the name of the parent logger, followed by a '.', is a prefix of the child logger's name. For example, if com.logger.test is the name of a logger, then its direct ancestor is com.logger, and the ancestor before that is com. Each logger may be assigned a level. The set of possible levels, in ascending order, is TRACE, DEBUG, INFO, WARN, ERROR, and FATAL. If a logger is not assigned a level, it inherits its level from its closest ancestor. A log statement makes a logging request to the log4j subsystem; the request is enabled only if its logging level is higher than or equal to its logger's level. If it is lower, the message is not output through the configured appenders. Log4j allows logs to be output to multiple destinations via different appenders. Currently there are appenders for the console, files, GUI components, JMS destinations, NT and Unix system event loggers, and remote sockets. Log4j is one of the most widely-used logging frameworks for Java applications, especially ones running on application servers. It also provides more features than the other logging framework we are about to see, the Java Logging API.

Java Logging API: The Java Logging API, also called JUL (from the java.util.logging package name of the framework), is another logging framework, distributed with J2SE from version 1.4 onwards. Like log4j, it provides a hierarchy of loggers, with child loggers inheriting properties from their parents. It provides handlers for handling output and formatters for configuring the way the output is displayed. It provides a subset of the functionality that log4j provides, but the advantage is that it is bundled with the JRE, and so does not require the application to include third-party JARs as log4j does.

SLF4J: The Simple Logging Facade for Java, or SLF4J, is an abstraction or facade over various logging systems. It allows a developer to plug in the desired logging framework at deployment time. It also supports the bridging of legacy API calls through the slf4j API to the underlying logging implementation. Versions of Apache Geronimo prior to 2.0 used Apache Commons Logging as the facade or wrapper. However, Commons Logging uses runtime binding and a dynamic discovery mechanism, which came to be the source of quite a few bugs. Hence, Apache Geronimo migrated to slf4j, which allows the developer to plug in the logging framework during deployment, thereby eliminating the need for runtime binding.
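To make the level-threshold rule from the log4j overview concrete, here is a small hedged sketch using the standard log4j 1.x API; the logger name is arbitrary, and BasicConfigurator is used only so the example runs standalone with a console appender:

import org.apache.log4j.BasicConfigurator;
import org.apache.log4j.Level;
import org.apache.log4j.Logger;

public class LevelDemo {
    public static void main(String[] args) {
        // Minimal setup: a default console appender on the root logger.
        BasicConfigurator.configure();

        // Named logger; its ancestors in the hierarchy are "com.logger" and "com".
        Logger logger = Logger.getLogger("com.logger.test");
        logger.setLevel(Level.WARN);

        logger.debug("suppressed: DEBUG is lower than WARN");
        logger.info("suppressed: INFO is lower than WARN");
        logger.warn("logged: WARN is equal to the logger's level");
        logger.error("logged: ERROR is higher than WARN");
    }
}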
Configuring Apache Geronimo logging

Apache Geronimo uses slf4j and log4j for logging. The log4j configuration files can be found in the <GERONIMO_HOME>/var/log directory. There are three configuration files in this directory, namely:

- client-log4j.properties
- deployer-log4j.properties
- server-log4j.properties

Just as they are named, these files configure log4j logging for the client container (the Java EE application client), the deployer system, and the server. You will also find the corresponding log files: client.log, deployer.log, and server.log. The properties files listed above contain the configuration of the various appenders, loggers, and layouts for the server, deployer, and client. As mentioned above, log4j provides a hierarchy of loggers, with a granularity ranging from the entire server down to each class on the server. Let us examine one of the configuration files, server-log4j.properties. This file starts with the line:

log4j.rootLogger=INFO, CONSOLE, FILE

This means that the log4j root logger has a level of INFO and writes log statements to two appenders, namely the CONSOLE appender and the FILE appender. These are the appenders that write to the console and to files, respectively. The console and file appenders are configured to write to System.out and to <GERONIMO_HOME>/var/log/geronimo.log. Below this section, there is a finer-grained configuration of loggers at the class or package level. For example:

log4j.logger.openjpa.Enhance=TRACE

This sets the logger named openjpa.Enhance to the TRACE level. Note that all of the classes that do not have a log level defined take on the log level of their parents. This applies recursively until we reach the root logger and inherit its log level (INFO in this case).

Configuring application logging

We will illustrate how applications can log messages in Geronimo using two logging frameworks, namely log4j and JUL. We will also illustrate how you can use the slf4j wrapper to log messages with the above two underlying implementations. We will use a sample application, the HelloWorld web application, to illustrate this.

Using log4j

We can use log4j to write the application log either to a separate logfile or to the geronimo.log file. We will also illustrate how the logs can be written to a separate file in the <GERONIMO_HOME>/var/log directory by using a GBean.

Logging to the geronimo.log file and the command console

Logging to the geronimo.log file and the command console is the simplest way to do application logging in Geronimo. To enable this in your application, you only need to add logging statements to your application code. The HelloWorld sample application has a servlet called HelloWorldServlet, which contains the following statements for enabling logging. The servlet is shown below.
package com.packtpub.hello;

import java.io.*;
import javax.servlet.ServletException;
import javax.servlet.http.*;
import org.apache.log4j.Logger;

public class HelloWorldServlet extends HttpServlet {

    Logger logger = Logger.getLogger(HelloWorldServlet.class.getName());

    protected void service(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        res.setContentType("text/html");
        PrintWriter out = res.getWriter();
        out.print("<html>");
        logger.info("Printing out <html>");
        out.print("<head><title>Hello World Application</title></head>");
        logger.info("Printing out <head><title>Hello World Application</title></head>");
        out.print("<body>");
        logger.info("Printing out <body>");
        out.print("<b>Hello World</b><br>");
        logger.info("Printing out <b>Hello World</b><br>");
        out.print("</body>");
        logger.info("Printing out </body>");
        out.print("</html>");
        logger.info("Printing out </html>");
        logger.warn("Sample Warning message");
        logger.error("Sample error message");
    }
}

Deploy the sample HelloWorld-1.0.war file, and then access http://localhost:8080/HelloWorld/. The servlet will log messages to the command console, and the geronimo.log file will have the following entries:

2009-02-02 20:01:38,906 INFO [HelloWorldServlet] Printing out <html>
2009-02-02 20:01:38,906 INFO [HelloWorldServlet] Printing out <head><title>Hello World Application</title></head>
2009-02-02 20:01:38,906 INFO [HelloWorldServlet] Printing out <body>
2009-02-02 20:01:38,906 INFO [HelloWorldServlet] Printing out <b>Hello World</b><br>
2009-02-02 20:01:38,906 INFO [HelloWorldServlet] Printing out </body>
2009-02-02 20:01:38,906 INFO [HelloWorldServlet] Printing out </html>
2009-02-02 20:01:38,906 WARN [HelloWorldServlet] Sample Warning message
2009-02-02 20:01:38,906 ERROR [HelloWorldServlet] Sample error message

Notice that only the messages with a logging level greater than or equal to WARN are logged to the command console, while all of the INFO, WARN, and ERROR messages are logged to the geronimo.log file. This is because, in server-log4j.properties, the CONSOLE appender's threshold is set to the value of the system property org.apache.geronimo.log.ConsoleLogLevel, as shown below:

log4j.appender.CONSOLE.Threshold=${org.apache.geronimo.log.ConsoleLogLevel}

The value of this property is, by default, WARN. All of the INFO messages are logged to the logfile because the FILE appender has a lower threshold of TRACE, as shown below:

log4j.appender.FILE.Threshold=TRACE

Using this method, you can log messages of different severities to the console and to the logfile to which the server messages are logged. This is done for operator convenience: only high-severity log messages, such as warnings and errors that need the operator's attention, are logged to the console. The other messages are logged only to a file.
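The article also mentions that the slf4j wrapper can be used on top of log4j or JUL. A minimal hedged sketch of what the same logging might look like through the slf4j API; these are standard org.slf4j calls, but the class below is our illustration, not the article's sample code:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class HelloWorldSlf4j {

    // slf4j binds to the underlying implementation (for example, log4j)
    // at deployment time, so this code never references log4j directly.
    private static final Logger logger = LoggerFactory.getLogger(HelloWorldSlf4j.class);

    public void doWork() {
        logger.info("Printing out <html>");
        logger.warn("Sample Warning message");
        logger.error("Sample error message");
    }
}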

Geronimo Architecture: Part 1

Packt
13 Nov 2009
8 min read
Inversion of Control and dependency injection

Inversion of Control (IoC) is a design pattern used in software engineering that facilitates the creation of loosely-coupled systems. In an IoC system, the flow of control is inverted: the program is called by the framework, unlike in normal linear systems where the program calls the libraries. This allows us to circumvent the tight coupling that arises from the control being with the calling program. Dependency injection is a specific case of IoC where the framework provides an assembler or configurator that supplies the user program with the objects it needs through injection. The user program declares dependencies on other services (provided by the framework or by other user programs), and the assembler injects the dependencies into the user program wherever they are needed.

It is important that you clearly understand the concept of dependency injection before we proceed further into the Geronimo architecture, as it is the core concept behind the functioning of the Geronimo kernel and how services are loosely coupled in it. To help you understand the concept more clearly, we will provide a simple example. Consider the following classes:

package packtsamples;

public class RentCalculator {
    private float rentRate;
    private TaxCalculator tCalc;

    public RentCalculator(float rate, float taxRate) {
        rentRate = rate;
        tCalc = new ServiceTaxCalculator(taxRate);
    }

    public void calculateRent(int noOfDays) {
        float totalRent = noOfDays * rentRate;
        float tax = tCalc.calculateTax(totalRent);
        totalRent = totalRent + tax;
        System.out.println("Rent is:" + totalRent);
    }
}

package packtsamples;

public class ServiceTaxCalculator implements TaxCalculator {
    private float taxRate;

    public ServiceTaxCalculator(float rate) {
        taxRate = rate;
    }

    public float calculateTax(float amount) {
        return (amount * taxRate / 100);
    }
}

package packtsamples;

public interface TaxCalculator {
    public float calculateTax(float amount);
}

package packtsamples;

public class Main {
    /**
     * @param args args[0] = taxRate, args[1] = rentRate, args[2] = noOfDays
     */
    public static void main(String[] args) {
        RentCalculator rc = new RentCalculator(Float.parseFloat(args[1]), Float.parseFloat(args[0]));
        rc.calculateRent(Integer.parseInt(args[2]));
    }
}

The RentCalculator class calculates the room rent, including tax, given the rent rate and the number of days. The ServiceTaxCalculator class calculates the tax on a particular amount, given the tax rate. As you can see from the code snippet given, the RentCalculator class depends on the TaxCalculator interface for calculating the tax. In the given sample, the ServiceTaxCalculator class is instantiated inside the RentCalculator class. This makes the two classes tightly coupled, so that we cannot use RentCalculator with another TaxCalculator implementation. This problem can be solved through dependency injection. If we apply this concept to the previous classes, the architecture will be slightly different.
This is shown in the following code block:

package packtsamples.di;

public class RentCalculator {
    private float rentRate;
    private TaxCalculator tCalc;

    public RentCalculator(float rate, TaxCalculator tCalc) {
        rentRate = rate;
        this.tCalc = tCalc;
    }

    public void calculateRent(int noOfDays) {
        float totalRent = noOfDays * rentRate;
        float tax = tCalc.calculateTax(totalRent);
        totalRent = totalRent + tax;
        System.out.println("Rent is:" + totalRent);
    }
}

package packtsamples.di;

public class ServiceTaxCalculator implements TaxCalculator {
    private float taxRate;

    public ServiceTaxCalculator(float rate) {
        taxRate = rate;
    }

    public float calculateTax(float amount) {
        return (amount * taxRate / 100);
    }
}

package packtsamples.di;

public interface TaxCalculator {
    public float calculateTax(float amount);
}

Notice the difference from the previous implementation. The RentCalculator class now takes a TaxCalculator argument in its constructor and uses that TaxCalculator instance to calculate tax by calling the calculateTax method. You can pass in any implementation, and its calculateTax method will be called. In the following section, we will see how to write the class that assembles this sample into a working program.

package packtsamples.di;

import java.lang.reflect.InvocationTargetException;

public class Assembler {

    private TaxCalculator createTaxCalculator(String className, float taxRate) {
        TaxCalculator tc = null;
        try {
            Class cls = Class.forName(className);
            tc = (TaxCalculator) cls.getConstructors()[0].newInstance(taxRate);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } catch (IllegalArgumentException e) {
            e.printStackTrace();
        } catch (SecurityException e) {
            e.printStackTrace();
        } catch (InstantiationException e) {
            e.printStackTrace();
        } catch (IllegalAccessException e) {
            e.printStackTrace();
        } catch (InvocationTargetException e) {
            e.printStackTrace();
        }
        return tc;
    }

    private RentCalculator createRentCalculator(float rate, TaxCalculator tCalc) {
        return new RentCalculator(rate, tCalc);
    }

    private void assembleAndExecute(String className, float taxRate, float rentRate, int noOfDays) {
        TaxCalculator tc = createTaxCalculator(className, taxRate);
        createRentCalculator(rentRate, tc).calculateRent(noOfDays);
    }

    /**
     * @param args args[0] = className, args[1] = taxRate, args[2] = rentRate, args[3] = noOfDays
     */
    public static void main(String[] args) {
        new Assembler().assembleAndExecute(args[0],
                Float.parseFloat(args[1]),
                Float.parseFloat(args[2]),
                Integer.parseInt(args[3]));
    }
}

In the given sample code, you can see that there is a new class called Assembler. The Assembler, in its main method, instantiates the implementation class of TaxCalculator that we want RentCalculator to use. The Assembler then instantiates a RentCalculator, injects the TaxCalculator instance of the type we specified into it, and calls the calculateRent method. Thus the two classes are not tightly coupled, and program control lies with the assembler, unlike in the previous case. There is Inversion of Control happening here, as the framework (the Assembler, in this case) controls the execution of the program. This is a very trivial sample; we could write an assembler class that is more generic and is not even coupled to the interface, as it is in the previous case. This is an example of dependency injection. An injection of this type is called constructor injection, where the assembler injects values through the constructor. You can also have other types of dependency injection, namely setter injection and field injection.
In the former, the values are injected into the object by invoking the setter methods provided by the class; in the latter, the values are injected directly into fields through reflection or some other mechanism. The Apache Geronimo kernel uses both setter injection and constructor injection for resolving dependencies between the different modules or configurations that are deployed in it. The code for these examples is provided under di-sample in the samples. To build the sample, use the following command:

mvn clean install

To run the sample without dependency injection, use the following command:

java -cp di-sample-1.0.jar packtsamples.Main <taxRate> <rentRate> <noOfDays>

To run the sample with dependency injection, use the following command:

java -cp di-sample-1.0.jar packtsamples.Assembler packtsamples.di.ServiceTaxCalculator <taxRate> <rentRate> <noOfDays>

GBeans

A GBean is the basic unit in Apache Geronimo. It is a wrapper that is used to wrap or implement the different services that are deployed in the kernel. GBeans are similar to MBeans from JMX. A GBean has attributes that store its state and references to other GBeans, and it can also register dependencies on other GBeans. GBeans also have lifecycle callback methods and metadata. The Geronimo architects decided to invent the concept of GBeans, instead of using MBeans, in order to keep the Geronimo architecture independent of JMX. This ensured that they did not need to push all of the functionality required for the IoC container (which forms the Geronimo kernel) into the JMX implementation. Even though GBeans are built on top of MBeans, they can be moved to some other framework as well. A user who is writing a GBean has to follow certain conventions. A sample GBean is shown below:

import org.apache.geronimo.gbean.GBeanInfo;
import org.apache.geronimo.gbean.GBeanInfoBuilder;
import org.apache.geronimo.gbean.GBeanLifecycle;

public class TestGBean implements GBeanLifecycle {

    private String name;

    public TestGBean(String name) {
        this.name = name;
    }

    public void doFail() {
        System.out.println("Failed.............");
    }

    public void doStart() throws Exception {
        System.out.println("Started............" + name);
    }

    public void doStop() throws Exception {
        System.out.println("Stopped............" + name);
    }

    public static final GBeanInfo GBEAN_INFO;

    static {
        GBeanInfoBuilder infoBuilder = GBeanInfoBuilder.createStatic(TestGBean.class, "TestGBean");
        infoBuilder.setPriority(2);
        infoBuilder.addAttribute("name", String.class, true);
        infoBuilder.setConstructor(new String[]{"name"});
        GBEAN_INFO = infoBuilder.getGBeanInfo();
    }

    public static GBeanInfo getGBeanInfo() {
        return GBEAN_INFO;
    }
}

You will notice certain characteristics that this GBean has, following from the conventions mentioned above. We will list these characteristics as follows:

All GBeans should have a static getGBeanInfo method, which returns a GBeanInfo object that describes the attributes and references of the GBean, as well as the interfaces it implements.

All GBeans will have a static block where a GBeanInfoBuilder object is created and linked to that GBean. All of the metadata that is associated with this GBean is then added to the GBeanInfoBuilder object. The metadata includes descriptions of the attributes, references, interfaces, and constructors of the GBean.
We can add GBeans to configurations either programmatically, using methods exposed through the configuration manager and the kernel, or by making an entry for the GBean in the deployment plan, as follows:

<gbean name="TestGBean" class="TestGBean">
    <attribute name="name">Nitya</attribute>
</gbean>

We need to specify the attribute values in the plan, and the kernel will inject those values into the GBean at runtime. There are three attributes for which we need not specify values. These are called the magic attributes, and the kernel automatically injects their values when the GBean is started. These attributes are abstractName, kernel, and classLoader. Because there is no way to specify the values of these attributes in the deployment plan (an XML file in which we provide Geronimo-specific information while deploying a configuration), we do not specify them there. However, we should declare these attributes in the GBeanInfo and in the constructor. If the abstractName attribute is declared, then the Geronimo kernel will inject the abstractName of the GBean into it. If the kernel attribute is declared, then a reference to the kernel that loaded this GBean is injected. If we declare classLoader, then the class loader for that configuration is injected.
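To make the setter-injection style mentioned earlier concrete, here is a minimal sketch — not from the book's samples; the class name is invented for illustration — of how the same RentCalculator could receive its dependency through setters instead of a constructor:

package packtsamples.di;

// Hypothetical setter-injection variant of RentCalculator.
// An assembler creates the object with a no-arg constructor and
// then pushes the dependencies in through the setters.
public class SetterRentCalculator {
    private float rentRate;
    private TaxCalculator tCalc;

    public void setRentRate(float rate) {
        this.rentRate = rate;
    }

    public void setTaxCalculator(TaxCalculator tCalc) {
        this.tCalc = tCalc;
    }

    public void calculateRent(int noOfDays) {
        float totalRent = noOfDays * rentRate;
        totalRent += tCalc.calculateTax(totalRent);
        System.out.println("Rent is:" + totalRent);
    }
}

An assembler would call new SetterRentCalculator(), then the two setters, before invoking calculateRent. This is essentially what the Geronimo kernel does when it injects the attribute values declared in a deployment plan into a GBean.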
Geronimo Architecture: Part 2

Packt
13 Nov 2009
10 min read
Class loader architecture

This section covers the class loader architecture of Apache Geronimo. The following image shows the class loader hierarchy for an application that is deployed in Apache Geronimo: The BootStrap class loader of the JVM is followed by the Extensions class loader and then the System class loader. The j2ee-system class loader is the primary class loader of Apache Geronimo. After the j2ee-system class loader, there are multiple other layers of class loaders before reaching the application class loaders. Applications have an application class loader, which loads any required application-level libraries and EJB modules. However, a web application packaged in the EAR will have its own class loader. The Administration Console has a ClassLoader Viewer portlet that can be used to view the class loader hierarchy, as well as the classes loaded by each class loader.

Modifying default class loading behavior

In certain situations, we will need to follow a class loading strategy that is different from the default one provided by Apache Geronimo. A common situation where we need this functionality is when a parent configuration uses a version of a library that is incompatible with the version used by the child. In this case, if we follow the default class loading behavior, then we will always get the classes loaded by the parent configuration and will never be able to reference the classes in the library present in the child configuration. Apache Geronimo provides you with the ability to modify the default class loading behavior at the configuration level to handle such scenarios. This is done by providing certain elements in the deployment plan which, if present, will change the class loading behavior. These elements, and the changes in class loading behavior that they represent, are explained as follows:

hidden-classes: This tag is used to hide classes that are loaded in parent class loaders, so that the child class loader loads its own copy. Similarly, we can use this tag to specify the resources that should be loaded from the configuration class loader. For example, consider the case where you have a module that needs to load its own copy of log4j, while the server also has a copy, used for logging, that is loaded in a parent class loader. We can add the hidden-classes element in the deployment plan for that module so that it loads its own copy of log4j, and the server-loaded version of log4j is hidden from it.

non-overridable-classes: This element specifies the list of classes that can be loaded only from the parent configurations of this configuration. In other words, the classes specified in this element cannot be loaded by the current configuration's class loader. The non-overridable-classes element prevents applications from loading their own copies of classes that should always be loaded from the parent class loaders, such as the Java EE API classes.

private-classes: The classes that are specified in this tag will not be visible to class loaders that are children of the current class loader. These classes will be loaded either from the current class loader or from its parents. The same class loading behavior can be achieved by using the hidden-classes tag in all of the child class loaders.

inverse-classloading: If this element is specified, then the standard class loading strategy is reversed for this module.
This in effect means that a class is first looked up from the current class loader and then from its parent. Thus, the class loader hierarchy is inverted.

suppress-default-environment: This will suppress the environment that is created by the builder for this module or configuration. This is a rarely-used element and can have nasty side effects if it is used carelessly.

Important modules

In this section, we will list the important configurations in Apache Geronimo. We will group them according to the Apache or other open source projects that they wrap. Configurations that do not wrap any other open source project will be listed under the Geronimo section.

Apache ActiveMQ
org.apache.geronimo.configs/activemq-broker/2.1.4/car

Apache Axis
org.apache.geronimo.configs/axis/2.1.4/car
org.apache.geronimo.configs/axis-deployer/2.1.4/car

Apache Axis2
org.apache.geronimo.configs/axis2-deployer/2.1.4/car
org.apache.geronimo.configs/axis2-ejb/2.1.4/car
org.apache.geronimo.configs/axis2-ejb-deployer/2.1.4/car

Apache CXF
org.apache.geronimo.configs/cxf/2.1.4/car
org.apache.geronimo.configs/cxf-deployer/2.1.4/car
org.apache.geronimo.configs/cxf-ejb/2.1.4/car
org.apache.geronimo.configs/cxf-ejb-deployer/2.1.4/car

Apache Derby
org.apache.geronimo.configs/derby/2.1.4/car

Apache Geronimo
org.apache.geronimo.configs/client/2.1.4/car
org.apache.geronimo.configs/client-deployer/2.1.4/car
org.apache.geronimo.configs/client-security/2.1.4/car
org.apache.geronimo.configs/client-transaction/2.1.4/car
org.apache.geronimo.configs/clustering/2.1.4/car
org.apache.geronimo.configs/connector-deployer/2.1.4/car
org.apache.geronimo.configs/farming/2.1.4/car
org.apache.geronimo.configs/hot-deployer/2.1.4/car
org.apache.geronimo.configs/j2ee-deployer/2.1.4/car
org.apache.geronimo.configs/j2ee-server/2.1.4/car
org.apache.geronimo.configs/javamail/2.1.4/car
org.apache.geronimo.configs/persistence-jpa10-deployer/2.1.4/car
org.apache.geronimo.configs/sharedlib/2.1.4/car
org.apache.geronimo.configs/transaction/2.1.4/car
org.apache.geronimo.configs/webservices-common/2.1.4/car
org.apache.geronimo.framework/client-system/2.1.4/car
org.apache.geronimo.framework/geronimo-gbean-deployer/2.1.4/car
org.apache.geronimo.framework/j2ee-security/2.1.4/car
org.apache.geronimo.framework/j2ee-system/2.1.4/car
org.apache.geronimo.framework/jee-specs/2.1.4/car
org.apache.geronimo.framework/jmx-security/2.1.4/car
org.apache.geronimo.framework/jsr88-cli/2.1.4/car
org.apache.geronimo.framework/jsr88-deploymentfactory/2.1.4/car
org.apache.geronimo.framework/offline-deployer/2.1.4/car
org.apache.geronimo.framework/online-deployer/2.1.4/car
org.apache.geronimo.framework/plugin/2.1.4/car
org.apache.geronimo.framework/rmi-naming/2.1.4/car
org.apache.geronimo.framework/server-security-config/2.1.4/car
org.apache.geronimo.framework/shutdown/2.1.4/car
org.apache.geronimo.framework/transformer-agent/2.1.4/car
org.apache.geronimo.framework/upgrade-cli/2.1.4/car

Apache Yoko
org.apache.geronimo.configs/j2ee-corba-yoko/2.1.4/car
org.apache.geronimo.configs/client-corba-yoko/2.1.4/car

Apache Jasper
org.apache.geronimo.configs/jasper/2.1.4/car
org.apache.geronimo.configs/jasper-deployer/2.1.4/car

JaxWS
org.apache.geronimo.configs/jaxws-deployer/2.1.4/car
org.apache.geronimo.configs/jaxws-ejb-deployer/2.1.4/car

JSR 88
org.apache.geronimo.configs/jsr88-ear-configurer/2.1.4/car
org.apache.geronimo.configs/jsr88-jar-configurer/2.1.4/car
org.apache.geronimo.configs/jsr88-rar-configurer/2.1.4/car
org.apache.geronimo.configs/jsr88-war-configurer/2.1.4/car

Apache MyFaces
org.apache.geronimo.configs/myfaces/2.1.4/car
org.apache.geronimo.configs/myfaces-deployer/2.1.4/car

Apache OpenEJB
org.apache.geronimo.configs/openejb/2.1.4/car
org.apache.geronimo.configs/openejb-corba-deployer/2.1.4/car
org.apache.geronimo.configs/openejb-deployer/2.1.4/car

Apache OpenJPA
org.apache.geronimo.configs/openjpa/2.1.4/car

Spring
org.apache.geronimo.configs/spring/2.1.4/car

Apache Tomcat6
org.apache.geronimo.configs/tomcat6/2.1.4/car
org.apache.geronimo.configs/tomcat6-clustering-builder-wadi/2.1.4/car
org.apache.geronimo.configs/tomcat6-clustering-wadi/2.1.4/car
org.apache.geronimo.configs/tomcat6-deployer/2.1.4/car
org.apache.geronimo.configs/tomcat6-no-ha/2.1.4/car

Apache WADI
org.apache.geronimo.configs/wadi-clustering/2.1.4/car

GShell
org.apache.geronimo.framework/gshell-framework/2.1.4/car
org.apache.geronimo.framework/gshell-geronimo/2.1.4/car

Apache XmlBeans
org.apache.geronimo.framework/xmlbeans/2.1.4/car

Apache Pluto
org.apache.geronimo.plugins/pluto-support/2.1.4/car

If you check the configurations, you will see that most of the components that make up Geronimo have a deployer configuration and a main configuration. The deployer configuration contains the GBeans that deploy modules onto that component. For example, the openejb-deployer contains GBeans that implement the functionality to deploy an EJB module onto Apache Geronimo. To accomplish this, the EJB JAR file and its corresponding deployment plan are parsed by the deployer and then converted into a format that can be understood by the OpenEJB subsystem. This is then deployed on the OpenEJB container. The main configuration will usually contain the GBeans that configure the container and also manage its lifecycle.

Server directory structure

It is important for a user or an administrator to understand the directory structure of a Geronimo server installation. The directory structure of a v2.1.4 server is shown in the following screenshot: Please note that the directory that we will be referring to as <GERONIMO_HOME> is the geronimo-tomcat6-javaee5-2.1.4 directory shown in the screenshot. The following are some important directories that you should be familiar with:

The bin directory contains the command scripts and the JAR files required to start the server, stop the server, invoke the deployer, and start the GShell.
The etc directory contains the configuration files for GShell.
The schema directory contains the Geronimo schemas.
The var/config directory contains the Geronimo configuration files. A Geronimo administrator or user can find most of the configuration information about the server here.
The var/derby directory contains the database files for the embedded Derby database server.
The var/log directory contains the logging configuration and logfiles.
The var/security directory contains user credential and grouping files.
The var/security/keystores directory contains the cryptographic keystore files used for server SSL configuration.

The following are some important configuration files under the Geronimo directory structure:

config.xml: This file is located under the <GERONIMO_HOME>/var/config directory. This file preserves the information regarding GBean attributes and references that were overridden from the default values used at deployment time.

config-substitutions.properties: This file is located under the <GERONIMO_HOME>/var/config directory. The property values specified in this file are used in expressions in config.xml.
The property values in this file can be overridden by using a system property or environment variable with a property name that is prefixed with org.apache.geronimo.config.substitution.

artifact_aliases.properties: This file is located under the <GERONIMO_HOME>/var/config directory. This file is used to substitute one module or configuration ID for another module or configuration ID. The entries in this file are of the form oldArtifactId=newArtifactId, for example default/mylib//jar=default/mylib/2.0/jar. Note that the version number in the old artifact ID may be omitted, but the version number in the new artifact ID must be specified.

client_artifact_aliases.properties: This file is located under the <GERONIMO_HOME>/var/config directory. This file is used for artifact aliasing with application clients.

server-log4j.properties: This file is located under the <GERONIMO_HOME>/var/log directory. This file contains the logging configuration for the server.

deployer-log4j.properties: This file is located under the <GERONIMO_HOME>/var/log directory. This file contains the logging configuration for the deployer.

client-log4j.properties: This file is located under the <GERONIMO_HOME>/var/log directory. This file contains the logging configuration for application clients.

users.properties: This file is located under the <GERONIMO_HOME>/var/security directory. This file contains the authentication credentials for the server.

groups.properties: This file is located under the <GERONIMO_HOME>/var/security directory. This file contains the grouping information for the users defined in users.properties.

Among the directories that contain sensitive information, such as user passwords, are var/security, var/derby, and var/config. These directories should be protected using operating system provided directory and file security.
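As an illustration of the substitution mechanism described above, here is a sketch of how a substitution value could be overridden at startup. PortOffset is used here as a plausible property name; check your own config-substitutions.properties for the names it actually defines:

# config-substitutions.properties (excerpt; property names vary by install)
PortOffset=0

# Overriding the same value from the command line at server startup.
# The org.apache.geronimo.config.substitution. prefix maps the system
# property onto the PortOffset substitution variable.
java -Dorg.apache.geronimo.config.substitution.PortOffset=10 -jar bin/server.jar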
Geronimo Plugins

Packt
12 Nov 2009
11 min read
Developing a plugin

In this section, we will develop our very own plugin, the World Clock plugin. This is a very simple plugin that provides the time in different locales. We will go through all of the steps required to develop it from scratch. These steps are as follows:

Creating the plugin project
Generating the plugin project, using Maven 2
Writing the plugin interface and implementation
Creating a deployment plan
Installing the plugin

Creating a plugin project

There are many ways in which you can develop plugins. You could manually create all of the plugin artifacts and package them. We will use the easiest method, that is, using Maven's geronimo-plugin-archetype. This will generate the plugin project with all of the artifacts, with the default values filled in. To generate the plugin project, run the following command:

mvn archetype:create -DarchetypeGroupId=org.apache.geronimo.buildsupport -DarchetypeArtifactId=geronimo-plugin-archetype -DarchetypeVersion=2.1.4 -DgroupId=com.packt.plugins -DartifactId=WorldClock

This will create a plugin project called WorldClock. A directory called WorldClock will be created, with the following artifacts in it:

pom.xml
pom.sample.xml
src/main/plan/plan.xml
src/main/resources

In the same directory in which the WorldClock directory was created, you will need to create a Java project that will contain the source code of the plugin. We can create it by using the following command:

mvn archetype:create -DgroupId=com.packt.plugins -DartifactId=WorldClockModule

This will create a Java project with the same groupId, in a directory called WorldClockModule. This directory will contain the following artifacts:

pom.xml
src/main/java/com/packt/plugins/App.java
src/test/java/com/packt/plugins/AppTest.java

You can safely remove the second and third artifacts, as they are just sample stubs generated by the archetype. In this project, we will need to modify pom.xml to add a dependency on the Geronimo kernel, so that we can compile the GBean that we are going to create and include in this module. The modified pom.xml is shown below:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.packt.plugins</groupId>
  <artifactId>WorldClockModule</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>WorldClockModule</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.geronimo.framework</groupId>
      <artifactId>geronimo-kernel</artifactId>
      <version>2.1.4</version>
    </dependency>
  </dependencies>
</project>

For simplicity, we have only one GBean in our sample. In a real-world scenario, there may be many GBeans that you will need to create. Now we need to create the GBean that forms the core functionality of our plugin. Therefore, we will create two classes, namely Clock and ClockGBean.
These classes are shown below:

package com.packt.plugins;

public interface Clock {
    public void setTimeZone(String timeZone);
    public String getTime();
}

and

package com.packt.plugins;

import java.util.Calendar;
import java.util.GregorianCalendar;
import java.util.TimeZone;

import org.apache.geronimo.gbean.GBeanInfo;
import org.apache.geronimo.gbean.GBeanInfoBuilder;
import org.apache.geronimo.gbean.GBeanLifecycle;

public class ClockGBean implements GBeanLifecycle, Clock {

    public static final GBeanInfo GBEAN_INFO;

    private String name;
    private String timeZone;

    public String getTime() {
        GregorianCalendar cal = new GregorianCalendar(TimeZone.getTimeZone(timeZone));
        int hour12 = cal.get(Calendar.HOUR);     // 0..11
        int minutes = cal.get(Calendar.MINUTE);  // 0..59
        int seconds = cal.get(Calendar.SECOND);  // 0..59
        boolean am = cal.get(Calendar.AM_PM) == Calendar.AM;
        return (timeZone + ":" + hour12 + ":" + minutes + ":" + seconds + ":" + ((am) ? "AM" : "PM"));
    }

    public void setTimeZone(String timeZone) {
        this.timeZone = timeZone;
    }

    public ClockGBean(String name) {
        this.name = name;
        timeZone = TimeZone.getDefault().getID();
    }

    public void doFail() {
        System.out.println("Failed.............");
    }

    public void doStart() throws Exception {
        System.out.println("Started............" + name + " " + getTime());
    }

    public void doStop() throws Exception {
        System.out.println("Stopped............" + name);
    }

    static {
        GBeanInfoBuilder infoFactory = GBeanInfoBuilder.createStatic("ClockGBean", ClockGBean.class);
        infoFactory.addAttribute("name", String.class, true);
        infoFactory.addInterface(Clock.class);
        infoFactory.setConstructor(new String[] {"name"});
        GBEAN_INFO = infoFactory.getBeanInfo();
    }

    public static GBeanInfo getGBeanInfo() {
        return GBEAN_INFO;
    }
}

As you can see, Clock is an interface, and ClockGBean is a GBean that implements it. The Clock interface exposes the functionality that is provided by ClockGBean. The doStart(), doStop(), and doFail() methods come from the GBeanLifecycle interface and provide lifecycle callback functionality. The next step is to run Maven to build this module. Go to the command prompt, and change the directory to the WorldClockModule directory. To build the module, run the following command:

mvn clean install

Once the build completes, you will find WorldClockModule-1.0-SNAPSHOT.jar in the WorldClockModule/target directory. Now change the directory to WorldClock, and open the generated pom.xml file. You will need to uncomment the deploymentConfigs for the gbeanDeployer, and add the following module that you want to include in the plugin:

<module>
  <groupId>com.packt.plugins</groupId>
  <artifactId>WorldClockModule</artifactId>
  <version>1.0</version>
  <type>jar</type>
</module>

You will notice that we are using the car-maven-plugin in the pom.xml file. The car-maven-plugin is used to build Apache Geronimo configuration archives without starting the server. The final step is to create the deployment plan for deploying the module that we just created into the Apache Geronimo server. This deployment plan will be used by the car-maven-plugin to create the artifacts that would otherwise be created during deployment to Apache Geronimo.
The deployment plan is shown below:

<module>
  <environment>
    <moduleId>
      <groupId>com.packt.plugins</groupId>
      <artifactId>WorldClock</artifactId>
      <version>1.0</version>
      <type>car</type>
    </moduleId>
    <dependencies/>
    <hidden-classes/>
    <non-overridable-classes/>
    <private-classes/>
  </environment>
  <gbean name="ClockGBean" class="com.packt.plugins.ClockGBean">
    <attribute name="name">ClockGBean</attribute>
  </gbean>
</module>

Once the plan is ready, go to the command prompt and change the directory to the WorldClock directory. Run the following command to build the plugin:

mvn clean install

You will notice that the car-maven-plugin is invoked and a WorldClock-1.0-SNAPSHOT.car file is created in the WorldClock/target directory. We have now completed the steps required to create an Apache Geronimo plugin. In the next section, we will see how to install the plugin in Apache Geronimo.

Installing a plugin

We can install a plugin in three different ways. One way is to use the deploy.bat or deploy.sh script; another is to use the install-plugin command in GShell; and the third is to use the Administration Console to install a plugin from a plugin repository. We will discuss each of these methods:

Using the deploy.bat or deploy.sh script: The deploy.bat or deploy.sh script is found in the <GERONIMO_HOME>/bin directory. It has an install-plugin option, which can be used to install plugins onto the server. The command syntax is shown below:

deploy install-plugin <path to the plugin car file>

Running this command, and passing the path to the plugin .car archive on disk, will result in the plugin being installed onto the Geronimo server. Once the installation has finished, an Installation Complete message will be displayed, and the command will exit.

Using GShell: Invoke the gsh command from the command prompt, after changing the current directory to <GERONIMO_HOME>/bin. This will bring up the GShell prompt. In the GShell prompt, type the following command to install the plugin:

deploy/install-plugin <path to the plugin car file>

Please note that you should escape special characters in the path by using a leading "\" (backslash) before the character. Another way to install plugins that are available in a remote plugin repository is by using the list-plugins command. The syntax of this command is given below:

deploy/list-plugins <URI of the remote repository>

If a remote repository is not specified, then the one configured in Geronimo will be used instead. Once this command has been invoked, the list of available plugins in the remote repository is shown, along with their serial numbers, and you will be prompted to enter a comma-separated list of the serial numbers of the plugins that you want to install.

Using the Administration Console: The Administration Console has a Plugins portlet that can be used to list the plugins available in a repository specified by the user. You can use the Administration Console to select and install the plugins that you want from this list. This portlet also has the capability to export applications or services in your server instance as Geronimo plugins, so that they can be installed on other server instances. See the Plugin portlet section for details of the usage of this portlet.

Available plugins

The web site http://geronimoplugins.com/ hosts Apache Geronimo plugins. It has many plugins listed for Apache Geronimo, including plugins for Quartz, Apache Directory Server, and many other popular software packages.
However, they are not always available for the latest versions of Apache Geronimo. A couple of fairly up-to-date plugins that are available for Apache Geronimo are the Windows Service Wrapper plugin and the Apache Tuscany plugin. The Windows Service Wrapper provides the ability for Apache Geronimo to be registered as a Windows service. The Tuscany plugin is an implementation of the SCA Java EE Integration specification, created by integrating Apache Tuscany as an Apache Geronimo plugin. Both of these plugins are available from the Apache Geronimo web site.

Pluggable Administration Console

Older versions of Apache Geronimo came with a monolithic Administration Console, even though the server was extensible through plugins. This introduced a problem: how do you administer the new plugins that are added to the server? To resolve this problem, the Apache Geronimo developers rewrote the Administration Console to be extensible through console plugins called Administration Console Extensions. In this section, we will look at how to create an Administration Console portlet for the World Clock plugin that we developed in the previous section.

Architecture

The pluggable Administration Console functionality is based on the support provided by the Apache Pluto portlet container for dynamically adding and removing portlets and pages without requiring a restart. Apache Geronimo exposes this functionality through two GBeans, namely the Administration Console Extension (ACE) GBean (org.apache.geronimo.pluto.AdminConsoleExtensionGBean) and the Portal Container Services GBean (org.apache.geronimo.pluto.PortalContainerServicesGBean). The PortalContainerServicesGBean exposes the features of the Pluto container for adding and removing portlets and pages at runtime. The ACE GBean invokes these APIs to add and remove the portlets or pages. The ACE GBean should be specified in the Geronimo-specific deployment plan of your web application or plugin, that is, geronimo-web.xml. The architecture is shown in the following figure:

Developing an Administration Console extension

We will now go through the steps to develop an Administration Console Extension for the World Clock plugin that we created in the previous section. We will use the Maven WAR archetype to create a web application project. To create the project, run the following command from the command-line console:

mvn archetype:create -DgroupId=com.packt.plugins -DartifactId=ClockWebApp -DarchetypeArtifactId=maven-archetype-webapp

This will result in a Maven web project named ClockWebApp being created. A default pom.xml will be created. This will need to be edited to add dependencies on the two modules, as shown in the following code snippet:

<dependency>
  <groupId>org.apache.geronimo.framework</groupId>
  <artifactId>geronimo-kernel</artifactId>
  <version>2.1.4</version>
</dependency>
<dependency>
  <groupId>com.packt.plugins</groupId>
  <artifactId>WorldClockModule</artifactId>
  <version>1.0</version>
</dependency>

We add these dependencies because the portlet that we are going to write will use the classes defined in these two modules.
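To give a feel for where this is heading, the following is a minimal sketch of what the World Clock portlet class itself could look like. It is not the book's actual portlet: the class name is invented, it uses only the standard javax.portlet API, and the lookup of the Clock GBean is deliberately left as a stub, since the wiring differs between Geronimo versions.

package com.packt.plugins;

import java.io.IOException;
import java.io.PrintWriter;

import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

// Hypothetical portlet for the World Clock console extension. How the
// Clock reference is obtained (for example, via a kernel lookup of
// ClockGBean) is left out; see the Geronimo kernel API for details.
public class ClockPortlet extends GenericPortlet {

    private Clock clock; // assumed to be resolved elsewhere

    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        if (clock != null) {
            out.println("<p>Current time: " + clock.getTime() + "</p>");
        } else {
            out.println("<p>Clock service is not available.</p>");
        }
    }
}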
Using Business Rules to Define Decision Points in Oracle SOA Suite: Part 2

Packt
28 Oct 2009
6 min read
To invoke a rule, we need to go through a number of steps. First we must create a session with the rules engine; then we can assert one or more facts, before executing the rule set; and finally we can retrieve the results. We do this in BPEL via a Decision Service, which is essentially a web service wrapper around a rules dictionary that takes care of managing the session with the rules engine, as well as governing which rule set we wish to apply. The wrapper allows a BPEL process to assert one or more facts, execute a rule set against the asserted facts, retrieve the results, and then reset the session. This can be done within a single invocation of an operation, or over multiple operations.

Creating a Rule Engine Connection

Before you can create a Decision Service, you need to create a connection to the repository in which the required rule set is stored. In the Connections panel within JDeveloper, right-click on the Rule Engines folder and select New Rule Engine Connection… as shown in the following screenshot: This will launch the Create Rule Engine Connection dialogue; first you need to specify whether the connection is for a file repository or a WebDAV repository.

Using a file-based repository

If you are using a file repository, all we need to specify is the location of the actual file. Once the connection has been created, we can use it to create a decision service for any of the rule sets contained within that repository. However, it is important to realize that when you create a decision service based on this connection, JDeveloper will take a copy of the repository and copy it into the BPEL project. When you deploy the BPEL process, the copy of this repository will be deployed with the process. This has a number of implications. First, if you want to modify the rule set used by the BPEL process, you need to modify the copy of the repository deployed with the BPEL process. To do this, log onto the BPEL console; from here, click on the BPEL Processes tab, and then select the process that uses the decision service. Next click on the Descriptor tab; this will list all the Partner Links for that process, including the Decision Service (for example LeaveApprovalDecisionServicePL), as shown in the following screenshot: This PartnerLink will have the property decisionServiceDetails, with the link Rule Service Details (circled in the previous screenshot); click on this, and the console will display details of the decision service. From here click on the link Open Rule Author; this will open the Rule Author, complete with a connection to the file-based rule repository.

The second implication is that if you use the same rule set within multiple BPEL processes, each process will have its own copy of the rule set. You can work around this either by wrapping the rule set in a single BPEL process, which is then invoked by any other process wishing to use that rule set, or, once you have deployed the rule set for one process, by accessing it directly via the WSDL for the deployed rule set, for example LeaveApprovalDecisionService.wsdl in the above screenshot.

Using a WebDAV repository

For the reasons mentioned above, it often makes sense to use a WebDAV-based repository to hold your rules. This makes it far simpler to share a rule set between multiple clients, such as BPEL and Java.
Before you can create a Rule Engine Connection to a WebDAV repository, you must first define a WebDAV connection in JDeveloper, which is also created from the Connections palette.

Creating a Decision Service

To create a decision service within our BPEL process, select the Services page from the Component Palette and drag a Decision Service onto your process, as shown in the following screenshot: This will launch the Decision Service Wizard dialogue, as shown: Give the service a name, and then select Execute Ruleset as the invocation pattern. Next click on the flashlight next to Ruleset to launch the Rule Explorer. This allows us to browse any previously defined rule engine connection and select the rule set we wish to invoke via the decision service. For our purposes, select the LeaveApprovalRules as shown below, and click OK. This will bring us back to the Decision Service Wizard, which will be updated to list the facts that we can exchange with the Rule Engine, as shown in the following screenshot:

This dialogue will only list XML Facts that map to global elements in the XML Schema. Here we need to define which facts we want to assert, that is, which facts we pass as inputs to the rule engine from BPEL, and which facts we want to watch, that is, which facts we want returned in the output from the rules engine back to our BPEL process. For our example, we will pass in a single leave request. The rule engine will then apply the rule set we defined earlier and update the status of the request to Approved if appropriate. So we need to specify that we Assert and Watch facts of type LeaveRequest. Finally, you will notice the checkbox Check here to assert all descendants from the top level element; this is important when an element contains nested elements (or facts), to ensure that nested facts are also evaluated by the rules engine. For example, if we had a fact of type LeaveRequestList, which contained a list of multiple LeaveRequests, we would need to check this checkbox to ensure that the rules engine evaluates these nested facts. Once you have specified the facts to Assert and Watch, click Next and complete the dialogue; this will create a decision service partner link within your BPEL process.

Adding a Decide activity

We are now ready to invoke our rule set from within our BPEL process. From the Component Palette, drag a Decide activity onto our BPEL process (at the point before we execute the LeaveRequest Human Task). This will open up the Edit Decide window (shown in the following screenshot). Here we need to specify a Name for the activity and select the Decision Service we want to invoke (that is, the LeaveApprovalDecisionService that we just created). Once we've specified the service, we need to specify how we want to interact with it: for example, whether we want to incrementally assert a number of facts over a period of time before executing the rule set and retrieving the result, or whether we want to assert all the facts, execute the rule set, and get the result within a single invocation. We specify this through the Operation attribute. For our purpose, we just need to assert a single fact and run the rule set, so select the value Assert facts, execute rule set, retrieve results. Once we have selected the operation to invoke on the decision service, the Decision Service Facts will be updated to allow you to assign input and output facts as appropriate.
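For illustration, the LeaveRequest fact asserted to the decision service is simply an XML instance of the schema's global element. The element and field names below are invented to match the scenario; your actual schema defines the exact shape:

<!-- Hypothetical LeaveRequest fact; element and field names are
     illustrative, not taken from the book's actual schema. -->
<leaveRequest xmlns="http://www.example.com/leave">
  <employeeId>1234</employeeId>
  <leaveType>Vacation</leaveType>
  <startDate>2009-11-02</startDate>
  <endDate>2009-11-04</endDate>
  <requestStatus>Requested</requestStatus>
  <!-- the rule set may update requestStatus to Approved -->
</leaveRequest>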
Enabling Spring Faces support

Packt
28 Oct 2009
9 min read
The main focus of the Spring Web Flow framework is to deliver the infrastructure needed to describe the page flow of a web application. The flow itself is a very important element of a web application, because it describes the application's structure, particularly the structure of the implemented business use cases. But besides the flow, which stays in the background, the user of your application is interested in the Graphical User Interface (GUI). Therefore, we need a solution for providing a rich user interface to the users. One framework which offers such components is JavaServer Faces (JSF). With the release of Spring Web Flow 2, an integration module connecting these two technologies, called Spring Faces, has been introduced. This article is not an introduction to the JavaServer Faces technology; it only describes the integration of Spring Web Flow 2 with JSF. If you have never worked with JSF before, please refer to the JSF reference to gain knowledge of the essential concepts of JavaServer Faces.

JavaServer Faces (JSF)—a brief introduction
The JavaServer Faces (JSF) technology is a web application framework with the goal of making the development of user interfaces for a web application (based on Java EE) easier. JSF uses a component-based approach with its own lifecycle model, instead of the request-driven approach used by traditional MVC web frameworks. Version 1.0 of JSF is specified in JSR (Java Specification Request) 127 (http://jcp.org/en/jsr/detail?id=127).

To use the Spring Faces module, you have to add some configuration to your application. The diagram below depicts the individual configuration blocks, which are described in this article. The first step in the configuration is to configure the JSF framework itself. That is done in the deployment descriptor of the web application—web.xml. The servlet has to be loaded at the startup of the application. This is done with the <load-on-startup>1</load-on-startup> element.

<!-- Initialization of the JSF implementation. The Servlet is not used at runtime -->
<servlet>
  <servlet-name>Faces Servlet</servlet-name>
  <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>Faces Servlet</servlet-name>
  <url-pattern>*.faces</url-pattern>
</servlet-mapping>

For working with JavaServer Faces, there are two important classes: javax.faces.webapp.FacesServlet and javax.faces.context.FacesContext. You can think of FacesServlet as the core of each JSF application; sometimes this servlet is called an infrastructure servlet. It is important to mention that each JSF application in one web container has its own instance of the FacesServlet class. This means that an infrastructure servlet cannot be shared between many web applications on the same JEE web container. FacesContext is the data container which encapsulates all information that is necessary for the current request. For the usage of Spring Faces, it is important to know that FacesServlet is only used to instantiate the framework; it is not used further inside Spring Faces. To be able to use the components from the Spring Faces library, you are required to use Facelets instead of JSP. Therefore, we have to configure that mechanism. If you are interested in reading more about the Facelets technology, visit the Facelets homepage on java.net at the following URL: https://facelets.dev.java.net.
A good introduction to the Facelets technology is the article at http://www.ibm.com/developerworks/java/library/j-facelets/, too. The configuration is done inside the deployment descriptor of your web application—web.xml. The following sample shows the configuration inside the mentioned file:

<context-param>
  <param-name>javax.faces.DEFAULT_SUFFIX</param-name>
  <param-value>.xhtml</param-value>
</context-param>

As you can see in the above code, the configuration is done with a context parameter. The name of the parameter is javax.faces.DEFAULT_SUFFIX, and the value for that context parameter is .xhtml.

Inside the Facelets technology

To present the separate views inside a JSF context, you need a specific view handler technology. One of those technologies is the well-known JavaServer Pages (JSP) technology. Facelets are an alternative to JSP in the JSF context. Instead of defining the views in JSP syntax, you use XML; the pages are created using XHTML. The Facelets technology offers the following features:

A template mechanism, similar to the mechanism known from the Tiles framework
The composition of components based on other components
Custom logic tags
Expression functions
The possibility to use plain HTML for your pages, which makes it easy to create the pages and view them directly in a browser, because you don't need an application server between the processes of designing a page
The possibility to create libraries of your components

The following sample shows an XHTML page which uses the component aliasing mechanism of the Facelets technology:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
  <body>
    <form jsfc="h:form">
      <span jsfc="h:outputText" value="Welcome to our page: #{user.name}" disabled="#{empty user}" />
      <input type="text" jsfc="h:inputText" value="#{bean.theProperty}" />
      <input type="submit" jsfc="h:commandButton" value="OK" action="#{bean.doIt}" />
    </form>
  </body>
</html>

The sample code snippet above uses the mentioned expression language of the JSF technology to access the data (for example, the #{user.name} expression accesses the name property of the user instance).

What is component aliasing
One of the mentioned features of the Facelets technology is that it is possible to view a page directly in a browser, without the page running inside a JEE container environment. This is possible through the component aliasing feature. With this feature, you can use normal HTML elements, for example an input element, and additionally refer to the component which is used behind the scenes with the jsfc attribute. An example of this is <input type="text" jsfc="h:inputText" value="#{bean.theProperty}" />. If you open the page directly in a browser, the normal input element is used. If you use it inside your application, the h:inputText element of the component library is used.

The ResourceServlet

One main part of the JSF framework is its GUI components. These components often consist of many files besides the class files. If you use many of these components, the problem of handling these files arises. To solve this problem, files such as JavaScript and CSS (Cascading Style Sheets) can be delivered inside the JAR archive of the component. If you deliver the files inside the JAR file, you can organize a component in one file, which makes the deployment and maintenance of your component library easier.
Regardless of the framework you use, the result is HTML, and the resources inside the HTML pages are referenced as URLs. We therefore need a way to access the resources inside the archive via the HTTP protocol. To solve that problem, there is a servlet with the name ResourceServlet (in the package org.springframework.js.resource). The servlet can deliver the following resources:

Resources which are available inside the web application (for example, CSS files)
Resources inside a JAR archive

The configuration of the servlet inside web.xml is shown below:

<servlet>
  <servlet-name>Resource Servlet</servlet-name>
  <servlet-class>org.springframework.js.resource.ResourceServlet</servlet-class>
  <load-on-startup>0</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>Resource Servlet</servlet-name>
  <url-pattern>/resources/*</url-pattern>
</servlet-mapping>

It is important that you use the correct url-pattern inside servlet-mapping. As you can see in the sample above, you have to use /resources/*. If a component (from the Spring Faces components) does not work, first check that you have the correct mapping for the servlet. All resources in the context of Spring Faces should be retrieved through this servlet. The base URL is /resources.

Internals of the ResourceServlet

The ResourceServlet can only be accessed via a GET request, as it implements only the GET method; it is therefore not possible to serve POST requests. Before we describe the separate steps, we want to show you the complete process, illustrated in the diagram below: For a better understanding, we have chosen an example to explain the mechanism shown in the previous diagram. Let us assume that we have registered the ResourceServlet as mentioned before, and we request a resource via the following sample URL: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css.

How to request more than one resource with one request
First, you can specify the appended parameter. The value of the parameter is the path to the additional resource you want to retrieve. An example is the following URL: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css?appended=/css/test2.css. If you want to specify more than one resource, you can use a comma as the delimiter inside the value of the appended parameter. A simple example of this mechanism is the following URL: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css?appended=/css/test2.css,/css/test3.css. Additionally, it is possible to use the comma delimiter inside the PathInfo, for example: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css,/css/test2.css. It is important to mention that if one of the requested resources is not available, none of the requested resources is delivered.

This mechanism can be used to deliver more than one CSS file in one request. From the development point of view, it can make sense to modularize your CSS files to make them more maintainable. With this concept, the client gets one CSS file instead of many. From the point of view of performance optimization, it is better to have as few requests as possible for rendering a page; therefore, it makes sense to combine the CSS files of a page. Internally, the files are written in the same sequence as they are requested. To understand how a resource is addressed, we separate the sample URL into its specific parts. The example URL is a URL on a local servlet container which has an HTTP connector at port 8080.
See the following diagram for the mentioned separation. The table below describes the five sections of the URL that are shown in the previous diagram.
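As a quick illustration of the comma mechanism described above, a page could pull two stylesheets through the ResourceServlet with a single request. The stylesheet names here are invented:

<!-- Hypothetical stylesheet names; both files are fetched in one
     request because of the comma-separated PathInfo. -->
<link rel="stylesheet" type="text/css"
      href="/flowtrac-web-jsf/resources/css/base.css,/css/layout.css" />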
Understanding Business Activity Monitoring in Oracle SOA Suite

Packt
28 Oct 2009
14 min read
How BAM differs from traditional business intelligence

The Oracle SOA Suite stores the state of all processes in a database, in documented schemas, so why do we need yet another reporting tool to provide insight into our processes and services? In other words, how does BAM differ from traditional BI (Business Intelligence)? In traditional BI, reports are generated and delivered either on a scheduled basis or in response to a user request. Any changes to the information will not be reflected until the next scheduled run, or until a user requests the report to be rerun. BAM is an event-driven reporting tool that generates alerts and reports in real time, based on a continuously changing data stream, some of whose data may not be in the database. As events occur in the Services and Processes the business has defined, they are captured by BAM, and reports and views are updated in real time. Where necessary, these updated reports are delivered to users. This delivery can take several forms. The best known is the dashboard on users' desktops that updates automatically, without any need for the user to refresh the screen. There are also other means of delivering reports to the end user, including sending them via a text message or an email. Traditional reporting tools, such as Oracle Reports and Oracle Discoverer, as well as Oracle's latest Business Intelligence Suite, can be used to satisfy some real-time reporting needs, but they do not provide the event-driven reporting that gives the business a continuously updating view of the current business situation.

Event Driven Architecture

Event Driven Architecture (EDA) is about building business solutions around responsiveness to events. Events may be simple triggers, such as a stock-out event, or they may be more complex triggers, such as the calculations needed to realize that a stock-out will occur in three days. An Event Driven Architecture will often take a number of simple events and then combine them, through a complex event processing sequence, to generate complex events that could not have been raised without the aggregation of several simpler events.

Oracle BAM scenarios

Oracle Business Activity Monitoring is typically used to monitor two distinct types of real-time data. Firstly, it may be used to monitor the overall state of processes in the business. For example, it may be used to track how many auctions are currently running, how many have bids on them, and how many have completed in the last 24 hours (or some other time period). Secondly, it may be used to track Key Performance Indicators, or KPIs, in real time. For example, it may be used to provide a continuously updating dashboard showing a seller the current total value of all the seller's auctions, tracked against an expected target. In the first case, we are interested in how business processes are progressing, and we use BAM to identify bottlenecks and failure points within those processes. Bottlenecks can be identified by too much time being spent on given steps in the process. BAM allows us to compute the time taken between two points in a process, such as the time between order placement and shipping, and provide real-time feedback on those times. Similarly, BAM can be used to track the percentage drop-out rate between steps in a sales process, allowing the business to take appropriate action. In the second case, our interest is in some aggregate number, such as our total liabilities should we win all the auctions we are bidding on.
This requires us to aggregate results from many events, possibly performing some kind of calculation on them, to provide a single KPI that gives the business an indication of how things are going. BAM allows us to continuously update this number in real time on a dashboard, without the need for continual polling. It also allows us to trigger alerts, perhaps through email or SMS, to notify an individual when a threshold is breached. In both cases, the reports delivered can be customized based on the individual receiving the report.

BAM architecture

It may seem odd to have a section on architecture in the middle of an article about how to use BAM effectively, but key to successful utilization of BAM is an understanding of how the different tiers relate to each other.

Logical view

The following diagram represents a logical view of how BAM operates. Events are acquired from one or more sources through event acquisition, and then normalized, correlated, and stored in event storage (generally a memory area in BAM that is backed up to disk). The report cache generates reports based on events in storage and then delivers those reports, together with real-time updates, through the report delivery layer. Event processing is also performed on events in storage, and when defined conditions are met, alerts are delivered through the alert delivery service.

Physical view

To better understand the physical architecture of BAM, we have divided this section into four parts. Let us discuss these in detail.

Capture

The logical view maps onto the physical BAM components shown in the following diagram. Data acquisition in the SOA Suite is handled by sensors in BPEL and the ESB. BAM can also receive events from JMS message queues and access data in databases (useful for historical comparison). For complex data formats, or for other data sources, Oracle Data Integrator (ODI, a separate product from the SOA Suite) is recommended by Oracle. Although potentially less efficient and more work than running ODI, it is also possible to use adapters to acquire data from multiple sources and feed it into BAM through the ESB or BPEL. At the data capture level, we need to think about the data items that we can provide to feed the reports and alerts that the business desires to generate. We must consider the sources of that data and the best way to load it into BAM.

Store

Once the data is captured, it is stored in a normalized form in the Active Data Cache (ADC). This storage facility has the ability to do simple correlation based on fields within the data, and multiple data items received from the acquisition layer may update just a single object in the data cache. For example, the state of a given BPEL process instance may be represented by a single object in the ADC, and all updates to that process state will update that single data item, rather than creating multiple data items.

Process

Reports are run based on user demand. Once a report is run, it will update the user's screen in real time. Where multiple users are accessing the same report, only one instance of the report is maintained by the report server. As events are captured and stored in real time, the report engine continuously monitors them for any changes that need to be made to the currently active reports. When changes are detected that impact active reports, the appropriate report is updated in memory and the updates are sent to the user's screen.
In addition to the event processing required to correctly insert and update items in the ADC, there is also a requirement to monitor items in the ADC for events that require some sort of action to be taken. This is the job of the event processor. It monitors data in the ADC to see if registered thresholds on values have been exceeded, or if certain time-outs have expired. The event processor will often need to perform calculations across multiple data items to do this.

Deliver

Delivery of reports takes place in two ways. First, users request reports to be delivered to their desktop by selecting views within BAM. These reports are delivered as HTML pages within a browser and are updated whenever the underlying data used in the report changes. Second, reports are sent out as a result of events being triggered by the event processing engine. In the latter case, the report may be delivered by email, SMS, or voice messaging using the notifications service. A final option for these event-generated reports is to invoke a web service to take some sort of automated action.

Closing the loop

While monitoring what is happening is all very laudable, it is only of benefit if we actually do something about what we are monitoring. BAM provides real-time monitoring very well, but it also provides the facility to invoke other services in response to undesirable events such as stock-outs. The ability to invoke external services is crucial to the concept of a closed-loop control environment in which, as a result of monitoring, we are able to reach back into the processes and either alter their execution or start new ones. For example, when a stock-out or low-stock event is raised, the message centre could invoke a web service requesting a supplier to send more stock to replenish inventory. Placing this kind of feedback mechanism in BAM allows us to trigger events across multiple applications and locations in a way that may not be possible within a single application or process. For example, in response to a stock-out, instead of requesting our supplier to provide more stock, we may be monitoring stock levels in independent systems and, based on stock levels elsewhere, may redirect stock from one location to another.

BAM platform anomaly

In the 10g SOA Suite, BAM runs only as a Windows application. Unlike the rest of the SOA Suite, it does not run on a JEE application server, and it can only run on the Windows platform. In the next release, 11g, BAM will be provided as a JEE application that can run on a number of application servers and operating systems.

User interface

Development in Oracle BAM is done through a web-based user interface. This user interface gives access to four different applications that allow you to interact with different parts of BAM. These are:

- Active Viewer, for giving access to reports; this relates to the deliver stage for user-requested reports.
- Active Studio, for building reports; this relates to the process stage of creating reports.
- Architect, for setting up both inbound and outbound events. Data elements are defined here as data sources, and alerts are also configured here. This covers the acquire and store stages, as well as the deliver stage for alerts.
- Administrator, for managing users and roles, as well as defining the types of message sources.

We will not examine the applications individually, but will take a task-focused look at how to use them as part of providing some specific reports.
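Before moving on, it is worth making the event processor and the closed-loop idea concrete. The following is a hedged sketch, not BAM's API: the stock-out scenario and threshold check mirror the description above, but every name is invented for illustration, and in a real deployment this behaviour would be configured in the Architect application with the action being an actual web service invocation.

import java.util.function.Consumer;

// Illustrative sketch of the event-processor pattern described above:
// watch a value, and when a registered threshold is crossed, close the
// loop by invoking an external action (standing in for a web service
// call, an email, or an SMS). All names are invented for the example.
public class StockLevelMonitor {

    private final int reorderThreshold;
    private final Consumer<String> replenishmentAction;

    public StockLevelMonitor(int reorderThreshold, Consumer<String> replenishmentAction) {
        this.reorderThreshold = reorderThreshold;
        this.replenishmentAction = replenishmentAction;
    }

    // Invoked whenever the monitored data item changes in the cache.
    public void onStockLevelChanged(String sku, int newLevel) {
        if (newLevel <= reorderThreshold) {
            // In BAM this would be an alert rule firing and delivering
            // a notification or invoking a web service.
            replenishmentAction.accept(sku);
        }
    }

    public static void main(String[] args) {
        StockLevelMonitor monitor = new StockLevelMonitor(
                10, sku -> System.out.println("Requesting replenishment for " + sku));
        monitor.onStockLevelChanged("WIDGET-42", 25); // above threshold: no action
        monitor.onStockLevelChanged("WIDGET-42", 8);  // closes the loop
    }
}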
Monitoring process state

Now that we have examined how BAM is constructed, let us use this knowledge to construct some simple dashboards that track the state of a business process. We will instrument a simple version of an auction process, shown in the following figure. An auction is started, and bids are then placed until the time runs out, at which point the auction is completed. This is modelled in BPEL. The process has three distinct states:

- Started
- Bid received
- Completed

We are interested in the number of auctions in each state, as well as the total value of auctions in progress. To build the dashboard, we need to follow these steps:

1. Define our data within the Active Data Cache.
2. Create sensors in BPEL and map them to data in the ADC.
3. Create suitable reports.
4. Run the reports.

Defining data objects

Data in BAM is stored in data objects. Individual data objects contain the information that is reported in BAM dashboards and may be updated by multiple events. Generally BAM reports against aggregations of objects, but reports can also drill down into individual data objects.

Before defining our data objects, let's group them into an Auction folder so they are easy to find. To do this, we use the BAM Architect application and select Data Objects, which gives us the following screen. We select Create subfolder, give it the name Auction, and then select Create folder to actually create the folder; a confirmation message tells us that the folder was created. Notice that, once created, the folder also appears in the Folders window on the left-hand side of the screen.

Now that we have our folder, we can create a data object. Again we select Data Objects from the drop-down menu. To define the data objects that are to be stored in our Active Data Cache, we open the Auction folder, if it is not already open, and select Create Data Object. (If we don't select the Auction folder here, we can pick it later when filling in the details of the data object.) We need to give our object a unique name within the folder, and we can optionally provide tip text that explains what the object does when the mouse is moved over it in object listings.

Having named our object, we can now create the data fields by selecting Add a field. When adding fields, we need to provide a name and a type, as well as indicating whether they must contain data; the default, Nullable, does not require a field to be populated. We may also optionally indicate whether a field should be public ("available for display") and what, if any, tool tip text it should have. Once all the data fields have been defined, we click Create Data Object to actually create the object as we have defined it. We are then presented with a confirmation screen telling us that the object has been created.

Grouping data into hierarchies

When creating a data object, it is possible to specify Dimensions for the object. A dimension is based on one or more fields within the object; a given field can only participate in one dimension. Dimensions give us the ability to group the object by the fields in the given dimension. If multiple fields are selected for a single dimension, they can be layered into a hierarchy, for example to allow analysis by country, region, and city. In this case, all three elements would be selected into a single dimension, perhaps called geography. Within geography, a hierarchy could be set up with country at the top, region next, and finally city at the bottom, allowing drill-down to occur in views. Just as a data object can have multiple dimensions, a dimension can also have multiple hierarchies.
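Although BAM data objects are defined through the web UI rather than in code, it can help to picture the auction object we are building as a plain class. The sketch below is purely illustrative — the field names anticipate the population table in the next section, and the Java representation is not something BAM generates.

import java.time.LocalDateTime;

// Purely illustrative: the Auction data object sketched as a Java class.
// In BAM these fields are defined in the Architect UI, and most are left
// Nullable because different messages populate different subsets of them.
public class Auction {
    String auctionId;        // inserted when the auction starts
    String state;            // Started, Bid received, Completed
    Double highestBid;       // inserted at the start, updated on every bid
    Double reserve;          // inserted at the start
    LocalDateTime expires;   // inserted at the start
    String seller;           // inserted at the start
    String highestBidder;    // only populated once a bid arrives
}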
A digression on populating data object fields

In the previous discussion, we mentioned the Nullable attribute that can be attached to fields. This is very important, as we do not expect to populate all, or even most, of the fields in a data object at one moment in time. Do not confuse data objects with the low-level events that are used to populate them: the two do not have a one-to-one correspondence. In our auction example there will be just one auction object for every auction, but there will be at least two, and usually more, messages for every auction — one message for the auction starting, another for the auction completing, and additional messages for each bid received. These messages all populate, or in some cases overwrite, different parts of the auction data object. The following table shows how the three messages populate different parts of the data object.

Message          | Auction ID | State    | Highest bid | Reserve  | Expires  | Seller   | Highest bidder
Auction Started  | Inserted   | Inserted | Inserted    | Inserted | Inserted | Inserted |
Bid Received     |            | Updated  | Updated     |          |          |          | Updated
Auction Finished |            | Updated  |             |          |          |          |

Summary

In this article we have explored how Business Activity Monitoring differs from, and is complementary to, more traditional Business Intelligence solutions such as Oracle Reports and Business Objects. We have explored how BAM allows the business to monitor the state of business targets and Key Performance Indicators, such as the current most popular products in a retail environment, or the current time taken to serve customers in a service environment.

Using Business Rules to Define Decision Points in Oracle SOA Suite: Part 1

Packt
28 Oct 2009
11 min read
The advantage of separating out decision points as external rules is that we not only ensure that each rule is used in a consistent fashion, but also make each rule simpler and quicker to modify: we only have to modify a rule once, and we can do so with almost immediate effect, thus increasing the agility of our solution.

Business Rule concepts

Before we implement our first rule, let's briefly introduce the key components that make up a Business Rule. These are:

- Facts: Represent the data or business objects that rules are applied to.
- Rules: A rule consists of two parts: an IF part, which consists of one or more tests to be applied to fact(s), and a THEN part, which lists the actions to be carried out should the test evaluate to true.
- Rule Set: As the name implies, a set of one or more related rules that are designed to work together.
- Dictionary: A dictionary is the container of all components that make up a business rule; it holds all the facts, rule sets, and rules for a business rule. In addition, a dictionary may also contain functions, variables, and constraints. We will introduce these in more detail later in this article.

To execute a business rule, you submit one or more facts to the rules engine. It applies the rules to the facts; that is, each fact is tested against the IF part of a rule, and if the test evaluates to true, the rule's specified actions are performed for that fact. This may result in the creation of new facts or the modification of existing facts (which may in turn result in further rule evaluation).

Leave approval rule

To begin with, we will write a simple rule to automatically approve a leave request that is of type Vacation and only one day in duration. A pretty trivial example, but once we've done this, we will look at how to extend the rule to handle more complex cases.

Using the Rule Author

In SOA Suite 10.1.3 you use the Rule Author, a browser-based interface for defining your business rules. To launch the Rule Author within your browser, go to the following URL:

http://<host name>:<port number>/ruleauthor/

This will bring up the Rule Author Log In screen. Here you need to log in as a user that belongs to the rule-administrators role. You can either log in as the user oc4jadmin (default password Welcome1), which automatically belongs to this group, or define your own user.
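Before walking through the tooling, it may help to see the leave-approval rule we intend to build sketched in plain Java. This is only an intuition aid: the real rule is declared in the Rule Author against the generated facts, not coded by hand, and the LeaveRequest fields below are assumptions based on the leaveRequest schema introduced in the next section.

import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

// Intuition aid only: the IF/THEN shape of the leave-approval rule in
// plain Java. The accessors are assumed from the schema shown later;
// the actual rule will be declared in the Rule Author, not written here.
public class LeaveApprovalSketch {

    static class LeaveRequest {
        String leaveType;
        LocalDate startDate;
        LocalDate endDate;
        String requestStatus;
    }

    static void applyRule(LeaveRequest fact) {
        long days = ChronoUnit.DAYS.between(fact.startDate, fact.endDate) + 1;
        // IF part: a one-day vacation request
        if ("Vacation".equals(fact.leaveType) && days == 1) {
            // THEN part: modify the fact
            fact.requestStatus = "approved";
        }
    }

    public static void main(String[] args) {
        LeaveRequest request = new LeaveRequest();
        request.leaveType = "Vacation";
        request.startDate = LocalDate.of(2020, 1, 1);
        request.endDate = LocalDate.of(2020, 1, 1);
        applyRule(request);
        System.out.println(request.requestStatus); // approved
    }
}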
Creating a Rule Repository

Within Oracle Business Rules, all of our definitions (that is, facts, constraints, variables, and functions) and rule sets are defined within a dictionary. A dictionary is held within a repository; a repository can contain multiple dictionaries, and also multiple versions of a dictionary. So, before we can write any rules, we need to either connect to an existing repository or create a new one. Oracle Business Rules supports two types of repository — file based and WebDAV. For simplicity we will use a file-based repository, though in production you will typically want a WebDAV-based repository, as this makes it simpler to share rules between multiple BPEL processes.

WebDAV is short for Web-based Distributed Authoring and Versioning. It is an extension to HTTP that allows users to collaboratively edit and manage files (that is, business rules in our case) over the Web.

To create a file-based repository, click on the Repository tab within the Rule Author; this will display the Repository Connect screen, as shown in the following screenshot. From here we can either connect to an existing repository (WebDAV or file based) or create and connect to a new file-based repository. For our purposes, select a Repository Type of File, specify the full path name of where you want to create the repository, and then click Create.

To use a WebDAV repository, you will first need to create it externally from the Rule Author. Details on how to do this can be found in Appendix B of the Oracle Business Rules User Guide (http://download.oracle.com/docs/cd/B25221_04/web.1013/b15986/toc.htm). From a development perspective, it can often be more convenient to develop your initial business rules in a file repository. Once complete, you can export the rules from the file repository and import them into a WebDAV repository.

Creating a dictionary

Once we have connected to a repository, the next step is to create a dictionary. Click on the Create tab, circled in the following screenshot; this will bring up the Create Dictionary screen. Enter a New Dictionary Name (for example, LeaveApproval) and click Create. This will create and load the dictionary so that it's ready to use. Once you have created a dictionary, the next time you connect to the repository you will select the Load tab (next to the Create tab) to load it.

Defining facts

Before we can define any rules, we first need to define the facts that the rules will be applied to. Click on the Definitions tab; this will bring up the page that summarizes all the facts defined within the current dictionary. You will see from this that the rule engine supports three types of facts: Java Facts, XML Facts, and RL Facts. The type of fact you want to use depends on the context in which you will be using the rules engine. For example, if you are calling the rule engine from Java, you would work with Java Facts, as this provides a more integrated way of combining the two components. As we are using the rule engine with BPEL, it makes sense to use XML Facts.

Creating XML Facts

The Rule Author uses XML Schemas to generate JAXB 1.0 classes, which are then imported to generate the corresponding XML Facts. For our example we will use the Leave Request schema, shown as follows for convenience:

<?xml version="1.0" encoding="windows-1252"?>
<xsd:schema targetNamespace="http://schemas.packtpub.com/LeaveRequest"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            elementFormDefault="qualified">
  <xsd:element name="leaveRequest" type="tLeaveRequest"/>
  <xsd:complexType name="tLeaveRequest">
    <xsd:sequence>
      <xsd:element name="employeeId" type="xsd:string"/>
      <xsd:element name="fullName" type="xsd:string"/>
      <xsd:element name="startDate" type="xsd:date"/>
      <xsd:element name="endDate" type="xsd:date"/>
      <xsd:element name="leaveType" type="xsd:string"/>
      <xsd:element name="leaveReason" type="xsd:string"/>
      <xsd:element name="requestStatus" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:schema>

Using JAXB, particularly in conjunction with BPEL, places a number of constraints on how we define our XML Schemas, including:

- When defining rules, the Rule Author can only work with globally defined types. This is because it is unable to introspect the properties (that is, the attributes and elements) of global elements.
- Within BPEL, you can only define variables based on globally defined elements.
The net result is that any facts we want to pass from BPEL to the rules engine (or vice versa) must be defined as global elements for BPEL, and have a corresponding global type definition so that we can define rules against them. The simplest way to achieve this is to define a global type (for example, tLeaveRequest in the above schema) and then define a corresponding global element based on that type (for example, leaveRequest in the above schema). Even though it is perfectly acceptable in XML Schemas to use the same name for both elements and types, it presents problems for JAXB; hence the approach taken above, where we have prefixed every type definition with t, as in tLeaveRequest. Fortunately, this approach corresponds to best practice for XML Schema design.

The final point to be aware of is that, when creating XML facts, the JAXB processor maps the type xsd:decimal to java.lang.BigDecimal and xsd:integer to java.lang.BigInteger. This means you can't use the standard operators (for example >, >=, <=, and <) within your rules to compare properties of these types. To simplify your rules, use xsd:double in place of xsd:decimal and xsd:int in place of xsd:integer within your XML Schemas.

To generate XML facts, from the XML Fact Summary screen (shown previously), click Create; this will display the XML Schema Selector page. Here we need to specify the location of the XML Schema, which can either be an absolute path to an xsd file containing the schema or a URL. Next we need to specify a temporary JAXB Class Directory in which the generated JAXB classes are to be created. Finally, for the Target Package Name we can optionally specify a unique name that will be used as the Java package name for the generated classes. If we leave this blank, the package name will be automatically generated from the target namespace of the XML Schema, using the JAXB XML-to-Java mapping rules. For example, our leave request schema has a target namespace of http://schemas.packtpub.com/LeaveRequest; this results in a package name of com.packtpub.schemas.leaverequest.

Next, click on Add Schema; this will cause the Rule Author to generate the JAXB classes for our schema in the specified directory and update the XML Fact Summary screen to show details of the generated classes. Expand the class navigation tree until you can see the list of all the generated classes, as shown in the following screenshot. Select the top-level node (that is, com) to specify that we want to import all the generated classes. We need to import the TLeaveRequest class, as this is the one we will use to implement rules, and the LeaveRequest class, as we need it to pass facts from BPEL to the rules engine. The ObjectFactory class is optional, but we will need it if we want to generate new LeaveRequest facts within our rule sets. Although we don't need to do this at the moment, it makes sense to import it now in case we need it in the future.

Once we have selected the classes to be imported, click Import (circled in the previous screenshot) to load them into the dictionary. The Rule Author will display a message to confirm that the classes have been successfully imported. If you check the list of generated JAXB classes, you will see that the imported classes are shown in bold.
In the process of importing your facts, the Rule Author assigns a default alias to each fact and to every property that makes up a fact, where a property corresponds to either an element or an attribute in the XML Schema.

Using aliases

Oracle Business Rules allows you to specify your own aliases for facts and properties, in order to define more business-friendly names that can then be used when writing rules. For XML facts, if you have followed standard naming conventions when defining your XML Schemas, the default aliases are typically clear enough; defining your own aliases can actually cause more confusion unless they are applied consistently across all facts.

Hiding facts and properties

The Rule Author lets you hide facts and properties so that they don't appear in the drop-downs within the Rule Author. For facts with a large number of properties, hiding some of them can be worthwhile, as it simplifies the creation of rules. Another obvious use might be to hide all the facts based on elements, since we won't be implementing any rules directly against them. However, any facts you hide will also be hidden from BPEL, so you won't be able to pass facts of those types from BPEL to the rules engine (or vice versa). In reality, the only fact you will typically want to hide is the ObjectFactory (as you will have one of these per imported XML Schema).

Saving the rule dictionary

As you define your business rules, it makes sense to save your work at regular intervals. To save the dictionary, click on the Save Dictionary link in the top right-hand corner of the Rule Author page. This will bring up the Save Dictionary page. Here, either click on the Save button to update the current version of the dictionary with your changes or, if you want to save the dictionary as a new version or under a new dictionary name, click on the Save As link and amend the dictionary name and version as appropriate.
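Returning to the earlier caveat about xsd:decimal and xsd:integer: the reason the standard comparison operators fail is visible in plain Java, where BigDecimal values must be compared with compareTo() rather than with >. The snippet below is ordinary Java, not rule-engine syntax, and is included only to make the caveat concrete.

import java.math.BigDecimal;

// Plain Java illustration of why xsd:decimal (mapped to BigDecimal by
// JAXB) cannot be compared with >, >=, <, or <= in rule tests:
// BigDecimal is an object, not a primitive, so comparisons must go
// through compareTo().
public class DecimalComparison {
    public static void main(String[] args) {
        BigDecimal price = new BigDecimal("99.99");
        BigDecimal limit = new BigDecimal("50.00");

        // if (price > limit) { ... }   // does not compile: no > for objects
        if (price.compareTo(limit) > 0) {
            System.out.println("price exceeds limit");
        }

        double primitivePrice = 99.99; // xsd:double maps to a primitive
        if (primitivePrice > 50.00) {  // so the familiar operators work
            System.out.println("primitive comparison works directly");
        }
    }
}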
Using Spring Faces

Packt
28 Oct 2009
13 min read
Using Spring Faces Module

The following section shows you how to use the Spring Faces module.

Overview of all tags of the Spring Faces tag library

The Spring Faces module comes with a set of components, which are provided through a tag library. If you want more detailed information about the tag library, look at the following files inside the Spring Faces source distribution:

- spring-faces/src/main/resources/META-INF/spring-faces.tld
- spring-faces/src/main/resources/META-INF/springfaces.taglib.xml
- spring-faces/src/main/resources/META-INF/faces-config.xml

If you want to see the source code of a specific tag, refer to faces-config.xml and springfaces.taglib.xml to get the name of the component's class. The spring-faces.tld file can be used for documentation purposes. The following list gives a short description of the available tags from the Spring Faces component library:

- includeStyles: Renders the CSS stylesheets that are essential for the Spring Faces components. Using this tag in the head section is recommended for performance optimization; if it isn't included, the necessary stylesheets are rendered on the first usage of a component. If you are using a template for your pages, it is a good pattern to include the tag in the header of that template. For more information about performance optimization, refer to the Yahoo performance guidelines, available at http://developer.yahoo.com/performance. Some tags of the Spring Faces tag library (includeStyles, resource, and resourceGroup) implement patterns to optimize performance on the client side.
- resource: Loads and renders a resource with ResourceServlet. Prefer this tag to directly including a CSS stylesheet or a JavaScript file, because ResourceServlet sets the proper response headers for caching the resource file.
- resourceGroup: Combines all resources that are placed inside the tag; it is important that all of them are of the same type. The tag uses ResourceServlet with the appended parameter to create one resource file, which is sent to the client.
- clientTextValidator: Validates a child inputText element. For the validation, the regExp attribute lets you provide a regular expression. The validation is done on the client side.
- clientNumberValidator: Validates a child inputText element. The provided validation methods let you check whether the text is a number, and check properties of the number, such as its range. The validation is done on the client side.
- clientCurrencyValidator: Validates a child inputText element; it should be used if you want to validate a currency. The validation is done on the client side.
- clientDateValidator: Validates a child inputText element; it should be used to validate a date. The field displays a pop-up calendar. The validation is done on the client side.
- validateAllOnClick: Executes all client-side validation on the click of a specific element. That can be useful for a submit button.
- commandButton: Executes an arbitrary method on an instance. The method itself has to be a public method with no parameters and a java.lang.Object instance as the return value.
- commandLink: Renders an AJAX link. With the processIds attribute, you can provide the IDs of the components that should be processed when the link is activated.
- ajaxEvent: Creates a JavaScript event listener. This tag should only be used if you can ensure that the client has JavaScript enabled.
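As a quick illustration of the commandButton contract just described — a public method with no parameters returning java.lang.Object — a candidate backing bean might look like the hedged sketch below. The class and method names are invented for the example and are not part of the article's sample application.

// Hedged sketch of a method matching the sf:commandButton contract noted
// above: public, no parameters, java.lang.Object return value.
public class IssueActions {

    // Could be referenced from a view as action="#{issueActions.store}",
    // assuming the bean is registered under the name issueActions.
    public Object store() {
        System.out.println("storing the issue ...");
        return null; // the returned outcome may drive navigation; null stays put
    }
}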
A complete example

Having covered the main configuration elements and described the Spring Faces components, the following section shows a complete example, to give you a good understanding of how to work with the Spring Faces module in your own web application. The following diagram shows the screen of the sample application. With this screen, it is possible to create a new issue and save it to the bug database. It is not part of this example to describe the bug database, or how to work with databases in general; the sample uses the model classes.

The screen has three required fields:

- Name: The name of the issue
- Description: A short description of the issue
- Fix until: The fixing date for the issue

Additionally, there are the following two buttons:

- store: With a click on the store button, the system tries to create a new issue containing the provided information.
- cancel: With a click on the cancel button, the system discards the entered data and navigates to the overview page.

The first step is to create the implementation of that input page. That implementation and its description are shown in the section below.

Creating the input page

As described above, we use Facelets as the view handler technology. Therefore, the pages have to be defined in XHTML, with .xhtml as the file extension. The name of the input page will be add.xhtml. For the description, we separate the page into the following five parts:

- Header
- Name
- Description
- Fix until
- The Buttons

This separation is shown in the diagram below.

The Header part

The first step in the header is to declare that we have an XHTML page. This is done through the definition of the correct doctype:

<!DOCTYPE composition PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

An XHTML page is described as an XML page. If you want to use special tags inside the XML page, you have to define a namespace for them. For a page with Facelets and Spring Faces, we have to define more than one namespace. The following list describes those namespaces:

- http://www.w3.org/1999/xhtml: The namespace for XHTML itself.
- http://java.sun.com/jsf/facelets: Facelets defines some components (tags); these components are available under this namespace.
- http://java.sun.com/jsf/html: The user interface components of JSF are available under this namespace.
- http://java.sun.com/jsf/core: The core tags of JSF, for example converters, can be accessed under this namespace.
- http://www.springframework.org/tags/faces: The namespace for the Spring Faces component library.

For the header definition, we use the composition component from the Facelets component library. With that component, it is possible to define a template for the layout. This is very similar to the previously mentioned Tiles framework.
The following code snippet shows the second part (after the doctype) of the header definition. (A description and overview of the standard JSF tags is available at http://developers.sun.com/jscreator/archive/learning/bookshelf/pearson/corejsf/standard_jsf_tags.pdf.)

<ui:composition template="/WEB-INF/layouts/standard.xhtml">

With the template attribute, we refer to the layout template used; in our example, /WEB-INF/layouts/standard.xhtml. The following code shows the complete layout file, standard.xhtml. This layout file is also described with the Facelets technology, so it is possible to use Facelets components inside that page too. Additionally, we use Spring Faces components inside the layout page. (The namespace declarations on the html element were lost in reproduction; they are restored here from the namespace list above.)

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets"
      xmlns:f="http://java.sun.com/jsf/core"
      xmlns:sf="http://www.springframework.org/tags/faces">
<f:view contentType="text/html">
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
    <title>flow.tracR</title>
    <sf:includeStyles/>
    <sf:resourceGroup>
      <sf:resource path="/css-framework/css/tools.css"/>
      <sf:resource path="/css-framework/css/typo.css"/>
      <sf:resource path="/css-framework/css/forms.css"/>
      <sf:resource path="/css-framework/css/layout-navtop-localleft.css"/>
      <sf:resource path="/css-framework/css/layout.css"/>
    </sf:resourceGroup>
    <sf:resource path="/css/issue.css"/>
    <ui:insert name="headIncludes"/>
  </head>
  <body class="tundra spring">
    <div id="page">
      <div id="content">
        <div id="main">
          <ui:insert name="content"/>
        </div>
      </div>
    </div>
  </body>
</f:view>
</html>

The Name part

The first element in the input page is the section for the input of the name. For the description of that section, we use elements from the JSF component library, accessed with the h prefix defined in the header section. For the general layout, we use standard HTML elements, such as the div element. The definition is shown below:

<div class="field">
  <div class="label">
    <h:outputLabel for="name">Name:</h:outputLabel>
  </div>
  <div class="input">
    <h:inputText id="name" value="#{issue.name}" />
  </div>
</div>

The Description part

The next element on the page is the Description element. The definition is very similar to the Name part; in addition, we set the required attribute to true on the h:inputText element. This attribute tells the JSF system that the issue.description value is mandatory. If the user does not enter a value, the validation fails.

<div class="field">
  <div class="label">
    <h:outputLabel for="description">Description:</h:outputLabel>
  </div>
  <div class="input">
    <h:inputText id="description" value="#{issue.description}" required="true"/>
  </div>
</div>

The Fix until part

The last input section is the Fix until part. This kind of field is very common in web applications, because there is often a need to input a date. Internally, a date is usually represented by an instance of the java.util.Date class, so the text entered by the user has to be validated and converted in order to obtain a valid instance. To help the user with the input, a pop-up calendar is often used. The Spring Faces library offers a component that shows a calendar and adds client-side validation. The complete definition of the Fix until part is shown below. The name of the component is clientDateValidator, and it is used with sf as the prefix.
This prefix is defined in the namespace declarations in the header of the add.xhtml page, shown earlier.

<div class="field">
  <div class="label">
    <h:outputLabel for="checkinDate">Fix until:</h:outputLabel>
  </div>
  <div class="input">
    <sf:clientDateValidator required="true"
        invalidMessage="please insert a correct fixing date. format: dd.MM.yyyy"
        promptMessage="Format: dd.MM.yyyy, example: 01.01.2020">
      <h:inputText id="checkinDate" value="#{issue.fixingDate}" required="true">
        <f:convertDateTime pattern="dd.MM.yyyy" timeZone="GMT"/>
      </h:inputText>
    </sf:clientDateValidator>
  </div>
</div>

In the example above, we use the promptMessage attribute to help the user with the format. The message is shown when the user places the cursor in the input element. If the validation fails, the message from the invalidMessage attribute is used to show the user that incorrectly formatted input has been entered.

The Buttons part

The last elements on the page are the buttons. For these, the commandButton component from Spring Faces is used. The definition is shown below:

<div class="buttonGroup">
  <sf:commandButton id="store" action="store" processIds="*" value="store"/>
  <sf:commandButton id="cancel" action="cancel" processIds="*" value="cancel"/>
</div>

It is worth mentioning that JavaServer Faces binds an action to the action method of a backing bean, whereas Spring Web Flow binds the action to events.

Handling of errors

It is possible to have validation on the client side or on the server side. For the Fix until element, we use the previously mentioned clientDateValidator component of the Spring Faces library. The following figure shows how this component presents the error message to the user.

Reflecting the actions of the buttons in the flow definition file

Clicking a button executes an action that results in a transition. The name of the action is expressed in the action attribute of the button component, which is implemented as commandButton from the Spring Faces component library. If you click on the store button, the validation is executed first. If you want to prevent that validation, you have to use the bind attribute and set it to false. This is done for the cancel button, because in this state it is necessary to ignore the inputs.

<view-state id="add" model="issue">
  <transition on="store" to="issueStore">
    <evaluate expression="persistenceContext.persist(issue)"/>
  </transition>
  <transition on="cancel" to="cancelInput" bind="false">
  </transition>
</view-state>

Showing the results

To test the implemented feature, we implement an overview page. We have the choice of implementing the page as a flow with one view state, or as a simple JSF view. Independent of that choice, we will use Facelets to implement the overview page, because Facelets does not depend on the Spring Web Flow framework; it is a feature of JSF.

The example uses a table to show the entered issues. If no issue has been entered, a message is shown to the user. The figure below shows this table with one row of data. The Id is rendered as a URL; if you click on this link, the input page is shown with the data of that issue, and we execute an update instead of an insert — the indicator for this is a valid issue ID. If no data is available, the No Issues in the database message is shown to the user. This is done with a condition on the outputText component; see the code snippet below:

<h:outputText id="noIssuesText" value="No Issues in the database"
    rendered="#{empty issueList}"/>

For the table, we use the dataTable component.
<h:dataTable id="issues" value="#{issueList}" var="issue"
    rendered="#{not empty issueList}" border="1">
  <h:column>
    <f:facet name="header">Id</f:facet>
    <a href="add?id=#{issue.id}">#{issue.id}</a>
  </h:column>
  <h:column>
    <f:facet name="header">Name</f:facet>
    #{issue.name}
  </h:column>
  <h:column>
    <f:facet name="header">fix until</f:facet>
    #{issue.fixingDate}
  </h:column>
  <h:column>
    <f:facet name="header">creation date</f:facet>
    #{issue.creationDate}
  </h:column>
  <h:column>
    <f:facet name="header">last modified</f:facet>
    #{issue.lastModified}
  </h:column>
</h:dataTable>
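The expressions #{issue.name}, #{issue.fixingDate}, and the columns of the dataTable above imply a simple Issue model class. Its exact definition is not shown in this article, so the following JavaBean is an assumption, reconstructed from the bindings used in the views; the real class in the sample application may differ.

import java.util.Date;

// Assumed shape of the Issue model class, reconstructed from the EL
// bindings in the views above (#{issue.name}, #{issue.description},
// #{issue.fixingDate}, and the dataTable columns).
public class Issue {
    private Long id;
    private String name;
    private String description;
    private Date fixingDate;
    private Date creationDate;
    private Date lastModified;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getDescription() { return description; }
    public void setDescription(String description) { this.description = description; }
    public Date getFixingDate() { return fixingDate; }
    public void setFixingDate(Date fixingDate) { this.fixingDate = fixingDate; }
    public Date getCreationDate() { return creationDate; }
    public void setCreationDate(Date creationDate) { this.creationDate = creationDate; }
    public Date getLastModified() { return lastModified; }
    public void setLastModified(Date lastModified) { this.lastModified = lastModified; }
}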

Working with XML in Flex 3 and Java: Part 1

Packt
28 Oct 2009
10 min read
In today's world, many server-side applications make use of XML to structure data, because XML is a standard way of representing structured information. It is easy to work with, and people can easily read, write, and understand it without any specialized skills. The XML standard is widely accepted and used in server communications such as Simple Object Access Protocol (SOAP) based web services. XML stands for eXtensible Markup Language; the XML standard specification is available at http://www.w3.org/XML/.

Adobe Flex provides a standardized, ECMAScript-based set of API classes and functionality for working with XML data. This collection of classes and functionality is known as E4X. You can use these classes to build sophisticated Rich Internet Applications using XML data.

XML basics

XML is a standard way to represent categorized data in a tree structure, similar to HTML documents. XML is written in plain-text format, and hence it is very easy to read, write, and manipulate its data. A typical XML document looks like this:

<book>
  <title>Flex 3 with Java</title>
  <author>Satish Kore</author>
  <publisher>Packt Publishing</publisher>
  <pages>300</pages>
</book>

XML data is generally known as an XML document, and it is represented by tags wrapped in angle brackets (< >). These tags are also known as XML elements. Every XML document starts with a single top-level element, known as the root element. Each element is delimited by a pair of tags, known as the opening tag and the closing tag. In the previous XML document, <book> is the opening tag and </book> is the closing tag. If an element contains no content, it can be written as an empty statement (also called a self-closing statement); for example, <book/> is as good as writing <book></book>. XML documents can also be more complex, with nested tags and attributes, as shown in the following example:

<book ISBN="978-1-847195-34-0">
  <title>Flex 3 with Java</title>
  <author country="India" numberOfBooks="1">
    <firstName>Satish</firstName>
    <lastName>Kore</lastName>
  </author>
  <publisher country="United Kingdom">Packt Publishing</publisher>
  <pages>300</pages>
</book>

Notice that the above XML document contains nested tags, such as <firstName> and <lastName>, under the <author> tag. ISBN, country, and numberOfBooks, which you can see inside the tags, are called XML attributes. To learn more about XML, visit the W3Schools XML Tutorial at http://w3schools.com/xml/.

Understanding E4X

Flex provides a set of API classes and functionality based on the ECMAScript for XML (E4X) standard for working with XML data. The E4X approach provides a simple and straightforward way to work with XML-structured data, and it also reduces the complexity of parsing XML documents. Earlier versions of Flex did not have a direct way of working with XML data. E4X provides an alternative to the DOM (Document Object Model) interface that uses a simpler syntax for reading and querying XML documents. More information about other E4X implementations can be found at http://en.wikipedia.org/wiki/E4X.

The key features of E4X include:

- It is based on a standard scripting-language specification, known as ECMAScript for XML. Flex implements this specification in the form of API classes and functionality that simplify XML data processing.
- It provides easy, well-known operators, such as dot (.) and @, for working with XML objects.
The @ and dot (.) operators can be used not only to read data, but also to assign data to XML nodes, attributes, and so on. The E4X functionality is much easier and more intuitive than working with DOM documents to access XML data. ActionScript 3.0 includes the following E4X classes: XML, XMLList, QName, and Namespace. These classes are designed to simplify XML data processing in Flex applications. Let's see one quick example.

First, define a variable of type XML and create a sample XML document. In this example, we assign it as a literal; in the real world, however, your application might load XML data from external sources, such as a web service or an RSS feed.

private var myBooks:XML =
  <books publisher="Packt Pub">
    <book title="Book1" price="99.99">
      <author>Author1</author>
    </book>
    <book title="Book2" price="59.99">
      <author>Author2</author>
    </book>
    <book title="Book3" price="49.99">
      <author>Author3</author>
    </book>
  </books>;

Now, let's look at some of the E4X approaches to reading and parsing the above XML in our application. E4X uses many operators to simplify access to XML nodes and attributes, such as the dot (.) and the attribute identifier (@):

private function traceXML():void {
  trace(myBooks.book.(@price < 50.99).@title); //Output: Book3
  trace(myBooks.book[1].author); //Output: Author2
  trace(myBooks.@publisher); //Output: Packt Pub
  //Following for loop outputs prices of all books
  for each(var price in myBooks..@price) {
    trace(price);
  }
}

In the code above, the first trace statement uses a conditional expression to extract the title of the book(s) whose price is below $50.99. Imagine how much code would have been needed to parse the XML and do this manually. In the second trace, we access a book node by index and print its author node's value. In the third trace, we simply print the root node's publisher attribute value. Finally, we use a for loop to traverse the prices of all the books and print each price.

The following is a list of the XML operators:

- @ (attribute identifier): Identifies attributes of an XML or XMLList object.
- { } (braces, XML): Evaluates an expression that is used in an XML or XMLList initializer.
- [ ] (brackets, XML): Accesses a property or attribute of an XML or XMLList object, for example myBooks.book["@title"].
- + (concatenation, XMLList): Concatenates (combines) XML or XMLList values into an XMLList object.
- += (concatenation assignment, XMLList): Concatenates XML or XMLList values and assigns the result to the left-hand operand.

The XML object

The XML class represents an XML element, attribute, comment, processing instruction, or text element. We used the XML class in the example above to initialize the myBooks variable with an XML literal. The XML class is an ActionScript 3.0 core class, so you don't need to import a package to use it. It provides many properties and methods to simplify XML processing, such as the ignoreWhitespace and ignoreComments properties, used for ignoring whitespace and comments in XML documents, respectively. You can use the prependChild() and appendChild() methods to prepend and append XML nodes to an existing XML document, and methods such as toString() and toXMLString() to convert XML to a string.
An example of an XML object:

private var myBooks:XML =
  <books publisher="Packt Pub">
    <book title="Book1" price="99.99">
      <author>Author1</author>
    </book>
    <book title="Book2" price="120.00">
      <author>Author2</author>
    </book>
  </books>;

In the above example, we have created an XML object by assigning an XML literal to it. You can also create an XML object from a string that contains XML data, as shown in the following example (the attribute quotes in the string were garbled in reproduction and are shown here as single quotes so the literal compiles):

private var str:String = "<books publisher='Packt Pub'>" +
  "<book title='Book1' price='99.99'><author>Author1</author></book>" +
  "<book title='Book2' price='59.99'><author>Author2</author></book>" +
  "</books>";
private var myBooks:XML = new XML(str);
trace(myBooks.toXMLString()); //outputs formatted xml as string

If the XML data in the string is not well formed (for example, a closing tag is missing), you will see a runtime error. You can also use binding expressions in an XML literal to pull content from a variable. For example, you could bind a node's title attribute to a variable value, as in the following lines:

private var title:String = "Book1";
var aBook:XML = <book title={title}/>;

The XMLList object

As the class name indicates, XMLList contains one or more XML objects. It can contain full XML documents, XML fragments, or the results of an XML query. You can typically use all of the XML class's methods and properties on the objects of an XMLList. To access the objects in an XMLList collection, iterate over it using a for each… statement.

XMLList provides the following methods to work with its objects:

- child(): Returns a specified child of every XML object.
- children(): Returns the specified children of every XML object.
- descendants(): Returns all descendants of an XML object.
- elements(): Calls the elements() method of each XML object in the XMLList and returns all elements of the XML object.
- parent(): Returns the parent of the XMLList object, if all items in the XMLList object have the same parent.
- attribute(attributeName): Calls the attribute() method of each XML object and returns an XMLList object of the results; the results match the given attributeName parameter.
- attributes(): Calls the attributes() method of each XML object and returns an XMLList object of attributes for each XML object.
- contains(): Checks whether the specified XML object is present in the XMLList.
- copy(): Returns a copy of the given XMLList object.
- length(): Returns the number of properties in the XMLList object.
- valueOf(): Returns the XMLList object.

For details on these methods, see the ActionScript 3.0 Language Reference. Let's return to the XMLList example:

var xmlList:XMLList = myBooks.book.(@price == 99.99);
var item:XML;
for each(item in xmlList) {
  trace("item:" + item.toXMLString());
}

Output:

item:<book title="Book1" price="99.99">
  <author>Author1</author>
</book>

In the example above, we used an XMLList to store the result of the myBooks.book.(@price == 99.99) expression, which returns an XMLList containing the XML node(s) whose price is $99.99.
Working with XML objects

The XML class provides many useful methods to work with XML objects, such as the appendChild() and prependChild() methods, which add an XML element to the end or the beginning of an XML object, as shown in the following example:

var node1:XML = <middleInitial>B</middleInitial>;
var node2:XML = <lastName>Kore</lastName>;
var root:XML = <personalInfo></personalInfo>;
root = root.appendChild(node1);
root = root.appendChild(node2);
root = root.prependChild(<firstName>Satish</firstName>);

The output is as follows:

<personalInfo>
  <firstName>Satish</firstName>
  <middleInitial>B</middleInitial>
  <lastName>Kore</lastName>
</personalInfo>

You can use the insertChildBefore() or insertChildAfter() method to add a property before or after a specified property, as shown in the following example:

var x:XML =
  <count>
    <one>1</one>
    <three>3</three>
    <four>4</four>
  </count>;
x = x.insertChildBefore(x.three, "<two>2</two>");
x = x.insertChildAfter(x.four, "<five>5</five>");
trace(x.toXMLString());

The output of the above code is as follows:

<count>
  <one>1</one>
  <two>2</two>
  <three>3</three>
  <four>4</four>
  <five>5</five>
</count>
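To see why the article calls E4X "much easier and more intuitive" than DOM, compare the one-line E4X price query from earlier with an equivalent written against the standard Java DOM and XPath APIs. This is a hedged server-side counterpart, not code from the article; it assumes the same books document is available as a string.

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

// The same query as the E4X one-liner
// myBooks.book.(@price < 50.99).@title, written with Java DOM and XPath.
// The books XML is assumed to be the sample document used in this article.
public class DomPriceQuery {
    public static void main(String[] args) throws Exception {
        String xml = "<books publisher='Packt Pub'>"
                + "<book title='Book1' price='99.99'><author>Author1</author></book>"
                + "<book title='Book2' price='59.99'><author>Author2</author></book>"
                + "<book title='Book3' price='49.99'><author>Author3</author></book>"
                + "</books>";

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new InputSource(new StringReader(xml)));

        // XPath does the filtering that E4X expresses with a predicate.
        NodeList titles = (NodeList) XPathFactory.newInstance().newXPath()
                .evaluate("/books/book[@price < 50.99]/@title", doc, XPathConstants.NODESET);

        for (int i = 0; i < titles.getLength(); i++) {
            System.out.println(titles.item(i).getNodeValue()); // Book3
        }
    }
}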