Working with XML in Flex 3 and Java-part1

Packt
28 Oct 2009
In today's world, many server-side applications use XML to structure data, because XML is a standard way of representing structured information. It is easy to work with, and people can read, write, and understand XML without specialized skills. The XML standard is widely used in server communications, such as Simple Object Access Protocol (SOAP) based web services. XML stands for eXtensible Markup Language; the XML standard specification is available at http://www.w3.org/XML/.

Adobe Flex provides a standardized, ECMAScript-based set of API classes and functionality for working with XML data. This collection of classes and functionality is known as E4X. You can use these classes to build sophisticated Rich Internet Applications that consume XML data.

XML basics

XML is a standard way to represent categorized data in a tree structure, similar to HTML documents. XML is written in plain-text format, and hence it is very easy to read, write, and manipulate. A typical XML document looks like this:

    <book>
       <title>Flex 3 with Java</title>
       <author>Satish Kore</author>
       <publisher>Packt Publishing</publisher>
       <pages>300</pages>
    </book>

XML data is organized into XML documents and is represented by tags wrapped in angle brackets (< >). These tags are also known as XML elements. Every XML document starts with a single top-level element known as the root element. Each element is delimited by a pair of tags known as the opening tag and the closing tag. In the document above, <book> is the opening tag and </book> is the closing tag. If an element contains no content, it can be written as an empty element (also called a self-closing element); for example, <book/> is equivalent to <book></book>.
XML documents can also be more complex, with nested tags and attributes, as shown in the following example:

    <book ISBN="978-1-847195-34-0">
      <title>Flex 3 with Java</title>
      <author country="India" numberOfBooks="1">
        <firstName>Satish</firstName>
        <lastName>Kore</lastName>
      </author>
      <publisher country="United Kingdom">Packt Publishing</publisher>
      <pages>300</pages>
    </book>

Notice that this XML document contains nested tags, such as <firstName> and <lastName> under the <author> tag. ISBN, country, and numberOfBooks, which appear inside the tags, are called XML attributes. To learn more about XML, visit the W3Schools XML Tutorial at http://w3schools.com/xml/.

Understanding E4X

Flex provides a set of API classes and functionality based on the ECMAScript for XML (E4X) standard for working with XML data. The E4X approach provides a simple and straightforward way to work with XML-structured data, and it also reduces the complexity of parsing XML documents. Earlier versions of Flex did not have a direct way of working with XML data. E4X provides an alternative to the DOM (Document Object Model) interface, using a simpler syntax for reading and querying XML documents. More information about other E4X implementations can be found at http://en.wikipedia.org/wiki/E4X.

The key features of E4X include:

- It is based on a standard scripting-language specification known as ECMAScript for XML. Flex implements this specification in the form of API classes and functionality that simplify XML data processing.
- It provides simple, familiar operators, such as dot (.) and @, for working with XML objects. The @ and dot (.) operators can be used not only to read data, but also to assign data to XML nodes, attributes, and so on.
- The E4X functionality is much easier and more intuitive than working with DOM documents to access XML data.

ActionScript 3.0 includes the following E4X classes: XML, XMLList, QName, and Namespace.
These classes are designed to simplify XML data processing in Flex applications. Let's see a quick example. Define a variable of type XML and create a sample XML document. In this example, we will assign it as a literal. However, in the real world, your application might load XML data from external sources, such as a web service or an RSS feed.

    private var myBooks:XML =
      <books publisher="Packt Pub">
        <book title="Book1" price="99.99">
          <author>Author1</author>
        </book>
        <book title="Book2" price="59.99">
          <author>Author2</author>
        </book>
        <book title="Book3" price="49.99">
          <author>Author3</author>
        </book>
      </books>;

Now, we will look at some of the E4X approaches for reading and parsing this XML in our application. E4X uses several operators to simplify access to XML nodes and attributes, such as dot (.) and the attribute identifier (@), for accessing properties and attributes:

    private function traceXML():void {
        trace(myBooks.book.(@price < 50.99).@title); //Output: Book3
        trace(myBooks.book[1].author); //Output: Author2
        trace(myBooks.@publisher); //Output: Packt Pub
        //The following loop outputs the prices of all books
        for each(var price:XML in myBooks..@price) {
            trace(price);
        }
    }

In the code above, the first trace statement uses a conditional expression to extract the title of the book(s) whose price is below $50.99. If we had to do this manually, imagine how much code would be needed to parse the XML. In the second trace, we access a book node by index and print its author node's value. In the third trace, we print the root node's publisher attribute value. Finally, we use a for each loop to traverse the prices of all the books and print each price.

The following is a list of XML operators:

@ (attribute identifier): Identifies attributes of an XML or XMLList object.
{ } (braces): Evaluates an expression that is used in an XML or XMLList initializer.
[ ] (brackets): Accesses a property or attribute of an XML or XMLList object, for example myBooks.book["@title"].
+ (concatenation): Concatenates (combines) XML or XMLList values into an XMLList object.
+= (concatenation assignment): Assigns expression1 (an XMLList object) the result of concatenating expression1 and expression2.

The XML object

An XML object represents an XML element, attribute, comment, processing instruction, or text element. We used the XML class in the example above to initialize the myBooks variable with an XML literal. The XML class is one of the ActionScript 3.0 core classes, so you don't need to import a package to use it. The XML class provides many properties and methods to simplify XML processing, such as the ignoreWhitespace and ignoreComments properties, used for ignoring whitespace and comments in XML documents respectively. You can use the prependChild() and appendChild() methods to prepend and append XML nodes to an existing XML document. Methods such as toString() and toXMLString() allow you to convert XML to a string.

An example of an XML object:

    private var myBooks:XML =
      <books publisher="Packt Pub">
        <book title="Book1" price="99.99">
          <author>Author1</author>
        </book>
        <book title="Book2" price="120.00">
          <author>Author2</author>
        </book>
      </books>;

In the above example, we created an XML object by assigning an XML literal to it. You can also create an XML object from a string that contains XML data, as shown in the following example:

    private var str:String = "<books publisher='Packt Pub'><book title='Book1' price='99.99'><author>Author1</author></book><book title='Book2' price='59.99'><author>Author2</author></book></books>";
    private var myBooks:XML = new XML(str);
    trace(myBooks.toXMLString()); //outputs formatted XML as a string

If the XML data in the string is not well-formed (for example, a closing tag is missing), you will get a runtime error.
You can also use binding expressions in XML text to pull content from a variable. For example, you can bind a node's title attribute to a variable value, as in the following lines:

    private var title:String = "Book1";
    var aBook:XML = <book title="{title}"/>;

To read more about the XML class's methods and properties, see the Flex 3 LiveDocs at http://livedocs.adobe.com/flex/3/langref/XML.html.

The XMLList object

As the class name indicates, an XMLList contains one or more XML objects. It can contain full XML documents, XML fragments, or the results of an XML query. You can typically use all of the XML class's methods and properties on the objects in an XMLList. To access the objects in an XMLList collection, you can iterate over it using a for each statement. The XMLList class provides the following methods to work with its objects:

child(): Returns a specified child of every XML object
children(): Returns the specified children of every XML object
descendants(): Returns all descendants of an XML object
elements(): Calls the elements() method of each XML object in the XMLList and returns all elements of the XML object
parent(): Returns the parent of the XMLList object if all items in the XMLList object have the same parent
attribute(attributeName): Calls the attribute() method of each XML object and returns an XMLList object of the results; the results match the given attributeName parameter
attributes(): Calls the attributes() method of each XML object and returns an XMLList object of attributes for each XML object
contains(): Checks whether the specified XML object is present in the XMLList
copy(): Returns a copy of the given XMLList object
length(): Returns the number of properties in the XMLList object
valueOf(): Returns the XMLList object

For details on these methods, see the ActionScript 3.0 Language Reference.
Let's return to the example of the XMLList:

    var xmlList:XMLList = myBooks.book.(@price == 99.99);
    var item:XML;
    for each(item in xmlList) {
        trace("item: " + item.toXMLString());
    }

Output:

    item: <book title="Book1" price="99.99">
      <author>Author1</author>
    </book>

In the example above, we used an XMLList to store the result of the myBooks.book.(@price == 99.99) statement. This statement returns an XMLList containing the XML node(s) whose price is $99.99.

Working with XML objects

The XML class provides many useful methods for working with XML objects, such as the appendChild() and prependChild() methods for adding an XML element to the end or the beginning of an XML object, as shown in the following example:

    var node1:XML = <middleInitial>B</middleInitial>;
    var node2:XML = <lastName>Kore</lastName>;
    var root:XML = <personalInfo></personalInfo>;
    root = root.appendChild(node1);
    root = root.appendChild(node2);
    root = root.prependChild(<firstName>Satish</firstName>);

The output is as follows:

    <personalInfo>
      <firstName>Satish</firstName>
      <middleInitial>B</middleInitial>
      <lastName>Kore</lastName>
    </personalInfo>

You can use the insertChildBefore() or insertChildAfter() method to add a property before or after a specified property, as shown in the following example:

    var x:XML = <count>
        <one>1</one>
        <three>3</three>
        <four>4</four>
      </count>;
    x = x.insertChildBefore(x.three, <two>2</two>);
    x = x.insertChildAfter(x.four, <five>5</five>);
    trace(x.toXMLString());

The output of the above code is as follows:

    <count>
      <one>1</one>
      <two>2</two>
      <three>3</three>
      <four>4</four>
      <five>5</five>
    </count>
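To appreciate how much work one-line E4X queries such as myBooks.book.(@price < 50.99).@title save, it helps to see the manual equivalent. The following is a plain JavaScript sketch for comparison only (not Flex or E4X code), working over an object tree equivalent to the myBooks document; real manual parsing would additionally need DOM traversal code to build this tree in the first place.

```javascript
// Manual equivalent of the E4X queries shown earlier, sketched in plain
// JavaScript over an object tree mirroring the myBooks XML. For comparison only.
var myBooks = {
  publisher: "Packt Pub",
  book: [
    { title: "Book1", price: 99.99, author: "Author1" },
    { title: "Book2", price: 59.99, author: "Author2" },
    { title: "Book3", price: 49.99, author: "Author3" }
  ]
};

// Walk every book node and keep the titles of those priced below the limit,
// i.e. the hand-written version of myBooks.book.(@price < 50.99).@title.
function titlesCheaperThan(books, limit) {
  var titles = [];
  for (var i = 0; i < books.book.length; i++) {
    if (books.book[i].price < limit) {
      titles.push(books.book[i].title);
    }
  }
  return titles;
}

console.log(titlesCheaperThan(myBooks, 50.99)); // [ "Book3" ]
console.log(myBooks.book[1].author);            // "Author2"
console.log(myBooks.publisher);                 // "Packt Pub"
```

Even this already assumes the XML has been turned into objects; E4X collapses the parsing, traversal, and filtering into a single expression.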

Using Spring Faces

Packt
28 Oct 2009
Using the Spring Faces module

The following section shows you how to use the Spring Faces module.

Overview of the Spring Faces tag library

The Spring Faces module comes with a set of components, which are provided through a tag library. If you want more detailed information about the tag library, look at the following files inside the Spring Faces source distribution:

    spring-faces/src/main/resources/META-INF/spring-faces.tld
    spring-faces/src/main/resources/META-INF/springfaces.taglib.xml
    spring-faces/src/main/resources/META-INF/faces-config.xml

If you want to see the source code of a specific tag, refer to faces-config.xml and springfaces.taglib.xml to get the name of the component's class. The spring-faces.tld file can be used for documentation purposes. The following list gives a short description of the available tags in the Spring Faces component library:

includeStyles: Renders the CSS stylesheets that are essential for the Spring Faces components. Using this tag in the head section is recommended for performance optimization; if the tag isn't included, the necessary stylesheets are rendered on the first usage of each component. If you are using a template for your pages, it's a good pattern to include the tag in the header of that template. For more information about performance optimization, refer to the Yahoo performance guidelines, available at http://developer.yahoo.com/performance. Some tags of the Spring Faces tag library (includeStyles, resource, and resourceGroup) implement patterns to optimize performance on the client side.

resource: Loads and renders a resource with ResourceServlet. You should prefer this tag to directly including a CSS stylesheet or a JavaScript file, because ResourceServlet sets the proper response headers for caching the resource file.
resourceGroup: Combines all resources that are placed inside the tag. It is important that all resources are of the same type. The tag uses ResourceServlet with the appended parameter to create one resource file, which is sent to the client.

clientTextValidator: Validates a child inputText element. For the validation, an attribute called regExp lets you provide a regular expression. The validation is done on the client side.

clientNumberValidator: Validates a child inputText element. The provided validation methods let you check whether the text is a number, and check some properties of the number, e.g. its range. The validation is done on the client side.

clientCurrencyValidator: Validates a child inputText element. This tag should be used if you want to validate a currency. The validation is done on the client side.

clientDateValidator: Validates a child inputText element. The tag should be used to validate a date. The field displays a pop-up calendar. The validation is done on the client side.

validateAllOnClick: Executes all client-side validations on the click of a specific element. This can be useful for a submit button.

commandButton: Executes an arbitrary method on an instance. The method itself has to be a public method with no parameters and a java.lang.Object instance as the return value.

commandLink: Renders an AJAX link. With the processIds attribute, you can provide the ids of the components which should be processed.

ajaxEvent: Creates a JavaScript event listener. This tag should only be used if you can ensure that the client has JavaScript enabled.
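The client* validator tags above all boil down to checking a field's value in the browser before the form is submitted; the tag library generates that JavaScript for you. The underlying idea can be sketched in a few lines of plain JavaScript. This is an illustration only, not Spring Faces code, and the patterns and ranges used are example values:

```javascript
// Conceptual sketch of client-side validation, as performed by tags such as
// clientTextValidator (regExp attribute) and clientNumberValidator (range).
// Not the actual Spring Faces implementation.
function validateText(value, regExp) {
  // Anchor the pattern so the whole value must match, then test it.
  return new RegExp("^(?:" + regExp + ")$").test(value);
}

function validateNumberInRange(value, min, max) {
  // Reject non-numeric input, then check the allowed range.
  var n = Number(value);
  return value !== "" && !isNaN(n) && n >= min && n <= max;
}

console.log(validateText("issue42", "\\w{3,10}"));  // true
console.log(validateText("a", "\\w{3,10}"));        // false
console.log(validateNumberInRange("7", 1, 10));     // true
console.log(validateNumberInRange("abc", 1, 10));   // false
```

Because this runs before the request is sent, the user gets immediate feedback; server-side validation is still needed for correctness, which is why JSF re-validates on submit.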
A complete example

Having covered the main configuration elements and the Spring Faces components, the following section shows a complete example, to give you a good understanding of how to work with the Spring Faces module in your own web application. The following diagram shows the screen of the sample application. With this screen, it is possible to create a new issue and save it to the bug database. Describing the bug database, or how to work with databases in general, is not part of this example. The sample uses the model classes.

The screen has three required fields:

Name: The name of the issue
Description: A short description of the issue
Fix until: The fixing date for the issue

Additionally, there are two buttons:

store: A click on the store button makes the system try to create a new issue from the provided information
cancel: A click on the cancel button makes the system discard the entered data and navigate to the overview page

Now, the first step is to create the implementation of the input page. That implementation and its description are shown in the section below.

Creating the input page

As described above, we use Facelets as the view handler technology. Therefore, the pages are defined in XHTML, with .xhtml as the file extension. The name of the input page will be add.xhtml. For the description, we separate the page into the following five parts:

Header
Name
Description
Fix until
The buttons

This separation is shown in the diagram below.

The Header part

The first step in the header is to declare that we have an XHTML page. This is done by specifying the correct doctype:

    <!DOCTYPE composition PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
      "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

An XHTML page is described as an XML page. If you want to use special tags inside the XML page, you have to define a namespace for them.
For a page with Facelets and Spring Faces, we have to define more than one namespace. The following list describes those namespaces:

http://www.w3.org/1999/xhtml: The namespace for XHTML itself.
http://java.sun.com/jsf/facelets: Facelets defines some components (tags); these components are available under this namespace.
http://java.sun.com/jsf/html: The user interface components of JSF are available under this namespace.
http://java.sun.com/jsf/core: The core tags of JSF, for example converters, can be accessed under this namespace.
http://www.springframework.org/tags/faces: The namespace for the Spring Faces component library.

A description and overview of the JSF tags is available at http://developers.sun.com/jscreator/archive/learning/bookshelf/pearson/corejsf/standard_jsf_tags.pdf.

For the header definition, we use the composition component from the Facelets components. With that component, it is possible to define a template for the layout. This is very similar to the previously mentioned Tiles framework. The following code snippet shows the second part (after the doctype) of the header definition:

    <ui:composition template="/WEB-INF/layouts/standard.xhtml">

With the help of the template attribute, we refer to the layout template used; in our example, /WEB-INF/layouts/standard.xhtml. The following code shows the complete layout file standard.xhtml. This layout file is also described with the Facelets technology, so it is possible to use Facelets components inside that page, too. Additionally, we use Spring Faces components inside that layout page.
    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
      "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml"
          xmlns:ui="http://java.sun.com/jsf/facelets"
          xmlns:f="http://java.sun.com/jsf/core"
          xmlns:sf="http://www.springframework.org/tags/faces">
    <f:view contentType="text/html">
    <head>
      <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
      <title>flow.tracR</title>
      <sf:includeStyles/>
      <sf:resourceGroup>
        <sf:resource path="/css-framework/css/tools.css"/>
        <sf:resource path="/css-framework/css/typo.css"/>
        <sf:resource path="/css-framework/css/forms.css"/>
        <sf:resource path="/css-framework/css/layout-navtop-localleft.css"/>
        <sf:resource path="/css-framework/css/layout.css"/>
      </sf:resourceGroup>
      <sf:resource path="/css/issue.css"/>
      <ui:insert name="headIncludes"/>
    </head>
    <body class="tundra spring">
      <div id="page">
        <div id="content">
          <div id="main">
            <ui:insert name="content"/>
          </div>
        </div>
      </div>
    </body>
    </f:view>
    </html>

(The namespace declarations on the html element were restored from the namespace table above; the scraped original had them stripped.)

The Name part

The first element in the input page is the section for the input of the name. For the description of that section, we use elements from the JSF component library. We access this library with the h prefix, which we defined in the header section. For the general layout, we use standard HTML elements, such as the div element. The definition is shown below:

    <div class="field">
      <div class="label">
        <h:outputLabel for="name">Name:</h:outputLabel>
      </div>
      <div class="input">
        <h:inputText id="name" value="#{issue.name}" />
      </div>
    </div>

The Description part

The next element on the page is the Description element. The definition is very similar to the Name part. In contrast to the Name part, we use the required attribute with true as its value on the h:inputText element. This attribute tells the JSF system that the issue.description value is mandatory. If the user does not enter a value, validation fails.
    <div class="field">
      <div class="label">
        <h:outputLabel for="description">Description:</h:outputLabel>
      </div>
      <div class="input">
        <h:inputText id="description" value="#{issue.description}" required="true"/>
      </div>
    </div>

The Fix until part

The last input section is the Fix until part. This kind of field is very common in web applications, because there is often the need to enter a date. Internally, a date is often represented by an instance of the java.util.Date class. The text entered by the user has to be validated and converted in order to get a valid instance. To help the user with the input, a pop-up calendar is often shown. The Spring Faces library offers a component, clientDateValidator, which shows a calendar and adds client-side validation. The clientDateValidator component is used with the sf prefix, which is defined in the namespace declarations in the header of the add.xhtml page. The complete definition of the Fix until part is shown below:

    <div class="field">
      <div class="label">
        <h:outputLabel for="checkinDate">Fix until:</h:outputLabel>
      </div>
      <div class="input">
        <sf:clientDateValidator required="true"
            invalidMessage="please insert a correct fixing date. format: dd.MM.yyyy"
            promptMessage="Format: dd.MM.yyyy, example: 01.01.2020">
          <h:inputText id="checkinDate" value="#{issue.fixingDate}" required="true">
            <f:convertDateTime pattern="dd.MM.yyyy" timeZone="GMT"/>
          </h:inputText>
        </sf:clientDateValidator>
      </div>
    </div>

In the example above, we use the promptMessage attribute to help the user with the format. The message is shown when the user places the cursor in the input element. If validation fails, the message in the invalidMessage attribute is used to show the user that wrongly formatted input has been entered.

The Buttons part

The last elements on the page are the buttons. For these buttons, the commandButton component from Spring Faces is used.
The definition is shown below:

    <div class="buttonGroup">
      <sf:commandButton id="store" action="store" processIds="*" value="store"/>
      <sf:commandButton id="cancel" action="cancel" processIds="*" value="cancel"/>
    </div>

It is worth mentioning that JavaServer Faces binds an action to the action method of a backing bean, while Spring Web Flow binds the action to events.

Handling errors

Validation can take place on the client side or on the server side. For the Fix until element, we use the previously mentioned clientDateValidator component from the Spring Faces library. The following figure shows how this component presents the error message to the user.

Reflecting the actions of the buttons in the flow definition file

Clicking one of the buttons executes an action that results in a transition. The name of the action is expressed in the action attribute of the button component, which is implemented as commandButton from the Spring Faces component library. If you click on the store button, validation is executed first. If you want to prevent that validation, you have to use the bind attribute and set it to false. This is done for the cancel button, because in that case it is necessary to ignore the inputs.

    <view-state id="add" model="issue">
      <transition on="store" to="issueStore">
        <evaluate expression="persistenceContext.persist(issue)"/>
      </transition>
      <transition on="cancel" to="cancelInput" bind="false">
      </transition>
    </view-state>

Showing the results

To test the implemented feature, we implement an overview page. We have the choice of implementing the page as a flow with one view state, or as a simple JSF view. Independent of that choice, we will use Facelets to implement the overview page, because Facelets is a feature of JSF and does not depend on the Spring Web Flow framework. The example uses a table to show the entered issues. If no issue has been entered, a message is shown to the user.
The figure below shows this table with one row of data. The Id is rendered as a URL; if you click on this link, the input page is shown with the data of that issue, and saving it executes an update. The indicator for this is the valid ID of the issue. If no data is available, the "No issues in database" message is shown to the user. This is done with a condition on the outputText component; see the code snippet below:

    <h:outputText id="noIssuesText" value="No Issues in the database"
        rendered="#{empty issueList}"/>

For the table, we use the dataTable component:

    <h:dataTable id="issues" value="#{issueList}" var="issue"
        rendered="#{not empty issueList}" border="1">
      <h:column>
        <f:facet name="header">Id</f:facet>
        <a href="add?id=#{issue.id}">#{issue.id}</a>
      </h:column>
      <h:column>
        <f:facet name="header">Name</f:facet>
        #{issue.name}
      </h:column>
      <h:column>
        <f:facet name="header">fix until</f:facet>
        #{issue.fixingDate}
      </h:column>
      <h:column>
        <f:facet name="header">creation date</f:facet>
        #{issue.creationDate}
      </h:column>
      <h:column>
        <f:facet name="header">last modified</f:facet>
        #{issue.lastModified}
      </h:column>
    </h:dataTable>

Designing your very own ASP.NET MVC Application

Packt
28 Oct 2009
When you download and install the ASP.NET MVC framework SDK, a new project template is installed in Visual Studio: the ASP.NET MVC project template. This article by Maarten Balliauw describes how to use this template. We will briefly touch on all aspects of ASP.NET MVC by creating a new ASP.NET MVC web application based on this Visual Studio template. Besides the view, controller, and model, this article also illustrates new concepts such as ViewData (a means of transferring data between controller and view), routing (the link between a web browser URL and a specific action method inside a controller), and unit testing of a controller.

Creating a new ASP.NET MVC web application project

Before we start creating an ASP.NET MVC web application, make sure that you have installed the ASP.NET MVC framework SDK from http://www.asp.net/mvc. After installation, open Visual Studio 2008 and select the menu option File | New | Project. The following screenshot will be displayed. Make sure that you select .NET Framework 3.5 as the target framework. You will notice a new project template called ASP.NET MVC Web Application. This project template creates the default project structure for an ASP.NET MVC application.

After clicking OK, Visual Studio will ask you if you want to create a test project. This dialog offers a choice between several unit testing frameworks that can be used for testing your ASP.NET MVC application. You can decide for yourself if you want to create a unit testing project right now; you can also add a testing project later on. Letting the ASP.NET MVC project template create a test project now is convenient, though not required, because it creates all of the project references and contains an example unit test. For this example, continue by adding the default unit test project.

What's inside the box?

After the ASP.NET MVC project has been created, you will notice a default folder structure.
There's a Controllers folder, a Models folder, and a Views folder, as well as a Content folder and a Scripts folder. ASP.NET MVC follows the convention that these folders (and namespaces) are used for locating the different building blocks of the application. The Controllers folder contains all of the controller classes; the Models folder contains the model classes; and the Views folder contains the view pages. Content will typically contain web site content, such as images and stylesheet files, and Scripts will contain all of the JavaScript files used by the web application. By default, the Scripts folder contains some JavaScript files required for the use of Microsoft AJAX or jQuery.

Locating the different building blocks is done in the request life cycle. One of the first steps in the ASP.NET MVC request life cycle is mapping the requested URL to the correct controller action method. This process is referred to as routing. A default route is initialized in the Global.asax file and describes to the ASP.NET MVC framework how to handle a request. Double-clicking on the Global.asax file in the MvcApplication1 project will display the following code:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;
    using System.Web.Routing;

    namespace MvcApplication1
    {
        public class GlobalApplication : System.Web.HttpApplication
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                routes.MapRoute(
                    "Default",                    // Route name
                    "{controller}/{action}/{id}", // URL with parameters
                    new { controller = "Home", action = "Index", id = "" } // Parameter defaults
                );
            }

            protected void Application_Start()
            {
                RegisterRoutes(RouteTable.Routes);
            }
        }
    }

In the Application_Start() event handler, which is fired whenever the application is compiled or the web server is restarted, a route table is registered.
The default route is named Default and responds to URLs of the form http://www.example.com/{controller}/{action}/{id}. The variables between { and } are populated with actual values from the request URL, or with the default values if no override is present in the URL. According to the default routing parameters, this route maps to the Home controller and its Index action method. By default, all possible URLs can be mapped through this default route.

It is also possible to create our own routes. For example, let's map the URL http://www.example.com/Employee/Maarten to the Employee controller, the Show action, and the firstname parameter. The following code snippet can be inserted in the Global.asax file we've just opened. Because the ASP.NET MVC framework uses the first matching route, this code snippet should be inserted above the default route; otherwise the route would never be used.

    routes.MapRoute(
        "EmployeeShow",         // Route name
        "Employee/{firstname}", // URL with parameters
        new {                   // Parameter defaults
            controller = "Employee",
            action = "Show",
            firstname = ""
        }
    );

Now, let's add the necessary components for this route. First of all, create a class named EmployeeController in the Controllers folder. You can do this by adding a new item to the project and selecting the MVC Controller Class template, located under the Web | MVC category. Remove the Index action method and replace it with a method, or action, named Show. This method accepts a firstname parameter and passes the data into the ViewData dictionary, which will be used by the view to display data. The EmployeeController class will pass an Employee object to the view. This Employee class should be added to the Models folder (right-click on the folder and select Add | Class from the context menu).
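The mechanics of route-template matching can be sketched independently of ASP.NET. The following JavaScript is a simplified illustration (not the actual MVC routing engine) of how a template such as {controller}/{action}/{id} turns a URL path into parameter values, falling back to the registered defaults for missing segments:

```javascript
// Simplified sketch of route-template matching; not real ASP.NET routing code.
function matchRoute(template, defaults, path) {
  var names = template.split("/"); // e.g. ["{controller}", "{action}", "{id}"]
  var parts = path.split("/").filter(function (p) { return p !== ""; });
  if (parts.length > names.length) return null; // too many segments for this template
  var result = {};
  for (var i = 0; i < names.length; i++) {
    var key = names[i].slice(1, -1); // strip "{" and "}"
    if (i < parts.length) {
      result[key] = parts[i];        // value taken from the URL
    } else if (key in defaults) {
      result[key] = defaults[key];   // fall back to the route default
    } else {
      return null;                   // required segment missing, route rejected
    }
  }
  return result;
}

var defaults = { controller: "Home", action: "Index", id: "" };
console.log(matchRoute("{controller}/{action}/{id}", defaults, "Employee/Show/1"));
// { controller: "Employee", action: "Show", id: "1" }
console.log(matchRoute("{controller}/{action}/{id}", defaults, ""));
// { controller: "Home", action: "Index", id: "" }
```

This also shows why route order matters: a framework trying routes in registration order will stop at the first template that produces a match, which is why the EmployeeShow route has to be registered before the catch-all default.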
Here's the code for the Employee class:

namespace MvcApplication1.Models
{
    public class Employee
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public string Email { get; set; }
    }
}

After adding the EmployeeController and Employee classes, the ASP.NET MVC project now appears as shown in the following screenshot:

The EmployeeController class now looks like this:

using System.Web.Mvc;
using MvcApplication1.Models;

namespace MvcApplication1.Controllers
{
    public class EmployeeController : Controller
    {
        public ActionResult Show(string firstname)
        {
            if (string.IsNullOrEmpty(firstname))
            {
                ViewData["ErrorMessage"] = "No firstname provided!";
            }
            else
            {
                Employee employee = new Employee
                {
                    FirstName = firstname,
                    LastName = "Example",
                    Email = firstname + "@example.com"
                };
                ViewData["FirstName"] = employee.FirstName;
                ViewData["LastName"] = employee.LastName;
                ViewData["Email"] = employee.Email;
            }
            return View();
        }
    }
}

The action method we've just created can be requested by a user via a URL—in this case, something similar to http://www.example.com/Employee/Maarten. This URL is mapped to the action method by the route we've created before. By default, any public action method (that is, a method in a controller class) can be requested using the default routing scheme. If you want to prevent a method from being requested, simply make it private or protected, or, if it has to be public, add a [NonAction] attribute to the method. Note that we are returning an ActionResult (created by the View() method), which can be a view-rendering command, a page redirect, a JSON result, a string, or any other custom class implementation inheriting ActionResult that you want to return. Returning an ActionResult is not necessary: the controller can write content directly to the response stream if required, but this would break the MVC pattern—the controller should never be responsible for the actual content of the response that is being returned.
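Stripped of the framework plumbing, the control flow of the Show action is very small: populate a view-data dictionary either with an error message or with the employee's details, and let the view decide how to render it. The following Python sketch (an illustration, not ASP.NET code) mirrors exactly that logic:

```python
# Illustration of the Show action's logic: build a view-data dictionary
# that the view will later render. Names mirror the C# sample above;
# the "Example" surname and e-mail pattern come from that sample.

def show(firstname):
    view_data = {}
    if not firstname:
        view_data["ErrorMessage"] = "No firstname provided!"
    else:
        employee = {
            "FirstName": firstname,
            "LastName": "Example",
            "Email": firstname + "@example.com",
        }
        view_data["FirstName"] = employee["FirstName"]
        view_data["LastName"] = employee["LastName"]
        view_data["Email"] = employee["Email"]
    return view_data

print(show("Maarten"))  # {'FirstName': 'Maarten', 'LastName': 'Example', 'Email': 'Maarten@example.com'}
print(show(""))         # {'ErrorMessage': 'No firstname provided!'}
```

The important design point carries over to the real controller: the action never produces HTML itself; it only prepares data and leaves rendering to the view.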
Next, create a Show.aspx page in the Views | Employee folder. You can create a view by adding a new item to the project and selecting the MVC View Content Page template, located under the Web | MVC category, as we want this view to render in a master page (located in Views | Shared). There is an alternative way to create a view related to an action method, which will be covered later in this article. In the view, you can display employee information, or display an error message if an employee is not found. Add the following code to the Show.aspx page:

<%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
    AutoEventWireup="true" Inherits="System.Web.Mvc.ViewPage" %>

<asp:Content ID="Content1" ContentPlaceHolderID="MainContent" runat="server">
    <% if (ViewData["ErrorMessage"] != null) { %>
        <h1><%=ViewData["ErrorMessage"]%></h1>
    <% } else { %>
        <h1><%=ViewData["FirstName"]%> <%=ViewData["LastName"]%></h1>
        <p>
            E-mail: <%=ViewData["Email"]%>
        </p>
    <% } %>
</asp:Content>

If the ViewData, set by the controller, contains an ErrorMessage, then the error message is displayed on the resulting web page. Otherwise, the employee details are displayed. Press the F5 button on your keyboard to start the development web server. Alter the URL in your browser to something ending in /Employee/Your_Name_Here, and see the action method and the view we've just created in action.
Packt
27 Oct 2009
6 min read

Developing Web Applications using JavaServer Faces: Part 1

Although a lot of applications have been written using these APIs, most modern Java applications are written using some kind of web application framework. As of Java EE 5, the standard framework for building web applications is JavaServer Faces (JSF).

Introduction to JavaServer Faces

Before JSF was developed, Java web applications were typically developed using non-standard web application frameworks such as Apache Struts, Tapestry, Spring Web MVC, or many others. These frameworks are built on top of the Servlet and JSP standards, and automate a lot of functionality that needs to be manually coded when using these APIs directly. Having a wide variety of web application frameworks available (at the time of writing, Wikipedia lists 35 Java web application frameworks, and this list is far from exhaustive!) often resulted in "analysis paralysis"; that is, developers often spent an inordinate amount of time evaluating frameworks for their applications. The introduction of JSF to the Java EE 5 specification resulted in having a standard web application framework available in any Java EE 5 compliant application server.

We don't mean to imply that other web application frameworks are obsolete or that they shouldn't be used at all; however, a lot of organizations consider JSF the "safe" choice, since it is part of the standard and should be well supported for the foreseeable future. Additionally, NetBeans offers excellent JSF support, making JSF a very attractive choice.

Strictly speaking, JSF is not a web application framework as such, but a component framework. In theory, JSF can be used to write applications that are not web-based; however, in practice JSF is almost always used for this purpose. In addition to being the standard Java EE 5 component framework, one benefit of JSF is that it was designed with graphical tools in mind, making it easy for tools and IDEs such as NetBeans to take advantage of the JSF component model with drag-and-drop support for components.
NetBeans provides a Visual Web JSF Designer that allows us to visually create JSF applications.

Developing Our First JSF Application

From an application developer's point of view, a JSF application consists of a series of JSP pages containing custom JSF tags, one or more JSF managed beans, and a configuration file named faces-config.xml. The faces-config.xml file declares the managed beans in the application, as well as the navigation rules to follow when navigating from one JSF page to another.

Creating a New JSF Project

To create a new JSF project, we need to go to File | New Project, select the Java Web project category, and Web Application as the project type. After clicking Next, we need to enter a Project Name, and optionally change other information for our project, although NetBeans provides sensible defaults. On the next page in the wizard, we can select the Server, Java EE Version, and Context Path of our application. In our example, we will simply pick the default values. On the next page of the new project wizard, we can select what frameworks our web application will use. Unsurprisingly, for JSF applications we need to select the JavaServer Faces framework.

The Visual Web JavaServer Faces framework allows us to quickly build web pages by dragging-and-dropping components from the NetBeans palette into our pages. Although it certainly allows us to develop applications a lot quicker than manually coding, it hides a lot of the "ins" and "outs" of JSF. Having a background in standard JSF development will help us understand what the NetBeans Visual Web functionality does behind the scenes.

When clicking Finish, the wizard generates a skeleton JSF project for us, consisting of a single JSP file called welcomeJSF.jsp, and a few configuration files: web.xml, faces-config.xml and, if we are using the default bundled GlassFish server, the GlassFish-specific sun-web.xml file is generated as well.
web.xml is the standard configuration file needed for all Java web applications. faces-config.xml is a JSF-specific configuration file used to declare JSF-managed beans and navigation rules. sun-web.xml is a GlassFish-specific configuration file that allows us to override the application's default context root, add security role mappings, and perform several other configuration tasks.

The generated JSP looks like this:

<%@page contentType="text/html"%>
<%@page pageEncoding="UTF-8"%>
<%@taglib prefix="f" uri="http://java.sun.com/jsf/core"%>
<%@taglib prefix="h" uri="http://java.sun.com/jsf/html"%>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
    "http://www.w3.org/TR/html4/loose.dtd">
<%-- This file is an entry point for JavaServer Faces application. --%>
<html>
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        <title>JSP Page</title>
    </head>
    <body>
        <f:view>
            <h1>
                <h:outputText value="JavaServer Faces"/>
            </h1>
        </f:view>
    </body>
</html>

As we can see, a JSF-enabled JSP file is a standard JSP file using a couple of JSF-specific tag libraries. The first tag library, declared in our JSP by the following line:

<%@taglib prefix="f" uri="http://java.sun.com/jsf/core"%>

is the core JSF tag library. This library includes a number of tags that are independent of the rendering mechanism of the JSF application (recall that JSF can be used for applications other than web applications). By convention, the prefix f (for faces) is used for this tag library. The second tag library in the generated JSP, declared by the following line:

<%@taglib prefix="h" uri="http://java.sun.com/jsf/html"%>

is the JSF HTML tag library. This tag library includes a number of tags that are used to implement HTML-specific functionality, such as creating HTML forms and input fields. By convention, the prefix h (for HTML) is used for this tag library.

The first JSF tag we see in the generated JSP file is the <f:view> tag.
When writing a Java web application using JSF, all JSF custom tags must be enclosed inside an <f:view> tag. In addition to JSF-specific tags, this tag can contain standard HTML tags, as well as tags from other tag libraries, such as the JSTL tags. The next JSF-specific tag we see in the above JSP is <h:outputText>. This tag simply displays the value of its value attribute in the rendered page. The application generated by the new project wizard is a simple, but complete, JSF web application. We can see it in action by right-clicking on our project in the project window and selecting Run. At this point the application server is started (if it wasn't already running), the application is deployed and the default system browser opens, displaying our application's welcome page.
Packt
27 Oct 2009
8 min read

Python Data Persistence using MySQL Part II: Moving Data Processing to the Data

To move data processing to the data, you can use stored procedures, stored functions, and triggers. All these components are implemented inside the underlying database, and can significantly improve the performance of your application by reducing the network overhead associated with multiple calls to the database. It is important to realize, though, that the decision to move any piece of processing logic into the database should be taken with care. In some situations, this may simply be inefficient. For example, suppose you move some logic dealing with the data stored in a custom Python list into the database, while still keeping that list implemented in your Python code. This only increases the number of calls to the underlying database, causing significant network overhead. To fix this situation, you could move the list from Python into the database as well, implementing it as a table.

Starting with version 5.0, MySQL supports stored procedures, stored functions, and triggers, making it possible for you to enjoy programming on the underlying database side. In this article, you will look at triggers in action. Stored procedures and functions can be used similarly.

Planning Changes for the Sample Application

Assuming you have followed the instructions in Python Data Persistence using MySQL, you should already have the application structure to be reorganized here. To recap, what you should already have is:

- tags: a nested list of tags used to describe the posts obtained from the Packt Book Feed page.
- obtainPost: a function that obtains the information about the most recent post on the Packt Book Feed page.
- determineTags: a function that determines the tags appropriate to the latest post obtained from the Packt Book Feed page.
- insertPost: a function that inserts the information about the obtained post into the underlying database tables: posts and posttags.
- execPr: a function that brings together the functionality of the functions described above.
That's what you should already have on the Python side. On the database side, you should have the following components:

- posts: a table containing records representing posts obtained from the Packt Book Feed page.
- posttags: a table containing records, each of which represents a tag associated with a certain post stored in the posts table.

Let's figure out how we can refactor the above structure, moving some data processing inside the database. The first thing you might want to do is move the tags list from Python into the database, creating a new table, tags, for that purpose. Then, you can move the logic implemented in the determineTags function inside the database by defining an AFTER INSERT trigger on the posts table. From within this trigger, you will also insert rows into the posttags table, thus eliminating the need to do it from within the insertPost function. Once you've done all that, you can refactor the Python code implemented in the appsample module. To summarize, here are the steps you need to perform in order to refactor the sample application discussed in the earlier article:

1. Create the tags table and populate it with the data currently stored in the tags list implemented in Python.
2. Define the AFTER INSERT trigger on the posts table.
3. Refactor the insertPost function in the appsample.py module.
4. Remove the tags list from the appsample.py module.
5. Remove the determineTags function from the appsample.py module.
6. Refactor the execPr function in the appsample.py module.

Refactoring the Underlying Database

To keep things simple, the tags table might contain a single column, tag, with a primary key constraint defined on it. So, you can create the tags table as follows:

CREATE TABLE tags (
    tag VARCHAR(20) PRIMARY KEY
) ENGINE = InnoDB;

Then, you might want to modify the posttags table, adding a foreign key constraint to its tag column. Before you can do that, though, you will need to delete all the rows from this table.
This can be done with the following statement:

DELETE FROM posttags;

Now you can move on and alter posttags as follows:

ALTER TABLE posttags ADD FOREIGN KEY (tag) REFERENCES tags(tag);

The next step is to populate the tags table. You can automate this process with the help of the following Python script:

>>> import MySQLdb
>>> import appsample
>>> db = MySQLdb.connect(host="localhost", user="usrsample", passwd="pswd", db="dbsample")
>>> c = db.cursor()
>>> c.executemany("""INSERT INTO tags VALUES(%s)""", appsample.tags)
>>> db.commit()
>>> db.close()

As a result, you should have the tags table populated with the data taken from the tags list discussed in Python Data Persistence using MySQL. To make sure this has worked, you can turn back to the mysql prompt and issue the following query against the tags table:

SELECT * FROM tags;

The above should output the list of tags you have in the tags list. Of course, you can always extend this list, adding new tags with the INSERT statement. For example, you could issue the following statement to add the Visual Studio tag:

INSERT INTO tags VALUES('Visual Studio');

Now you can move on and define the AFTER INSERT trigger on the posts table:

delimiter //
CREATE TRIGGER insertPost AFTER INSERT ON posts
FOR EACH ROW
BEGIN
    INSERT INTO posttags(title, tag)
    SELECT NEW.title AS title, tag FROM tags
    WHERE LOCATE(tag, NEW.title) > 0;
END
//
delimiter ;

As you can see, the posttags table will be automatically populated with the appropriate tags just after a new row is inserted into the posts table. Notice the use of the INSERT ... SELECT statement in the body of the trigger. Using this syntax lets you insert several rows into the posttags table at once, without having to use an explicit loop. In the WHERE clause of the SELECT, you use the standard MySQL string function LOCATE, which returns the position of the first occurrence of the substring, passed in as the first argument, in the string, passed in as the second argument.
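The matching rule the trigger applies is simple: a tag is associated with a post whenever the tag occurs as a substring of the post's title. That rule is easy to verify outside the database with a few lines of Python (an illustration only; in the application, this work now happens inside MySQL, and the tag list here is an invented subset):

```python
# Mirrors the trigger's WHERE LOCATE(tag, NEW.title) > 0 condition:
# a tag matches a post when it occurs as a substring of the title.
# The tags below are an assumed sample, not the application's full list.

tags = ["Python", "MySQL", "Java", "Flex", "Visual Studio"]

def tags_for_title(title, tags):
    # str.find() returns -1 when the substring is absent, playing the
    # role of LOCATE() returning 0 in SQL.
    return [tag for tag in tags if title.find(tag) != -1]

title = "Python Data Persistence using MySQL"
print(tags_for_title(title, tags))  # ['Python', 'MySQL']
```

Note that, like LOCATE(), this is a plain substring test: it is case-aware and would also match a tag embedded inside a longer word, which is acceptable for this sample application.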
In this particular example, though, you are not really interested in obtaining the position of an occurrence of the substring in the string. All you need to find out here is whether the substring appears in the string or not. If it does, it should appear in the posttags table as a separate row associated with the row just inserted into the posts table.

Refactoring the Sample's Python Code

Now that you have moved some data and data processing from Python into the underlying database, it's time to reorganize the appsample custom Python module created as discussed in Python Data Persistence using MySQL. As mentioned earlier, you need to rewrite the insertPost and execPr functions and remove the determineTags function and the tags list. This is what the appsample module should look like after the revision:

import MySQLdb
import urllib2
import xml.dom.minidom

def obtainPost():
    addr = "http://feeds.feedburner.com/packtpub/sDsa?format=xml"
    xmldoc = xml.dom.minidom.parseString(urllib2.urlopen(addr).read())
    item = xmldoc.getElementsByTagName("item")[0]
    title = item.getElementsByTagName("title")[0].firstChild.data
    guid = item.getElementsByTagName("guid")[0].firstChild.data
    pubDate = item.getElementsByTagName("pubDate")[0].firstChild.data
    post = {"title": title, "guid": guid, "pubDate": pubDate}
    return post

def insertPost(title, guid, pubDate):
    db = MySQLdb.connect(host="localhost", user="usrsample", passwd="pswd", db="dbsample")
    c = db.cursor()
    c.execute("""INSERT INTO posts (title, guid, pubDate) VALUES(%s,%s,%s)""",
              (title, guid, pubDate))
    db.commit()
    db.close()

def execPr():
    p = obtainPost()
    insertPost(p["title"], p["guid"], p["pubDate"])

If you compare it with the appsample module discussed in Part 1, you should notice that the revision is much shorter. It's important to note, however, that nothing has changed from the user's standpoint.
So, if you now start the execPr function in your Python session:

>>> import appsample
>>> appsample.execPr()

This should insert a new record into the posts table, automatically inserting the corresponding tag records into the posttags table, if any. The difference lies in what is going on behind the scenes. Now the Python code is responsible only for obtaining the latest post from the Packt Book Feed page and then inserting a record into the posts table. Dealing with tags is now the responsibility of the logic implemented inside the database. In particular, the AFTER INSERT trigger defined on the posts table takes care of inserting the rows into the posttags table.

To make sure that everything has worked smoothly, you can now check out the content of the posts and posttags tables. To look at the latest post stored in the posts table, you could issue the following query:

SELECT title, str_to_date(pubDate,'%a, %e %b %Y') lastdate
FROM posts
ORDER BY lastdate DESC LIMIT 1;

Then, you might want to look at the related tags stored in the posttags table, by issuing the following query:

SELECT p.title, t.tag, str_to_date(p.pubDate,'%a, %e %b %Y') lastdate
FROM posts p, posttags t
WHERE p.title = t.title
ORDER BY lastdate DESC LIMIT 1;

Conclusion

In this article, you looked at how some business logic of a Python/MySQL application can be moved from Python into MySQL. For that, you continued with the sample application originally discussed in Python Data Persistence using MySQL.
Packt
27 Oct 2009
7 min read

Implementing a Basic HelloWorld WCF (Windows Communication Foundation) Service

We will build a HelloWorld WCF service by carrying out the following steps:

1. Create the solution and project.
2. Create the WCF service contract interface.
3. Implement the WCF service.
4. Host the WCF service in the ASP.NET Development Server.
5. Create a client application to consume this WCF service.

Creating the HelloWorld solution and project

Before we can build the WCF service, we need to create a solution for our service projects. We also need a directory in which to save all the files. Throughout this article, we will save our project source code in the D:\SOAwithWCFandLINQ\Projects directory. We will have a subfolder for each solution we create, and under this solution folder, we will have one subfolder for each project. For this HelloWorld solution, the final directory structure is shown in the following image:

You don't need to manually create these directories via Windows Explorer; Visual Studio will create them automatically when you create the solutions and projects. Now, follow these steps to create our first solution and the HelloWorld project.

Start Visual Studio 2008. If the Open Project dialog box pops up, click Cancel to close it. Go to menu File | New | Project. The New Project dialog window will appear.

From the left-hand side of the window (Project types), expand Other Project Types and then select Visual Studio Solutions as the project type. From the right-hand side of the window (Templates), select Blank Solution as the template. At the bottom of the window, type HelloWorld as the Name, and D:\SOAwithWCFandLINQ\Projects as the Location. Note that you should not enter HelloWorld within the location, because Visual Studio will automatically create a folder for a new solution.

Click the OK button to close this window, and your screen should look like the following image, with an empty solution. Depending on your settings, the layout may be different, but you should still have an empty solution in your Solution Explorer.
If you don't see the Solution Explorer, go to menu View | Solution Explorer, or press Ctrl+Alt+L to bring it up.

In the Solution Explorer, right-click on the solution, and select Add | New Project… from the context menu. You can also go to menu File | Add | New Project… to get the same result. The following image shows the context menu for adding a new project.

The Add New Project window should now appear on your screen. On the left-hand side of this window (Project types), select Visual C# as the project type, and on the right-hand side of the window (Templates), select Class Library as the template. At the bottom of the window, type HelloWorldService as the Name. Leave D:\SOAwithWCFandLINQ\Projects\HelloWorld as the Location. Again, don't add HelloWorldService to the location, as Visual Studio will create a subfolder for this new project (Visual Studio will use the solution folder as the default base folder for all new projects added to the solution).

You may have noticed that there is already a template for WCF Service Application in Visual Studio 2008. For this very first example, we will not use this template. Instead, we will create everything ourselves so that you know what the purpose of each template is. This is an excellent way for you to understand and master this new technology.

Now, you can click the OK button to close this window. Once you click the OK button, Visual Studio will create several files for you. The first file is the project file. This is an XML file under the project directory, called HelloWorldService.csproj. Visual Studio also creates an empty class file, called Class1.cs. Later, we will change this default name to a more meaningful one, and change its namespace to our own. Three directories are created automatically under the project folder: one to hold the binary files, another to hold the object files, and a third one for the properties files of the project.
The window on your screen should now look like the following image:

We now have a new solution and project created. Next, we will develop and build this service. But before we go any further, we need to do two things to this project.

First, click the Show All Files button on the Solution Explorer toolbar. It is the second button from the left, just above the word Solution inside the Solution Explorer. If you let your mouse hover over this button, you will see the hint Show All Files, as shown in the above image. Clicking this button will show all files and directories on your hard disk under the project folder, even those items that are not included in the project. Make sure that you don't have the solution item selected; otherwise, you can't see the Show All Files button.

Second, change the default namespace of the project. From the Solution Explorer, right-click on the HelloWorldService project, and select Properties from the context menu, or go to menu item Project | HelloWorldService Properties…. You will see the project properties dialog window. On the Application tab, change the Default namespace to MyWCFServices.

Lastly, in order to develop a WCF service, we need to add a reference to the ServiceModel namespace. In the Solution Explorer window, right-click on the HelloWorldService project, and select Add Reference… from the context menu. You can also go to the menu item Project | Add Reference… to do this. The Add Reference dialog window should appear on your screen. Select System.ServiceModel from the .NET tab, and click OK. Now, in the Solution Explorer, if you expand the references of the HelloWorldService project, you will see that System.ServiceModel has been added. Also note that System.Xml.Linq is added by default. We will use this later when we query a database.

Creating the HelloWorldService service contract interface

In the previous section, we created the solution and the project for the HelloWorld WCF service.
From this section on, we will start building the HelloWorld WCF service. First, we need to create the service contract interface.

In the Solution Explorer, right-click on the HelloWorldService project, and select Add | New Item… from the context menu. The Add New Item - HelloWorldService dialog window should appear on your screen. On the left-hand side of the window (Categories), select Visual C# Items as the category, and on the right-hand side of the window (Templates), select Interface as the template. At the bottom of the window, change the Name from Interface1.cs to IHelloWorldService.cs.

Click the Add button. Now, an empty service interface file has been added to the project. Follow the steps below to customize it.

Add a using statement:

using System.ServiceModel;

Add a ServiceContract attribute to the interface. This will designate the interface as a WCF service contract interface.

[ServiceContract]

Add a GetMessage method to the interface. This method will take a string as the input, and return another string as the result. It also has an attribute, OperationContract.

[OperationContract]
String GetMessage(String name);

Change the interface to public. The final content of the file IHelloWorldService.cs should look like the following:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.ServiceModel;

namespace MyWCFServices
{
    [ServiceContract]
    public interface IHelloWorldService
    {
        [OperationContract]
        String GetMessage(String name);
    }
}
Packt
27 Oct 2009
6 min read

New SOA Capabilities in BizTalk Server 2009: UDDI Services

All truths are easy to understand once they are discovered; the point is to discover them.
- Galileo Galilei

What is UDDI?

Universal Description and Discovery Information (UDDI) is a type of registry whose primary purpose is to represent information about web services. It describes the service providers, the services that the provider offers, and in some cases, the specific technical specifications for interacting with those services.

While UDDI was originally envisioned as a public, platform-independent registry that companies could exploit for listing and consuming services, it seems that many have chosen instead to use UDDI as an internal resource for categorizing and describing their available enterprise services. Besides simply listing available services for others to search and peruse, UDDI is arguably most beneficial for those who wish to perform runtime binding to service endpoints. Instead of hard-coding a service path in a client application, one may query UDDI for a particular service's endpoint and apply it to their active service call.

While UDDI is typically used for web services, nothing prevents someone from storing information about any particular transport and allowing service consumers to discover and do runtime resolution to these endpoints. As an example, this is useful if you have an environment with primary, backup, and disaster access points and want your application to be able to gracefully look up and fail over to the next available service environment. In addition, UDDI can be of assistance if an application is deployed globally but you wish for regional consumers to look up and resolve against the closest geographical endpoint.

UDDI has a few core hierarchy concepts that you must grasp to fully comprehend how the registry is organized. The most important ones are included here:

- BusinessEntity (called Provider in Microsoft UDDI Services): the service providers. May be an organization, business unit, or functional area.
- BusinessService (called Service): a general reference to a business service offered by a provider. May be a logical grouping of actual services.
- BindingTemplate (called Binding): the technical details of an individual service, including its endpoint.
- tModel, or "technical model" (called tModel): represents metadata for categorization or description, such as transport or protocol.

As far as relationships between these entities go, a Business Entity may contain many Business Services, which in turn can have multiple Binding Templates. A binding may reference multiple tModels, and tModels may be reused across many Binding Templates.

What's new in UDDI version three?

The latest UDDI specification calls out multiple-registry environments, support for digital signatures applied to UDDI entries, more complex categorization, wildcard searching, and a subscription API. We'll spend a bit of time on that last one in a few moments.

Let's take a brief lap around the Microsoft UDDI Services offering. For practical purposes, consider UDDI Services to be made up of two parts: an Administration Console and a web site. The web site is actually broken up into both a public-facing and an administrative interface, but we'll talk about them as one unit.

The UDDI Configuration Console is the place to set service-wide settings, ranging from the extent of logging to permissions and site security. The site node (named UDDI) has settings for permission account groups, security settings (see below), and subscription notification thresholds, among others. The web node, which resides immediately beneath the parent, controls web site settings such as logging level and target database. Finally, the notification node manages settings related to the new subscription notification feature and identically matches the categories of the web node.

The UDDI Services web site, found at http://localhost/uddi/, is the destination for physically listing, managing, and configuring services.
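Before adding services, it helps to keep the registry's containment model in mind: a provider contains services, services contain bindings, and bindings reference reusable tModels. That hierarchy can be sketched as a small data model; the Python classes and sample values below are invented for illustration and are not any actual UDDI API:

```python
# Minimal illustration of the UDDI entity hierarchy. The entity names
# follow the specification (BusinessEntity, BusinessService,
# BindingTemplate, tModel); the classes and sample endpoint are invented.
from dataclasses import dataclass, field

@dataclass
class TModel:
    name: str  # a category or transport descriptor

@dataclass
class BindingTemplate:
    access_point: str  # where the service endpoint physically lives
    tmodels: list = field(default_factory=list)  # reusable tModel references

@dataclass
class BusinessService:
    name: str
    bindings: list = field(default_factory=list)

@dataclass
class BusinessEntity:
    name: str
    services: list = field(default_factory=list)

ws_http = TModel("biztalksoa:runtimeresolution:transporttype=WS-Http")
provider = BusinessEntity("BizTalkSOA", services=[
    BusinessService("BatchMasterService", bindings=[
        BindingTemplate("http://server/BatchMaster.svc",  # hypothetical endpoint
                        tmodels=[ws_http]),
    ]),
])

# One provider -> many services -> many bindings -> shared tModels.
print(provider.services[0].bindings[0].access_point)
```

Because tModels are referenced rather than owned, a single descriptor such as the WS-Http transport can be attached to bindings across many providers, which is what makes category-based searching possible.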
The Search page enables querying by a wide variety of criteria, including category, services, service providers, bindings, and tModels. The Publish page is where you go to add new services to the registry or edit the settings of existing ones. Finally, the Subscription page is where the new UDDI version three capability of registry notification is configured. We will demonstrate this feature later in this article.

How to add services to the UDDI registry?

Now we're ready to add new services to our UDDI registry. First, let's go to the Publish page and define our Service Provider and a pair of categorical tModels. To add a new Provider, we right-click the Provider node in the tree and choose Add Provider. Once a provider is created and named, we have the choice of adding all types of context characteristics, such as contact name(s), categories, relationships, and more.

I'd like to add two tModel categories to my environment: one to identify which type of environment the service references (development, test, staging, production), and another to flag which type of transport it uses (Basic HTTP, WS HTTP, and so on). To add a tModel, simply right-click the tModels node and choose Add tModel. This first one is named biztalksoa:runtimeresolution:environment. After adding one more tModel for biztalksoa:runtimeresolution:transporttype, we're ready to add a service to the registry.

Right-click the BizTalkSOA provider and choose Add Service. Set the name of this service to BatchMasterService. Next, we want to add a binding (or access point) for this service, which describes where the service endpoint is physically located. Switch to the Bindings tab of the service definition and choose New Binding. We need a new access point, so I pointed to our proxy service created earlier and identified it as an endPoint. Finally, let's associate the two new tModel categories with our service. Switch to the Categories tab, and choose Add Custom Category.
We're asked to search for a tModel, which represents our category, so a wildcard entry such as %biztalksoa% is a valid search criterion. After selecting the environment category, we're asked for the key name and value. The key "name" is purely a human-friendly representation of the data, whereas the tModel identifier and the key value comprise the actual name-value pair. I've entered production as the value on the environment category, and WS-Http as the key value on the transporttype category. At this point, we have a service sufficiently configured in the UDDI directory so that others can discover and dynamically resolve against it.
Developing Web Applications using JavaServer Faces: Part 2

Packt
27 Oct 2009
5 min read
JSF Validation

Earlier in this article, we discussed how the required attribute for JSF input fields allows us to easily make input fields mandatory. If a user attempts to submit a form with one or more required fields missing, an error message is automatically generated. The error message is generated by the <h:message> tag corresponding to the invalid field. The string First Name in the error message corresponds to the value of the label attribute for the field. Had we omitted the label attribute, the value of the field's id attribute would have been shown instead. As we can see, the required attribute makes it very easy to implement mandatory field functionality in our application. Recall that the age field is bound to a property of type Integer in our managed bean. If a user enters a value that is not a valid integer into this field, a validation error is automatically generated. Of course, a negative age wouldn't make much sense; however, our application validates that user input is a valid integer with essentially no effort on our part. The email address input field of our page is bound to a property of type String in our managed bean. As such, there is no built-in validation to make sure that the user enters a valid email address. In cases like this, we need to write our own custom JSF validators. Custom JSF validators must implement the javax.faces.validator.Validator interface. This interface contains a single method named validate(). This method takes three parameters: an instance of javax.faces.context.FacesContext, an instance of javax.faces.component.UIComponent containing the JSF component we are validating, and an instance of java.lang.Object containing the user-entered value for the component. The following example illustrates a typical custom validator.
package com.ensode.jsf.validators;

import java.util.regex.Matcher;
import java.util.regex.Pattern;
import javax.faces.application.FacesMessage;
import javax.faces.component.UIComponent;
import javax.faces.component.html.HtmlInputText;
import javax.faces.context.FacesContext;
import javax.faces.validator.Validator;
import javax.faces.validator.ValidatorException;

public class EmailValidator implements Validator {

  public void validate(FacesContext facesContext, UIComponent uIComponent,
      Object value) throws ValidatorException {
    Pattern pattern = Pattern.compile("\\w+@\\w+\\.\\w+");
    Matcher matcher = pattern.matcher((CharSequence) value);
    HtmlInputText htmlInputText = (HtmlInputText) uIComponent;
    String label;
    if (htmlInputText.getLabel() == null
        || htmlInputText.getLabel().trim().equals("")) {
      label = htmlInputText.getId();
    } else {
      label = htmlInputText.getLabel();
    }
    if (!matcher.matches()) {
      FacesMessage facesMessage = new FacesMessage(label
          + ": not a valid email address");
      throw new ValidatorException(facesMessage);
    }
  }
}

In our example, the validate() method does a regular expression match against the value of the JSF component we are validating. If the value matches the expression, validation succeeds; otherwise, validation fails and an instance of javax.faces.validator.ValidatorException is thrown. The primary purpose of our custom validator is to illustrate how to write custom JSF validations, not to create a foolproof email address validator. There may be valid email addresses that don't validate using our validator. The constructor of ValidatorException takes an instance of javax.faces.application.FacesMessage as a parameter. This object is used to display the error message on the page when validation fails. The message to display is passed as a String to the constructor of FacesMessage. In our example, if the label attribute of the component is neither null nor empty, we use it as part of the error message; otherwise we use the value of the component's id attribute.
This behavior follows the pattern established by standard JSF validators. Before we can use our custom validator in our pages, we need to declare it in the application's faces-config.xml configuration file. To do so, we need to add a <validator> element just before the closing </faces-config> element.

<validator>
  <validator-id>emailValidator</validator-id>
  <validator-class>
    com.ensode.jsf.validators.EmailValidator
  </validator-class>
</validator>

The body of the <validator-id> sub-element must contain a unique identifier for our validator. The value of the <validator-class> element must contain the fully qualified name of our validator class. Once we add our validator to the application's faces-config.xml, we are ready to use it in our pages. In our particular case, we need to modify the email field to use our custom validator.

<h:inputText id="email" label="Email Address" required="true"
             value="#{RegistrationBean.email}">
  <f:validator validatorId="emailValidator"/>
</h:inputText>

All we need to do is nest an <f:validator> tag inside the input field we wish to have validated using our custom validator. The value of the validatorId attribute of <f:validator> must match the value of the body of the <validator-id> element in faces-config.xml. At this point we are ready to test our custom validator. When we enter an invalid email address into the email address input field and submit the form, our custom validator logic is executed, and the String we passed as a parameter to FacesMessage in our validate() method is shown as the error text by the <h:message> tag for the field.
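As a quick standalone check of the regular expression used by EmailValidator (this sketch is not part of the JSF application, and the sample addresses are made up), note that the simple \w+@\w+\.\w+ pattern also rejects some legitimate addresses, because \w does not match characters such as the hyphen:

```java
import java.util.regex.Pattern;

public class EmailPatternCheck {
    // Same pattern used by the EmailValidator shown above
    static final Pattern EMAIL = Pattern.compile("\\w+@\\w+\\.\\w+");

    static boolean isValid(String candidate) {
        // matches() requires the whole string to match the pattern
        return EMAIL.matcher(candidate).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("user@example.com"));       // true
        System.out.println(isValid("not-an-email"));           // false
        // A plausible real-world address the simple pattern rejects:
        System.out.println(isValid("first-last@example.com")); // false
    }
}
```

This is exactly the trade-off the article calls out: the validator demonstrates the mechanism, not a production-grade email check.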
Data Migration Scenarios in SAP Business ONE Application - Part 2

Packt
27 Oct 2009
7 min read
Advanced data migration tools: xFusion Studio

For our own projects, we have adopted a tool called xFusion. Using this tool, you gain flexibility and are able to reuse migration settings for specific project environments. The tool provides connectivity to directly extract data from applications (including QuickBooks and Peachtree). In addition, it also supports building rules for data profiling, validation, and conversions. For example, our project team participated in the development of the template for the Peachtree interface. We configured the mappings from Peachtree, and connected the data with the right fields in SAP. This was then saved as a migration template. Therefore, it would be easy and straightforward to migrate data from Peachtree to SAP in any future projects.

xFusion packs save migration knowledge

Based on the concept of establishing templates for migrations, xFusion provides preconfigured templates for the SAP Business ONE application. In xFusion, templates are called xFusion packs. Please note that these preconfigured packs may include master data packs, and also xFusion packs for transaction data. The following xFusion packs are provided for an SAP Business ONE migration:

Administration
Banking
Business partner
Finance
HR
Inventory and production
Marketing documents and receipts
MRP
UDFs
Services

You can see that the packs are also grouped by business object. For example, you have a group of xFusion packs for inventory and production. You can open the pack and find a group of xFusion files that contain the configuration information. If you open the inventory and production pack, a list of folders will be revealed. Each folder has a set of Excel templates and xFusion files (seen in the following screenshot). An xFusion pack essentially incorporates the configuration and data manipulation procedures required to bring data from a source into SAP.
The source settings can be saved in xFusion packs so that you can reuse the knowledge with regards to data manipulation and formatting.

Data "massaging" using SQL

The key to the migration procedure is the capability to do data massaging in order to adjust formats and columns, in a step-by-step manner, based on requirements. Data manipulation is not done programmatically, but rather via a step-by-step process, where each step uses SQL statements to verify and format data. The entire process is represented visually, and thereby documents the steps required. This makes it easy to adjust settings and fine-tune them. The following applications are supported and can, therefore, be used as a source for an SAP migration (these are the existing xFusion packs):

SAP Business ONE
Sage ACT!
SAP
SAP BW
Peachtree
QuickBooks
Microsoft Dynamics CRM

The following is a list of supported databases:

Oracle
ODBC
MySQL
OLE DB
SQL Server
PostgreSQL

Working with xFusion

The workflow in xFusion starts when you open an existing xFusion pack, or create a new one. In this example, an xFusion pack for business partner migration was opened. You can see the graphical representation of the migration process in the main window (in the following screenshot). Each icon in the graphical representation represents a data manipulation and formatting step. If you click on an icon, the complete path from the data source to the icon is highlighted. Therefore, you can select the previous steps to adjust the data. The core concept is that you do not directly change the input data, but define rules to convert data from the source format to the target format. If you open an xFusion pack for the SAP Business ONE application, the target is obviously SAP Business ONE. Therefore, you need to enter the privileges and database name so that the pack knows how to access the SAP system. In addition, the source parameters need to be provided. xFusion packs come with example Excel files.
You need to select the Excel files as the relevant source. However, it is important to note that you don't need to use the Excel files. You can use any database, or other source, as long as you adjust the data format using the step-by-step process to represent the same format as provided in Excel. In xFusion, you can use the sample files that come in Excel format. The connection parameters are presented once you double-click on any of the connections listed in the Connections section as follows: It is recommended to click on Test Connection to verify the proper parameters. If all of the connections are right, you can run a migration from the source to the target by right-clicking on an icon and selecting Run Export as shown here: The progress and export is visually documented. This way, you can verify the success. There is also a log file in the directory where the currently utilized xFusion pack resides, as shown in the following screenshot:

Tips and recommendations for your own project

Now you know all of the main migration tools and methods. If you want to select the right tool and method for your specific situation, you will see that even though there may be many templates and preconfigured packs out there, your own project potentially comes with some individual aspects. When organizing the data migration project, use the project task skeleton I provided. It is important to subdivide the required migration steps into a group of easy-to-understand steps, where data can be verified at each level. If it gets complicated, it is probably not the right way to move forward, and you need to re-think the methods and tools you are using.

Common issues

The most common issue I found in similar projects is that the data to be migrated is not entirely clean and consistent. Therefore, be sure to use a data verification procedure at each step. Don't just import data, only to find out later that the database is overloaded with data that is not right.
Recommendation Separate the master data and the transaction data. If you don't want to lose valuable transaction data, you can establish a reporting database which will save all of the historic transactions. For example, sales history can easily be migrated to an SQL database. You can then provide access to this information from the required SAP forms using queries or Crystal Reports. Case study During the course of evaluating the data import features available in the SAP Business ONE application, we have already learned how to import business partner information and item data. This can easily be done using the standard SAP data import features based on the Excel or text files. Using this method allows the lead, customer, and vendor data to be imported. Let's say that the Lemonade Stand enterprise has salespeople who travel to trade fairs and collect contact information. We can import the address information using the proven BP import method. But after this data is imported, what would the next step be? It would be a good idea to create and manage opportunities based on the address material. Basically, you already know how to use Excel to bring over address information. Let's enhance this concept to bring over opportunity information. We will use xFusion to import opportunity data into the SAP Business ONE application. The basis will be the xFusion pack for opportunities. Importing sales opportunities for the Lemonade Stand The xFusion pack is open, and you can see that it is a nice and clean example without major complexity. That's how it should be, as you see here:
Oracle Web RowSet - Part 2

Packt
27 Oct 2009
4 min read
Reading a Row

Next, we will read a row from the OracleWebRowSet object. Click on the Modify Web RowSet link in CreateRow.jsp. In the ModifyWebRowSet JSP, click on the Read Row link. The ReadRow.jsp JSP is displayed. In the ReadRow JSP, specify the Database Row to Read and click on Apply. The second row values are retrieved from the Web RowSet. In the ReadRow JSP, the readRow() method of the WebRowSetQuery.java application is invoked. The WebRowSetQuery object is retrieved from the session object.

WebRowSetQuery query = (webrowset.WebRowSetQuery) session.getAttribute("query");

The String[] values returned by the readRow() method are added to the ReadRow JSP fields. In the readRow() method, the OracleWebRowSet object cursor is moved to the row to be read.

webRowSet.absolute(rowRead);

Retrieve the row values with the getString() method and add them to a String[]. Return the String[] object.

String[] resultSet = new String[5];
resultSet[0] = webRowSet.getString(1);
resultSet[1] = webRowSet.getString(2);
resultSet[2] = webRowSet.getString(3);
resultSet[3] = webRowSet.getString(4);
resultSet[4] = webRowSet.getString(5);
return resultSet;

ReadRow.jsp JSP is listed as follows:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
  "http://www.w3.org/TR/html4/loose.dtd">
<%@ page contentType="text/html;charset=windows-1252"%>
<%@ page session="true"%>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=windows-1252">
<title>Read Row with Web RowSet</title>
</head>
<body>
<form>
<h3>Read Row with Web RowSet</h3>
<table>
<tr>
<td><a href="ModifyWebRowSet.jsp">Modify Web RowSet Page</a></td>
</tr>
</table>
</form>
<%
webrowset.WebRowSetQuery query = null;
query = (webrowset.WebRowSetQuery) session.getAttribute("query");
String rowRead = request.getParameter("rowRead");
String journalUpdate = request.getParameter("journalUpdate");
String publisherUpdate = request.getParameter("publisherUpdate");
String editionUpdate = request.getParameter("editionUpdate");
String titleUpdate = request.getParameter("titleUpdate");
String authorUpdate = request.getParameter("authorUpdate");
if ((rowRead != null)) {
  int row_Read = Integer.parseInt(rowRead);
  String[] resultSet = query.readRow(row_Read);
  journalUpdate = resultSet[0];
  publisherUpdate = resultSet[1];
  editionUpdate = resultSet[2];
  titleUpdate = resultSet[3];
  authorUpdate = resultSet[4];
}
%>
<form name="query" action="ReadRow.jsp" method="post">
<table>
<tr><td>Database Row to Read:</td></tr>
<tr><td><input name="rowRead" type="text" size="25" maxlength="50"/></td></tr>
<tr><td>Journal:</td></tr>
<tr><td><input name="journalUpdate" value='<%=journalUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td>Publisher:</td></tr>
<tr><td><input name="publisherUpdate" value='<%=publisherUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td>Edition:</td></tr>
<tr><td><input name="editionUpdate" value='<%=editionUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td>Title:</td></tr>
<tr><td><input name="titleUpdate" value='<%=titleUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td>Author:</td></tr>
<tr><td><input name="authorUpdate" value='<%=authorUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td><input class="Submit" type="submit" value="Apply"/></td></tr>
</table>
</form>
</body>
</html>
JBoss Tools Palette

Packt
26 Oct 2009
4 min read
By default, the JBoss Tools Palette is available in the Web Development perspective, which can be displayed from the Window menu by selecting the Open Perspective | Other option. In the following screenshot, you can see the default look of this palette: Let's dissect this palette to see how it makes our life easier!

JBoss Tools Palette Toolbar

Note that on the top right corner of the palette, we have a toolbar made of three buttons (as shown in the following screenshot). They are (from left to right):

Palette Editor
Show/Hide
Import

Each of these buttons accomplishes different tasks for offering a high level of flexibility and customizability. Next, we will focus our attention on each one of these buttons.

Palette Editor

Clicking on the Palette Editor icon will display the Palette Editor window (as shown in the following screenshot), which contains groups and subgroups of tags that are currently supported. Also, from this window you can create new groups, subgroups, icons, and of course, tags—as you will see in a few moments. As you can see, this window contains two panels: one for listing groups of tag libraries (left side) and another that displays details about the selected tag and allows us to modify the default values (extreme right). Modifying a tag is a very simple operation that can be done like this:

Select from the left panel the tag that you want to modify (for example, the <div> tag from the HTML | Block subgroup, as shown in the previous screenshot).
In the right panel, click on the row from the value column that corresponds to the property that you want to modify (the name column).
Make the desirable modification(s) and click the OK button for confirming it (them).

Creating a set of icons

The Icons node from the left panel allows you to create sets of icons and import new icons for your tags. To start, you have to right-click on this node and select the Create | Create Set option from the contextual menu (as shown in the following screenshot).
This action will open the Add Icon Set window where you have to specify a name for this new set. Once you're done with the naming, click on the Finish button (as shown in the following screenshot). For example, we have created a set named eHTMLi: Importing an icon You can import a new icon in any set of icons by right-clicking on the corresponding set and selecting the Create | Import Icon option from the contextual menu (as shown in the following screenshot): This action will open the Add Icon window, where you have to specify a name and a path for your icon, and then click on the Finish button (as shown in the following screenshot). Note that the image of the icon should be in GIF format. Creating a group of tag libraries As you can see, the JBoss Tools Palette has a consistent default set of groups of tag libraries, like HTML, JSF, JSTL, Struts, XHTML, etc. If these groups are insufficient, then you can create new ones by right-clicking on the Palette node and selecting the Create | Create Group option from the contextual menu (as shown in the following screenshot). This action will open the Create Group window, where you have to specify a name for the new group, and then click on Finish. For example, we have created a group named mygroup: Note that you can delete (only groups created by the user) or edit groups (any group) by selecting the Delete or Edit options from the contextual menu that appears when you right-click on the chosen group. Creating a tag library Now that we have created a group, it's time to create a library (or a subgroup). To do this, you have to right-click on the new group and select the Create Group option from the contextual menu (as shown in the following screenshot). This action will open the Add Palette Group window, where you have to specify a name and an icon for this library, and then click on the Finish button (as shown in the following screenshot). 
As an example, we have created a library named eHTML with an icon that we had imported in the Importing an icon section discussed earlier in this article: Note that you can delete a tag library (only tag libraries created by the user) by selecting the Delete option from the contextual menu that appears when you right-click on the chosen library.
New SOA Capabilities in BizTalk Server 2009: WCF SQL Server Adapter

Packt
26 Oct 2009
3 min read
Do not go where the path may lead; go instead where there is no path and leave a trail. - Ralph Waldo Emerson

Many of the patterns and capabilities shown in this article are compatible with the last few versions of the BizTalk Server product. So what's new in BizTalk Server 2009? BizTalk Server 2009 is the sixth formal release of the BizTalk Server product. This upcoming release has a heavy focus on platform modernization through new support for Windows Server 2008, Visual Studio .NET 2008, SQL Server 2008, and the .NET Framework 3.5. This will surely help developers who have already moved to these platforms in their day-to-day activities but have been forced to maintain separate environments solely for BizTalk development efforts. Let's get started.

What is the WCF SQL Adapter?

The BizTalk Adapter Pack 2.0 now contains five system and data adapters including SAP, Siebel, Oracle databases, Oracle applications, and SQL Server. What are these adapters, and how are they different from the adapters available for previous versions of BizTalk? Up until recently, BizTalk adapters were built using a commonly defined BizTalk Adapter Framework. This framework prescribed interfaces and APIs for adapter developers in order to elicit a common look and feel for the users of the adapters. Moving forward, adapter developers are encouraged by Microsoft to use the new WCF LOB Adapter SDK. As you can guess from the name, this new adapter framework, which can be considered an evolution of the BizTalk Adapter Framework, is based on WCF technologies. All of the adapters in the BizTalk Adapter Pack 2.0 are built upon the WCF LOB Adapter SDK. What this means is that all of the adapters are built as reusable, metadata-rich components that are surfaced to users as WCF bindings. So much like you have a wsHttp or netTcp binding, now you have a sqlBinding or sapBinding.
As you would expect from a WCF binding, there is a rich set of configuration attributes for these adapters and they are no longer tightly coupled to BizTalk itself. Microsoft has made connection a commodity, and no longer do organizations have to spend tens of thousands of dollars to connect to line of business systems like SAP through expensive, BizTalk-only adapters. This latest version of the BizTalk Adapter Pack now includes a SQL Server adapter, which replaces the legacy BizTalk-only SQL Server adapter. What do we get from this SQL Server adapter that makes it so much better than the old one?

Feature (Classic SQL Adapter / WCF SQL Adapter):

Execute create-read-update-delete statements on tables and views; execute stored procedures and generic T-SQL statements: Partial (send operations only support stored procedures and updategrams) / Yes
Database polling via FOR XML: Yes / Yes
Database polling via traditional tabular results: No / Yes
Proactive database push via SQL Query Notification: No / Yes
Expansive adapter configuration which impacts connection management and transaction behavior: No / Yes
Support for composite transactions which allow aggregation of operations across tables or procedures into a single atomic transaction: No / Yes
Rich metadata browsing and retrieval for finding and selecting database operations: No / Yes
Support for the latest data types (e.g. XML) and SQL Server 2008 platform: No / Yes
Reusable outside of BizTalk applications by WCF or basic HTTP clients: No / Yes
Adapter extension and configuration through out of the box WCF components or custom WCF behaviors: No / Yes
Dynamic WSDL generation which always reflects current state of the system instead of fixed contract which always requires explicit updates: No / Yes
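Because the adapter surfaces as an ordinary WCF binding, a non-BizTalk client can reach it through standard WCF configuration. The fragment below is an illustrative sketch only: the contract name and database are made up, and the exact endpoint shape depends on the metadata you generate from your own database.

```xml
<system.serviceModel>
  <client>
    <!-- mssql:// addresses identify the SQL Server host, instance, and database -->
    <endpoint address="mssql://localhost//AdventureWorks"
              binding="sqlBinding"
              contract="TypedProcedures_dbo" />
  </client>
</system.serviceModel>
```

The point of the sketch is the shape, not the specifics: the database connection is expressed as a WCF endpoint address plus the sqlBinding, rather than as BizTalk-only adapter configuration.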
RSS Web Widget

Packt
24 Oct 2009
8 min read
What is an RSS Feed?

First of all, let us understand what a web feed is. Basically, it is a data format that provides frequently updated content to users. Content distributors syndicate the web feed, allowing users to subscribe by using a feed aggregator. RSS feeds contain data in an XML format. RSS is the term used for describing Really Simple Syndication, RDF Site Summary, or Rich Site Summary, depending upon the different versions. RDF (Resource Description Framework), a family of W3C specifications, is a data model format for modelling various information, such as title, author, modified date, and content, through a variety of syntax formats. RDF is basically designed to be read by computers for exchanging information. Since RSS is an XML format for data representation, different authorities defined different formats of RSS across different versions like 0.90, 0.91, 0.92, 0.93, 0.94, 1.0 and 2.0. The following shows when, and by whom, the different RSS versions were proposed.

RSS 0.90 (1999): Netscape introduced RSS 0.90.
RSS 0.91 (1999): Netscape proposed the simpler format of RSS 0.91. In the same year, UserLand Software proposed its own RSS 0.91 specification.
RSS 1.0 (2000): O'Reilly released RSS 1.0.
RSS 2.0 (2000): UserLand Software proposed the further RSS specification in this version, and it is the most popular RSS format being used these days.

Meanwhile, Harvard Law School is responsible for the further development of the RSS specification. There had been a competition-like scenario for developing the different versions of RSS between UserLand, Netscape, and O'Reilly before the official RSS 2.0 specification was released. For a detailed history of these different versions of RSS, you can check http://www.rss-specifications.com/history-rss.htm. The current version of RSS is 2.0, and it is the common format for publishing RSS feeds these days. Like RSS, there is another format that uses the XML language for publishing web feeds.
It is known as ATOM feed, and is most commonly used in wiki and blogging software. Please refer to http://en.wikipedia.org/wiki/ATOM for details. The following is the RSS icon that denotes links with RSS feeds. If you're using Mozilla's Firefox web browser, then you're likely to see the above image in the address bar of the browser for subscribing to an RSS feed link available in any given page. Web browsers like Firefox and Safari discover available RSS feeds in web pages by looking at the Internet media type application/rss+xml. The following tag specifies that this web page is linked with the RSS feed URL http://www.example.com/rss.xml:

<link href="http://www.example.com/rss.xml" rel="alternate"
      type="application/rss+xml" title="Sitewide RSS Feed" />

Example of RSS 2.0 format

First of all, let's look at a simple example of the RSS format.

<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0">
<channel>
  <title>Title of the feed</title>
  <link>http://www.examples.com</link>
  <description>Description of feed</description>
  <item>
    <title>News1 heading</title>
    <link>http://www.example.com/news-1</link>
    <description>detail of news1</description>
  </item>
  <item>
    <title>News2 heading</title>
    <link>http://www.example.com/news-2</link>
    <description>detail of news2</description>
  </item>
</channel>
</rss>

The first line is the XML declaration that indicates its version is 1.0. The character encoding is UTF-8. UTF-8 supports many European and Asian characters, so it is widely used as the character encoding on the web. The next line is the rss declaration, which declares that it is an RSS document of version 2.0. The next line contains the <channel> element, which is used for describing the details of the RSS feed. The <channel> element must have three required elements: <title>, <link>, and <description>. The title tag contains the title of that particular feed.
Similarly, the link element contains the hyperlink of the channel, and the description tag describes or carries the main information of the channel. This tag usually contains the information in detail. Furthermore, each <channel> element may have one or more <item> elements, which contain the stories of the feed. Each <item> element must have the three elements <title>, <link>, and <description>, whose use is similar to those of the channel elements, but they describe the details of each individual item. Finally, the last two lines are the closing tags for the <channel> and <rss> elements.

Creating RSS Web Widget

The RSS widget we're going to build is a simple one which displays the headlines from the RSS feed, along with the title of the RSS feed. This is another widget which uses some JavaScript, PHP, CSS, and HTML. The content of the widget is displayed within an Iframe, so when you set up the widget, you have to adjust the height and width. To parse the RSS feed in XML format, I've used the popular PHP RSS parser, Magpie RSS. The homepage of Magpie RSS is located at http://magpierss.sourceforge.net/.

Introduction to Magpie RSS

Before writing the code, let's understand what the benefits of using the Magpie framework are, and how it works. It is easy to use. While other RSS parsers are normally limited to parsing certain RSS versions, this parser parses most RSS formats, i.e. RSS 0.90 to 2.0, as well as ATOM feeds. Magpie RSS supports an integrated object cache, which means that a second request to parse the same RSS feed is fast; it will be fetched from the cache. Now, let's quickly understand how Magpie RSS is used to parse the RSS feed. I'm going to pick the example from their homepage for demonstration.
require_once 'rss_fetch.inc';
$url = 'http://www.getacoder.com/rss.xml';
$rss = fetch_rss($url);
echo "Site: ", $rss->channel['title'], "<br>";
foreach ($rss->items as $item) {
    $title = $item['title'];
    $url = $item['link'];
    echo "<a href=$url>$title</a><br>";
}

If you're more interested in trying other PHP RSS parsers, then you might like to check out SimplePie RSS Parser (http://simplepie.org/) and LastRSS (http://lastrss.oslab.net/). You can see in the first line how the rss_fetch.inc file is included in the working file. After that, the URL of the RSS feed from getacoder.com is assigned to the $url variable. The fetch_rss() function of Magpie is used for fetching data and converting this data into RSS objects. In the next line, the title of the RSS feed is displayed using the code $rss->channel['title']. The other lines are used for displaying each of the RSS feed's items. Each feed item is stored within the $rss->items array, and the foreach() loop is used to loop through each element of the array.

Writing Code for our RSS Widget

As I've already discussed, this widget is going to use an Iframe for displaying the content of the widget, so let's look at the JavaScript code for embedding the Iframe within the HTML code.

var widget_string = '<iframe src="http://www.yourserver.com/rsswidget/rss_parse_handler.php?rss_url=';
widget_string += encodeURIComponent(rss_widget_url);
widget_string += '&maxlinks=' + rss_widget_max_links;
widget_string += '" height="' + rss_widget_height + '" width="' + rss_widget_width + '"';
widget_string += ' style="border:1px solid #FF0000;"';
widget_string += ' scrolling="no" frameborder="0"></iframe>';
document.write(widget_string);

In the above code, the widget_string variable contains the string for displaying the widget. The source of the Iframe is assigned to rss_parse_handler.php. The URL of the RSS feed, and the number of headlines from the feed, are passed to rss_parse_handler.php via the GET method, using the rss_url and maxlinks parameters respectively.
The values of these parameters are assigned from the JavaScript variables rss_widget_url and rss_widget_max_links. The width and height of the Iframe are also assigned from JavaScript variables, namely rss_widget_width and rss_widget_height. The red border on the widget is displayed by assigning 1px solid #FF0000 to the border attribute using inline CSS styling. Since inline CSS is used for the border, the frameborder property is set to 0 (i.e. the iframe's own border is disabled). Displaying borders with CSS has some benefits over employing the frameborder property. With CSS code such as 1px dashed #FF0000 (border-width border-style border-color) you can display a dashed border (you can't using frameborder), and you can use the border-right, border-left, border-top, and border-bottom attributes of CSS to display borders on specific sides of the object. The scrolling property is set to no here, which means that no scroll bar will be displayed in the widget if the widget content overflows. If you want to show a scroll bar, then you can set this property to yes. The values of JavaScript variables like rss_widget_url and rss_widget_max_links come from the page where we'll be using this widget. You'll see how the values of these variables are assigned in the section at the end, where we look at how to use this RSS widget.
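To make that concrete, here is a hypothetical embed configuration as it might appear on a page using the widget. The variable names come from the widget code above; the feed URL, dimensions, and server location are made-up example values, and the sketch prints the resulting markup rather than writing it into a page:

```javascript
// Hypothetical embed configuration for the RSS widget. A page author would
// define these four variables before including the widget script; the names
// match the widget code above, the values here are made-up examples.
var rss_widget_url = 'http://www.packtpub.com/rss.xml'; // feed to display
var rss_widget_max_links = 5;   // number of headlines to show
var rss_widget_width = 300;     // iframe width in pixels
var rss_widget_height = 250;    // iframe height in pixels

// The widget script then builds the iframe markup exactly as shown above
// (the real widget passes this string to document.write):
var widget_string = '<iframe src="http://www.yourserver.com/rsswidget/rss_parse_handler.php?rss_url=';
widget_string += encodeURIComponent(rss_widget_url);
widget_string += '&maxlinks=' + rss_widget_max_links;
widget_string += '" height="' + rss_widget_height + '" width="' + rss_widget_width + '"';
widget_string += ' style="border:1px solid #FF0000;" scrolling="no" frameborder="0"></iframe>';

console.log(widget_string);
```

Note how encodeURIComponent() escapes the feed URL, so it can travel safely as the rss_url query parameter of the Iframe source.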
Packt
24 Oct 2009
5 min read

Working with Simple Associations using CakePHP

Database relationships are hard to maintain even for a mid-sized PHP/MySQL application, particularly when multiple levels of relationships are involved, because complicated SQL queries are needed. CakePHP offers a simple yet powerful feature called 'object relational mapping', or ORM, to handle database relationships with ease. In CakePHP, relations between the database tables are defined through associations: a way to represent a database table relationship inside CakePHP. Once the associations are defined in models according to the table relationships, we are ready to use their wonderful functionality. Using CakePHP's ORM, we can save, retrieve, and delete related data into and from different database tables with simplicity; there is no need to write complex SQL queries with multiple JOINs anymore! In this article by Ahsanul Bari and Anupom Syam, we will have a deep look at various types of associations and their uses. In particular, the purpose of this article is to learn:

How to figure out association types from database table relations
How to define different types of associations in CakePHP models
How to utilize the associations for fetching related model data
How to relate associated data while saving

There are basically three types of relationship that can take place between database tables:

one-to-one
one-to-many
many-to-many

The first two of them are simple, as they don't require any additional table to relate the tables in the relationship. In this article, we will first see how to define associations in models for one-to-one and one-to-many relations. Then we will look at how to retrieve and delete related data from, and save data into, database tables using model associations for these simple associations.
Defining One-To-Many Relationship in Models

To see how to define a one-to-many relationship in models, we will think of a situation where we need to store information about some authors and their books, where the relation between authors and books is one-to-many. This means an author can have multiple books, but a book belongs to only one author (which is rather absurd, as in a real-life scenario a book can also have multiple authors). We are now going to define associations in models for this one-to-many relation, so that our models recognize their relations and can deal with them accordingly.

Time for Action: Defining One-To-Many Relation

Create a new database and put a fresh copy of CakePHP inside the web root. Name the database whatever you like, but rename the cake folder to relationship. Configure the database in the new Cake installation.

Execute the following SQL statements in the database to create a table named authors:

CREATE TABLE `authors` (
    `id` int(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
    `name` varchar(127) NOT NULL,
    `email` varchar(127) NOT NULL,
    `website` varchar(127) NOT NULL
);

Create a books table in our database by executing the following SQL commands:

CREATE TABLE `books` (
    `id` int(11) NOT NULL AUTO_INCREMENT PRIMARY KEY,
    `isbn` varchar(13) NOT NULL,
    `title` varchar(64) NOT NULL,
    `description` text NOT NULL,
    `author_id` int(11) NOT NULL
);

Create the Author model using the following code (/app/models/author.php):

<?php
class Author extends AppModel {
    var $name = 'Author';
    var $hasMany = 'Book';
}
?>

Use the following code to create the Book model (/app/models/book.php):

<?php
class Book extends AppModel {
    var $name = 'Book';
    var $belongsTo = 'Author';
}
?>

Create a controller for the Author model with the following code (/app/controllers/authors_controller.php):

<?php
class AuthorsController extends AppController {
    var $name = 'Authors';
    var $scaffold;
}
?>

Use the following code to create a controller for the Book model
(/app/controllers/books_controller.php):

<?php
class BooksController extends AppController {
    var $name = 'Books';
    var $scaffold;
}
?>

Now, go to the following URLs and add some test data: http://localhost/relationship/authors/ and http://localhost/relationship/books/

What Just Happened?

We have created two tables, authors and books, for storing author and book information. A foreign key named author_id is added to the books table to establish the one-to-many relation between authors and books. Through this foreign key, an author is related to multiple books, and a book is related to one single author. By Cake convention, the name of a foreign key should be the underscored, singular name of the target model, suffixed with _id.

Once the database tables are created and relations are established between them, we can define associations in models. In both of the model classes, Author and Book, we defined associations to represent the one-to-many relationship between the corresponding two tables. CakePHP provides two types of association, hasMany and belongsTo, to define one-to-many relations in models. These associations are very appropriately named:

As an author 'has many' books, the Author model should have a hasMany association to represent its relation with the Book model.
As a book 'belongs to' one author, the Book model should have a belongsTo association to denote its relation with the Author model.

In the Author model, an association attribute $hasMany is defined with the value Book to inform the model that every author can be related to many books. We also added a $belongsTo attribute in the Book model and set its value to Author to let the Book model know that every book is related to only one author. After defining the associations, two controllers were created for both of these models with scaffolding, to see how the associations are working.
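As a side note, the foreign-key naming convention described above is mechanical enough to sketch as a tiny helper. This is purely illustrative (written in JavaScript rather than PHP, and not part of CakePHP, which derives these names internally): it underscores a CamelCase model name, lowercases it, and appends _id.

```javascript
// Illustrative only: derive the conventional CakePHP foreign-key column
// name from a model name. Model names in CakePHP are already singular,
// so the helper only needs to underscore CamelCase and append "_id".
function foreignKeyFor(modelName) {
  return modelName
    .replace(/([a-z0-9])([A-Z])/g, '$1_$2') // CamelCase -> Camel_Case
    .toLowerCase() + '_id';
}

console.log(foreignKeyFor('Author'));     // → author_id
console.log(foreignKeyFor('BookAuthor')); // → book_author_id
```

Following this convention exactly is what lets CakePHP find the related rows without any extra configuration in the models.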

Packt
23 Oct 2009
4 min read

10 Minute Guide to the Enterprise Service Bus and the NetBeans SOA Pack

Introduction

When you are integrating different systems, it can be very easy to use your vendor's APIs and program directly against them. Using that approach, developers can easily integrate applications. Supporting those applications, however, can become problematic. If we have only a few systems integrated together in this way, everything is fine, but the more systems we integrate, the more integration code we have, and it rapidly becomes infeasible to support this point-to-point integration.

To overcome this problem, the integration hub was developed. In this scenario, developers write against the API of the integration vendor and only have to learn one API. This is a much better approach than the point-to-point integration method; however, it still has its limitations. There is still a proprietary API to learn (albeit only one this time), and if the integration hub goes down for any reason, the entire enterprise can become unavailable.

The Enterprise Service Bus (ESB) overcomes these problems by providing a scalable, standards-based integration architecture. The NetBeans SOA pack includes a copy of OpenESB, which follows the architecture promoted by the Java Business Integration specification, JSR 208.

Workings of an ESB

At the heart of the ESB is the Normalized Message Router (NMR): a pluggable framework that allows Java Business Integration (JBI) components to be plugged into it as required. The NMR is responsible for passing messages between all of the different JBI components that are plugged into it. The two main types of JBI component that are plugged into the NMR are Binding Components and Service Engines. Binding Components are responsible for handling all protocol-specific transport such as HTTP, SOAP, JMS, file system access, etc. Service Engines, on the other hand, execute business logic as BPEL processes, SQL statements, invocations of external Java EE web services, and so on.
There is a clear separation between Binding Components and Service Engines, with protocol-specific transport being handled by the former and business logic being performed by the latter. This architecture promotes loose coupling, in that Service Engines do not communicate directly with each other. All communication between different JBI components is performed through Binding Components by use of normalized messages, as shown in the sequence chart below. In the case of OpenESB, all of these normalized messages are based upon WSDL. If, for example, a BPEL process needs to invoke a web service or send an email, it does not need to know about SOAP or SMTP; that is the responsibility of the Binding Components. For one Service Engine to invoke another Service Engine, all that is required is for a WSDL-based message to be constructed, which can then be routed via the NMR and Binding Components to the destination Service Engine. OpenESB provides many different Binding Components and Service Engines, enabling integration with many varied systems. So, we can see that OpenESB provides us with a standards-based architecture that promotes loose coupling between components. NetBeans 6 provides tight integration with OpenESB, allowing developers to take full advantage of its facilities.

Integrating the NetBeans 6 IDE with OpenESB

Integration with NetBeans comes in two parts. First, NetBeans allows the different JBI components to be managed from within the IDE. Binding Components and Service Engines can be installed into OpenESB from within NetBeans, and from then on the full lifecycle of the components (start, stop, restart, uninstall) can be controlled directly from within the IDE.

Secondly, and more interestingly, the NetBeans IDE provides full editing support for developing Composite Applications: applications that bring together business logic and data from different sources. One of the main features of Composite Applications is probably the BPEL editor.
This allows BPEL processes to be built up graphically, allowing interaction with different data sources via different partner links, which may be web services, other BPEL processes, or SQL statements. Once a BPEL process or composite application has been developed, the NetBeans SOA pack provides tools to allow different bindings to be added to the application, depending on the Binding Components installed into OpenESB. So, for example, a file binding could be added to a project that polls the file system periodically, looking for input messages to start a BPEL process, the output of which could be saved into a different file or sent directly to an FTP site.

In addition to support for developing Composite Applications, the NetBeans SOA pack provides support for some features many Java developers will find useful, namely XML and WSDL editing and validation. XML and WSDL files can be edited within the IDE either as raw text or via graphical editors. If changes are made in the raw text, the graphical editors update accordingly, and vice versa.
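The loose coupling the NMR provides, where components never call each other directly but only exchange normalized messages through the router, can be pictured with a toy sketch. This is purely illustrative: plain JavaScript objects stand in for WSDL-based normalized messages, and nothing here resembles the real JBI APIs.

```javascript
// Toy model of a Normalized Message Router: components register under a
// service name and exchange messages only through the router, never directly.
function Router() {
  this.endpoints = {};
}
Router.prototype.register = function (serviceName, handler) {
  this.endpoints[serviceName] = handler;
};
Router.prototype.route = function (serviceName, normalizedMessage) {
  // In OpenESB the normalized message would be WSDL-based;
  // here it is just a plain object.
  return this.endpoints[serviceName](normalizedMessage);
};

var nmr = new Router();

// A "service engine" holding business logic, registered behind a name:
nmr.register('greetingService', function (msg) {
  return { payload: 'Hello, ' + msg.payload + '!' };
});

// A "binding component" would normalize an incoming SOAP/HTTP request
// into a message and hand it to the router:
var reply = nmr.route('greetingService', { payload: 'world' });
console.log(reply.payload); // → Hello, world!
```

The point of the pattern is that the service engine and its callers share only a service name and a message shape, never a direct reference, which is what lets components be installed, stopped, and replaced independently.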