
How-To Tutorials - Programming

1083 Articles

Working with Complex Associations using CakePHP

Packt
28 Oct 2009
6 min read
Defining a Many-To-Many Relationship in Models

In the previous article in this series, Working with Simple Associations using CakePHP, we assumed that a book can have only one author. In a real-life scenario, however, a book may have more than one author, in which case the relation between authors and books is many-to-many. We are now going to see how to define associations for a many-to-many relation. We will modify the code-base we were working on in the previous article to set up the associations needed to represent a many-to-many relation.

Time for Action: Defining a Many-To-Many Relation

1. Empty the database tables:

TRUNCATE TABLE `authors`;
TRUNCATE TABLE `books`;

2. Remove the author_id field from the books table:

ALTER TABLE `books` DROP `author_id`;

3. Create a new table, authors_books:

CREATE TABLE `authors_books` (
  `author_id` INT NOT NULL,
  `book_id` INT NOT NULL
);

4. Modify the Author (/app/models/author.php) model:

<?php
class Author extends AppModel {
    var $name = 'Author';
    var $hasAndBelongsToMany = 'Book';
}
?>

5. Modify the Book (/app/models/book.php) model:

<?php
class Book extends AppModel {
    var $name = 'Book';
    var $hasAndBelongsToMany = 'Author';
}
?>

6. Modify the AuthorsController (/app/controllers/authors_controller.php):

<?php
class AuthorsController extends AppController {
    var $name = 'Authors';
    var $scaffold;
}
?>

7. Modify the BooksController (/app/controllers/books_controller.php):

<?php
class BooksController extends AppController {
    var $name = 'Books';
    var $scaffold;
}
?>

8. Now, visit the following URLs and add some test data into the system: http://localhost/relationship/authors/ and http://localhost/relationship/books/

What Just Happened?

We first emptied the database tables and then dropped the field author_id from the books table. Then we added a new join table, authors_books, that will be used to establish a many-to-many relation between authors and books.

In a many-to-many relation, one record of either table can be related to multiple records of the other table. To establish this link, a join table is used: it contains two fields that hold the primary keys of the two records in relation. CakePHP has certain conventions for naming a join table: join tables should be named after the tables in relation, in alphabetical order, with underscores in between. The join table between the authors and books tables should therefore be named authors_books, not books_authors. Also by Cake convention, the foreign keys used in the join table must be the underscored, singular names of the models in relation, suffixed with _id.

After creating the join table, we defined associations in the models, so that our models also know about the new relationship they have. We added hasAndBelongsToMany (HABTM) associations in both of the models. HABTM is a special type of association used to define a many-to-many relation in models; both models have HABTM associations so that the relationship is defined from both ends. After defining the associations in the models, we created two controllers for these two models and enabled scaffolding in them to see the association working. We could also use an array to set up the HABTM association in the models.
The following code segment shows how to use an array for setting up an HABTM association between authors and books in the Author model:

var $hasAndBelongsToMany = array(
    'Book' => array(
        'className' => 'Book',
        'joinTable' => 'authors_books',
        'foreignKey' => 'author_id',
        'associationForeignKey' => 'book_id'
    )
);

As with simple relationships, we can override default association characteristics by adding or modifying key/value pairs in the associative array. The foreignKey key holds the name of the foreign key found in the current model; the default is the underscored, singular name of the current model suffixed with _id. The associationForeignKey key holds the foreign-key name found in the corresponding table of the other model; the default is the underscored, singular name of the associated model suffixed with _id. We can also use conditions, fields, and order key/value pairs to customize the relationship in more detail.

Retrieving Related Model Data in a Many-To-Many Relation

As with one-to-one and one-to-many relations, once the associations are defined, CakePHP will automatically fetch the related data in a many-to-many relation.

Time for Action: Retrieving Related Model Data

1. Remove scaffolding from both controllers: AuthorsController (/app/controllers/authors_controller.php) and BooksController (/app/controllers/books_controller.php).

2. Add an index() action inside the AuthorsController (/app/controllers/authors_controller.php), like the following:

<?php
class AuthorsController extends AppController {
    var $name = 'Authors';

    function index() {
        $this->Author->recursive = 1;
        $authors = $this->Author->find('all');
        $this->set('authors', $authors);
    }
}
?>

3. Create a view file for the /authors/index action (/app/views/authors/index.ctp):

<?php foreach($authors as $author): ?>
<h2><?php echo $author['Author']['name'] ?></h2>
<hr />
<h3>Book(s):</h3>
<ul>
<?php foreach($author['Book'] as $book): ?>
    <li><?php echo $book['title'] ?></li>
<?php endforeach; ?>
</ul>
<?php endforeach; ?>

4. Write the following code inside the BooksController (/app/controllers/books_controller.php):

<?php
class BooksController extends AppController {
    var $name = 'Books';

    function index() {
        $this->Book->recursive = 1;
        $books = $this->Book->find('all');
        $this->set('books', $books);
    }
}
?>

5. Create a view file for the /books/index action (/app/views/books/index.ctp):

<?php foreach($books as $book): ?>
<h2><?php echo $book['Book']['title'] ?></h2>
<hr />
<h3>Author(s):</h3>
<ul>
<?php foreach($book['Author'] as $author): ?>
    <li><?php echo $author['name'] ?></li>
<?php endforeach; ?>
</ul>
<?php endforeach; ?>

6. Now, visit the following URLs: http://localhost/relationship/authors/ and http://localhost/relationship/books/

What Just Happened?

In both controllers, we first set the value of the $recursive attribute to 1 and then called the respective model's find('all') function. These find('all') operations therefore return all associated model data that is directly related to the respective models. The returned results are then passed to the corresponding view files. In the view files, we loop through the results and print out the models and their related data.

In the BooksController, the data returned from find('all') is stored in a variable $books. This find('all') returns an array of books, and every element of that array contains information about one book and its related authors:

Array
(
    [0] => Array
        (
            [Book] => Array
                (
                    [id] => 1
                    [title] => Book Title
                    ...
                )
            [Author] => Array
                (
                    [0] => Array
                        (
                            [id] => 1
                            [name] => Author Name
                            ...
                        )
                    [1] => Array
                        (
                            [id] => 3
                            ...
                        )
                )
        )
    ...
)

The same holds for the Author model: the returned data is an array of authors. Every element of that array contains two arrays, one holding the author information and the other holding an array of books related to that author. These arrays are very much like what we got from a find('all') call in the case of the hasMany association.
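For comparison, here is a minimal sketch (not printed in the original article) of what one element of the $authors array returned in the AuthorsController might look like; the field values are illustrative only:

Array
(
    [Author] => Array
        (
            [id] => 1
            [name] => Author Name
        )
    [Book] => Array
        (
            [0] => Array
                (
                    [id] => 1
                    [title] => Book Title
                )
            [1] => Array
                (
                    [id] => 2
                    [title] => Another Title
                )
        )
)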


Working with XML in Flex 3 and Java - Part 2

Packt
28 Oct 2009
7 min read
Loading external XML documents

You can use the URLLoader class to load external data from a URL. The URLLoader class downloads data from a URL as text or binary data. In this section, we will see how to use the URLLoader class to load external XML data into your application. You create a URLLoader instance, register for its complete event to handle the loaded data, and call the load() method, passing a URLRequest as a parameter. The following code snippet shows how this works:

private var xmlUrl:String = "http://www.foo.com/rssdata.xml";
private var request:URLRequest = new URLRequest(xmlUrl);
private var loader:URLLoader = new URLLoader();
private var rssData:XML;

loader.addEventListener(Event.COMPLETE, completeHandler);
loader.load(request);

private function completeHandler(event:Event):void {
    rssData = XML(loader.data);
    trace(rssData);
}

Let's look at a quick, complete sample that loads RSS data from the Internet:

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" creationComplete="loadData();">
    <mx:Script>
        <![CDATA[
            import mx.collections.XMLListCollection;

            private var xmlUrl:String = "http://sessions.adobe.com/360FlexSJ2008/feed.xml";
            private var request:URLRequest = new URLRequest(xmlUrl);
            private var loader:URLLoader = new URLLoader();

            [Bindable]
            private var rssData:XML;

            private function loadData():void {
                loader.addEventListener(Event.COMPLETE, completeHandler);
                loader.load(request);
            }

            private function completeHandler(event:Event):void {
                rssData = new XML(loader.data);
            }
        ]]>
    </mx:Script>
    <mx:Panel title="RSS Feed Reader" width="100%" height="100%">
        <mx:DataGrid id="dgGrid" dataProvider="{rssData.channel.item}" height="100%" width="100%">
            <mx:columns>
                <mx:DataGridColumn headerText="Title" dataField="title"/>
                <mx:DataGridColumn headerText="Link" dataField="link"/>
                <mx:DataGridColumn headerText="pubDate" dataField="pubDate"/>
                <mx:DataGridColumn headerText="Description" dataField="description"/>
            </mx:columns>
        </mx:DataGrid>
        <mx:TextArea width="100%" height="80" text="{dgGrid.selectedItem.description}"/>
    </mx:Panel>
</mx:Application>

In the code above, we load an RSS feed from an external URL and display it in a DataGrid by using data binding.

An example: Building a book explorer

In this section, we will build something more complicated and interesting by using many features, including custom components, events, data binding, E4X, and loading external XML data. We will build a sample book explorer, which will load a book catalog from an external XML file and allow users to explore and view details of books. We will also build a simple shopping cart component, which will list the books a user adds to the cart by clicking on the Add to cart button.

Create a new Flex project using Flex Builder. Once the project is created, create an assets/images folder under its src folder. This folder will be used to store images used in this application. Now start creating the following source files in the source folder. Let's start by creating a simple book catalog XML file, bookscatalog.xml:

<books>
    <book ISBN="184719530X">
        <title>Building Websites with Joomla! 1.5</title>
        <author>
            <lastName>Hagen</lastName>
            <firstName>Graf</firstName>
        </author>
        <image>../assets/images/184719530X.png</image>
        <pageCount>363</pageCount>
        <price>Rs.1,247.40</price>
        <description>The best-selling Joomla! tutorial guide updated for the latest 1.5 release</description>
    </book>
    <book ISBN="1847196160">
        <title>Drupal 6 JavaScript and jQuery</title>
        <author>
            <lastName>Matt</lastName>
            <firstName>Butcher</firstName>
        </author>
        <image>../assets/images/1847196160.png</image>
        <pageCount>250</pageCount>
        <price>Rs.1,108.80</price>
        <description>Putting jQuery, AJAX, and JavaScript effects into your Drupal 6 modules and themes</description>
    </book>
    <book ISBN="184719494X">
        <title>Expert Python Programming</title>
        <author>
            <lastName>Tarek</lastName>
            <firstName>Ziadé</firstName>
        </author>
        <image>../assets/images/184719494X.png</image>
        <pageCount>350</pageCount>
        <price>Rs.1,247.4</price>
        <description>Best practices for designing, coding, and distributing your Python software</description>
    </book>
    <book ISBN="1847194885">
        <title>Joomla! Web Security</title>
        <author>
            <lastName>Tom</lastName>
            <firstName>Canavan</firstName>
        </author>
        <image>../assets/images/1847194885.png</image>
        <pageCount>248</pageCount>
        <price>Rs.1,108.80</price>
        <description>Secure your Joomla! website from common security threats with this easy-to-use guide</description>
    </book>
</books>

The above XML file contains the details of individual books. You can also deploy this file on your web server and specify its URL in the URLRequest while loading it.

Next, we will create a custom event that we will dispatch from our custom component. Create a package called events under your src folder in Flex Builder, and place this file, AddToCartEvent.as, in it:

package events {
    import flash.events.Event;

    public class AddToCartEvent extends Event {
        public static const ADD_TO_CART:String = "addToCart";

        public var book:Object;

        public function AddToCartEvent(type:String, bubbles:Boolean=false, cancelable:Boolean=false) {
            super(type, bubbles, cancelable);
        }
    }
}

This is a simple custom event created by inheriting from the flash.events.Event class. The class defines the ADD_TO_CART string constant, which will be used as the name of the event in the addEventListener() method; you will see this in the BooksExplorer.mxml code. We have also defined an object to hold a reference to the book that the user adds to the shopping cart. In short, this object will hold the XML node of the selected book.

Next, we will create the MXML custom component called BookDetailItemRenderer.mxml. Create a package called components under your src folder in Flex Builder, place this file in it, and copy the following code into it:

<?xml version="1.0" encoding="utf-8"?>
<mx:HBox xmlns:mx="http://www.adobe.com/2006/mxml" cornerRadius="8" paddingBottom="2" paddingLeft="2" paddingRight="2" paddingTop="2">
    <mx:Metadata>
        [Event(name="addToCart", type="flash.events.Event")]
    </mx:Metadata>
    <mx:Script>
        <![CDATA[
            import events.AddToCartEvent;
            import mx.controls.Alert;

            [Bindable]
            [Embed(source="../assets/images/cart.gif")]
            public var cartImage:Class;

            private function addToCardEventDispatcher():void {
                var addToCartEvent:AddToCartEvent = new AddToCartEvent("addToCart", true, true);
                addToCartEvent.book = data;
                dispatchEvent(addToCartEvent);
            }
        ]]>
    </mx:Script>
    <mx:HBox width="100%" verticalAlign="middle" paddingBottom="2" paddingLeft="2" paddingRight="2" paddingTop="2" height="100%" borderStyle="solid" borderThickness="2" borderColor="#6E6B6B" cornerRadius="4">
        <mx:Image id="bookImage" source="{data.image}" height="109" width="78" maintainAspectRatio="false"/>
        <mx:VBox height="100%" width="100%" verticalGap="2" paddingBottom="0" paddingLeft="0" paddingRight="0" paddingTop="0" verticalAlign="middle">
            <mx:Label id="bookTitle" text="{data.title}" fontSize="12" fontWeight="bold"/>
            <mx:Label id="bookAuthor" text="By: {data.author.lastName}, {data.author.firstName}" fontWeight="bold"/>
            <mx:Label id="coverPrice" text="Price: {data.price}" fontWeight="bold"/>
            <mx:Label id="pageCount" text="Pages: {data.pageCount}" fontWeight="bold"/>
            <mx:HBox width="100%" backgroundColor="#3A478D" horizontalAlign="right" paddingBottom="0" paddingLeft="0" paddingRight="5" paddingTop="0" height="22" verticalAlign="middle">
                <mx:Label text="Add to cart " color="#FFFFFF" fontWeight="bold"/>
                <mx:Button icon="{cartImage}" height="20" width="20" click="addToCardEventDispatcher();"/>
            </mx:HBox>
        </mx:VBox>
    </mx:HBox>
</mx:HBox>
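The BooksExplorer.mxml that consumes this component is not included in this excerpt. As a rough sketch (assuming the component bubbles the event as shown above, and that the import of events.AddToCartEvent is present in the parent's Script block), the parent application might listen for and handle the event like this:

private function onCreationComplete():void {
    // Listen for the bubbling "addToCart" event dispatched by BookDetailItemRenderer
    this.addEventListener(AddToCartEvent.ADD_TO_CART, addToCartHandler);
}

private function addToCartHandler(event:AddToCartEvent):void {
    // event.book holds the XML node of the selected book
    trace("Added to cart: " + event.book.title);
}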


Working with XML in Flex 3 and Java - Part 1

Packt
28 Oct 2009
10 min read
In today's world, many server-side applications make use of XML to structure data, because XML is a standard way of representing structured information. It is easy to work with, and people can read, write, and understand XML without any specialized skills. The XML standard is widely accepted and used in server communications such as Simple Object Access Protocol (SOAP) based web services. XML stands for eXtensible Markup Language. The XML standard specification is available at http://www.w3.org/XML/.

Adobe Flex provides a standardized ECMAScript-based set of API classes and functionality for working with XML data. This collection of classes and functionality provided by Flex is known as E4X. You can use these classes to build sophisticated Rich Internet Applications using XML data.

XML basics

XML is a standard way to represent categorized data in a tree structure, similar to HTML documents. XML is written in plain-text format, and hence it is very easy to read, write, and manipulate its data. A typical XML document looks like this:

<book>
    <title>Flex 3 with Java</title>
    <author>Satish Kore</author>
    <publisher>Packt Publishing</publisher>
    <pages>300</pages>
</book>

Generally, XML data is known as an XML document, and it is represented by tags wrapped in angle brackets (< >). These tags are also known as XML elements. Every XML document starts with a single top-level element known as the root element. Each element is delimited by a pair of tags known as the opening tag and the closing tag. In the previous XML document, <book> is the opening tag and </book> is the closing tag. If an element contains no content, it can be written as an empty element (also called a self-closing element). For example, <book/> is as good as writing <book></book>. XML documents can also be more complex, with nested tags and attributes, as shown in the following example:

<book ISBN="978-1-847195-34-0">
    <title>Flex 3 with Java</title>
    <author country="India" numberOfBooks="1">
        <firstName>Satish</firstName>
        <lastName>Kore</lastName>
    </author>
    <publisher country="United Kingdom">Packt Publishing</publisher>
    <pages>300</pages>
</book>

Notice that the above XML document contains nested tags such as <firstName> and <lastName> under the <author> tag. ISBN, country, and numberOfBooks, which you can see inside the tags, are called XML attributes. To learn more about XML, visit the W3Schools XML Tutorial at http://w3schools.com/xml/.

Understanding E4X

Flex provides a set of API classes and functionality based on the ECMAScript for XML (E4X) standard for working with XML data. The E4X approach provides a simple and straightforward way to work with XML-structured data, and it also reduces the complexity of parsing XML documents. Earlier versions of Flex did not have a direct way of working with XML data. E4X provides an alternative to the DOM (Document Object Model) interface that uses a simpler syntax for reading and querying XML documents. More information about other E4X implementations can be found at http://en.wikipedia.org/wiki/E4X.

The key features of E4X include:

- It is based on a standard scripting-language specification known as ECMAScript for XML. Flex implements this specification in the form of API classes and functionality that simplify XML data processing.
- It provides easy and well-known operators, such as the dot (.) and @, to work with XML objects. The @ and dot (.) operators can be used not only to read data, but also to assign data to XML nodes, attributes, and so on.
- The E4X functionality is much easier and more intuitive than working with DOM documents to access XML data.

ActionScript 3.0 includes the following E4X classes: XML, XMLList, QName, and Namespace. These classes are designed to simplify XML data processing in Flex applications. Let's see one quick example.

Define a variable of type XML and create a sample XML document. In this example, we will assign it as a literal. In the real world, however, your application might load XML data from external sources, such as a web service or an RSS feed.

private var myBooks:XML =
    <books publisher="Packt Pub">
        <book title="Book1" price="99.99">
            <author>Author1</author>
        </book>
        <book title="Book2" price="59.99">
            <author>Author2</author>
        </book>
        <book title="Book3" price="49.99">
            <author>Author3</author>
        </book>
    </books>;

Now, we will see some of the E4X approaches to read and parse the above XML in our application. E4X uses many operators to simplify accessing XML nodes and attributes, such as the dot (.) and the attribute identifier (@), for accessing properties and attributes.

private function traceXML():void {
    trace(myBooks.book.(@price < 50.99).@title); // Output: Book3
    trace(myBooks.book[1].author);               // Output: Author2
    trace(myBooks.@publisher);                   // Output: Packt Pub
    // The following loop outputs the prices of all books
    for each(var price in myBooks..@price) {
        trace(price);
    }
}

In the code above, the first trace statement uses a conditional expression to extract the title of the book(s) whose price is below 50.99$. If we had to do this manually, imagine how much code would be needed to parse the XML. In the second trace, we access a book node by index and print its author node's value. In the third trace, we simply print the root node's publisher attribute value. Finally, we use a for each loop to traverse the prices of all the books and print each price.

The following is a list of XML operators:

- @ (attribute identifier): Identifies attributes of an XML or XMLList object.
- { } (braces): Evaluates an expression that is used in an XML or XMLList initializer.
- [ ] (brackets): Accesses a property or attribute of an XML or XMLList object, for example myBooks.book["@title"].
- + (concatenation): Concatenates (combines) XML or XMLList values into an XMLList object.
- += (concatenation assignment): Assigns the concatenation of expression1 and expression2 back to expression1.

The XML object

An XML class represents an XML element, attribute, comment, processing instruction, or text element. We used the XML class in the example above to initialize the myBooks variable with an XML literal. The XML class is an ActionScript 3.0 core class, so you don't need to import a package to use it. The XML class provides many properties and methods to simplify XML processing, such as the ignoreWhitespace and ignoreComments properties, used for ignoring whitespace and comments in XML documents respectively. You can use the prependChild() and appendChild() methods to prepend and append XML nodes to existing XML documents. Methods such as toString() and toXMLString() allow you to convert XML to a string.
An example of an XML object:

private var myBooks:XML =
    <books publisher="Packt Pub">
        <book title="Book1" price="99.99">
            <author>Author1</author>
        </book>
        <book title="Book2" price="120.00">
            <author>Author2</author>
        </book>
    </books>;

In the above example, we created an XML object by assigning an XML literal to it. You can also create an XML object from a string that contains XML data, as shown in the following example:

private var str:String = "<books publisher=\"Packt Pub\"><book title=\"Book1\" price=\"99.99\"><author>Author1</author></book><book title=\"Book2\" price=\"59.99\"><author>Author2</author></book></books>";
private var myBooks:XML = new XML(str);
trace(myBooks.toXMLString()); // outputs formatted XML as a string

If the XML data in the string is not well-formed (for example, a closing tag is missing), you will see a runtime error. You can also use binding expressions in the XML text to extract content from a variable. For example, you could bind a node's attribute to a variable value, as in the following lines:

private var title:String = "Book1";
private var aBook:XML = <book title="{title}"/>;

To read more about the XML class methods and properties, go through the Flex 3 LiveDocs at http://livedocs.adobe.com/flex/3/langref/XML.html.

The XMLList object

As the class name indicates, XMLList contains one or more XML objects. It can contain full XML documents, XML fragments, or the results of an XML query. You can typically use all of the XML class's methods and properties on the objects in an XMLList. To access the objects in an XMLList collection, iterate over it using a for each… statement. XMLList provides the following methods to work with its objects:

- child(): Returns a specified child of every XML object.
- children(): Returns the specified children of every XML object.
- descendants(): Returns all descendants of an XML object.
- elements(): Calls the elements() method of each XML object in the XMLList and returns all elements of the XML object.
- parent(): Returns the parent of the XMLList object if all items in the XMLList object have the same parent.
- attribute(attributeName): Calls the attribute() method of each XML object and returns an XMLList object of the results. The results match the given attributeName parameter.
- attributes(): Calls the attributes() method of each XML object and returns an XMLList object of attributes for each XML object.
- contains(): Checks whether the specified XML object is present in the XMLList.
- copy(): Returns a copy of the given XMLList object.
- length(): Returns the number of properties in the XMLList object.
- valueOf(): Returns the XMLList object.

For details on these methods, see the ActionScript 3.0 Language Reference. Let's return to the example of the XMLList:

var xmlList:XMLList = myBooks.book.(@price == 99.99);
var item:XML;
for each(item in xmlList) {
    trace("item:" + item.toXMLString());
}

Output:

item:<book title="Book1" price="99.99">
  <author>Author1</author>
</book>

In the example above, we used an XMLList to store the result of the myBooks.book.(@price == 99.99) statement. This statement returns an XMLList containing the XML node(s) whose price is 99.99$.
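As a small aside (not from the original article), here is how a couple of the XMLList methods listed above behave when applied to the myBooks literal defined in the previous example:

var allBooks:XMLList = myBooks.book;
trace(allBooks.length());            // number of <book> nodes (2)
trace(allBooks.attribute("title"));  // XMLList of the title attributes of every book
trace(allBooks[0].child("author"));  // the <author> child of the first book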
Working with XML objects

The XML class provides many useful methods to work with XML objects, such as the appendChild() and prependChild() methods for adding an XML element to the end or beginning of an XML object, as shown in the following example:

var node1:XML = <middleInitial>B</middleInitial>;
var node2:XML = <lastName>Kore</lastName>;
var root:XML = <personalInfo></personalInfo>;
root = root.appendChild(node1);
root = root.appendChild(node2);
root = root.prependChild(<firstName>Satish</firstName>);

The output is as follows:

<personalInfo>
  <firstName>Satish</firstName>
  <middleInitial>B</middleInitial>
  <lastName>Kore</lastName>
</personalInfo>

You can use the insertChildBefore() or insertChildAfter() method to add a property before or after a specified property, as shown in the following example:

var x:XML = <count>
        <one>1</one>
        <three>3</three>
        <four>4</four>
    </count>;
x = x.insertChildBefore(x.three, <two>2</two>);
x = x.insertChildAfter(x.four, <five>5</five>);
trace(x.toXMLString());

The output of the above code is as follows:

<count>
  <one>1</one>
  <two>2</two>
  <three>3</three>
  <four>4</four>
  <five>5</five>
</count>
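The article covers adding nodes; as a brief hedged aside (not part of the original text), E4X also lets you modify and remove nodes with plain assignment and the delete operator, along these lines:

var info:XML = <personalInfo>
        <firstName>Satish</firstName>
        <lastName>Kore</lastName>
    </personalInfo>;

info.firstName = "S.";      // replace the text content of <firstName>
info.@country = "India";    // add (or update) an attribute on the root element
delete info.lastName;       // remove the <lastName> element
trace(info.toXMLString());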


Oracle Web RowSet - Part 2

Packt
27 Oct 2009
4 min read
Reading a Row

Next, we will read a row from the OracleWebRowSet object. Click on the Modify Web RowSet link in CreateRow.jsp. In the ModifyWebRowSet JSP, click on the Read Row link. The ReadRow.jsp JSP is displayed. In the ReadRow JSP, specify the Database Row to Read and click on Apply. The second row values are retrieved from the Web RowSet.

In the ReadRow JSP, the readRow() method of the WebRowSetQuery.java application is invoked. The WebRowSetQuery object is retrieved from the session object:

WebRowSetQuery query = (webrowset.WebRowSetQuery) session.getAttribute("query");

The String[] values returned by the readRow() method are added to the ReadRow JSP fields. In the readRow() method, the OracleWebRowSet object cursor is moved to the row to be read:

webRowSet.absolute(rowRead);

Retrieve the row values with the getString() method, add them to a String[], and return the String[] object:

String[] resultSet = new String[5];
resultSet[0] = webRowSet.getString(1);
resultSet[1] = webRowSet.getString(2);
resultSet[2] = webRowSet.getString(3);
resultSet[3] = webRowSet.getString(4);
resultSet[4] = webRowSet.getString(5);
return resultSet;

ReadRow.jsp is listed as follows:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<%@ page contentType="text/html;charset=windows-1252"%>
<%@ page session="true"%>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=windows-1252">
<title>Read Row with Web RowSet</title>
</head>
<body>
<form>
<h3>Read Row with Web RowSet</h3>
<table>
<tr><td><a href="ModifyWebRowSet.jsp">Modify Web RowSet Page</a></td></tr>
</table>
</form>
<%
webrowset.WebRowSetQuery query = null;
query = (webrowset.WebRowSetQuery) session.getAttribute("query");
String rowRead = request.getParameter("rowRead");
String journalUpdate = request.getParameter("journalUpdate");
String publisherUpdate = request.getParameter("publisherUpdate");
String editionUpdate = request.getParameter("editionUpdate");
String titleUpdate = request.getParameter("titleUpdate");
String authorUpdate = request.getParameter("authorUpdate");
if ((rowRead != null)) {
    int row_Read = Integer.parseInt(rowRead);
    String[] resultSet = query.readRow(row_Read);
    journalUpdate = resultSet[0];
    publisherUpdate = resultSet[1];
    editionUpdate = resultSet[2];
    titleUpdate = resultSet[3];
    authorUpdate = resultSet[4];
}
%>
<form name="query" action="ReadRow.jsp" method="post">
<table>
<tr><td>Database Row to Read:</td></tr>
<tr><td><input name="rowRead" type="text" size="25" maxlength="50"/></td></tr>
<tr><td>Journal:</td></tr>
<tr><td><input name="journalUpdate" value='<%=journalUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td>Publisher:</td></tr>
<tr><td><input name="publisherUpdate" value='<%=publisherUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td>Edition:</td></tr>
<tr><td><input name="editionUpdate" value='<%=editionUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td>Title:</td></tr>
<tr><td><input name="titleUpdate" value='<%=titleUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td>Author:</td></tr>
<tr><td><input name="authorUpdate" value='<%=authorUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td><input class="Submit" type="submit" value="Apply"/></td></tr>
</table>
</form>
</body>
</html>
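For orientation, here is a rough sketch of how the readRow() method described above might look when assembled inside WebRowSetQuery.java. The webRowSet field and the five-column journal schema are assumptions based on the snippets quoted in this excerpt, not the book's full listing:

// Sketch only: assumes a WebRowSet field (webRowSet) initialized elsewhere in WebRowSetQuery.java
public String[] readRow(int rowRead) throws java.sql.SQLException {
    // Move the cursor to the requested row (1-based index)
    webRowSet.absolute(rowRead);

    // Copy the five column values (journal, publisher, edition, title, author) into a String[]
    String[] resultSet = new String[5];
    for (int column = 1; column <= 5; column++) {
        resultSet[column - 1] = webRowSet.getString(column);
    }
    return resultSet;
}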


Python Data Persistence using MySQL Part II: Moving Data Processing to the Data

Packt
27 Oct 2009
8 min read
To move data processing to the data, you can use stored procedures, stored functions, and triggers. All of these components are implemented inside the underlying database and can significantly improve the performance of your application by reducing the network overhead associated with multiple calls to the database. It is important to realize, though, that the decision to move any piece of processing logic into the database should be taken with care. In some situations, this may simply be inefficient. For example, if you decide to move some logic dealing with the data stored in a custom Python list into the database, while still keeping that list implemented in your Python code, the change only increases the number of calls to the underlying database, causing significant network overhead. To fix this situation, you could move the list from Python into the database as well, implementing it as a table.

Starting with version 5.0, MySQL supports stored procedures, stored functions, and triggers, making it possible for you to enjoy programming on the underlying database side. In this article, you will look at triggers in action. Stored procedures and functions can be used similarly.

Planning Changes for the Sample Application

Assuming you have followed the instructions in Python Data Persistence using MySQL, you should already have the application structure to be reorganized here. To recap, what you should already have on the Python side is:

- tags: a nested list of tags used to describe the posts obtained from the Packt Book Feed page.
- obtainPost: a function that obtains information about the most recent post on the Packt Book Feed page.
- determineTags: a function that determines the tags appropriate to the latest post obtained from the Packt Book Feed page.
- insertPost: a function that inserts the information about the obtained post into the underlying database tables: posts and posttags.
- execPr: a function that brings together the functionality of the functions described above.

On the database side, you should have the following components:

- posts: a table containing records representing posts obtained from the Packt Book Feed page.
- posttags: a table containing records, each of which represents a tag associated with a certain post stored in the posts table.

Let's figure out how we can refactor the above structure, moving some data processing inside the database. The first thing you might want to do is move the tags list from Python into the database, creating a new table tags for it. Then, you can move the logic implemented in the determineTags function inside the database, defining an AFTER INSERT trigger on the posts table. From within this trigger, you will also insert rows into the posttags table, thus eliminating the need to do it from within the insertPost function. Once you've done all that, you can refactor the Python code implemented in the appsample module. To summarize, here are the steps you need to perform in order to refactor the sample application discussed in the earlier article:

1. Create the tags table and populate it with the data currently stored in the tags list implemented in Python.
2. Define the AFTER INSERT trigger on the posts table.
3. Refactor the insertPost function in the appsample.py module.
4. Remove the tags list from the appsample.py module.
5. Remove the determineTags function from the appsample.py module.
6. Refactor the execPr function in the appsample.py module.
Refactoring the Underlying Database

To keep things simple, the tags table might contain a single column, tag, with a primary key constraint defined on it. So, you can create the tags table as follows:

CREATE TABLE tags (
  tag VARCHAR(20) PRIMARY KEY
) ENGINE = InnoDB;

Then, you might want to modify the posttags table, adding a foreign key constraint to its tag column. Before you can do that, though, you will need to delete all the rows from this table. This can be done with the following statement:

DELETE FROM posttags;

Now you can move on and alter posttags as follows:

ALTER TABLE posttags ADD FOREIGN KEY (tag) REFERENCES tags(tag);

The next step is to populate the tags table. You can automate this process with the help of the following Python script:

>>> import MySQLdb
>>> import appsample
>>> db = MySQLdb.connect(host="localhost", user="usrsample", passwd="pswd", db="dbsample")
>>> c = db.cursor()
>>> c.executemany("""INSERT INTO tags VALUES(%s)""", appsample.tags)
>>> db.commit()
>>> db.close()

As a result, you should have the tags table populated with the data taken from the tags list discussed in Python Data Persistence using MySQL. To make sure of it, you can turn back to the mysql prompt and issue the following query against the tags table:

SELECT * FROM tags;

The above should output the list of tags you have in the tags list. Of course, you can always extend this list, adding new tags with the INSERT statement. For example, you could issue the following statement to add the Visual Studio tag:

INSERT INTO tags VALUES('Visual Studio');

Now you can move on and define the AFTER INSERT trigger on the posts table:

delimiter //
CREATE TRIGGER insertPost AFTER INSERT ON posts
FOR EACH ROW
BEGIN
  INSERT INTO posttags(title, tag)
    SELECT NEW.title AS title, tag FROM tags
    WHERE LOCATE(tag, NEW.title) > 0;
END
//
delimiter ;

As you can see, the posttags table will be automatically populated with the appropriate tags just after a new row is inserted into the posts table. Notice the use of the INSERT … SELECT statement in the body of the trigger. Using this syntax lets you insert several rows into the posttags table at once, without having to use an explicit loop. In the WHERE clause of the SELECT, you use the standard MySQL string function LOCATE, which returns the position of the first occurrence of the substring, passed in as the first argument, within the string, passed in as the second argument. In this particular example, though, you are not really interested in the position of the occurrence. All you need to find out is whether the substring appears in the string or not. If it does, the tag should appear in the posttags table as a separate row associated with the row just inserted into the posts table.

Refactoring the Sample's Python Code

Now that you have moved some data and data processing from Python into the underlying database, it's time to reorganize the appsample custom Python module created as discussed in Python Data Persistence using MySQL. As mentioned earlier, you need to rewrite the insertPost and execPr functions and remove the determineTags function and the tags list.
This is what the appsample module should look like after revising:

import MySQLdb
import urllib2
import xml.dom.minidom

def obtainPost():
    addr = "http://feeds.feedburner.com/packtpub/sDsa?format=xml"
    xmldoc = xml.dom.minidom.parseString(urllib2.urlopen(addr).read())
    item = xmldoc.getElementsByTagName("item")[0]
    title = item.getElementsByTagName("title")[0].firstChild.data
    guid = item.getElementsByTagName("guid")[0].firstChild.data
    pubDate = item.getElementsByTagName("pubDate")[0].firstChild.data
    post = {"title": title, "guid": guid, "pubDate": pubDate}
    return post

def insertPost(title, guid, pubDate):
    db = MySQLdb.connect(host="localhost", user="usrsample", passwd="pswd", db="dbsample")
    c = db.cursor()
    c.execute("""INSERT INTO posts (title, guid, pubDate) VALUES(%s,%s,%s)""", (title, guid, pubDate))
    db.commit()
    db.close()

def execPr():
    p = obtainPost()
    insertPost(p["title"], p["guid"], p["pubDate"])

If you compare it with the appsample module discussed in Part 1, you should notice that the revised version is much shorter. It's important to note, however, that nothing has changed from the user's standpoint. So, if you now call the execPr function in your Python session:

>>> import appsample
>>> appsample.execPr()

This should insert a new record into the posts table, automatically inserting the corresponding tag records into the posttags table, if any. The difference lies in what happens behind the scenes. Now the Python code is responsible only for obtaining the latest post from the Packt Book Feed page and then inserting a record into the posts table. Dealing with tags is now the responsibility of the logic implemented inside the database. In particular, the AFTER INSERT trigger defined on the posts table takes care of inserting the rows into the posttags table.

To make sure that everything has worked smoothly, you can now check the contents of the posts and posttags tables. To look at the latest post stored in the posts table, you could issue the following query:

SELECT title, str_to_date(pubDate,'%a, %e %b %Y') lastdate FROM posts ORDER BY lastdate DESC LIMIT 1;

Then, you might want to look at the related tags stored in the posttags table, by issuing the following query:

SELECT p.title, t.tag, str_to_date(p.pubDate,'%a, %e %b %Y') lastdate FROM posts p, posttags t WHERE p.title=t.title ORDER BY lastdate DESC LIMIT 1;

Conclusion

In this article, you looked at how some business logic of a Python/MySQL application can be moved from Python into MySQL. For that, you continued with the sample application originally discussed in Python Data Persistence using MySQL.
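As a small optional check (not part of the original article), you could also verify the trigger's effect from Python rather than from the mysql prompt. This sketch assumes the same connection settings used above and the Python 2 environment implied by the article's MySQLdb/urllib2 code:

import MySQLdb

# Count how many tag rows the AFTER INSERT trigger produced for each post
db = MySQLdb.connect(host="localhost", user="usrsample", passwd="pswd", db="dbsample")
c = db.cursor()
c.execute("""SELECT p.title, COUNT(t.tag)
             FROM posts p LEFT JOIN posttags t ON p.title = t.title
             GROUP BY p.title""")
for title, tag_count in c.fetchall():
    print title, tag_count
db.close()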


Implementing a Basic HelloWorld WCF (Windows Communication Foundation) Service

Packt
27 Oct 2009
7 min read
We will build a HelloWorld WCF service by carrying out the following steps:

1. Create the solution and project.
2. Create the WCF service contract interface.
3. Implement the WCF service.
4. Host the WCF service in the ASP.NET Development Server.
5. Create a client application to consume this WCF service.

Creating the HelloWorld solution and project

Before we can build the WCF service, we need to create a solution for our service projects. We also need a directory in which to save all the files. Throughout this article, we will save our project source code in the D:\SOAwithWCFandLINQ\Projects directory. We will have a subfolder for each solution we create, and under each solution folder, one subfolder for each project. You don't need to manually create these directories via Windows Explorer; Visual Studio will create them automatically when you create the solutions and projects.

Now, follow these steps to create our first solution and the HelloWorld project:

1. Start Visual Studio 2008. If the Open Project dialog box pops up, click Cancel to close it.
2. Go to menu File | New | Project. The New Project dialog window will appear.
3. From the left-hand side of the window (Project types), expand Other Project Types and then select Visual Studio Solutions as the project type. From the right-hand side of the window (Templates), select Blank Solution as the template.
4. At the bottom of the window, type HelloWorld as the Name, and D:\SOAwithWCFandLINQ\Projects as the Location. Note that you should not include HelloWorld in the location, because Visual Studio will automatically create a folder for a new solution.
5. Click the OK button to close this window. You should now have an empty solution. Depending on your settings, the layout may be different, but you should still see the empty solution in your Solution Explorer. If you don't see Solution Explorer, go to menu View | Solution Explorer, or press Ctrl+Alt+L to bring it up.
6. In the Solution Explorer, right-click on the solution, and select Add | New Project… from the context menu. You can also go to menu File | Add | New Project… to get the same result.
7. The Add New Project window should now appear on your screen. On the left-hand side of this window (Project types), select Visual C# as the project type, and on the right-hand side of the window (Templates), select Class Library as the template.
8. At the bottom of the window, type HelloWorldService as the Name. Leave D:\SOAwithWCFandLINQ\Projects\HelloWorld as the Location. Again, don't add HelloWorldService to the location, as Visual Studio will create a subfolder for this new project (Visual Studio uses the solution folder as the default base folder for all new projects added to the solution).

You may have noticed that there is already a template for a WCF Service Application in Visual Studio 2008. For this very first example, we will not use that template. Instead, we will create everything ourselves so that you know what the purpose of each template is. This is an excellent way to understand and master this new technology.

Now, you can click the OK button to close this window. Once you click the OK button, Visual Studio will create several files for you. The first file is the project file. This is an XML file under the project directory, and it is called HelloWorldService.csproj.
Visual Studio also creates an empty class file, called Class1.cs. Later, we will change this default name to a more meaningful one, and change its namespace to our own. Three directories are created automatically under the project folder: one to hold the binary files, another to hold the object files, and a third one for the properties files of the project.

We now have a new solution and project created. Next, we will develop and build this service. But before we go any further, we need to do two things to this project:

1. Click the Show All Files button on the Solution Explorer toolbar. It is the second button from the left, just above the word Solution inside the Solution Explorer. If you let your mouse hover over this button, you will see the hint Show All Files. Clicking this button shows all files and directories on your hard disk under the project folder, even those items that are not included in the project. Make sure that you don't have the solution item selected; otherwise, you won't see the Show All Files button.
2. Change the default namespace of the project. In the Solution Explorer, right-click on the HelloWorldService project and select Properties from the context menu, or go to menu item Project | HelloWorldService Properties…. You will see the project properties dialog window. On the Application tab, change the Default namespace to MyWCFServices.

Lastly, in order to develop a WCF service, we need to add a reference to the ServiceModel namespace. In the Solution Explorer window, right-click on the HelloWorldService project and select Add Reference… from the context menu. You can also go to the menu item Project | Add Reference… to do this. The Add Reference dialog window should appear on your screen. Select System.ServiceModel from the .NET tab, and click OK. Now, in the Solution Explorer, if you expand the references of the HelloWorldService project, you will see that System.ServiceModel has been added. Also note that System.Xml.Linq is added by default. We will use this later when we query a database.

Creating the HelloWorldService service contract interface

In the previous section, we created the solution and the project for the HelloWorld WCF service. From this section on, we will start building the HelloWorld WCF service itself. First, we need to create the service contract interface.

1. In the Solution Explorer, right-click on the HelloWorldService project, and select Add | New Item… from the context menu. The Add New Item - HelloWorldService dialog window should appear on your screen.
2. On the left-hand side of the window (Categories), select Visual C# Items as the category, and on the right-hand side of the window (Templates), select Interface as the template.
3. At the bottom of the window, change the Name from Interface1.cs to IHelloWorldService.cs.
4. Click the Add button.

Now, an empty service interface file has been added to the project. Follow the steps below to customize it.

1. Add a using statement:

using System.ServiceModel;

2. Add a ServiceContract attribute to the interface. This designates the interface as a WCF service contract interface.

[ServiceContract]

3. Add a GetMessage method to the interface. This method takes a string as the input and returns another string as the result. It also has an attribute, OperationContract.

[OperationContract]
String GetMessage(String name);

4. Change the interface to public.
The final content of the file IHelloWorldService.cs should look like the following:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.ServiceModel;

namespace MyWCFServices
{
    [ServiceContract]
    public interface IHelloWorldService
    {
        [OperationContract]
        String GetMessage(String name);
    }
}
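The implementation step itself is not included in this excerpt. As a minimal sketch of what "Implement the WCF service" could look like for the contract above (the class name, file placement, and message format are assumptions, not taken from the article), a class in the same MyWCFServices namespace might be:

using System;

namespace MyWCFServices
{
    // Sketch of one possible implementation of the IHelloWorldService contract;
    // the class name and returned message are illustrative assumptions.
    public class HelloWorldService : IHelloWorldService
    {
        public String GetMessage(String name)
        {
            return "Hello world from " + name + "!";
        }
    }
}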

New SOA Capabilities in BizTalk Server 2009: UDDI Services

Packt
27 Oct 2009
6 min read
All truths are easy to understand once they are discovered; the point is to discover them. - Galileo Galilei

What is UDDI?

Universal Description and Discovery Information (UDDI) is a type of registry whose primary purpose is to represent information about web services. It describes the service providers, the services those providers offer, and, in some cases, the specific technical specifications for interacting with those services. While UDDI was originally envisioned as a public, platform-independent registry that companies could exploit for listing and consuming services, it seems that many have chosen instead to use UDDI as an internal resource for categorizing and describing their available enterprise services.

Besides simply listing available services for others to search and peruse, UDDI is arguably most beneficial for those who wish to perform runtime binding to service endpoints. Instead of hard-coding a service path in a client application, one may query UDDI for a particular service's endpoint and apply it to the active service call. While UDDI is typically used for web services, nothing prevents someone from storing information about any particular transport and allowing service consumers to discover and do runtime resolution against those endpoints. As an example, this is useful if you have an environment with primary, backup, and disaster access points and want your application to be able to gracefully look up and fail over to the next available service environment. In addition, UDDI can be of assistance if an application is deployed globally but you wish for regional consumers to look up and resolve against the closest geographical endpoint.

UDDI has a few core hierarchy concepts that you must grasp to fully comprehend how the registry is organized. The most important ones are listed here, along with their purpose and their name in Microsoft UDDI Services:

- BusinessEntity: The service provider. May be an organization, business unit, or functional area. (Microsoft UDDI Services name: Provider)
- BusinessService: A general reference to a business service offered by a provider. May be a logical grouping of actual services. (Microsoft UDDI Services name: Service)
- BindingTemplate: The technical details of an individual service, including its endpoint. (Microsoft UDDI Services name: Binding)
- tModel (Technical Model): Represents metadata for categorization or description, such as transport or protocol. (Microsoft UDDI Services name: tModel)

As far as relationships between these entities go, a Business Entity may contain many Business Services, which in turn can have multiple Binding Templates. A binding may reference multiple tModels, and tModels may be reused across many Binding Templates.

What's new in UDDI version three?

The latest UDDI specification calls out multiple-registry environments, support for digital signatures applied to UDDI entries, more complex categorization, wildcard searching, and a subscription API. We'll spend a bit of time on that last one in a few moments.

Let's take a brief lap around the Microsoft UDDI Services offering. For practical purposes, consider UDDI Services to be made up of two parts: an Administration Console and a web site. The web site is actually broken up into both a public-facing and an administrative interface, but we'll talk about them as one unit. The UDDI Configuration Console is the place to set service-wide settings, ranging from the extent of logging to permissions and site security. The site node (named UDDI) has settings for permission account groups, security settings, and subscription notification thresholds, among others.

The web node, which resides immediately beneath the parent, controls web site settings such as the logging level and target database. Finally, the notification node manages settings related to the new subscription notification feature and identically matches the categories of the web node.

The UDDI Services web site, found at http://localhost/uddi/, is the destination for physically listing, managing, and configuring services. The Search page enables querying by a wide variety of criteria, including category, services, service providers, bindings, and tModels. The Publish page is where you go to add new services to the registry or edit the settings of existing ones. Finally, the Subscription page is where the new UDDI version three capability of registry notification is configured. We will demonstrate this feature later in this article.

How to add services to the UDDI registry

Now we're ready to add new services to our UDDI registry. First, let's go to the Publish page and define our Service Provider and a pair of categorical tModels. To add a new Provider, we right-click the Provider node in the tree and choose Add Provider. Once a provider is created and named, we have the choice of adding all types of context characteristics such as contact name(s), categories, relationships, and more.

I'd like to add two tModel categories to my environment: one to identify which type of environment the service references (development, test, staging, production) and another to flag which type of transport it uses (Basic HTTP, WS HTTP, and so on). To add a tModel, simply right-click the tModels node and choose Add tModel. The first one is named biztalksoa:runtimeresolution:environment. After adding one more tModel for biztalksoa:runtimeresolution:transporttype, we're ready to add a service to the registry.

Right-click the BizTalkSOA provider and choose Add Service. Set the name of this service to BatchMasterService. Next, we want to add a binding (or access point) for this service, which describes where the service endpoint is physically located. Switch to the Bindings tab of the service definition and choose New Binding. We need a new access point, so I pointed to our proxy service created earlier and identified it as an endPoint.

Finally, let's associate the two new tModel categories with our service. Switch to the Categories tab, and choose Add Custom Category. We're asked to search for a tModel, which represents our category, so a wildcard entry such as %biztalksoa% is a valid search criterion. After selecting the environment category, we're asked for the key name and value. The key name is purely a human-friendly representation of the data, whereas the tModel identifier and the key value comprise the actual name-value pair. I've entered production as the value on the environment category, and WS-Http as the key value on the transporttype category. At this point, we have a service sufficiently configured in the UDDI directory so that others can discover and dynamically resolve against it.


Data Migration Scenarios in SAP Business ONE Application - Part 2

Packt
27 Oct 2009
7 min read
Advanced data migration tools: xFusion Studio

For our own projects, we have adopted a tool called xFusion. Using this tool, you gain flexibility and are able to reuse migration settings for specific project environments. The tool provides connectivity to extract data directly from applications (including QuickBooks and Peachtree). In addition, it also supports building rules for data profiling, validation, and conversion. For example, our project team participated in the development of the template for the Peachtree interface. We configured the mappings from Peachtree and connected the data with the right fields in SAP. This was then saved as a migration template. Therefore, it would be easy and straightforward to migrate data from Peachtree to SAP in any future project.

xFusion packs save migration knowledge

Based on the concept of establishing templates for migrations, xFusion provides preconfigured templates for the SAP Business ONE application. In xFusion, templates are called xFusion packs. Please note that these preconfigured packs may include master data packs, as well as xFusion packs for transaction data. The following xFusion packs are provided for an SAP Business ONE migration:

- Administration
- Banking
- Business partner
- Finance
- HR
- Inventory and production
- Marketing documents and receipts
- MRP
- UDFs
- Services

You can see that the packs are grouped by business object. For example, you have a group of xFusion packs for inventory and production. You can open the pack and find a group of xFusion files that contain the configuration information. If you open the inventory and production pack, a list of folders is revealed; each folder has a set of Excel templates and xFusion files. An xFusion pack essentially incorporates the configuration and data manipulation procedures required to bring data from a source into SAP. The source settings can be saved in xFusion packs so that you can reuse the knowledge with regard to data manipulation and formatting.

Data "massaging" using SQL

The key to the migration procedure is the capability to do data massaging in order to adjust formats and columns, in a step-by-step manner, based on requirements. Data manipulation is not done programmatically, but rather via a step-by-step process, where each step uses SQL statements to verify and format data. The entire process is represented visually, and thereby documents the steps required. This makes it easy to adjust settings and fine-tune them (a small illustrative SQL step is sketched at the end of this section).

The following applications are supported and can, therefore, be used as a source for an SAP migration (they have existing xFusion packs):

- SAP Business ONE
- Sage ACT!
- SAP
- SAP BW
- Peachtree
- QuickBooks
- Microsoft Dynamics CRM

The following is a list of supported databases:

- Oracle
- ODBC
- MySQL
- OLE DB
- SQL Server
- PostgreSQL

Working with xFusion

The workflow in xFusion starts when you open an existing xFusion pack or create a new one. In this example, an xFusion pack for business partner migration was opened. You can see the graphical representation of the migration process in the main window. Each icon in the graphical representation represents a data manipulation and formatting step. If you click on an icon, the complete path from the data source to that icon is highlighted. Therefore, you can select the previous steps to adjust the data. The core concept is that you do not directly change the input data, but define rules to convert data from the source format to the target format.
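To make the idea of a SQL-based massaging step concrete, here is a small illustrative sketch. It is not taken from an xFusion pack; the staging table and column names are invented for the example, and the date function shown is MySQL syntax (one of the supported source databases listed above):

-- Hypothetical staging step: normalize legacy business partner data before mapping it to SAP fields.
-- Table and column names are illustrative only.
SELECT
    UPPER(TRIM(CardCode))                AS CardCode,   -- strip stray spaces, force upper case
    TRIM(CardName)                       AS CardName,
    CASE WHEN Balance IS NULL THEN 0
         ELSE Balance END                AS Balance,    -- replace missing balances with 0
    STR_TO_DATE(CreatedOn, '%m/%d/%Y')   AS CreatedOn   -- convert text dates to real dates
FROM legacy_business_partners
WHERE CardCode IS NOT NULL;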
If you open an xFusion pack for the SAP Business ONE application, the target is obviously SAP Business ONE. Therefore, you need to enter the login credentials and database name so that the pack knows how to access the SAP system. In addition, the source parameters need to be provided. xFusion packs come with example Excel files, and you need to select the Excel files as the relevant source. However, it is important to note that you don't have to use the Excel files. You can use any database, or other source, as long as you adjust the data format using the step-by-step process so that it matches the format provided in Excel. In xFusion, you can use the sample files that come in Excel format. The connection parameters are presented once you double-click on any of the connections listed in the Connections section as follows:
It is recommended to click on Test Connection to verify the proper parameters. If all of the connections are right, you can run a migration from the source to the target by right-clicking on an icon and selecting Run Export as shown here:
The progress of the export is visually documented. This way, you can verify its success. There is also a log file in the directory where the currently utilized xFusion pack resides, as shown in the following screenshot:
Tips and recommendations for your own project
Now you know all of the main migration tools and methods. If you want to select the right tool and method for your specific situation, you will see that even though there may be many templates and preconfigured packs out there, your own project potentially comes with some individual aspects. When organizing the data migration project, use the project task skeleton I provided. It is important to subdivide the required migration steps into a group of easy-to-understand steps, where data can be verified at each level. If it gets complicated, it is probably not the right way to move forward, and you need to re-think the methods and tools you are using.
Common issues
The most common issue I found in similar projects is that the data to be migrated is not entirely clean and consistent. Therefore, be sure to use a data verification procedure at each step. Don't just import data, only to find out later that the database is overloaded with data that is not right.
Recommendation
Separate the master data and the transaction data. If you don't want to lose valuable transaction data, you can establish a reporting database which will save all of the historic transactions. For example, sales history can easily be migrated to an SQL database. You can then provide access to this information from the required SAP forms using queries or Crystal Reports.
Case study
During the course of evaluating the data import features available in the SAP Business ONE application, we have already learned how to import business partner information and item data. This can easily be done using the standard SAP data import features based on Excel or text files. Using this method allows the lead, customer, and vendor data to be imported. Let's say that the Lemonade Stand enterprise has salespeople who travel to trade fairs and collect contact information. We can import the address information using the proven BP import method. But after this data is imported, what would the next step be? It would be a good idea to create and manage opportunities based on the address material. Basically, you already know how to use Excel to bring over address information. Let's enhance this concept to bring over opportunity information.
We will use xFusion to import opportunity data into the SAP Business ONE application. The basis will be the xFusion pack for opportunities.
Importing sales opportunities for the Lemonade Stand
The xFusion pack is open, and you can see that it is a nice and clean example without major complexity. That's how it should be, as you see here:

Developing Web Applications using JavaServer Faces: Part 2

Packt
27 Oct 2009
5 min read
JSF Validation
Earlier in this article, we discussed how the required attribute for JSF input fields allows us to easily make input fields mandatory. If a user attempts to submit a form with one or more required fields missing, an error message is automatically generated. The error message is generated by the <h:message> tag corresponding to the invalid field. The string First Name in the error message corresponds to the value of the label attribute for the field. Had we omitted the label attribute, the value of the field's id attribute would have been shown instead. As we can see, the required attribute makes it very easy to implement mandatory field functionality in our application.
Recall that the age field is bound to a property of type Integer in our managed bean. If a user enters a value that is not a valid integer into this field, a validation error is automatically generated. Of course, a negative age wouldn't make much sense; however, our application validates that user input is a valid integer with essentially no effort on our part.
The email address input field of our page is bound to a property of type String in our managed bean. As such, there is no built-in validation to make sure that the user enters a valid email address. In cases like this, we need to write our own custom JSF validators.
Custom JSF validators must implement the javax.faces.validator.Validator interface. This interface contains a single method named validate(). This method takes three parameters: an instance of javax.faces.context.FacesContext, an instance of javax.faces.component.UIComponent containing the JSF component we are validating, and an instance of java.lang.Object containing the user entered value for the component. The following example illustrates a typical custom validator.

package com.ensode.jsf.validators;

import java.util.regex.Matcher;
import java.util.regex.Pattern;
import javax.faces.application.FacesMessage;
import javax.faces.component.UIComponent;
import javax.faces.component.html.HtmlInputText;
import javax.faces.context.FacesContext;
import javax.faces.validator.Validator;
import javax.faces.validator.ValidatorException;

public class EmailValidator implements Validator {

    public void validate(FacesContext facesContext,
            UIComponent uIComponent, Object value)
            throws ValidatorException {
        Pattern pattern = Pattern.compile("\\w+@\\w+\\.\\w+");
        Matcher matcher = pattern.matcher((CharSequence) value);
        HtmlInputText htmlInputText = (HtmlInputText) uIComponent;
        String label;
        if (htmlInputText.getLabel() == null
                || htmlInputText.getLabel().trim().equals("")) {
            label = htmlInputText.getId();
        } else {
            label = htmlInputText.getLabel();
        }
        if (!matcher.matches()) {
            FacesMessage facesMessage = new FacesMessage(label
                    + ": not a valid email address");
            throw new ValidatorException(facesMessage);
        }
    }
}

In our example, the validate() method does a regular expression match against the value of the JSF component we are validating. If the value matches the expression, validation succeeds; otherwise, validation fails and an instance of javax.faces.validator.ValidatorException is thrown. The primary purpose of our custom validator is to illustrate how to write custom JSF validations, and not to create a foolproof email address validator. There may be valid email addresses that don't validate using our validator. The constructor of ValidatorException takes an instance of javax.faces.application.FacesMessage as a parameter. This object is used to display the error message on the page when validation fails.
The message to display is passed as a String to the constructor of FacesMessage. In our example, if the label attribute of the component is neither null nor empty, we use it as part of the error message; otherwise we use the value of the component's id attribute. This behavior follows the pattern established by standard JSF validators.
Before we can use our custom validator in our pages, we need to declare it in the application's faces-config.xml configuration file. To do so, we need to add a <validator> element just before the closing </faces-config> element.

<validator>
    <validator-id>emailValidator</validator-id>
    <validator-class>
        com.ensode.jsf.validators.EmailValidator
    </validator-class>
</validator>

The body of the <validator-id> sub-element must contain a unique identifier for our validator. The value of the <validator-class> element must contain the fully qualified name of our validator class. Once we add our validator to the application's faces-config.xml, we are ready to use it in our pages. In our particular case, we need to modify the email field to use our custom validator.

<h:inputText id="email" label="Email Address"
    required="true" value="#{RegistrationBean.email}">
    <f:validator validatorId="emailValidator"/>
</h:inputText>

All we need to do is nest an <f:validator> tag inside the input field we wish to have validated using our custom validator. The value of the validatorId attribute of <f:validator> must match the value of the body of the <validator-id> element in faces-config.xml. At this point we are ready to test our custom validator. When entering an invalid email address into the email address input field and submitting the form, our custom validator logic is executed and the String we passed as a parameter to FacesMessage in our validate() method is shown as the error text by the <h:message> tag for the field.

Developing Web Applications using JavaServer Faces: Part 1

Packt
27 Oct 2009
6 min read
Although a lot of applications have been written using these APIs, most modern Java applications are written using some kind of web application framework. As of Java EE 5, the standard framework for building web applications is JavaServer Faces (JSF).
Introduction to JavaServer Faces
Before JSF was developed, Java web applications were typically developed using non-standard web application frameworks such as Apache Struts, Tapestry, Spring Web MVC, or many others. These frameworks are built on top of the Servlet and JSP standards, and automate a lot of functionality that needs to be manually coded when using these APIs directly. Having a wide variety of web application frameworks available (at the time of writing, Wikipedia lists 35 Java web application frameworks, and this list is far from exhaustive!) often resulted in "analysis paralysis", that is, developers often spent an inordinate amount of time evaluating frameworks for their applications.
The introduction of JSF to the Java EE 5 specification resulted in having a standard web application framework available in any Java EE 5 compliant application server. We don't mean to imply that other web application frameworks are obsolete or that they shouldn't be used at all; however, a lot of organizations consider JSF the "safe" choice since it is part of the standard and should be well supported for the foreseeable future. Additionally, NetBeans offers excellent JSF support, making JSF a very attractive choice.
Strictly speaking, JSF is not a web application framework as such, but a component framework. In theory, JSF can be used to write applications that are not web-based; however, in practice JSF is almost always used for web applications. In addition to being the standard Java EE 5 component framework, one benefit of JSF is that it was designed with graphical tools in mind, making it easy for tools and IDEs such as NetBeans to take advantage of the JSF component model with drag-and-drop support for components. NetBeans provides a Visual Web JSF Designer that allows us to visually create JSF applications.
Developing Our first JSF Application
From an application developer's point of view, a JSF application consists of a series of JSP pages containing custom JSF tags, one or more JSF managed beans, and a configuration file named faces-config.xml. The faces-config.xml file declares the managed beans in the application, as well as the navigation rules to follow when navigating from one JSF page to another.
Creating a New JSF Project
To create a new JSF project, we need to go to File | New Project, select the Java Web project category, and Web Application as the project type. After clicking Next, we need to enter a Project Name, and optionally change other information for our project, although NetBeans provides sensible defaults. On the next page in the wizard, we can select the Server, Java EE Version, and Context Path of our application. In our example, we will simply pick the default values.
On the next page of the new project wizard, we can select what frameworks our web application will use. Unsurprisingly, for JSF applications we need to select the JavaServer Faces framework. The Visual Web JavaServer Faces framework allows us to quickly build web pages by dragging-and-dropping components from the NetBeans palette into our pages. Although it certainly allows us to develop applications a lot quicker than manually coding, it hides a lot of the "ins" and "outs" of JSF.
Having a background in standard JSF development will help us understand what the NetBeans Visual Web functionality does behind the scenes.
When clicking Finish, the wizard generates a skeleton JSF project for us, consisting of a single JSP file called welcomeJSF.jsp, and a few configuration files: web.xml, faces-config.xml and, if we are using the default bundled GlassFish server, the GlassFish-specific sun-web.xml file is generated as well.
web.xml is the standard configuration file needed for all Java web applications. faces-config.xml is a JSF-specific configuration file used to declare JSF managed beans and navigation rules. sun-web.xml is a GlassFish-specific configuration file that allows us to override the application's default context root, add security role mappings, and perform several other configuration tasks.
The generated JSP looks like this:

<%@page contentType="text/html"%>
<%@page pageEncoding="UTF-8"%>
<%@taglib prefix="f" uri="http://java.sun.com/jsf/core"%>
<%@taglib prefix="h" uri="http://java.sun.com/jsf/html"%>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
    "http://www.w3.org/TR/html4/loose.dtd">
<%-- This file is an entry point for JavaServer Faces application. --%>
<html>
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        <title>JSP Page</title>
    </head>
    <body>
        <f:view>
            <h1>
                <h:outputText value="JavaServer Faces"/>
            </h1>
        </f:view>
    </body>
</html>

As we can see, a JSF-enabled JSP file is a standard JSP file using a couple of JSF-specific tag libraries. The first tag library, declared in our JSP by the following line:

<%@taglib prefix="f" uri="http://java.sun.com/jsf/core"%>

is the core JSF tag library. This library includes a number of tags that are independent of the rendering mechanism of the JSF application (recall that JSF can be used for applications other than web applications). By convention, the prefix f (for faces) is used for this tag library.
The second tag library in the generated JSP, declared by the following line:

<%@taglib prefix="h" uri="http://java.sun.com/jsf/html"%>

is the JSF HTML tag library. This tag library includes a number of tags that are used to implement HTML-specific functionality, such as creating HTML forms and input fields. By convention, the prefix h (for HTML) is used for this tag library.
The first JSF tag we see in the generated JSP file is the <f:view> tag. When writing a Java web application using JSF, all JSF custom tags must be enclosed inside an <f:view> tag. In addition to JSF-specific tags, this tag can contain standard HTML tags, as well as tags from other tag libraries, such as the JSTL tags.
The next JSF-specific tag we see in the above JSP is <h:outputText>. This tag simply displays the value of its value attribute in the rendered page.
The application generated by the new project wizard is a simple, but complete, JSF web application. We can see it in action by right-clicking on our project in the project window and selecting Run. At this point the application server is started (if it wasn't already running), the application is deployed, and the default system browser opens, displaying our application's welcome page.

New SOA Capabilities in BizTalk Server 2009: WCF SQL Server Adapter

Packt
26 Oct 2009
3 min read
Do not go where the path may lead; go instead where there is no path and leave a trail.
- Ralph Waldo Emerson
Many of the patterns and capabilities shown in this article are compatible with the last few versions of the BizTalk Server product. So what's new in BizTalk Server 2009?
BizTalk Server 2009 is the sixth formal release of the BizTalk Server product. This upcoming release has a heavy focus on platform modernization through new support for Windows Server 2008, Visual Studio .NET 2008, SQL Server 2008, and the .NET Framework 3.5. This will surely help developers who have already moved to these platforms in their day-to-day activities but have been forced to maintain separate environments solely for BizTalk development efforts. Let's get started.
What is the WCF SQL Adapter?
The BizTalk Adapter Pack 2.0 now contains five system and data adapters including SAP, Siebel, Oracle databases, Oracle applications, and SQL Server. What are these adapters and how are they different from the adapters available for previous versions of BizTalk? Up until recently, BizTalk adapters were built using a commonly defined BizTalk Adapter Framework. This framework prescribed interfaces and APIs for adapter developers in order to elicit a common look and feel for the users of the adapters. Moving forward, adapter developers are encouraged by Microsoft to use the new WCF LOB Adapter SDK. As you can guess from the name, this new adapter framework, which can be considered an evolution of the BizTalk Adapter Framework, is based on WCF technologies.
All of the adapters in the BizTalk Adapter Pack 2.0 are built upon the WCF LOB Adapter SDK. What this means is that all of the adapters are built as reusable, metadata-rich components that are surfaced to users as WCF bindings. So much like you have a wsHttp or netTcp binding, now you have a sqlBinding or sapBinding. As you would expect from a WCF binding, there is a rich set of configuration attributes for these adapters and they are no longer tightly coupled to BizTalk itself. Microsoft has made connection a commodity, and no longer do organizations have to spend tens of thousands of dollars to connect to line-of-business systems like SAP through expensive, BizTalk-only adapters.
This latest version of the BizTalk Adapter Pack now includes a SQL Server adapter, which replaces the legacy BizTalk-only SQL Server adapter. What do we get from this SQL Server adapter that makes it so much better than the old one?

Feature | Classic SQL Adapter | WCF SQL Adapter
Execute create-read-update-delete statements on tables and views; execute stored procedures and generic T-SQL statements | Partial (send operations only support stored procedures and updategrams) | Yes
Database polling via FOR XML | Yes | Yes
Database polling via traditional tabular results | No | Yes
Proactive database push via SQL Query Notification | No | Yes
Expansive adapter configuration which impacts connection management and transaction behavior | No | Yes
Support for composite transactions which allow aggregation of operations across tables or procedures into a single atomic transaction | No | Yes
Rich metadata browsing and retrieval for finding and selecting database operations | No | Yes
Support for the latest data types (e.g. XML) and the SQL Server 2008 platform | No | Yes
Reusable outside of BizTalk applications by WCF or basic HTTP clients | No | Yes
Adapter extension and configuration through out-of-the-box WCF components or custom WCF behaviors | No | Yes
Dynamic WSDL generation which always reflects the current state of the system instead of a fixed contract which always requires explicit updates | No | Yes

JBoss Tools Palette

Packt
26 Oct 2009
4 min read
By default, JBoss Tools Palette is available in the Web Development perspective that can be displayed from the Window menu by selecting the Open Perspective | Other option. In the following screenshot, you can see the default look of this palette:
Let's dissect this palette to see how it makes our life easier!
JBoss Tools Palette Toolbar
Note that on the top right corner of the palette, we have a toolbar made of three buttons (as shown in the following screenshot). They are (from left to right):
Palette Editor
Show/Hide
Import
Each of these buttons accomplishes different tasks for offering a high level of flexibility and customizability. Next, we will focus our attention on each one of these buttons.
Palette Editor
Clicking on the Palette Editor icon will display the Palette Editor window (as shown in the following screenshot), which contains groups and subgroups of tags that are currently supported. Also, from this window you can create new groups, subgroups, icons, and of course, tags, as you will see in a few moments. As you can see, this window contains two panels: one for listing groups of tag libraries (left side) and another that displays details about the selected tag and allows us to modify the default values (far right).
Modifying a tag is a very simple operation that can be done like this:
Select from the left panel the tag that you want to modify (for example, the <div> tag from the HTML | Block subgroup, as shown in the previous screenshot).
In the right panel, click on the row from the value column that corresponds to the property that you want to modify (the name column).
Make the desirable modification(s) and click the OK button to confirm it (or them).
Creating a set of icons
The Icons node from the left panel allows you to create sets of icons and import new icons for your tags. To start, you have to right-click on this node and select the Create | Create Set option from the contextual menu (as shown in the following screenshot). This action will open the Add Icon Set window where you have to specify a name for this new set. Once you're done with the naming, click on the Finish button (as shown in the following screenshot). For example, we have created a set named eHTMLi:
Importing an icon
You can import a new icon in any set of icons by right-clicking on the corresponding set and selecting the Create | Import Icon option from the contextual menu (as shown in the following screenshot):
This action will open the Add Icon window, where you have to specify a name and a path for your icon, and then click on the Finish button (as shown in the following screenshot). Note that the image of the icon should be in GIF format.
Creating a group of tag libraries
As you can see, the JBoss Tools Palette has a consistent default set of groups of tag libraries, like HTML, JSF, JSTL, Struts, XHTML, etc. If these groups are insufficient, then you can create new ones by right-clicking on the Palette node and selecting the Create | Create Group option from the contextual menu (as shown in the following screenshot). This action will open the Create Group window, where you have to specify a name for the new group, and then click on Finish. For example, we have created a group named mygroup:
Note that you can delete groups (only groups created by the user) or edit any group by selecting the Delete or Edit options from the contextual menu that appears when you right-click on the chosen group.
Creating a tag library
Now that we have created a group, it's time to create a library (or a subgroup). To do this, you have to right-click on the new group and select the Create Group option from the contextual menu (as shown in the following screenshot). This action will open the Add Palette Group window, where you have to specify a name and an icon for this library, and then click on the Finish button (as shown in the following screenshot). As an example, we have created a library named eHTML with an icon that we had imported in the Importing an icon section discussed earlier in this article:
Note that you can delete a tag library (only tag libraries created by the user) by selecting the Delete option from the contextual menu that appears when you right-click on the chosen library.

Working with Simple Associations using CakePHP

Packt
24 Oct 2009
5 min read
Database relationships are hard to maintain even for a mid-sized PHP/MySQL application, particularly when multiple levels of relationships are involved, because complicated SQL queries are needed. CakePHP offers a simple yet powerful feature called 'object relational mapping' or ORM to handle database relationships with ease. In CakePHP, relations between the database tables are defined through associations: a way to represent the database table relationship inside CakePHP. Once the associations are defined in models according to the table relationships, we are ready to use its wonderful functionalities. Using CakePHP's ORM, we can save, retrieve, and delete related data into and from different database tables with simplicity; no need to write complex SQL queries with multiple JOINs anymore!
In this article by Ahsanul Bari and Anupom Syam, we will have a deep look at various types of associations and their uses. In particular, the purpose of this article is to learn:
How to figure out association types from database table relations
How to define different types of associations in CakePHP models
How to utilize the association for fetching related model data
How to relate associated data while saving
There are basically three types of relationship that can take place between database tables:
one-to-one
one-to-many
many-to-many
The first two of them are simple as they don't require any additional table to relate the tables in relationship. In this article, we will first see how to define associations in models for one-to-one and one-to-many relations. Then we will look at how to retrieve and delete related data from, and save data into, database tables using model associations for these simple associations.
Defining One-To-Many Relationship in Models
To see how to define a one-to-many relationship in models, we will think of a situation where we need to store information about some authors and their books, and the relation between authors and books is one-to-many. This means an author can have multiple books but a book belongs to only one author (which is rather absurd, as in a real-life scenario a book can also have multiple authors). We are now going to define associations in models for this one-to-many relation, so that our models recognize their relations and can deal with them accordingly.
Time for Action: Defining One-To-Many Relation
Create a new database and put a fresh copy of CakePHP inside the web root. Name the database whatever you like but rename the cake folder to relationship. Configure the database in the new Cake installation.
Execute the following SQL statements in the database to create a table named authors:

CREATE TABLE `authors` (
    `id` int( 11 ) NOT NULL AUTO_INCREMENT PRIMARY KEY ,
    `name` varchar( 127 ) NOT NULL ,
    `email` varchar( 127 ) NOT NULL ,
    `website` varchar( 127 ) NOT NULL
);

Create a books table in our database by executing the following SQL commands:

CREATE TABLE `books` (
    `id` int( 11 ) NOT NULL AUTO_INCREMENT PRIMARY KEY ,
    `isbn` varchar( 13 ) NOT NULL ,
    `title` varchar( 64 ) NOT NULL ,
    `description` text NOT NULL ,
    `author_id` int( 11 ) NOT NULL
);

Create the Author model using the following code (/app/models/author.php):

<?php
class Author extends AppModel{
    var $name = 'Author';
    var $hasMany = 'Book';
}
?>

Use the following code to create the Book model (/app/models/book.php):

<?php
class Book extends AppModel{
    var $name = 'Book';
    var $belongsTo = 'Author';
}
?>

Create a controller for the Author model with the following code (/app/controllers/authors_controller.php):

<?php
class AuthorsController extends AppController {
    var $name = 'Authors';
    var $scaffold;
}
?>

Use the following code to create a controller for the Book model (/app/controllers/books_controller.php):

<?php
class BooksController extends AppController {
    var $name = 'Books';
    var $scaffold;
}
?>

Now, go to the following URLs and add some test data: http://localhost/relationship/authors/ and http://localhost/relationship/books/
What Just Happened?
We have created two tables: authors and books for storing author and book information. A foreign key named author_id is added to the books table to establish the one-to-many relation between authors and books. Through this foreign key, an author is related to multiple books, and a book is related to one single author. By Cake convention, the name of a foreign key should be the underscored, singular name of the target model, suffixed with _id.
Once the database tables are created and relations are established between them, we can define associations in models. In both of the model classes, Author and Book, we defined associations to represent the one-to-many relationship between the corresponding two tables. CakePHP provides two types of association, hasMany and belongsTo, to define one-to-many relations in models. These associations are very appropriately named:
As an author 'has many' books, the Author model should have a hasMany association to represent its relation with the Book model.
As a book 'belongs to' one author, the Book model should have a belongsTo association to denote its relation with the Author model.
In the Author model, an association attribute $hasMany is defined with the value Book to inform the model that every author can be related to many books. We also added a $belongsTo attribute in the Book model and set its value to Author to let the Book model know that every book is related to only one author. After defining the associations, two controllers were created for both of these models with scaffolding to see how the associations are working.
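To see what these associations buy us beyond scaffolding, here is a minimal sketch (not part of the original article) of a hypothetical controller action that fetches an author together with the associated books. Because the hasMany association is defined, CakePHP pulls the related Book records along with the Author record, with no hand-written JOINs:

<?php
class AuthorsController extends AppController {
    var $name = 'Authors';

    // Hypothetical action for illustration: fetch one author and the
    // books related to it via the hasMany association.
    function view($id = null) {
        $author = $this->Author->find('first', array(
            'conditions' => array('Author.id' => $id)
        ));
        // $author['Author'] holds the author row,
        // $author['Book'] holds an array of the related book rows.
        $this->set('author', $author);
    }
}
?>

The view() action and its name are purely illustrative; the point is that a single find() call returns the author data under the 'Author' key and all related books under the 'Book' key.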

RSS Web Widget

Packt
24 Oct 2009
8 min read
What is an RSS Feed?
First of all, let us understand what a web feed is. Basically, it is a data format that provides frequently updated content to users. Content distributors syndicate the web feed, allowing users to subscribe to it by using a feed aggregator. RSS feeds contain data in an XML format. RSS is the term used for describing Really Simple Syndication, RDF Site Summary, or Rich Site Summary, depending upon the different versions. RDF (Resource Description Framework), a family of W3C specifications, is a data model format for modelling various information such as title, author, modified date, content, etc. through a variety of syntax formats. RDF is basically designed to be read by computers for exchanging information. Since RSS is an XML format for data representation, different authorities defined different formats of RSS across different versions like 0.90, 0.91, 0.92, 0.93, 0.94, 1.0 and 2.0. The following table shows when and by whom the different RSS versions were proposed.

RSS Version | Year | Developer's Name
RSS 0.90 | 1999 | Netscape introduced RSS 0.90.
RSS 0.91 | 1999 | Netscape proposed the simpler format of RSS 0.91.
 | 1999 | UserLand Software proposed the RSS specification.
RSS 1.0 | 2000 | O'Reilly released RSS 1.0.
RSS 2.0 | 2000 | UserLand Software proposed the further RSS specification in this version and it is the most popular RSS format being used these days.

Meanwhile, Harvard Law School is responsible for the further development of the RSS specification. There had been a competition-like scenario for developing the different versions of RSS between UserLand, Netscape and O'Reilly before the official RSS 2.0 specification was released. For a detailed history of these different versions of RSS you can check http://www.rss-specifications.com/history-rss.htm
The current version of RSS is 2.0 and it is the common format for publishing RSS feeds these days. Like RSS, there is another format that uses the XML language for publishing web feeds. It is known as the ATOM feed, and is most commonly used in wiki and blogging software. Please refer to http://en.wikipedia.org/wiki/ATOM for detail.
The following is the RSS icon that denotes links with RSS feeds.
If you're using Mozilla's Firefox web browser then you're likely to see the above image in the address bar of the browser for subscribing to an RSS feed link available in any given page. Web browsers like Firefox and Safari discover available RSS feeds in web pages by looking at the Internet media type application/rss+xml. The following tag specifies that this web page is linked with the RSS feed URL http://www.example.com/rss.xml:

<link href="http://www.example.com/rss.xml" rel="alternate"
    type="application/rss+xml" title="Sitewide RSS Feed" />

Example of RSS 2.0 format
First of all, let's look at a simple example of the RSS format.

<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0">
<channel>
    <title>Title of the feed</title>
    <link>http://www.examples.com</link>
    <description>Description of feed</description>
    <item>
        <title>News1 heading</title>
        <link>http://www.example.com/news-1</link>
        <description>detail of news1</description>
    </item>
    <item>
        <title>News2 heading</title>
        <link>http://www.example.com/news-2</link>
        <description>detail of news2</description>
    </item>
</channel>
</rss>

The first line is the XML declaration that indicates its version is 1.0. The character encoding is UTF-8. UTF-8 supports many European and Asian characters, so it is widely used as the character encoding on the web.
The next line is the rss declaration, which declares that this is an RSS document of version 2.0.
The next line contains the <channel> element which is used for describing the detail of the RSS feed. The <channel> element must have three required elements: <title>, <link> and <description>. The title tag contains the title of that particular feed. Similarly, the link element contains the hyperlink of the channel and the description tag describes or carries the main information of the channel. This tag usually contains the information in detail. Furthermore, each <channel> element may have one or more <item> elements which contain the stories of the feed. Each <item> element must have the three elements <title>, <link> and <description>, whose use is similar to those of the channel elements, but they describe the details of each individual item. Finally, the last two lines are the closing tags for the <channel> and <rss> elements.
Creating RSS Web Widget
The RSS widget we're going to build is a simple one which displays the headlines from the RSS feed, along with the title of the RSS feed. This is another widget which uses some JavaScript, PHP, CSS, and HTML. The content of the widget is displayed within an Iframe, so when you set up the widget, you have to adjust the height and width. To parse the RSS feed in XML format, I've used the popular PHP RSS parser, Magpie RSS. The homepage of Magpie RSS is located at http://magpierss.sourceforge.net/.
Introduction to Magpie RSS
Before writing the code, let's understand what the benefits of using the Magpie framework are, and how it works.
It is easy to use.
While other RSS parsers are normally limited to parsing certain RSS versions, this parser parses most RSS formats, i.e. RSS 0.90 to 2.0, as well as ATOM feeds.
Magpie RSS supports an integrated object cache, which means that a second request to parse the same RSS feed is fast, as it is fetched from the cache.
Now, let's quickly understand how Magpie RSS is used to parse the RSS feed. I'm going to pick the example from their homepage for demonstration.

require_once 'rss_fetch.inc';
$url = 'http://www.getacoder.com/rss.xml';
$rss = fetch_rss($url);
echo "Site: ", $rss->channel['title'], "<br>";
foreach ($rss->items as $item) {
    $title = $item['title'];
    $url = $item['link'];
    echo "<a href=$url>$title</a><br>";
}

If you're more interested in trying other PHP RSS parsers then you might like to check out the SimplePie RSS parser (http://simplepie.org/) and LastRSS (http://lastrss.oslab.net/).
You can see in the first line how the rss_fetch.inc file is included in the working file. After that, the URL of the RSS feed from getacoder.com is assigned to the $url variable. The fetch_rss() function of Magpie is used for fetching data and converting this data into RSS objects. In the next line, the title of the RSS feed is displayed using the code $rss->channel['title']. The other lines are used for displaying each of the RSS feed's items. Each feed item is stored within the $rss->items array, and the foreach() loop is used to loop through each element of the array.
Writing Code for our RSS Widget
As I've already discussed, this widget is going to use an Iframe for displaying the content of the widget, so let's look at the JavaScript code for embedding the Iframe within the HTML code.
var widget_string = '<iframe src="http://www.yourserver.com/rsswidget/rss_parse_handler.php?rss_url=';
widget_string += encodeURIComponent(rss_widget_url);
widget_string += '&maxlinks='+rss_widget_max_links;
widget_string += '" height="'+rss_widget_height+'" width="'+rss_widget_width+'"';
widget_string += ' style="border:1px solid #FF0000;"';
widget_string += ' scrolling="no" frameborder="0"></iframe>';
document.write(widget_string);

In the above code, the widget_string variable contains the string for displaying the widget. The source of the Iframe is assigned to rss_parse_handler.php. The URL of the RSS feed, and the maximum number of headlines to show, are passed to rss_parse_handler.php via the GET method, using the rss_url and maxlinks parameters respectively. The values of these parameters are assigned from the JavaScript variables rss_widget_url and rss_widget_max_links. The width and height of the Iframe are also assigned from JavaScript variables, namely rss_widget_width and rss_widget_height.
The red border on the widget is displayed by assigning 1px solid #FF0000 to the border attribute using inline CSS styling. Since inline CSS is used, the frameborder property is set to 0 (i.e. the border of the frame is zero). Displaying borders from CSS has some benefits over employing the frameborder property. Using CSS code, 1px dashed #FF0000 (border-width border-style border-color) means you can display a dashed border (which you can't do using frameborder), and you can use the border-right, border-left, border-top, and border-bottom attributes of CSS to display borders at specified positions of the object. The scrolling property is set to no here, which means that the scroll bar will not be displayed in the widget if the widget content overflows. If you want to show a scroll bar, then you can set this property to yes.
The values of JavaScript variables like rss_widget_url and rss_widget_max_links come from the page where we'll be using this widget. You'll see how the values of these variables are assigned in the section at the end, where we'll look at how to use this RSS widget.
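The Iframe's source, rss_parse_handler.php, is referenced above but not shown in this excerpt. The following is a minimal, hypothetical sketch of what such a handler could look like, assuming Magpie RSS (rss_fetch.inc) sits alongside it and that the rss_url and maxlinks parameters arrive via GET as described above:

<?php
// Hypothetical sketch of rss_parse_handler.php (not from the original article).
// Assumes Magpie RSS (rss_fetch.inc) is available in the same directory.
require_once 'rss_fetch.inc';

// Read the parameters passed from the widget's Iframe URL.
$rss_url  = isset($_GET['rss_url']) ? $_GET['rss_url'] : '';
$maxlinks = isset($_GET['maxlinks']) ? (int) $_GET['maxlinks'] : 5;

$rss = fetch_rss($rss_url);

if ($rss) {
    // Show the feed title, then up to $maxlinks headlines.
    echo '<h3>' . htmlspecialchars($rss->channel['title']) . '</h3>';
    echo '<ul>';
    $items = array_slice($rss->items, 0, $maxlinks);
    foreach ($items as $item) {
        $title = htmlspecialchars($item['title']);
        $link  = htmlspecialchars($item['link']);
        echo "<li><a href=\"$link\">$title</a></li>";
    }
    echo '</ul>';
} else {
    echo 'Unable to fetch the RSS feed.';
}
?>

The parameter handling and markup here are only one way to do it; the actual handler used in the article may differ, but the flow (fetch the feed, print the channel title, loop over a bounded number of items) follows the Magpie example discussed earlier.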

JBoss AS Perspective

Packt
23 Oct 2009
4 min read
As you know, Eclipse offers an ingenious system of perspectives that helps us to switch between different technologies and to keep the main screen as clean as possible. Every perspective is made of a set of components that can be added/removed by the user. These components are known as views. The JBoss AS Perspective has a set of specific views as follows:
JBoss Server View
Project Archives View
Console View
Properties View
For launching the JBoss AS Perspective (or any other perspective), follow these two simple steps:
From the Window menu, select the Open Perspective | Other option.
In the Open Perspective window, select the JBoss AS option and click on the OK button (as shown in the following screenshot).
If everything works fine, you should see the JBoss AS perspective as shown in the following screenshot:
If any of these views is not available by default in your JBoss AS perspective, then you can add it manually by selecting the Show View | Other option from the Window menu. In the Show View window (shown in the following screenshot), you just select the desired view and click on the OK button.
JBoss Server View
This view contains a simple toolbar known as the JBoss Server View Toolbar and two panels that separate the list of servers (top part) from the list of additional information about the selected server (bottom part). Note that the quantity of additional information is directly related to the server type.
Top part of JBoss Server View
In the top part of the JBoss Server View, we can see a list of our servers, their states, and whether they are running or stopped.
Starting the JBoss AS
The simplest ways to start our JBoss AS server are:
Select the JBoss 4.2 Server from the server list and click the Start the server button from the JBoss Server View Toolbar (as shown in the following screenshot).
Select the JBoss 4.2 Server from the server list and right-click on it. From the context menu, select the Start option (as shown in the following screenshot).
In both cases, a detailed evolution of the startup process will be displayed in the Console View, as you can see in the following screenshot.
Stopping the JBoss AS
The simplest ways to stop the JBoss AS server are:
Select the JBoss 4.2 Server from the server list and click the Stop the server button from the JBoss Server View Toolbar.
Select the JBoss 4.2 Server from the server list and right-click on it. From the context menu, select the Stop option.
In both cases, a detailed evolution of the stopping process will be displayed in the Console View, as you can see in the following screenshot.
Additional operations on JBoss AS
Besides the Start and Stop operations, JBoss Server View allows us to:
Add a new server (the New Server option from the contextual menu)
Remove an existing server (the Delete option from the contextual menu)
Start the server in debug mode (first button on the JBoss Server View Toolbar)
Start the server in profiling mode (third button on the JBoss Server View Toolbar)
Publish to the server or sync the publish information between the server and the workspace (the Publish option from the contextual menu or the last button on the JBoss Server View Toolbar)
Discard all publish state and republish from scratch (the Clean option from the contextual menu)
Twiddle the server (the Twiddle Server option from the contextual menu)
Edit the launch configuration (the Edit Launch Configuration option from the contextual menu as shown in the following screenshot).
Add/remove projects (the Add and Remove Projects option from the contextual menu)
Double-click the server name and modify parts of that server in the Server Editor; if you have a username and a password to start the server, you can specify those credentials here (as shown in the following screenshot).
Twiddle is a JMX library that comes with JBoss, and it is used to access (any) variables that are exposed via the JBoss JMX interfaces.
Server publish status
A server may have one of the following statuses:
Synchronized: Allows you to see if changes are in sync (as shown in the following screenshot)
Publishing: Allows you to see if changes are being updated
Republish: Allows you to see if changes are waiting