
How-To Tutorials - Application Development

357 Articles

Developing Web Applications using JavaServer Faces: Part 1

Packt
27 Oct 2009
6 min read
Although a lot of applications have been written using these APIs, most modern Java applications are written using some kind of web application framework. As of Java EE 5, the standard framework for building web applications is JavaServer Faces (JSF).

Introduction to JavaServer Faces

Before JSF was developed, Java web applications were typically built with non-standard web application frameworks such as Apache Struts, Tapestry, Spring Web MVC, and many others. These frameworks are built on top of the Servlet and JSP standards and automate a lot of functionality that must be coded by hand when using those APIs directly. The wide variety of web application frameworks available (at the time of writing, Wikipedia lists 35 Java web application frameworks, and that list is far from exhaustive) often resulted in "analysis paralysis": developers spent an inordinate amount of time evaluating frameworks for their applications. The introduction of JSF in the Java EE 5 specification means a standard web application framework is available in any Java EE 5 compliant application server. We don't mean to imply that other web application frameworks are obsolete or that they shouldn't be used; however, many organizations consider JSF the "safe" choice, since it is part of the standard and should be well supported for the foreseeable future. Additionally, NetBeans offers excellent JSF support, making JSF a very attractive choice.

Strictly speaking, JSF is not a web application framework as such, but a component framework. In theory, JSF can be used to write applications that are not web-based; in practice, however, JSF is almost always used to build web applications. In addition to being the standard Java EE 5 component framework, JSF was designed with graphical tools in mind, making it easy for tools and IDEs such as NetBeans to take advantage of the JSF component model with drag-and-drop support for components. NetBeans provides a Visual Web JSF Designer that allows us to create JSF applications visually.

Developing Our First JSF Application

From an application developer's point of view, a JSF application consists of a series of JSP pages containing custom JSF tags, one or more JSF managed beans, and a configuration file named faces-config.xml. The faces-config.xml file declares the managed beans in the application, as well as the navigation rules to follow when navigating from one JSF page to another.

Creating a New JSF Project

To create a new JSF project, we go to File | New Project, select the Java Web project category, and Web Application as the project type. After clicking Next, we enter a Project Name and optionally change other information for our project, although NetBeans provides sensible defaults. On the next page of the wizard, we can select the Server, Java EE Version, and Context Path of our application. In our example, we simply accept the default values. On the following page of the new project wizard, we select the frameworks our web application will use; unsurprisingly, for JSF applications we need to select the JavaServer Faces framework. The Visual Web JavaServer Faces framework allows us to build web pages quickly by dragging and dropping components from the NetBeans palette onto our pages. Although it certainly lets us develop applications much faster than coding by hand, it hides a lot of the "ins" and "outs" of JSF.
Having a background in standard JSF development will help us understand what the NetBeans Visual Web functionality does behind the scenes. When we click Finish, the wizard generates a skeleton JSF project for us, consisting of a single JSP file called welcomeJSF.jsp and a few configuration files: web.xml, faces-config.xml and, if we are using the default bundled GlassFish server, the GlassFish-specific sun-web.xml file. web.xml is the standard configuration file needed for all Java web applications. faces-config.xml is a JSF-specific configuration file used to declare JSF managed beans and navigation rules. sun-web.xml is a GlassFish-specific configuration file that allows us to override the application's default context root, add security role mappings, and perform several other configuration tasks.

The generated JSP looks like this:

<%@page contentType="text/html"%>
<%@page pageEncoding="UTF-8"%>
<%@taglib prefix="f" uri="http://java.sun.com/jsf/core"%>
<%@taglib prefix="h" uri="http://java.sun.com/jsf/html"%>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
  "http://www.w3.org/TR/html4/loose.dtd">
<%-- This file is an entry point for JavaServer Faces application. --%>
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <title>JSP Page</title>
  </head>
  <body>
    <f:view>
      <h1>
        <h:outputText value="JavaServer Faces"/>
      </h1>
    </f:view>
  </body>
</html>

As we can see, a JSF-enabled JSP file is a standard JSP file that uses a couple of JSF-specific tag libraries. The first tag library, declared in our JSP by the following line:

<%@taglib prefix="f" uri="http://java.sun.com/jsf/core"%>

is the core JSF tag library. This library includes a number of tags that are independent of the rendering mechanism of the JSF application (recall that JSF can be used for applications other than web applications). By convention, the prefix f (for faces) is used for this tag library. The second tag library in the generated JSP, declared by the following line:

<%@taglib prefix="h" uri="http://java.sun.com/jsf/html"%>

is the JSF HTML tag library. This tag library includes a number of tags used to implement HTML-specific functionality, such as creating HTML forms and input fields. By convention, the prefix h (for HTML) is used for this tag library.

The first JSF tag we see in the generated JSP file is the <f:view> tag. When writing a Java web application using JSF, all JSF custom tags must be enclosed inside an <f:view> tag. In addition to JSF-specific tags, this tag can contain standard HTML tags, as well as tags from other tag libraries, such as the JSTL tags. The next JSF-specific tag we see in the above JSP is <h:outputText>. This tag simply displays the value of its value attribute in the rendered page.

The application generated by the new project wizard is a simple, but complete, JSF web application. We can see it in action by right-clicking on our project in the project window and selecting Run. At this point the application server is started (if it wasn't already running), the application is deployed, and the default system browser opens, displaying our application's welcome page.
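The wizard does not generate a managed bean for us, but since the article leans on them, here is a rough sketch of what one looks like. The class name, property, and package below are invented purely for illustration and are not part of the generated project.

// Hypothetical JSF 1.x managed bean, matching the Java EE 5 / NetBeans setup described above.
// It would be registered in faces-config.xml with a <managed-bean> entry and referenced
// from a page as #{userBean.firstName}.
package com.example.jsfapp;

public class UserBean {

    private String firstName = "";

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }
}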

Google Apps: Surfing the Web

Packt
13 Dec 2011
8 min read
Browsing and using websites

Back in the day, when I first started writing, there were two ways to research: own a lot of books (I did and still do), and/or spend a lot of time at the library (I did, don't have to now). This all changed in the early 90s when Sir Tim Berners-Lee invented the World Wide Web (WWW). The web used the Internet, which was already there, although Al Gore did not invent the Internet. Although earlier experiments took place, Vinton Cerf's development of the basic transmission protocols that make everything we enjoy today possible earned him the title "Father of the Internet". All of us reading this book use the Internet and the web almost every day. App Inventor itself relies extensively on the web; you have to use the web in designing apps and downloading the Blocks Editor to power them up. Adding web browsing to our apps gives us:

- Access to literally trillions of web pages (no one knows how many; Google said it was over a trillion in 2008, and growth has been tremendous since then).
- Leverage for the limited resources and storage capacity of the user's phone, multiplying them many times over as we tap into the millions of dollars invested in server farms (vast collections of interconnected computers) and the thousands of web pages on commercial sites (which they want us to use, even beg us to use, because it promotes their products or services).
- The ability to write powerful and useful apps with really little effort on our part.

Take, for example, what I referred to earlier in this book as a link app. This is an app that, when called by the user, simply uses ActivityStarter to open a browser and load a web page. Let's whip one up.

Time for action – building an eBay link app

Yes, eBay loves us—especially if we buy or sell items on this extensive online auction site. And they want you to do it not just at home but when you're out with only your smartphone. To encourage use from your phone, eBay has spent a gazillion dollars (that's a lot) on providing a mobile site (http://m.ebay.com) that makes the entire eBay site (hundreds of thousands, probably millions of pages) available and useful to you. Back to that word, leverage—the ancient Greek mathematician Archimedes said about levers, "Give me a place to stand on and I will move the Earth." Our app's leverage won't move the Earth, but we can sure bid on just about everything on it.

Okay, the design of our eBay app is very simple, since the screen will be there for only a second or so; I'll explain that in a moment. We need just a label and two non-visual components: ActivityStarter and Clock. In the Properties column for the ActivityStarter, put android.intent.action.VIEW in the Action field (again, this is how your app calls the phone's web browser, and the way it's entered is case-sensitive). This gives us something as shown in the following screenshot (with a little formatting added):

Now, the reason for the "LOADING...\nInternet connection required" text (in the label, \n being a line break) is that it acts as a basic notifier and error trap. First, if the Internet connection is slow, it lets the user know something is happening. Second, if there is no 3G or Wi-Fi connection, it tells the user why nothing will happen until they get a connection. Simple additions like this (basically, thinking about the user's experience of our apps) define the difference between amateur and professional apps. Always try to anticipate, because if we leave any way possible at all to screw it up, some user will find it. And who gets blamed? Right.
You, me, or whatever idiot wrote that defective app. And they will zing you on Android Market. Okay, now to our buddy: the Blocks Editor. We need only one small block group. Everything is in the Clock1.Timer block to connect to the eBay mobile site and then, job done, gracefully exit. Inside the clock timer frame, we put four blocks, which accomplish the following logic:

- In ActivityStarter1.DataUri goes the web address of the eBay mobile website home: http://m.ebay.com.
- The ActivityStarter1.StartActivity block calls the phone's web browser and passes the address to it.
- Clock1.TimerAlwaysFires set to false tells AI, "Stop, don't do anything else."
- Finally, the close application block (which we should always be nice and include somewhere in our apps) removes the app from the phone or other Android device's memory, releasing those always scarce resources.

Make a nice icon, and it's ready to go onto Android Market (I put mine on as eBay for Droids).

What just happened? That's it—a complete application giving access to a few million great bargains on eBay. This is how it will look when the eBay mobile site home page opens on the user's phone:

Pretty neat, huh? And there are lots of other major sites out there that offer mobile sites, easily converted into link apps such as this eBay example. But what if you want to make the app more complex, with a lot of choices? One where, after the user finishes browsing a site, they come back to your app and choose yet another site to visit? No problem. Here's an example of that. I recently published an app called AVLnews (AVL is the airport code for Asheville, North Carolina, where I live). This app lets the user drill down into local news for all 19 counties in the Western North Carolina mountains. Here's what the front page of the app looks like:

Even with several pages of choices, this type of app is truly easy to create. You need only a few buttons, labels, and (as ever) horizontal and vertical arrangements for formatting. And, of course, a single ActivityStarter for calling the phone's web browser. Now, here's the cool part! No matter how long and how many pages your user reads on the site a button sends him to, when he or she hits the hardware Back button on their phone, they return to your app. The preceding is the only way the Back button works for an App Inventor app! If the user is within an AI app and hits the Back button, it immediately kills your app (except when a listpicker is displayed, in which case it returns you to the screen that invoked it). This is doubleplusungood, so to speak. So, always supply navigation buttons for changing pages, and maybe a note somewhere not to use the hardware button.

Okay, back to my AVLnews app as an example. The first button I built was for the Citizen-Times, our major newspaper in the area. They have a mobile site like the one we saw earlier for eBay. It looks nice and is easy to read on a Droid or other smartphone, like the following: And it's equally easy to program—this is all you need:

I then moved on to other news sources for the area. The small weekly newspapers in the more isolated, outlying mountain counties have no expensive mobile sites. The websites they do have are pretty conventional. But when I linked to them, the page would come up with tiny, unreadable type. I had to move it around, double-tap the content to fit it to the page, turn the phone on its side, and more, and I still had trouble reading the stories.
Folks, you do not do things like that to your users. Not if you want them to think you are one cool app inventor, as I know you to be by now. So, here's the trick to making almost all web pages readable on a phone: Google Mobilizer! It's a free service from Google at http://www.google.com/gwt/n. Use it to construct links that return a nicely formatted and readable page, as shown in the next screenshot. People will thank you for it and think you expended many hours of genius-level programming effort to achieve it. And, remember, it's not polite to disagree with people, eh?

Often, the search terms and mobilizing parameters in your link are long and go on for some time (the following screenshot shows only a part of one). However, that's where your real genius comes into play: building a search term that gets the articles fitting whatever topic the button covers.

Summing up: using the power of the web lets us build powerful apps that use few resources on the phone yet return thousands upon thousands of pages. Enjoy. Now, we will look at yet another Google service, Fusion Tables, and see how our App Inventor apps can take advantage of these online data sources.
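App Inventor itself is programmed with visual blocks rather than code, but the mobilized links the author describes are just URLs you paste into ActivityStarter1.DataUri. As a rough illustration only, here is how such a link could be composed in Java; the u query parameter is an assumption about how the mobilizer accepted the target address, so treat the exact format as illustrative rather than authoritative.

// Illustrative only: builds a Google-Mobilizer-style link for a target page.
// The "u" parameter name is an assumption, not confirmed by the article.
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class MobilizerLink {

    public static String mobilize(String targetUrl) throws UnsupportedEncodingException {
        String base = "http://www.google.com/gwt/n";   // mobilizer address given in the article
        String encoded = URLEncoder.encode(targetUrl, "UTF-8");
        return base + "?u=" + encoded;                 // assumed parameter name
    }

    public static void main(String[] args) throws Exception {
        // Example: a small-town newspaper site that has no mobile version.
        System.out.println(mobilize("http://www.example-weekly-news.com/local"));
    }
}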

Normalizing Dimensional Model

Packt
08 Feb 2010
3 min read
First Normal Form Violation in Dimension table

Let's revisit the author problem in the book dimension. The AUTHOR column contains multiple authors, a first-normal-form violation, which prevents us from querying books or sales by author.

BOOK dimension table

BOOK_SK | TITLE | AUTHOR | PUBLISHER | CATEGORY | SUB-CATEGORY
1 | Programming in Java | King, Chan | Pac | Programming | Java
2 | Learning Python | Simpson | Pac | Programming | Python
3 | Introduction to BIRT | Chan, Gupta, Simpson (Editor) | Apes | Reporting | BIRT
4 | Advanced Java | King, Chan | Apes | Programming | Java

Normalizing the authors out into their own dimension and adding an artificial BOOK AUTHOR fact solves the problem; it is an artificial fact because it does not contain any real business measure. Note that Editor, which is an author's role rather than an author's name, is also "normalized" out of the AUTHOR column and into its own ROLE column in the BOOK AUTHOR fact.

AUTHOR table

AUTHOR_SK | AUTHOR_NAME
1 | King
2 | Chan
3 | Simpson
4 | Gupta

BOOK AUTHOR table

BOOK_SK | AUTHOR_SK | ROLE | COUNT
1 | 1 | Co-author | 1
1 | 2 | Co-author | 1
2 | 3 | Author | 1
3 | 2 | Co-author | 1
3 | 3 | Editor | 1
3 | 4 | Co-author | 1
4 | 1 | Co-author | 1
4 | 2 | Co-author | 1

Note that the artificial COUNT measure, which facilitates aggregation, always has a value of 1.

SELECT name, SUM(count)
FROM book_author ba, book_dim b, author_dim a
WHERE ba.book_sk = b.book_sk
  AND ba.author_sk = a.author_sk
GROUP BY name

You might need to query sales by author, which you can do by combining the queries of the two stars (the two facts) on their common dimension (the BOOK dimension), producing daily book sales by author.

SELECT dt, title, name, role, sales_amt
FROM
  (SELECT book_sk, dt, title, sales_amt
   FROM sales_fact s, date_dim d, book_dim b
   WHERE s.book_sk = b.book_sk
     AND s.date_sk = d.date_sk) sales,
  (SELECT b.book_sk, name, role
   FROM book_author ba, book_dim b, author_dim a
   WHERE ba.book_sk = b.book_sk
     AND ba.author_sk = a.author_sk) author
WHERE sales.book_sk = author.book_sk

Single Column with Repeating Value in Dimension table

Columns like PUBLISHER, though not violating any normal form, are also good candidates for normalization, which we accomplish by adding an artificial fact, the PUBLISHED BOOK fact, and its own dimension, the PUBLISHER dimension. This normalization is not exactly the same as normalizing a first-normal-form violation; the PUBLISHER dimension can also be linked directly to the SALES fact, though the publisher surrogate key must then be added to the SALES fact.

PUBLISHER table

PUBLISHER_SK | PUBLISHER
1 | Pac
2 | Apes

BOOK PUBLISHER table

BOOK_SK | PUBLISHER_SK | COUNT
1 | 1 | 1
2 | 1 | 1
3 | 2 | 1
4 | 2 | 1

Related Columns with Repeating Value in Dimension table

The CATEGORY and SUB-CATEGORY columns are related; they form a hierarchy. Each of them can be normalized into its own dimension, but they then need to be linked together through one artificial fact.

Non-Measure Column in Fact table

The ROLE column inside the BOOK AUTHOR fact is not a measure, which violates the dimensional modeling norm; to resolve this we just need to spin it off into its own dimension, effectively normalizing the fact table.

ROLE dimension table and sample rows

ROLE_SK | ROLE
1 | Author
2 | Co-Author
3 | Editor

BOOK AUTHOR table with normalized ROLE

BOOK_SK | AUTHOR_SK | ROLE_SK | COUNT
1 | 1 | 2 | 1
1 | 2 | 2 | 1
2 | 3 | 1 | 1
3 | 2 | 2 | 1
3 | 3 | 3 | 1
3 | 4 | 2 | 1
4 | 1 | 2 | 1
4 | 2 | 2 | 1

Summary

This article shows that both the dimension tables and the fact tables in a dimensional model can be normalized without violating its modeling norms.
If you have read this article, you may be interested to view: Solving Many-to-Many Relationship in Dimensional Modeling

Seam Conversation Management using JBoss Seam Components: Part 1

Packt
24 Dec 2009
8 min read
The JBoss Seam framework provides elegant solutions to a number of problems. One of these problems is the concept of conversation management. Traditional web applications have a limited number of scopes (or container-managed memory regions) in which they can store data needed by the application at runtime. In a typical Java web application, these scopes are the application scope, the session scope, and the request scope. JSP-based Java web applications also have a page scope.

Application scope is typically used to store stateless components or long-term read-only application data. Session scope provides convenient, medium-term storage for per-user application state, such as user credentials, application preferences, and the contents of a shopping cart. Request scope is short-term storage for per-request information, such as search keywords, data table sort direction, and so on. Seam introduces another scope for JSF applications: the conversation scope. The conversation scope can be as short-term as the request scope, or as long-term as the session scope.

Seam conversations come in two types: temporary conversations and long-running conversations. A temporary Seam conversation typically lasts as long as a single HTTP request. A long-running Seam conversation typically spans several screens and can be tied to more elaborate use cases and workflows within the application, for example, booking a hotel, renting a car, or placing an order for computer hardware. There are some important implications for Seam's conversation management when using the Ajax capabilities of RichFaces and Ajax4jsf. As an Ajax-enabled JSF form may involve many Ajax requests before the form is "submitted" by the user at the end of a use case, some subtle side effects can impact our application if we are not careful. Let's look at an example of how to use Seam conversations effectively with Ajax.

Temporary conversations

When a Seam-enabled conversation-scoped JSF backing bean is accessed for the first time, through a value expression or method expression from the JSF page for instance, the Seam framework creates a temporary conversation if a conversation does not already exist and stores the component instance in that scope. If a long-running conversation already exists, and the component invocation requires a long-running conversation, for example by associating the view with a long-running conversation in pages.xml, by annotating the bean class or method with Seam's @Conversational annotation, by annotating a method with Seam's @Begin annotation, or by using the conversationPropagation request parameter, then Seam stores the component instance in the existing long-running conversation.

ShippingCalculatorBean.java

The following source code demonstrates how to declare a conversation-scoped backing bean using Seam annotations. In this example, we declare the ShippingCalculatorBean as a Seam-managed conversation-scoped component named shippingCalculatorBeanSeam.
import java.io.Serializable;

import org.jboss.seam.ScopeType;
import org.jboss.seam.annotations.Name;
import org.jboss.seam.annotations.Scope;

@Name("shippingCalculatorBeanSeam")
@Scope(ScopeType.CONVERSATION)
public class ShippingCalculatorBean implements Serializable {

    private static final long serialVersionUID = 1L;

    private Country country;
    private Product product;

    public Country getCountry() {
        return country;
    }

    public Product getProduct() {
        return product;
    }

    public Double getTotal() {
        Double total = 0d;
        if (country != null && product != null) {
            total = product.getPrice();
            if (country.getName().equals("USA")) {
                total += 5d;
            } else {
                total += 10d;
            }
        }
        return total;
    }

    public void setCountry(Country country) {
        this.country = country;
    }

    public void setProduct(Product product) {
        this.product = product;
    }
}

faces-config.xml

We also declare the same ShippingCalculatorBean class as a request-scoped backing bean named shippingCalculatorBean in faces-config.xml. Keep in mind that the JSF framework manages this instance of the class, so none of the Seam annotations are effective for instances of this managed bean.

<managed-bean>
  <description>Shipping calculator bean.</description>
  <managed-bean-name>shippingCalculatorBean</managed-bean-name>
  <managed-bean-class>chapter5.bean.ShippingCalculatorBean</managed-bean-class>
  <managed-bean-scope>request</managed-bean-scope>
</managed-bean>

pages.xml

The pages.xml file is an important Seam configuration file. When a Seam-enabled web application is deployed, the Seam framework looks for and processes a file in the WEB-INF directory named pages.xml. This file contains important information about the pages in the JSF application, and enables us to indicate whether a long-running conversation should be started automatically when a view is first accessed. In this example, we declare two pages in pages.xml: one that does not start a long-running conversation, and one that does.

<?xml version="1.0" encoding="utf-8"?>
<pages xsi:schemaLocation="http://jboss.com/products/seam/pages
                           http://jboss.com/products/seam/pages-2.1.xsd">
  <page view-id="/conversation01.jsf" />
  <page view-id="/conversation02.jsf">
    <begin-conversation join="true"/>
  </page>
  …
</pages>

conversation01.jsf

Let's look at the source code for our first Seam conversation test page. In this page, we render two forms side by side in an HTML panel grid. The first form is bound to the JSF-managed, request-scoped ShippingCalculatorBean, and the second form is bound to the Seam-managed, conversation-scoped ShippingCalculatorBean. The form allows the user to select a product and a shipping destination, and then calculates the shipping cost when the command button is clicked. When the user tabs through the fields in a form, an Ajax request is sent, submitting the form data and re-rendering the button. The button remains disabled until the user has selected a value in both fields.

The Ajax request creates a new HTTP request on the server, so for the first form JSF creates a new request-scoped instance of our ShippingCalculatorBean for every Ajax request. As the view is not configured to use a long-running conversation, Seam creates a new temporary conversation and stores a new instance of our ShippingCalculatorBean class in that scope for each Ajax request. Therefore, the behavior that can be observed when running this page in the browser is that the calculation simply does not work. The value is always zero. This is because the model state is being lost due to the incorrect scoping of our backing beans.
<h:panelGrid columns="2" cellpadding="10">
  <h:form>
    <rich:panel>
      <f:facet name="header">
        <h:outputText value="Shipping Calculator (No Conversation)" />
      </f:facet>
      <h:panelGrid columns="1" width="100%">
        <h:outputLabel value="Select Product: " for="product" />
        <h:selectOneMenu id="product" value="#{shippingCalculatorBean.product}">
          <s:selectItems var="product" value="#{productBean.products}"
                         label="#{product.name}" noSelectionLabel="Select" />
          <a4j:support event="onchange" reRender="button" />
          <s:convertEntity />
        </h:selectOneMenu>
        <h:outputLabel value="Select Shipping Destination: " for="country" />
        <h:selectOneMenu id="country" value="#{shippingCalculatorBean.country}">
          <s:selectItems var="country" value="#{customerBean.countries}"
                         label="#{country.name}" noSelectionLabel="Select" />
          <a4j:support event="onchange" reRender="button"/>
          <s:convertEntity />
        </h:selectOneMenu>
        <h:panelGrid columns="1" columnClasses="centered" width="100%">
          <a4j:commandButton id="button" value="Calculate"
              disabled="#{shippingCalculatorBean.country eq null or shippingCalculatorBean.product eq null}"
              reRender="total" />
          <h:panelGroup>
            <h:outputText value="Total Shipping Cost: " />
            <h:outputText id="total" value="#{shippingCalculatorBean.total}">
              <f:convertNumber type="currency" currencySymbol="$" maxFractionDigits="0" />
            </h:outputText>
          </h:panelGroup>
        </h:panelGrid>
      </h:panelGrid>
    </rich:panel>
  </h:form>
  <h:form>
    <rich:panel>
      <f:facet name="header">
        <h:outputText value="Shipping Calculator (with Temporary Conversation)" />
      </f:facet>
      <h:panelGrid columns="1">
        <h:outputLabel value="Select Product: " for="product" />
        <h:selectOneMenu id="product" value="#{shippingCalculatorBeanSeam.product}">
          <s:selectItems var="product" value="#{productBean.products}"
                         label="#{product.name}" noSelectionLabel="Select" />
          <a4j:support event="onchange" reRender="button" />
          <s:convertEntity />
        </h:selectOneMenu>
        <h:outputLabel value="Select Shipping Destination: " for="country" />
        <h:selectOneMenu id="country" value="#{shippingCalculatorBeanSeam.country}">
          <s:selectItems var="country" value="#{customerBean.countries}"
                         label="#{country.name}" noSelectionLabel="Select" />
          <a4j:support event="onchange" reRender="button" />
          <s:convertEntity />
        </h:selectOneMenu>
        <h:panelGrid columns="1" columnClasses="centered" width="100%">
          <a4j:commandButton id="button" value="Calculate"
              disabled="#{shippingCalculatorBeanSeam.country eq null or shippingCalculatorBeanSeam.product eq null}"
              reRender="total" />
          <h:panelGroup>
            <h:outputText value="Total Shipping Cost: " />
            <h:outputText id="total" value="#{shippingCalculatorBeanSeam.total}">
              <f:convertNumber type="currency" currencySymbol="$" maxFractionDigits="0" />
            </h:outputText>
          </h:panelGroup>
        </h:panelGrid>
      </h:panelGrid>
    </rich:panel>
  </h:form>
</h:panelGrid>

The following screenshot demonstrates the problem of using request-scoped or temporary conversation-scoped backing beans in an Ajax-enabled JSF application. As an Ajax request is simply an asynchronous HTTP request marshalled by client-side code executed by the browser's JavaScript interpreter, the request-scoped backing beans are recreated with every Ajax request. The model state is lost and the behavior of the components in the view is incorrect.
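The example above deliberately shows what goes wrong with temporary conversations. For comparison, a long-running conversation is typically started and ended with Seam's @Begin and @End annotations (or declaratively in pages.xml, as shown earlier). The following is only a minimal sketch of that pattern; the CheckoutWizard class, its component name, and its method bodies are invented for illustration and are not part of the shipping calculator example.

// Hypothetical sketch of a Seam component that controls a long-running conversation.
// @Name, @Scope, @Begin, and @End are real Seam annotations; the rest is illustrative.
import java.io.Serializable;

import org.jboss.seam.ScopeType;
import org.jboss.seam.annotations.Begin;
import org.jboss.seam.annotations.End;
import org.jboss.seam.annotations.Name;
import org.jboss.seam.annotations.Scope;

@Name("checkoutWizard")
@Scope(ScopeType.CONVERSATION)
public class CheckoutWizard implements Serializable {

    private static final long serialVersionUID = 1L;

    private String shippingAddress;

    // Promotes the temporary conversation to a long-running one, so the component's
    // state survives subsequent requests (including Ajax requests) until it is ended.
    @Begin(join = true)
    public String start() {
        return "enterAddress";
    }

    // Ends the long-running conversation; it reverts to a temporary conversation
    // that is destroyed at the end of the current request.
    @End
    public String confirm() {
        return "orderConfirmed";
    }

    public String getShippingAddress() {
        return shippingAddress;
    }

    public void setShippingAddress(String shippingAddress) {
        this.shippingAddress = shippingAddress;
    }
}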

Working with Databases

Packt
01 Oct 2013
19 min read
We have learned how to create snippets, how to work with forms and Ajax, how to test your code, and how to create a REST API, and all this is awesome! However, we have not yet learned how to persist data to make it durable. In this article, we will see how to use Mapper, an object-relational mapping (ORM) system for relational databases included with Lift. The idea is to map database tables into a well-organized structure of objects; for example, if you have a table that holds users, you will have a class that represents the user table. Let's say that a user can have several phone numbers. You will probably have a table containing a column for the phone numbers and a column containing the ID of the user owning each particular phone number. This means that you will have a class to represent the table that holds phone numbers, and the user class will have an attribute to hold the list of its phone numbers; this is known as a one-to-many relationship. Mapper is a system that helps us build such mappings by providing useful features and abstractions that make working with databases a simple task. For example, Mapper provides several types of fields, such as MappedString, MappedInt, and MappedDate, which we can use to map the attributes of the class to the columns in the table being mapped. It also provides useful methods such as findAll, which is used to get a list of records, and save, which persists the data. There is a lot more that Mapper can do, and we'll see what this is through the course of this article.

Configuring a connection to database

The first thing we need to learn while working with databases is how to connect the application that we will build with the database itself. In this recipe, we will show you how to configure Lift to connect with the database of our choice. For this recipe, we will use PostgreSQL; however, other databases can also be used.

Getting ready

Start a new blank project. Edit the build.sbt file to add the lift-mapper and PostgreSQL driver dependencies:

"net.liftweb" %% "lift-mapper" % liftVersion % "compile",
"org.postgresql" % "postgresql" % "9.2-1003-jdbc4" % "compile"

Create a new database. Create a new user.

How to do it...

Now carry out the following steps to configure a connection with the database:

Add the following lines into the default.props file:

db.driver=org.postgresql.Driver
db.url=jdbc:postgresql:liftbook
db.user=<place here the user you've created>
db.password=<place here the user password>

Add the following import statement in the Boot.scala file:

import net.liftweb.mapper._

Create a new method named configureDB() in the Boot.scala file with the following code:

def configureDB() {
  for {
    driver <- Props.get("db.driver")
    url <- Props.get("db.url")
  } yield {
    val standardVendor =
      new StandardDBVendor(driver, url, Props.get("db.user"), Props.get("db.password"))
    LiftRules.unloadHooks.append(standardVendor.closeAllConnections_! _)
    DB.defineConnectionManager(DefaultConnectionIdentifier, standardVendor)
  }
}

Then, invoke configureDB() from inside the boot method.

How it works...

Lift offers the net.liftweb.mapper.StandardDBVendor class, which we can use to create connections to the database easily. This class takes four arguments: driver, URL, user, and password.
These are described as follows:

- driver: The JDBC driver that we will use, that is, org.postgresql.Driver.
- url: The JDBC URL that the driver will use, that is, jdbc:postgresql:liftbook.
- user and password: The values you set when you created the user in the database.

After creating the default vendor, we need to bind it to a Java Naming and Directory Interface (JNDI) name that will be used by Lift to manage the connection with the database. To create this binding, we invoked the defineConnectionManager method from the DB object. This method adds the connection identifier and the database vendor into a HashMap, using the connection identifier as the key and the database vendor as the value. The DefaultConnectionIdentifier object provides a default JNDI name that we can use without having to worry about creating our own connection identifier. You can create your own connection identifier if you want; you just need to create an object that extends ConnectionIdentifier with a method called jndiName that returns a string. Finally, we told Lift to close all the connections while shutting down the application by appending a function to the unloadHooks variable. We did this to avoid locking connections while shutting the application down.

There's more...

It is possible to configure Lift to use a JNDI datasource instead of using the JDBC driver directly. In this way, we can allow the container to create a pool of connections and then tell Lift to use this pool. To use a JNDI datasource, we will need to perform the following steps:

Create a file called jetty-env.xml in the WEB-INF folder under src/main/webapp/ with the following content:

<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN" "http://www.eclipse.org/jetty/configure.dtd">
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
  <New id="dsliftbook" class="org.eclipse.jetty.plus.jndi.Resource">
    <Arg>jdbc/dsliftbook</Arg>
    <Arg>
      <New class="org.postgresql.ds.PGSimpleDataSource">
        <Set name="User">place here the user you've created</Set>
        <Set name="Password">place here the user password</Set>
        <Set name="DatabaseName">liftbook</Set>
        <Set name="ServerName">localhost</Set>
        <Set name="PortNumber">5432</Set>
      </New>
    </Arg>
  </New>
</Configure>

Add the following line into the build.sbt file:

env in Compile := Some(file("./src/main/webapp/WEB-INF/jetty-env.xml") asFile)

Remove all the jetty dependencies and add the following:

"org.eclipse.jetty" % "jetty-webapp" % "8.0.4.v20111024" % "container",
"org.eclipse.jetty" % "jetty-plus" % "8.0.4.v20111024" % "container",

Add the following code into the web.xml file:

<resource-ref>
  <res-ref-name>jdbc/dsliftbook</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <res-auth>Container</res-auth>
</resource-ref>

Remove the configureDB method, and replace the invocation of configureDB with the following line of code:

DefaultConnectionIdentifier.jndiName = "jdbc/dsliftbook"

The creation of the jetty-env.xml file and the change we made in the web.xml file create the datasource in Jetty and make it available to Lift. Since the connections will now be managed by Jetty, we don't need to append hooks so that Lift can close the connections, or do any other shutdown configuration. All we need to do is tell Lift to get the connections from the JNDI datasource that we have configured in Jetty.
We do this by setting the jndiName variable of the default connection identifier, DefaultConnectionIdentifier, as follows:

DefaultConnectionIdentifier.jndiName = "jdbc/dsliftbook"

The change we've made to the build.sbt file makes the jetty-env.xml file available to the embedded Jetty, so we can use it when we start the application with the container:start command.

See also...

You can learn more about how to configure a JNDI datasource in Jetty at the following address: http://wiki.eclipse.org/Jetty/Howto/Configure_JNDI_Datasource

Mapping a table to a Scala class

Now that we know how to connect Lift applications to the database, the next step is to learn how to create mappings between a database table and a Scala object using Mapper.

Getting ready

We will re-use the project we created in the previous recipe, since it already has the connection configured.

How to do it...

Carry out the following steps to map a table into a Scala object using Mapper:

Create a new file named Contact.scala inside the model package under src/main/scala/code/ with the following code:

package code.model

import net.liftweb.mapper.{MappedString, LongKeyedMetaMapper, LongKeyedMapper, IdPK}

class Contact extends LongKeyedMapper[Contact] with IdPK {
  def getSingleton = Contact
  object name extends MappedString(this, 100)
}

object Contact extends Contact with LongKeyedMetaMapper[Contact] {
  override def dbTableName = "contacts"
}

Add the following import statement in the Boot.scala file:

import code.model.Contact

Add the following code into the boot method of the Boot class:

Schemifier.schemify(
  true,
  Schemifier.infoF _,
  Contact
)

Create a new snippet named Contacts with the following code:

package code.snippet

import code.model.Contact
import scala.xml.Text
import net.liftweb.util.BindHelpers._

class Contacts {
  def prepareContacts_!() {
    Contact.findAll().map(_.delete_!)
    val contactsNames = "John" :: "Joe" :: "Lisa" :: Nil
    contactsNames.foreach(Contact.create.name(_).save())
  }

  def list = {
    prepareContacts_!()
    "li *" #> Contact.findAll().map {
      c => {
        c.name.get
      }
    }
  }
}

Edit the index.html file by replacing the content of the div element whose id is main with the following code:

<div data-list="Contacts.list">
  <ul>
    <li></li>
  </ul>
</div>

Start the application and access http://localhost:8080; you will see a page with three names, as shown in the following screenshot:

How it works...

To work with Mapper, we use a class and an object. The first is the class that represents the table; in this class we define the attributes the object will have, based on the columns of the table we are mapping. The second is the class's companion object, where we define some metadata and helper methods.

The Contact class extends the LongKeyedMapper[Contact] trait and mixes in the trait IdPK. This means that we are defining a class that has an attribute called id (the primary key), and this attribute has the type Long. We are also saying that the type of this Mapper is Contact. To define the attributes that our class will have, we need to create objects that extend "something". This "something" is the type of the column. So, when we say an object name extends MappedString(this, 100), we are telling Lift that our Contact class will have an attribute called name, which will be a string that can be 100 characters long. After defining the basics, we need to tell our Mapper where it can get the metadata about the database table. This is done by defining the getSingleton method.
The Contact object is the object that Mapper uses to get the database table metadata. By default, Lift will use the name of this object as the table name. Since we don't want our table to be called contact but contacts, we've overridden the dbTableName method. What we have done here is created an object called Contact, which is a representation of a table in the database called contacts that has two columns: id and name. Here, id is the primary key and of the type Long, and name is of the type String, which will be mapped to a varchar datatype. This is all we need to map a database table to a Scala class, and now that we've got the mapping done, we can use it.

To demonstrate how to use the mapping, we have created the snippet Contacts. This snippet has two methods. The list method does two things; it first invokes the prepareContacts_!() method, and then invokes the findAll method from the Contact companion object. The prepareContacts_!() method also does two things: it first deletes all the contacts from the database and then creates three contacts: John, Joe, and Lisa. To delete all the contacts from the database, it first fetches all of them using the findAll method, which executes a select * from contacts query and returns a list of Contact objects, one for each existing row in the table. Then, it iterates over the collection using the foreach method, and for each contact it invokes the delete_! method which, as you can imagine, executes a delete from contacts where id = contactId query. After deleting all the contacts from the database table, it iterates over the contactsNames list, and for each element it invokes the create method of the Contact companion object, which creates an instance of the Contact class. Once we have the instance of the Contact class, we can set the value of the name attribute by calling instance.name(value). You can chain commands while working with Mapper objects because they return themselves; for example, let's say our Contact class has firstName and lastName as attributes. Then, we could do something like this to create and save a new contact:

Contact.create.firstName("John").lastName("Doe").save()

Finally, we invoke the save() method of the instance, which makes Lift execute an insert query, thus saving the data into the database by creating a new record.

Getting back to the list method, we fetch all the contacts again by invoking the findAll method, and then create an li tag for each contact we have fetched from the database. The content of the li tags is the contact name, which we get by calling the get method of the attribute we want the value of. So, when you say contact.name.get, you are telling Lift that you want to get the value of the name attribute from the contact object, which is an instance of the Contact class.

There's more...

Lift comes with a variety of built-in field types that we can use; MappedString is just one of them. The others include MappedInt, MappedLong, and MappedBoolean. All these fields come with some built-in features, such as the toForm method, which returns the HTML needed to generate a form field, and the validate method, which validates the value of the field. By default, Lift will use the name of the object as the name of the table's column; for example, if you define your object as name—as we did—Lift will assume that the column name is name. Lift comes with a built-in list of database-reserved words such as limit, order, user, and so on.
If the attribute you are mapping is a database-reserved word, Lift will append _c to the end of the column's name when using Schemifier. For example, if you create an attribute called user, Lift will create a database column called user_c. You can change this behavior by overriding the dbColumnName method, as shown in the following code:

object name extends MappedString(this, 100) {
  override def dbColumnName = "some_new_name"
}

In this case, we are telling Lift that the name of the column is some_new_name.

We have seen that you can fetch data from the database using the findAll method. However, this method will fetch every single row from the database. To avoid this, you can filter the result using the By object; for example, let's say you want to get only the contacts with the name Joe. To accomplish this, you would add a By object as a parameter of the findAll method as follows: Contact.findAll(By(Contact.name, "Joe")). There are also other filters such as ByList and NotBy. And if, for some reason, the features offered by Mapper to build select queries aren't enough, you can use methods such as findAllByPreparedStatement and findAllByInsecureSQL, where you can use raw SQL to build the queries.

The last thing left to talk about here is how this example would work if we didn't create any table in the database. Well, I hope you remember that we added the following lines of code to the Boot.scala file:

Schemifier.schemify(
  true,
  Schemifier.infoF _,
  Contact
)

As it turns out, the Schemifier object is a helper object that ensures the database has the correct schema based on a list of MetaMappers. This means that for each MetaMapper we pass to the Schemifier object, it will compare the MetaMapper with the database schema and act accordingly so that both the MetaMapper and the database schema match. So, in our example, Schemifier created the table for us. If you change a MetaMapper by adding attributes, Schemifier will create the proper columns in the table.

See also...

To learn more about query parameters, see: https://www.assembla.com/spaces/liftweb/wiki/Mapper#querying_the_database

Creating one-to-many relationships

In the previous recipe, we learned how to map a Scala class to a database table using Mapper. However, we mapped a simple class with only one attribute, and of course we will not face anything that simple while dealing with real-world applications. We will probably need to work with more complex data, such as data having one-to-many or many-to-many relationships. An example of this kind of relationship would be an application for a store where you have customers and orders, and you need to associate each customer with the orders he or she placed. This means that one customer can have many orders. In this recipe, we will learn how to create such relationships using Mapper.

Getting ready

We will modify the code from the last section by adding a one-to-many relationship into the Contact class. You can use the same project from before or duplicate it and create a new project.

How to do it...
Carry out the following steps:

Create a new class named Phone in the model package under src/main/scala/code/ using the following code:

package code.model

import net.liftweb.mapper._

class Phone extends LongKeyedMapper[Phone] with IdPK {
  def getSingleton = Phone
  object number extends MappedString(this, 20)
  object contact extends MappedLongForeignKey(this, Contact)
}

object Phone extends Phone with LongKeyedMetaMapper[Phone] {
  override def dbTableName = "phones"
}

Change the Contact class declaration from:

class Contact extends LongKeyedMapper[Contact] with IdPK

to:

class Contact extends LongKeyedMapper[Contact] with IdPK with OneToMany[Long, Contact]

Add the attribute that will hold the phone numbers to the Contact class as follows:

object phones extends MappedOneToMany(Phone, Phone.contact, OrderBy(Phone.id, Ascending))
  with Owned[Phone]
  with Cascade[Phone]

Insert the new class into the Schemifier list of parameters. It should look like the following code:

Schemifier.schemify(
  true,
  Schemifier.infoF _,
  Contact,
  Phone
)

In the Contacts snippet, replace the import statement from:

import code.model.Contact

to:

import code.model._

Modify the Contacts.prepareContacts_! method to associate a phone number with each contact. Your method should be similar to the one in the following lines of code:

def prepareContacts_!() {
  Contact.findAll().map(_.delete_!)
  val contactsNames = "John" :: "Joe" :: "Lisa" :: Nil
  val phones = "5555-5555" :: "5555-4444" :: "5555-3333" :: "5555-2222" :: "5555-1111" :: Nil
  contactsNames.map(name => {
    val contact = Contact.create.name(name)
    val phone = Phone.create.number(phones((new Random()).nextInt(5))).saveMe()
    contact.phones.append(phone)
    contact.save()
  })
}

Replace the list method's code with the following code:

def list = {
  prepareContacts_!()
  ".contact *" #> Contact.findAll().map {
    contact => {
      ".name *" #> contact.name.get &
      ".phone *" #> contact.phones.map(_.number.get)
    }
  }
}

In the index.html file, replace the ul tag with the one in the following code snippet:

<ul>
  <li class="contact">
    <span class="name"></span>
    <ul>
      <li class="phone"></li>
    </ul>
  </li>
</ul>

Start the application and access http://localhost:8080. You will now see a web page with three names and their phone numbers, as shown in the following screenshot:

How it works...

In order to create a one-to-many relationship, we have mapped a second table, phones, through the Phone class, in the same way we created the Contact class. The difference here is that we have added a new attribute called contact. This attribute extends MappedLongForeignKey, which is the class we use to tell Lift that this attribute is a foreign key referencing another table; in this case, we are telling Lift that contact in the phones table is a foreign key to the contacts table. The first parameter is the owner, which is the class that owns the foreign key, and the second parameter is the MetaMapper that maps the parent table. After mapping the phones table and telling Lift that it has a foreign key, we need to tell Lift what the "one" side of the one-to-many relationship will be. To do this, we need to mix the OneToMany trait into the Contact class. This trait adds features to manage the one-to-many relationship to the Contact class. Then, we need to add one attribute to hold the collection of child records; in other words, we need to add an attribute to hold the contact's phone numbers.
Note that the phones attribute extends MappedOneToMany, and that the first two parameters of the MappedOneToMany constructor are Phone and Phone.contact. This means that we are telling Lift that this attribute will hold records from the phones table and that it should use the contact attribute from the Phone MetaMapper to do the join. The last parameter, which is optional, is the OrderBy object. We've added it to tell Lift that it should order the list of phone numbers by their id, in ascending order.

We also added a few more traits to the phones attribute to show some nice features. One of the traits is Owned; this trait tells Lift to delete all orphan fields before saving the record. This means that if, for some reason, there are any records in the child table that have no owner in the parent table, Lift will delete them when you invoke the save method, helping you keep your database consistent and clean. Another trait we've added is the Cascade trait. As its name implies, it tells Lift to delete all the child records while deleting the parent record; for example, if you invoke contact.delete_! for a given contact instance, Lift will delete all phone records associated with that contact instance. As you can see, Lift offers an easy and handy way to deal with one-to-many relationships.

See also...

You can read more about one-to-many relationships at the following URL: https://www.assembla.com/wiki/show/liftweb/Mapper#onetomany

BizTalk Application: Currency Exchange Rates

Packt
03 Aug 2011
7 min read
Microsoft BizTalk 2010: Line of Business Systems Integration
A practical guide to integrating Line of Business systems with BizTalk Server 2010

We are going to assume that we have two companies in our Dynamics AX implementation: one that is Canadian Dollar (CAD) based, and one that is United States Dollar (USD) based. Thus, we need to use the LedgerExchangeRates.create and LedgerExchangeRates.find actions in both companies. For the remainder of this example, we'll refer to these as daxCADCompany and daxUSDCompany. The complete solution, titled Chapter9-AXExchangeRates, is included in the source code.

Dynamics AX schemas

We'll start by creating a new BizTalk project, Chapter9-AXExchangeRates, in Visual Studio. After the AIF actions setup is complete, the next step is to generate the schemas required by our BizTalk application. This is done by right-clicking on the BizTalk project in Visual Studio 2010, clicking Add, and then clicking Add Generated Items. This brings up the Add Generated Items window; under the Templates section (Visual Studio installed templates), select Add Adapter Metadata and click Add. This brings up the Add Adapter Wizard window (shown in the following screenshot), where we simply select Microsoft Dynamics AX 2009 and click Next. We then fill in the AX server instance name (AX 2009-SHARED in our example) under Server name, and the TCP/IP Port number (2712, which is the default port number, but this can differ). Now, click Next in the BizTalk Adapter for Microsoft Dynamics AX Schema Import Wizard window and specify the connection information in the next step.

In the next window, you should see all the active AIF services. Note that since the AIF services table is a global table, you will see all the active services in your Dynamics AX instance. This does not mean that each endpoint, and thus each company, is configured to accept the actions that each listed AIF service has available. This is the point where you first verify that your connectivity and AIF setup is correct; an error here in the wizard is typically due to an error in the AIF channel configuration. In the wizard window above, you can see the AIF services that are enabled. In our case, the ExchangeRatesService is the only service currently enabled in our Dynamics AX instance. Under this service, you will see three possible modes (sync, async request, and async response) to perform these actions. All three will actually produce the same schemas. Which mode and action (create, find, findkeys, or read) we use is actually determined by the metadata in the message we'll be sending to AX and the logical port configurations in our orchestration. Now, click Finish.

Now in the project solution, we see that the wizard has generated two artifacts. The first, ExchangeRates_ExchangeRates.xsd, is the schema for the message type that we need to send when calling the LedgerExchangeRates.create action; it is also the schema returned in the response message when calling the LedgerExchangeRates.find action. Since we are dealing with the same AX table, Exchange Rates, in both actions, both actions will in part (one in the inbound message, the other in the outbound message) require the same schema. The second artifact, BizTalk Orchestration.odx, is also generated by default by the wizard. In the orchestration view, we can see that four Multi-part Message Types were also added to the orchestration.
Rename the orchestration to something more meaningful, such as ProcessExchangeRates.odx. Now that we have defined the message type that will be returned in our response message, we need to define what the request type will be. Notice from the orchestration view that two messages, ExchangeRatesService_create_Response and ExchangeRatesService_find_Request, have types that Visual Studio flags with the error 'does not exist or is invalid'. For the out-of-the-box find action, we need the message type DynamicsAX5.QueryCriteria. The other message type, returned by AX when calling a create action, is DynamicsAX5.EntityKey (if we called a createList action, the returned message would be of type DynamicsAX5.EntityKeyList). The schemas for these message types are in the Microsoft.Dynamics.BizTalk.Adapter.Schemas assembly, which can be found in the bin directory of the install location of the BizTalk adapter. Add this reference to the project in Visual Studio. Then, re-select the appropriate message type for each invalid Message Part from the Select Artifact Type window, as shown.

Next, depending on your organization, you may want to populate either the noon exchange rates or the closing rates. For our example, we will use the closing USD/CAD exchange rates from the Bank of Canada. This is published at 16:30 EST on the website (http://www.bankofcanada.ca/rss/fx/close/fx-close.xml). Since this source is already in XML, download and save a sample. We then generate a schema from Visual Studio using the BizTalk Schema Generator (right-click the solution, Add Generated Items, Add Generated Schemas, using the Well-Formed XML (Not Loaded) document type). This generates the schema for the message that our BizTalk application needs to receive daily. In the example provided, the schema is ClosingFxRates.xsd (the wizard will generate four other .xsd files that are referenced in ClosingFxRates.xsd).

A simple way to schedule the download of this XML data file is to use the Scheduled Task Adapter (http://biztalkscheduledtask.codeplex.com/), which can be downloaded and installed at no cost (the source code is also available). Download and install the adapter (it requires the Microsoft .NET Framework Version 1.1 Redistributable Package), then add it using the BizTalk Server Administration Console with the name Schedule. We will use this adapter in our receive location to retrieve the XML via HTTP. There are also RSS adapters available for purchase from, for example, /nsoftware (http://www.nsoftware.com/). However, for this example, the scheduled task adapter will suffice.

Now, since the source of our exchange rates is a third-party schema, and your specific requirements for the source will most likely differ, we'll create a canonical schema, ExchangeRates.xsd. As you can see in the schema below, we are only interested in a few pieces of information: Base Currency (USD or CAD in our example), Target Currency (again USD or CAD), Rate, and finally, Date. Creating a canonical schema will also simplify the rest of our solution.

Now that we have all the schemas for our message types defined or referenced, we can add the messages that we require to the orchestration. We'll begin by adding the message msgClosingFxRates, which will be our raw input data from the Bank of Canada, with the message type from the generated schema ClosingFxRates.RDF. For each exchange rate, we'll need to first query Dynamics AX to see if it exists, thus we'll need a request message and a response message.
Add a message msgAXQueryExchangeRatesRequest, which will be a multi-part message type ExchangeRatesService_find_Request, and msgAXQueryExchangeRatesResponse that will be a multi-part message type ExchangeRatesService_find_Response. Next, we'll create the messages for the XML that we'll send and receive from Dynamics AX to create an exchange rate. Add a message msgAXCreateExchnageRatesRequest, which will be a multi-part message type of ExchangeRatesService_create_Request, and msgAXCreateExchnageRatesResponse that will be a multi-part message type ExchangeRatesService_create_Response. Finally, we'll need to create two messages, msgExchangeRatesUSDCAD and msgExchangeRatesCADUSD, which will have the message type of the canonical schema ExchangeRates. These messages will contain the exchange rates for USD to CAD and for CAD to USD respectively. We'll create these two messages just to simplify our orchestration for this example. In practice, if you're going to deal with several exchange rates, you will need to add logic to the orchestration to loop through the list rates that you're interested in and have only one message of type ExchangeRates resent several times.
More on ADF Business Components and Fusion Page Runtime

Packt
09 Jan 2013
11 min read
(For more resources related to this topic, see here.) Lifecycle of an ADF Fusion web page with region When a client requests for a page with region, at the server, ADF runtime intercepts and pre-processes the request before passing it to the page lifecycle handler. The pre-processing tasks include security check, initialization of Trinidad runtime, and setting up the binding context and ADF context. This is shown in the following diagram: After setting up the context for processing the request, the ADF framework starts the page lifecycle for the page. During the Before Restore View phase of the page, the framework will try to synchronize the controller state with the request, using the state token sent by the client. If this is a new request, a new root view port is created for the top-level page. In simple words, a view port maps to a page or page fragment in the current view. During view port initialization, the framework will build a data control frame for holding the data controls. During this phrase runtime also builds the binding containers used in the current page. The data control frame will then be added to the binding context object for future use. After setting up the basic infrastructure required for processing the request, the page lifecycle moves to the Restore View phase. During the Restore View phase, the framework generates a component tree for the page. Note that the UI component tree, at this stage, contains only metadata for instantiating the UI components. The component instantiation happens only during the Render Response phase, which happens later in the page lifecycle. If this is a fresh request, the lifecycle moves to the Render Response phase. Note that, in this article, we are not discussing how the framework handles the post back requests from the client. During the Render Response phase, the framework instantiates the UI components for each node in the component tree by traversing the tree hierarchy. The completed component tree is appended to UIViewRoot, which represents the root of the UI component tree. Once the UI components are created, runtime walks through the component tree and performs the pre-rendering tasks. The pre-rendering event is used by components with lazy initialization abilities, such as region, to keep themselves ready for rendering if they are added to the component tree during the page cycle. While processing a region, the framework creates a new child view port and controller state for the region, and starts processing the associated task flow. The following is the algorithm used by the framework while initializing the task flow: If the task flow is configured not to share data control, the framework creates a new data control frame for the task flow and adds to the parent data control frame (the data control frame for the parent view port). If the task flow is configured to start a new transaction, the framework calls beginTransaction() on the control frame. If the task flow is configured to use existing transaction, the framework asks the data control frame to create a save point and to associate it to the page flow stack. If the task flow is configured to 'use existing transaction if possible', framework will start a new transaction on the data control, if there is no transaction opened on it. If a transaction is already opened on the data control, the framework will use the existing one. Once the pre-render processing is over, each component will be asked to write out its value into the response object. 
During this action, the framework will evaluate the EL expressions specified for the component properties, whenever they are referred in the page lifecycle. If the EL expressions contain binding expression referring properties of the business components, evaluation of the EL will end up in instantiating corresponding model components. The framework performs the following tasks during the evaluation of the model-bound EL expressions: It instantiates the data control if it is missing from the current data control frame. It performs a check out of the application module. It attaches the transaction object to the application module. Note that it is the transaction object that manages all database transactions for an application module. Runtime uses the following algorithm for attaching transactions to the application module: If the application module is nested under a root application module or if it is used in a task flow that has been configured to use an existing transaction, the framework will identify the existing DBTransaction object that has been created for the root application module or for the calling task flow, and attach it to the current application module. Under the cover, the framework uses the jbo.shared.txn parameter (named transaction) to share the transaction between the application modules. In other words, if an application module needs to share a transaction with another module, the framework assigns the same jbo.shared.txn value for both application modules at runtime. While attaching the transaction to the application module, runtime will look up the transaction object by using the jbo.shared.txn value set for the application module and if any transaction object is found for this key, it re-uses the same. If the application module is a regular one, and not part of the task flow that shares a transaction with caller, the framework will generate a new DBTransaction object and attach it to the application module. After initializing the data control, the framework adds it to the data control frame. The data control frame holds all the data control used in the current view port. Remember that a, view port maps to a page or a page fragment. Execute an appropriate view object instance, which is bound to the iterator. At the end of the render response phase, the framework will output the DOM content to the client. Before finishing the request, the ADF binding filter will call endRequest() on each data control instance participating in the request. Data controls use this callback to clean up the resources and check in the application modules back to the pool. Transaction management in Fusion web applications A transaction for a business application may be thought of as a unit of work resulting in changes to the application state. Oracle ADF simplifies the transaction handling by abstracting the micro-level management of transactions from the developers. This section discusses the internal aspects of the transaction management in Fusion web applications. In this discussion, we will consider only ADF Business Component-based applications. What happens when the task flow commits a transaction Oracle ADF allows you to define transactional boundaries, using task flows. Each task flow can be configured to define the transactional unit of work. Let us see what happens when a task flow return activity tries to commit the currently opened transaction. 
The following is the algorithm used by the framework when the task flow commits the transaction: When you activate a task flow return activity, a check is carried out to see whether the task flow is configured to commit the current transaction. If it is, the runtime identifies the data control frame associated with the current view port and calls the commit operation on it. The data control frame delegates the "commit" call to the transaction handler instance for further processing. The transaction handler iterates over all data controls added to the data control frame and invokes commitTransaction on each root data control. It is the transaction handler that engages all data controls added to the data control frame in the transaction commit cycle. The data control delegates the commit call to the transaction object that is attached to the application module. Note that if you have a child task flow participating in the transaction started by the caller, or application modules nested under a root application module, they all share the same transaction object. The commit call on a transaction object will commit the changes made by all application modules attached to it. The following diagram illustrates how the transaction objects that are attached to the application modules are engaged when a client calls commit on the data control frame:

Programmatically managing a transaction in a task flow

If the declarative solution provided by the task flow for managing the transaction is not flexible enough to meet your use case, you can handle the transaction programmatically by calling the beginTransaction(), commit(), and rollback() methods exposed by oracle.adf.model.DataControlFrame. The data control frame acts as a bucket for holding the data controls used in a binding container (page definition file). A data control frame may also hold child data control frames, if the page or page fragment has regions sharing the transaction with the parent. When you call beginTransaction(), commit(), or rollback() on a data control frame, all the data controls added to the data control frame will participate in the appropriate transaction cycle. In plain words, the data control frame provides a mechanism to manage the transaction seamlessly, freeing you from the pain of managing transactions separately for each data control present in the page definition. Note that you can use the DataControlFrame APIs for managing a transaction only in the context of a bounded task flow with an appropriate transaction setting (in the context of a controller transaction). The following example illustrates the APIs for programmatically managing a transaction, using the data control frame:

//In managed bean class
public void commit() {
    //Get the binding context
    BindingContext bindingContext = BindingContext.getCurrent();
    //Get the name of the current (root) DataControlFrame
    String currentFrame = bindingContext.getCurrentDataControlFrame();
    //Find the DataControlFrame instance
    DataControlFrame dcFrame = bindingContext.findDataControlFrame(currentFrame);
    try {
        //Commit the transaction
        dcFrame.commit();
        //Open a new transaction allowing the user to continue editing data
        dcFrame.beginTransaction(null);
    } catch (Exception e) {
        //Report the error through the binding container
        ((DCBindingContainer) bindingContext.getCurrentBindingsEntry()).reportException(e);
    }
}

Programmatically managing a transaction in the business components

The preceding solution of calling the commit method on the data control frame is ideal for use in the client tier in the context of bounded task flows. What if you need to programmatically commit the transaction from the business service layer, which does not have any binding context? To commit or roll back transactions in business service layer logic where there is no binding context, you can call commit() or rollback() on the oracle.jbo.Transaction object associated with the root application module. The following example shows a method defined in an application module, which invokes commit on the Transaction object attached to the root application module:

//In application module implementation class
/**
 * This method calls commit on the transaction object
 */
public void commit() {
    this.getRootApplicationModule().getTransaction().commit();
}

Sharing a transaction between application modules at runtime

An application module nested under a root application module shares the same transaction context with the root. This solution fits well if you know, during the development phase of the application, that the application module needs to be nested. What if an application module needs to invoke business methods from various application modules whose names are known only at runtime, and all the method calls are required to happen in the same transaction? You can use the following API in such scenarios to create the required application module at runtime:

DBTransaction::createApplicationModule(defName);

The following method, defined in a root application module, creates a nested application module on the fly. Both the calling and the called application modules share the same transaction context.

//In application module implementation class
/**
 * Caller passes the AM definition name of the application
 * module that requires to participate in the existing
 * transaction. This method creates a new AM if no instance is
 * found for the supplied amName and invokes the required service
 * on it.
 * @param amName
 * @param defName
 */
public void nestAMIfRequiredAndInvokeMethod(String amName, String defName) {
    //TxnAppModule is a generic interface implemented by all
    //transactional AMs used in this example
    TxnAppModule txnAM = null;
    boolean generatedLocally = false;
    try {
        //Check whether the TxnAppModuleImpl is already nested
        txnAM = (TxnAppModule) getDBTransaction().getRootApplicationModule().findApplicationModule(amName);
        //Create a new nested instance of the TxnAppModuleImpl, if not nested already
        if (txnAM == null) {
            txnAM = (TxnAppModule) this.getDBTransaction().createApplicationModule(defName);
            generatedLocally = true;
        }
        //Invoke business methods
        if (txnAM != null) {
            txnAM.updateEmployee();
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        //Remove the locally created AM once its use is over
        if (generatedLocally && txnAM != null) {
            txnAM.remove();
        }
    }
}
Need for Java Business Integration and Service Engines in NetBeans

Packt
23 Oct 2009
6 min read
In this article, we will discuss the following topics: Need for Java Business Integration (JBI) Enterprise Service Bus Normalized Message Router Service Engines in NetBeans Need for Java Business Integration (JBI) To have a good understanding of Service Engines (a specific type of JBI component), we need to first understand the reason for Java Business Integration. In the business world, not all systems talk the same language. They use different protocols and different forms of communications. Legacy systems in particular can use proprietary protocols for external communication. The advent and acceptance of XML has been greatly beneficial in allowing systems to be easily integrated, but XML itself is not the complete solution. When some systems were first developed, they were not envisioned to be able to communicate with many other systems; they were developed with closed interfaces using closed protocols. This, of course, is fine for the system developer, but makes system integration very difficult. This closed and proprietary nature of enterprise systems makes integration between enterprise applications very difficult. To allow enterprise systems to effectively communicate between each other, system integrators would use vendor-supplied APIs and data formats or agree on common exchange mechanisms between their systems. This is fine for small short term integration, but quickly becomes unproductive as the number of enterprise applications to integrate gets larger. The following figure shows the problems with traditional integration. As we can see in the figure, each third party system that we want to integrate with uses a different protocol. As a system integrator, we potentially have to learn new technologies and new APIs for each system we wish to integrate with. If there are only two or three systems to integrate with, this is not really too much of a problem. However, the more systems we wish to integrate with, the more proprietary code we have to learn and integration with other systems quickly becomes a large problem. To try and overcome these problems, the Enterprise Application Integration (EAI) server was introduced. This concept has an integration server acting as a central hub. The EAI server traditionally has proprietary links to third party systems, so the application integrator only has to learn one API (the EAI server vendors). With this architecture however, there are still several drawbacks. The central hub can quickly become a bottleneck, and because of the hub-and-spoke architecture, any problems at the hub are rapidly manifested at all the clients. Enterprise Service Bus To help solve this problem, leading companies in the integration community (led by Sun Microsystems) proposed the Java Business Integration Specification Request (JSR 208) (Full details of the JSR can be found at http://jcp.org/en/jsr/detail?id=208). JSR 208 proposed a standard framework for business integration by providing a standard set of service provider interfaces (SPIs) to help alleviate the problems experienced with Enterprise Application Integration. The standard framework described in JSR 208 allows pluggable components to be added into a standard architecture and provides a standard common mechanism for each of these components to communicate with each other based upon WSDL. The pluggable nature of the framework described by JSR 208 is depicted in the following figure. 
It shows us the concept of an Enterprise Service Bus and introduces us to the Service Engine (SE) component: JSR 208 describes a service engine as a component, which provides business logic and transformation services to other components, as well as consuming such services. SEs can integrate Java-based applications (and other resources), or applications with available Java APIs. Service Engine is a component which provides (and consumes) business logic and transformation services to other components. There are various Service Engines available, such as the BPEL service engine for orchestrating business processes, or the Java EE service engine for consuming Java EE Web Services. The Normalized Message Router As we can see from the previous figure, SE's don't communicate directly with each other or with the clients, instead they communicate via the NMR. This is one of the key concepts of JBI, in that it promotes loose coupling of services. So, what is NMR and what is its purpose? NMR is responsible for taking messages from clients and routing them to the appropriate Service Engines for processing. (This is not strictly true as there is another standard JBI component called the Binding Component responsible for receiving client messages. Again, this further enhances the support for loose coupling within JBI, as Service Engines are decoupled from their transport infrastructure). NMR is responsible for passing normalized (that is based upon WSDL) messages between JBI components. Messages typically consist of a payload and a message header which contains any other message data required for the Service Engine to understand and process the message (for example, security information). Again, we can see that this provides a loosely coupled model in which Service Engines have no prior knowledge of other Service Engines. This therefore allows the JBI architecture to be flexible, and allows different component vendors to develop standard based components. Normalized Message Router enables technology for allowing messages to be passed between loosely coupled services such as Service Engines. The figure below gives an overview of the message routing between a client application and two service engines, in this case the EE and SQL service engines. In this figure, a request is made from the client to the JBI Container. This request is passed via NMR to the EE Service Engine. The EE Service Engine then makes a request to the SQL Service Engine via NMR. The SQL Service Engine returns a message to the EE Service Engine again via NMR. Finally, the message is routed back to the client through NMR and JBI framework. The important concept here is that NMR is a message routing hub not only between clients and service engines, but also for intra-communication between different service engines. The entire architecture we have discussed is typically referred to as an Enterprise Service Bus. Enterprise Service Bus (ESB) is a standard-based middleware architecture that allows pluggable components to communicate with each other via a messaging subsystem.
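To make the routing discussion more concrete, the following is a minimal, hypothetical sketch of how a JBI component might push a request through the NMR using the standard javax.jbi API. It is not taken from any of the NetBeans sample projects, and the service name and XML payload are invented purely for illustration.

import javax.jbi.component.ComponentContext;
import javax.jbi.messaging.DeliveryChannel;
import javax.jbi.messaging.InOut;
import javax.jbi.messaging.MessageExchangeFactory;
import javax.jbi.messaging.NormalizedMessage;
import javax.xml.namespace.QName;
import javax.xml.transform.stream.StreamSource;
import java.io.StringReader;

public class ClockConsumer {

    // Sends a request/response (InOut) exchange through the NMR and
    // returns the normalized response produced by the provider component.
    public NormalizedMessage askForTime(ComponentContext context) throws Exception {
        // The delivery channel is the component's bidirectional handle to the NMR
        DeliveryChannel channel = context.getDeliveryChannel();
        MessageExchangeFactory factory = channel.createExchangeFactory();

        // Address the exchange to a (hypothetical) service by its WSDL QName
        InOut exchange = factory.createInOutExchange();
        exchange.setService(new QName("http://example.com/clock", "ClockService"));

        // Normalize the request: the payload handed to the NMR is always XML
        NormalizedMessage request = exchange.createMessage();
        request.setContent(new StreamSource(new StringReader("<getTime/>")));
        exchange.setInMessage(request);

        // Send through the NMR and block until the provider answers
        channel.sendSync(exchange);
        return exchange.getOutMessage();
    }
}

Note how the consumer never references the provider component directly; it only names a service, and the NMR takes care of locating the component that provides it.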
Microsoft Azure Blob Storage

Packt
07 Jan 2011
5 min read
  Microsoft Azure: Enterprise Application Development Straight talking advice on how to design and build enterprise applications for the cloud Build scalable enterprise applications using Microsoft Azure The perfect fast-paced case study for developers and architects wanting to enhance core business processes Packed with examples to illustrate concepts Written in the context of building an online portal for the case-study application Blobs in the Azure ecosystem Blobs are one of the three simple storage options for Windows Azure, and are designed to store large files in binary format. There are two types of blobs—block blobs and page blobs. Block blobs are designed for streaming, and each blob can have a size of up to 200 GB. Page blobs are designed for read/write access and each blob can store up to 1 TB each. If we're going to store images or video for use in our application, we'd store them in blobs. On our local systems, we would probably store these files in different folders. In our Azure account, we place blobs into containers, and just as a local hard drive can contain any number of folders, each Azure account can have any number of containers. Similar to folders on a hard drive, access to blobs is set at the container level, where permissions can be either "public read" or "private". In addition to permission settings, each container can have 8 KB of metadata used to describe or categorize it (metadata are stored as name/value pairs). Each blob can be up to 1 TB depending on the type of blob, and can also have up to 8 KB of metadata. For data protection and scalability, each blob is replicated at least three times, and "hot blobs" are served from multiple servers. Even though the cloud can accept blobs of up to 1 TB in size, Development Storage can accept blobs only up to 2 GB. This typically is not an issue for development, but still something to remember when developing locally. Page blobs form the basis for Windows Azure Drive—a service that allows Azure storage to be mounted as a local NTFS drive on the Azure instance, allowing existing applications to run in the cloud and take advantage of Azure-based storage while requiring fewer changes to adapt to the Azure environment. Azure drives are individual virtual hard drives (VHDs) that can range in size from 16 MB to 1 TB. Each Windows Azure instance can mount up to 16 Azure drives, and these drives can be mounted or dismounted dynamically. Also, Windows Azure Drive can be mounted as readable/writable from a single instance of an Azure service, or it can be mounted as a read-only drive for multiple instances. At the time of writing, there was no driver that allowed direct access to the page blobs forming Azure drives, but the page blobs can be downloaded, used locally, and uploaded again using the standard blob API. Creating Blob Storage Blob Storage can be used independent of other Azure services, and even if we've set up a Windows Azure or SQL Azure account, Blob Storage is not automatically created for us. To create a Blob Storage service, we need to follow these steps: Log in to the Windows Azure Developer portal and select our project. After we select our project, we should see the project page, as shown in the next screenshots: Clicking the New Service link on the application page takes us to the service creation page, as shown next: Selecting Storage Account allows us to choose a name and description for our storage service. This information is used to identify our services in menus and listings. 
Next, we choose a unique name for our storage account. This name must be unique across all of Azure—it can include only lowercase letters and numbers, and must be at least three characters long. If our account name is available, we then choose how to localize our data. Localization is handled by "affinity groups", which tie our storage service to the data centers in different geographic regions. For some applications, it may not matter where we locate our data. For other applications, we may want multiple affinity groups to provide timely content delivery. And for a few applications, regulatory requirements may mean we have to bind our data to a particular region. Clicking the Create button creates our storage service, and when complete, a summary page is shown. The top half of the summary page reiterates the description of our service and provides the endpoints and 256-bit access keys. These access keys are very important—they are the authentication keys we need to pass in our request if we want to access private storage or add/update a blob. The bottom portion of the confirmation page reiterates the affinity group the storage service belongs to. We can also enable a content delivery network and custom domain for our Blob Storage account. Once we create a service, it's shown on the portal menu and in the project summary once we select a project. That's it! We now have our storage services created. We're now ready to look at blobs in a little more depth.
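Although this case study is built on .NET, the Blob service itself is reached over REST, so the account name and access key from the summary page can be used from any client library. As a rough, hypothetical illustration (the account name, key, and container name below are placeholders, and the azure-storage Java client library is assumed), connecting and uploading a first blob might look like this:

import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.blob.CloudBlobClient;
import com.microsoft.azure.storage.blob.CloudBlobContainer;
import com.microsoft.azure.storage.blob.CloudBlockBlob;
import java.io.ByteArrayInputStream;

public class BlobQuickCheck {
    public static void main(String[] args) throws Exception {
        // The connection string combines the storage account name with one of
        // the two access keys shown on the service summary page (placeholders here)
        String connectionString = "DefaultEndpointsProtocol=https;"
                + "AccountName=myaccount;AccountKey=<access-key>";

        CloudStorageAccount account = CloudStorageAccount.parse(connectionString);
        CloudBlobClient client = account.createCloudBlobClient();

        // Containers play the role of folders; names must be lowercase
        CloudBlobContainer container = client.getContainerReference("images");
        container.createIfNotExists();

        // Upload a small block blob from an in-memory stream
        byte[] data = "hello blob storage".getBytes("UTF-8");
        CloudBlockBlob blob = container.getBlockBlobReference("hello.txt");
        blob.upload(new ByteArrayInputStream(data), data.length);
    }
}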
BizTalk Application: Dynamics AX Message Outflow

Packt
03 Aug 2011
4 min read
Microsoft BizTalk 2010: Line of Business Systems Integration
A practical guide to integrating Line of Business systems with BizTalk Server 2010

We found that rather than repeating code in several BizTalk solutions when you need to retrieve data from the AIF Queue, it's relatively simple to create a general solution to accomplish this. This solution will retrieve all data via the BizTalk Dynamics AX adapter by polling the queue at a set interval of time. The minimum polling interval is 1 minute, so any messages you put in the AIF Queue will not be immediately consumed by BizTalk. The complete solution (Chapter9-AXMessageOutflow) is included with the source code. We'll start by creating a new BizTalk project, Chapter9-AXMessageOutflow, in Visual Studio. Add a new orchestration, ProcessOutboundAXMessage.odx, which will be the only orchestration required for this example. Also, we'll need to add a reference to the Microsoft.Dynamics.BizTalk.Adapter.Schemas assembly and sign the project with a strong name key.

Message setup

Next, we'll add two messages to our orchestration: msgAXOutboundMessage and msgAXDocument. These will be the only two messages required in this example. The first message, msgAXOutboundMessage, is of type DynamicsAX5.Message.Envelope. The schema is located in the referenced Microsoft.Dynamics.BizTalk.Adapter.Schemas assembly. All outbound messages from the AIF Queue are of this type. As you can see from the sample screenshot below, we have some metadata in the header node, but what we are really interested in is the XML content of the Body node. The contents of the MessageParts node in the Body node will be of type ExchangeRatesService_ExchangeRates.xsd. Thus, all the schemas we require for both inbound and outbound transactions can be generated using the adapter. For the second message, since we don't want to specify a document type, we will use System.Xml.XmlDocument for the Message Type. Using the System.Xml.XmlDocument message type allows for great flexibility in this solution. We can push any message to the AIF queue, and no changes to this BizTalk application are required. Only the consuming applications may need to add the AX schema of the message in order to process it.

Orchestration setup

Next, we create a new logical port that will receive all messages from Dynamics AX via the AIF Queue, with the following settings:

Port Name: ReceiveAXOutboundMessage_Port
Port Type Name: ReceiveAXOutboundMessage_PortType
Communication Pattern: One-Way
Port direction of communication: I'll always be receiving messages on this port
Port Binding: Specify Later

Also, create a new send port. For this example, we'll just send to a folder drop using the FILE adapter so that we can easily view the XML documents. In practice, other BizTalk applications will most likely process these messages, so you may choose to modify the send port to meet your requirements.

Send port settings:

Port Name: SendAXDocument_Port
Port Type Name: SendAXDocument_PortType
Communication Pattern: One-Way
Port direction of communication: I'll always be sending messages on this port
Port Binding: Specify Later

Next, we will need to add the following to the orchestration: a Receive shape (receive the msgAXOutboundMessage message), an Expression shape (determine the file name for msgAXDocument), a Message Assignment shape (construct the msgAXDocument message), and a Send shape (send the msgAXDocument message). We'll also add two variables (aifActionName and xpathExpression) of type System.String, and xmlDoc of type System.Xml.XmlDocument.

In the Expression shape, we want to extract the AIF action so that we can name the outbound XML documents in a similar fashion. This will allow us to easily identify the message type from AX. Next, we'll put the following inside the Expression shape below the receive to extract the AIF action name:

aifActionName = msgAXOutboundMessage(DynamicsAx5.Action);
aifActionName = aifActionName.Substring(55, aifActionName.LastIndexOf('/') - 55);

Now, we need to extract the contents of the body message, which is the XML document that we are interested in. Inside the Message Assignment shape, we will use XPath to extract the message. What we are interested in is the contents of the Body node in the DynamicsAX5.Message.Envelope message we will receive from AX via the AIF Queue. Add the following code inside the assignment shape to extract the XML, assign it to the message we are sending out, and set a common name that we can use in our send port:

// Extract contents of the Body node in the Envelope message
xpathExpression = "/*[local-name()='Envelope' and namespace-uri()='http://schemas.microsoft.com/dynamics/2008/01/documents/Message']/*[local-name()='Body' and namespace-uri()='http://schemas.microsoft.com/dynamics/2008/01/documents/Message']";
xmlDoc = xpath(msgAXOutboundMessage, xpathExpression);
// Extract the XML we are interested in
xmlDoc.LoadXml(xmlDoc.FirstChild.FirstChild.InnerXml);
// Set the message to the XML document
msgAXDocument = xmlDoc;
// Assign the FILE.ReceivedFileName property
msgAXDocument(FILE.ReceivedFileName) = aifActionName;
Using Media Files – playing audio files

Packt
24 Oct 2013
13 min read
(For more resources related to this topic, see here.) Playing audio files JUCE provides a sophisticated set of classes for dealing with audio. This includes: sound file reading and writing utilities, interfacing with the native audio hardware, audio data conversion functions, and a cross-platform framework for creating audio plugins for a range of well-known host applications. Covering all of these aspects is beyond the scope of this article, but the examples in this section will outline the principles of playing sound files and communicating with the audio hardware. In addition to showing the audio features of JUCE, in this section we will also create the GUI and autogenerate some other aspects of the code using the Introjucer application. Creating a GUI to control audio file playback Create a new GUI application Introjucer project of your choice, selecting the option to create a basic window. In the Introjucer application, select the Config panel, and select Modules in the hierarchy. For this project we need the juce_audio_utils module (which contains a special Component class for configuring the audio device hardware); therefore, turn ON this module. Even though we created a basic window and a basic component, we are going to create the GUI using the Introjucer application. Navigate to the Files panel and right-click (on the Mac, press control and click) on the Source folder in the hierarchy, and select Add New GUI Component… from the contextual menu. When asked, name the header MediaPlayer.h and click on Save. In the Files hierarchy, select the MediaPlayer.cpp file. First select the Class panel and change the Class name from NewComponent to MediaPlayer. We will need four buttons for this basic project: a button to open an audio file, a Play button, a Stop button, and an audio device settings button. Select the Subcomponents panel, and add four TextButton components to the editor by right-clicking to access the contextual menu. Space the buttons equally near the top of the editor, and configure each button as outlined in the table as follows: Purpose member name name text background (normal) Open file openButton open Open... Default Play/pause file playButton play Play Green Stop playback stopButton stop Stop Red Configure audio settingsButton settings Audio Settings... Default Arrange the buttons as shown in the following screenshot: For each button, access the mode pop-up menu for the width setting, and choose Subtracted from width of parent. This will keep the right-hand side of the buttons the same distance from the right-hand side of the window if the window is resized. There are more customizations to be done in the Introjucer project, but for now, make sure that you have saved the MediaPlayer.h file, the MediaPlayer.cpp file, and the Introjucer project before you open your native IDE project. Make sure that you have saved all of these files in the Introjucer application; otherwise the files may not get correctly updated in the file system when the project is opened in the IDE. In the IDE we need to replace the MainContentComponent class code to place a MediaPlayer object within it. 
Change the MainComponent.h file as follows: #ifndef __MAINCOMPONENT_H__#define __MAINCOMPONENT_H__#include "../JuceLibraryCode/JuceHeader.h"#include "MediaPlayer.h"class MainContentComponent : public Component{public:MainContentComponent();void resized();private:MediaPlayer player;};#endif Then, change the MainComponent.cpp file to: #include "MainComponent.h"MainContentComponent::MainContentComponent(){addAndMakeVisible (&player);setSize (player.getWidth(),player.getHeight());}void MainContentComponent::resized(){player.setBounds (0, 0, getWidth(), getHeight());} Finally, make the window resizable in the Main.cpp file, and build and run the project to check that the window appears as expected. Adding audio file playback support Quit the application and return to the Introjucer project. Select the MediaPlayer.cpp file in the Files panel hierarchy and select its Class panel. The Parent classes setting already contains public Component. We are going to be listening for state changes from two of our member objects that are ChangeBroadcaster objects. To do this, we need our MediaPlayer class to inherit from the ChangeListener class. Change the Parent classes setting such that it reads: public Component, public ChangeListener Save the MediaPlayer.h file, the MediaPlayer.cpp file, and the Introjucer project again, and open it into your IDE. Notice in the MediaPlayer.h file that the parent classes have been updated to reflect this change. For convenience, we are going to add some enumerated constants to reflect the current playback state of our MediaPlayer object, and a function to centralize the change of this state (which will, in turn, update the state of various objects, such as the text displayed on the buttons). The ChangeListener class also has one pure virtual function, which we need to add. Add the following code to the [UserMethods] section of MediaPlayer.h: //[UserMethods]-- You can add your own custom methods...enum TransportState {Stopped,Starting,Playing,Pausing,Paused,Stopping};void changeState (TransportState newState);void changeListenerCallback (ChangeBroadcaster* source);//[/UserMethods] We also need some additional member variables to support our audio playback. Add these to the [UserVariables] section: //[UserVariables] -- You can add your own custom variables...AudioDeviceManager deviceManager;AudioFormatManager formatManager;ScopedPointer<AudioFormatReaderSource> readerSource;AudioTransportSource transportSource;AudioSourcePlayer sourcePlayer;TransportState state;//[/UserVariables] The AudioDeviceManager object will manage our interface between the application and the audio hardware. The AudioFormatManager object will assist in creating an object that will read and decode the audio data from an audio file. This object will be stored in the ScopedPointer<AudioFormatReaderSource> object. The AudioTransportSource object will control the playback of the audio file and perform any sampling rate conversion that may be required (if the sampling rate of the audio file differs from the audio hardware sampling rate). The AudioSourcePlayer object will stream audio from the AudioTransportSource object to the AudioDeviceManager object. The state variable will store one of our enumerated constants to reflect the current playback state of our MediaPlayer object. Now add some code to the MediaPlayer.cpp file. 
In the [Constructor] section of the constructor, add following two lines: playButton->setEnabled (false);stopButton->setEnabled (false); This sets the Play and Stop buttons to be disabled (and grayed out) initially. Later, we enable the Play button once a valid file is loaded, and change the state of each button and the text displayed on the buttons, depending on whether the file is currently playing or not. In this [Constructor] section you should also initialize the AudioFormatManager as follows: formatManager.registerBasicFormats(); This allows the AudioFormatManager object to detect different audio file formats and create appropriate file reader objects. We also need to connect the AudioSourcePlayer, AudioTransportSource and AudioDeviceManager objects together, and initialize the AudioDeviceManager object. To do this, add the following lines to the [Constructor] section: sourcePlayer.setSource (&transportSource);deviceManager.addAudioCallback (&sourcePlayer);deviceManager.initialise (0, 2, nullptr, true); The first line connects the AudioTransportSource object to the AudioSourcePlayer object. The second line connects the AudioSourcePlayer object to the AudioDeviceManager object. The final line initializes the AudioDeviceManager object with: The number of required audio input channels (0 in this case). The number of required audio output channels (2 in this case, for stereo output). An optional "saved state" for the AudioDeviceManager object (nullptr initializes from scratch). Whether to open the default device if the saved state fails to open. As we are not using a saved state, this argument is irrelevant, but it is useful to set this to true in any case. The final three lines to add to the [Constructor] section to configure our MediaPlayer object as a listener to the AudioDeviceManager and AudioTransportSource objects, and sets the current state to Stopped: deviceManager.addChangeListener (this);transportSource.addChangeListener (this);state = Stopped; In the buttonClicked() function we need to add some code to the various sections. In the [UserButtonCode_openButton] section, add: //[UserButtonCode_openButton] -- add your button handler...FileChooser chooser ("Select a Wave file to play...",File::nonexistent,"*.wav");if (chooser.browseForFileToOpen()) {File file (chooser.getResult());readerSource = new AudioFormatReaderSource(formatManager.createReaderFor (file), true);transportSource.setSource (readerSource);playButton->setEnabled (true);}//[/UserButtonCode_openButton] When the openButton button is clicked, this will create a FileChooser object that allows the user to select a file using the native interface for the platform. The types of files that are allowed to be selected are limited using the wildcard *.wav to allow only files with the .wav file extension to be selected. If the user actually selects a file (rather than cancels the operation), the code can call the FileChooser::getResult() function to retrieve a reference to the file that was selected. This file is then passed to the AudioFormatManager object to create a file reader object, which in turn is passed to create an AudioFormatReaderSource object that will manage and own this file reader object. Finally, the AudioFormatReaderSource object is connected to the AudioTransportSource object and the Play button is enabled. The handlers for the playButton and stopButton objects will make a call to our changeState() function depending on the current transport state. 
We will define the changeState() function in a moment where its purpose should become clear. In the [UserButtonCode_playButton] section, add the following code: //[UserButtonCode_playButton] -- add your button handler...if ((Stopped == state) || (Paused == state))changeState (Starting);else if (Playing == state)changeState (Pausing);//[/UserButtonCode_playButton] This changes the state to Starting if the current state is either Stopped or Paused, and changes the state to Pausing if the current state is Playing. This is in order to have a button with combined play and pause functionality. In the [UserButtonCode_stopButton] section, add the following code: //[UserButtonCode_stopButton] -- add your button handler...if (Paused == state)changeState (Stopped);elsechangeState (Stopping);//[/UserButtonCode_stopButton] This sets the state to Stopped if the current state is Paused, and sets it to Stopping in other cases. Again, we will add the changeState() function in a moment, where these state changes update various objects. In the [UserButtonCode_settingsButton] section add the following code: //[UserButtonCode_settingsButton] -- add your button handler...bool showMidiInputOptions = false;bool showMidiOutputSelector = false;bool showChannelsAsStereoPairs = true;bool hideAdvancedOptions = false;AudioDeviceSelectorComponent settings (deviceManager,0, 0, 1, 2,showMidiInputOptions,showMidiOutputSelector,showChannelsAsStereoPairs,hideAdvancedOptions);settings.setSize (500, 400);DialogWindow::showModalDialog(String ("Audio Settings"),&settings,TopLevelWindow::getTopLevelWindow (0),Colours::white,true); //[/UserButtonCode_settingsButton] This presents a useful interface to configure the audio device settings. We need to add the changeListenerCallback() function to respond to changes in the AudioDeviceManager and AudioTransportSource objects. Add the following to the [MiscUserCode] section of the MediaPlayer.cpp file: //[MiscUserCode] You can add your own definitions...void MediaPlayer::changeListenerCallback (ChangeBroadcaster* src){if (&deviceManager == src) {AudioDeviceManager::AudioDeviceSetup setup;deviceManager.getAudioDeviceSetup (setup);if (setup.outputChannels.isZero())sourcePlayer.setSource (nullptr);elsesourcePlayer.setSource (&transportSource);} else if (&transportSource == src) {if (transportSource.isPlaying()) {changeState (Playing);} else {if ((Stopping == state) || (Playing == state))changeState (Stopped);else if (Pausing == state)changeState (Paused);}}}//[/MiscUserCode] If our MediaPlayer object receives a message that the AudioDeviceManager object changed in some way, we need to check that this change wasn't to disable all of the audio output channels, by obtaining the setup information from the device manager. If the number of output channels is zero, we disconnect our AudioSourcePlayer object from the AudioTransportSource object (otherwise our application may crash) by setting the source to nullptr. If the number of output channels becomes nonzero again, we reconnect these objects. If our AudioTransportSource object has changed, this is likely to be a change in its playback state. It is important to note the difference between requesting the transport to start or stop, and this change actually taking place. This is why we created the enumerated constants for all the other states (including transitional states). Again we issue calls to the changeState() function depending on the current value of our state variable and the state of the AudioTransportSource object. 
Finally, add the important changeState() function to the [MiscUserCode] section of the MediaPlayer.cpp file that handles all of these state changes: void MediaPlayer::changeState (TransportState newState){if (state != newState) {state = newState;switch (state) {case Stopped:playButton->setButtonText ("Play");stopButton->setButtonText ("Stop");stopButton->setEnabled (false);transportSource.setPosition (0.0);break;case Starting:transportSource.start();break;case Playing:playButton->setButtonText ("Pause");stopButton->setButtonText ("Stop");stopButton->setEnabled (true);break;case Pausing:transportSource.stop();break;case Paused:playButton->setButtonText ("Resume");stopButton->setButtonText ("Return to Zero");break;case Stopping:transportSource.stop();break;}}} After checking that the newState value is different from the current value of the state variable, we update the state variable with the new value. Then, we perform the appropriate actions for this particular point in the cycle of state changes. These are summarized as follows: In the Stopped state, the buttons are configured with the Play and Stop labels, the Stop button is disabled, and the transport is positioned to the start of the audio file. In the Starting state, the AudioTransportSource object is told to start. Once the AudioTransportSource object has actually started playing, the system will be in the Playing state. Here we update the playButton button to display the text Pause, ensure the stopButton button displays the text Stop, and we enable the Stop button. If the Pause button is clicked, the state becomes Pausing, and the transport is told to stop. Once the transport has actually stopped, the state changes to Paused, the playButton button is updated to display the text Resume and the stopButton button is updated to display Return to Zero. If the Stop button is clicked, the state is changed to Stopping, and the transport is told to stop. Once the transport has actually stopped, the state changes to Stopped (as described in the first point). If the Return to Zero button is clicked, the state is changed directly to Stopped (again, as previously described). When the audio file reaches the end of the file, the state is also changed to Stopped. Build and run the application. You should be able to select a .wav audio file after clicking the Open... button, play, pause, resume, and stop the audio file using the respective buttons, and configure the audio device using the Audio Settings… button. The audio settings window allows you to select the input and output device, the sample rate, and the hardware buffer size. It also provides a Test button that plays a tone through the selected output device. Summary This article has covered a few of the techniques for dealing with audio files in JUCE. The article has given only an introduction to get you started; there are many other options and alternative approaches, which may suit different circumstances. The JUCE documentation will take you through each of these and point you to related classes and functions. Resources for Article: Further resources on this subject: Quick start – media files and XBMC [Article] Audio Playback [Article] Automating the Audio Parameters – How it Works [Article]
Getting Started with Enterprise Library

Packt
10 Nov 2010
6 min read
Introducing Enterprise Library Enterprise Library (EntLib) is a collection of reusable software components or application blocks designed to assist software developers with common enterprise development challenges. Each application block addresses a specific cross-cutting concern and provides highly configurable features, which results in higher developer productivity. EntLib is implemented and provided by Microsoft patterns & practices group, a dedicated team of professionals who work on solving these cross-cutting concerns with active participation from the developer community. This is an open source project and thus freely available under the Microsoft Public License (Ms-PL) at the CodePlex open source community site (http://entlib.codeplex.com), basically granting us a royalty-free copyright license to reproduce its contribution, build derivative works, and distribute them. More information can be found at the Enterprise Library community site http://www.codeplex.com/entlib. Enterprise Library consists of nine application blocks; two are concerned with wiring up stuff together and the remaining seven are functional application blocks. The following is the complete list of application blocks; these are briefly discussed in the next sections. Wiring Blocks Unity Dependency Injection Policy Injection Application Block Functional Blocks Data Access Application Block Logging Application Block Exception Handling Application Block Caching Application Block Validation Application Block Security Application Block Cryptography Application Block Wiring Application Blocks Wiring blocks provide mechanisms to build highly flexible, loosely coupled, and maintainable systems. These blocks are mainly about wiring or plugging together different functionalities. The following two blocks fall under this category: Unity Dependency Injection Policy Injection Application Block Unity Application Block The Unity Application Block is a lightweight, flexible, and extensible dependency injection container that supports interception and various injection mechanisms such as constructor, property, and method call injection. The Unity Block is a standalone open source project, which can be leveraged in our application. This block allows us to develop loosely coupled, maintainable, and testable applications. Enterprise Library leverages this block for wiring the configured objects. More information on the Unity block is available at http://unity.codeplex.com. Policy Injection Application Block The Policy Injection Application Block is included in this release of Enterprise Library for backwards compatibility and policy injection is implemented using the Unity interception mechanism. This block provides a mechanism to change object behavior by inserting code between the client and the target object without modifying the code of the target object. Functional Application Blocks Enterprise Library consists of the following functional application blocks, which can be used individually or can be grouped together to address a specific cross-cutting concern. 
Data Access Application Block Logging Application Block Exception Handling Application Block Caching Application Block Validation Application Block Security Application Block Cryptography Application Block Data Access Application Block Developing an application that stores/ retrieve data in/from some kind of a relational database is quite common; this involves performing CRUD (Create, Read, Update, Delete) operations on the database by executing T-SQL or stored procedure commands. But we often end up writing the plumbing code over and over again to perform these operations: plumbing code such as creating a connection object, opening and closing a connection, parameter caching, and so on. The following are the key benefits of the Data Access block: The Data Access Application Block (DAAB) abstracts developers from the underlying database technology by providing a common interface to perform database operations. DAAB also takes care of the ordinary tasks like creating a connection object, opening and closing a connection, parameter caching, and so on. It helps in bringing consistency to the application and allows changing of database type by modifying the configuration. Logging Application Block Logging is an essential activity, which is required to understand what's happening behind the scene while the application is running. This is especially helpful in identifying issues and tracing the source of the problem without debugging. The Logging Application Block provides a very simple, flexible, standard, and consistent way to log messages. Administrators have the power to change the log destination (file, database, e-mail, and so on), modify message format, decide on which category is turned on/off, and so on. Exception Handling Application Block Handling exceptions appropriately and allowing the user to either continue or exit gracefully is essential for any application to avoid user frustration. The Exception Handling Application Block adapts the policy-driven approach to allow developers/administrators to define how to handle exceptions. The following are the key benefits of the Exception Handling Block: It provides the ability to log exception messages using the Logging Application Block. It provides a mechanism to replace the original exception with another exception, which prevents disclosure of sensitive information. It provides mechanism to wrap the original exception inside another exception to maintain the contextual information. Caching Application Block Caching in general is a good practice for data that has a long life span; caching is recommended if the possibility of data being changed at the source is low and the change doesn't have significant impact on the application. The Caching Application Block allows us to cache data locally in our application; it also gives us the flexibility to cache the data in-memory, in a database or in an isolated storage. Validation Application Block The Validation Application Block (VAB) provides various mechanisms to validate user inputs. As a rule of thumb always assume user input is not valid unless proven to be valid. The Validation block allows us to perform validation in three different ways; we can use configuration, attributes, or code to provide validation rules. Additionally it also includes adapters specifically targeting ASP.NET, Windows Forms, and Windows Communication Foundation (WCF). 
Security Application Block The Security Application Block simplifies rule-based authorization and provides caching of the user's authorization and authentication data. Authorization can be performed against Microsoft Active Directory Service, Authorization Manager (AzMan), Active Directory Application Mode (ADAM), or a custom authorization provider. Decoupling the authorization code from the authorization provider allows administrators to change the provider in the configuration without changing the code. Cryptography Application Block The Cryptography Application Block provides a common API to perform basic cryptography operations without being tied to any specific cryptography provider; the provider is configurable. Using this application block, we can perform encryption/decryption and hashing, and validate whether a hash matches some text.
An Introduction to Flash Builder 4-Network Monitor

Packt
13 May 2010
3 min read
Adobe Flash Builder 4 (formerly known as Adobe Flex Builder), which no doubt needs no words of introduction, has become a de facto standard in rich internet application development. The latest version is considered a groundbreaking release, not only for its dozens of new and enhanced features but also for its new component architectures such as Spark, the designer-developer workflow, integration with Flash and Catalyst, data-centric development, unit testing, and debugging enhancements. In this article, we'll get acquainted with a brand new premium feature of Adobe Flash Builder 4 called Network Monitor. Network Monitor enables developers to inspect and monitor client-server traffic in the form of textual, XML, AMF, or JSON data within Adobe Flash Builder 4. It shows real-time data traffic between the application and a local or remote server, along with a wealth of other related information about the transferred data, such as status, size, and body. If you have used Firebug (a Firefox plugin), then you will appreciate Network Monitor too. It is extremely handy during HTTP errors for inspecting the response, which is not accessible from the fault event object.

Creating a Sample Application

Enough talking; let's start and create a very simple application that will serve as groundwork for exploring Network Monitor. Assuming you are already equipped with basic knowledge of application creation, we will move on quickly without going through minor details. This sample application will read the Packt Publishing official RSS feed and display every news title along with its publishing date in a DataGrid control. Network Monitor will spring into action when the data request is triggered.

Go to the File menu and select New > Flex Project. Insert information in the New Flex Project dialog box according to the following screenshot and hit Enter. In Flex 4, all the non-visual MXML components, such as RPC components, effects, validators, and formatters, are declared inside the <fx:Declarations> tag. Declare an HTTPService component inside the <fx:Declarations> tag. Set its id property to newsService, its url property to https://www.packtpub.com/rss.xml, its showBusyCursor property to true, and its resultFormat property to e4x. Generate result and fault event handlers, though only the result event handler will be used. Your HTTPService code should look like the following:

<s:HTTPService id="newsService"
    url="https://www.packtpub.com/rss.xml"
    showBusyCursor="true"
    resultFormat="e4x"
    result="newsService_resultHandler(event)"
    fault="newsService_faultHandler(event)"/>

Now set the application layout to VerticalLayout:

<s:layout>
    <s:VerticalLayout verticalAlign="middle" horizontalAlign="center"/>
</s:layout>

Add a Label control and set its text property to Packt Publishing. Add a DataGrid control, set its id property to dataGrid, and add two DataGridColumn columns to it. Set the first column's dataField property to title and headerText to Title; set the second column's dataField property to pubDate and headerText to Date. Your controls should look like the following:

<s:Label text="Packt Publishing" fontWeight="bold" fontSize="22"/>
<mx:DataGrid id="dataGrid" width="600">
    <mx:columns>
        <mx:DataGridColumn dataField="title" headerText="Title"/>
        <mx:DataGridColumn dataField="pubDate" width="200" headerText="Date"/>
    </mx:columns>
</mx:DataGrid>

Finally, add the following code to newsService's result handler:

var xml:XML = XML(event.result);
dataGrid.dataProvider = xml..item;

Geronimo Plugins

Packt
12 Nov 2009
11 min read
Developing a plugin

In this section, we will develop our very own plugin, the World Clock plugin. This is a very simple plugin that provides the time in different locales. We will go through all of the steps required to develop it from scratch. These steps are as follows:

- Creating the plugin project
- Generating the plugin project, using maven2
- Writing the plugin interface and implementation
- Creating a deployment plan
- Installing the plugin

Creating a plugin project

There are many ways in which you can develop plugins. You can manually create all of the plugin artifacts and package them. We will use the easiest method, that is, using Maven's geronimo-plugin-archetype. This will generate the plugin project with all of the artifacts, with default values filled in. To generate the plugin project, run the following command:

mvn archetype:create -DarchetypeGroupId=org.apache.geronimo.buildsupport -DarchetypeArtifactId=geronimo-plugin-archetype -DarchetypeVersion=2.1.4 -DgroupId=com.packt.plugins -DartifactId=WorldClock

This will create a plugin project called WorldClock. A directory called WorldClock will be created, with the following artifacts in it:

- pom.xml
- pom.sample.xml
- src/main/plan/plan.xml
- src/main/resources

In the same directory in which the WorldClock directory was created, you will need to create a Java project that will contain the source code of the plugin. We can create this by using the following command:

mvn archetype:create -DgroupId=com.packt.plugins -DartifactId=WorldClockModule

This will create a Java project with the given groupId and artifactId in a directory called WorldClockModule. This directory will contain the following artifacts:

- pom.xml
- src/main/java/com/packt/plugins/App.java
- src/test/java/com/packt/plugins/AppTest.java

You can safely remove the second and third artifacts, as they are just sample stubs generated by the archetype. In this project, we will need to modify the pom.xml to add a dependency on the Geronimo kernel, so that we can compile the GBean that we are going to create and include in this module. The modified pom.xml is shown below:

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.packt.plugins</groupId>
  <artifactId>WorldClockModule</artifactId>
  <packaging>jar</packaging>
  <version>1.0-SNAPSHOT</version>
  <name>WorldClockModule</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.geronimo.framework</groupId>
      <artifactId>geronimo-kernel</artifactId>
      <version>2.1.4</version>
    </dependency>
  </dependencies>
</project>

For simplicity, we have only one GBean in our sample. In a real-world scenario, there may be many GBeans that you will need to create. Now we need to create the GBean that forms the core functionality of our plugin. Therefore, we will create two classes, namely, Clock and ClockGBean.
These classes are shown below:

package com.packt.plugins;

public interface Clock {
    public void setTimeZone(String timeZone);
    public String getTime();
}

and

package com.packt.plugins;

import java.util.Calendar;
import java.util.GregorianCalendar;
import java.util.TimeZone;

import org.apache.geronimo.gbean.GBeanInfo;
import org.apache.geronimo.gbean.GBeanInfoBuilder;
import org.apache.geronimo.gbean.GBeanLifecycle;

public class ClockGBean implements GBeanLifecycle, Clock {

    public static final GBeanInfo GBEAN_INFO;

    private String name;
    private String timeZone;

    public String getTime() {
        GregorianCalendar cal = new GregorianCalendar(TimeZone.getTimeZone(timeZone));
        int hour12 = cal.get(Calendar.HOUR);    // 0..11
        int minutes = cal.get(Calendar.MINUTE); // 0..59
        int seconds = cal.get(Calendar.SECOND); // 0..59
        boolean am = cal.get(Calendar.AM_PM) == Calendar.AM;
        return (timeZone + ":" + hour12 + ":" + minutes + ":" + seconds + ":" + ((am) ? "AM" : "PM"));
    }

    public void setTimeZone(String timeZone) {
        this.timeZone = timeZone;
    }

    public ClockGBean(String name) {
        this.name = name;
        timeZone = TimeZone.getDefault().getID();
    }

    public void doFail() {
        System.out.println("Failed.............");
    }

    public void doStart() throws Exception {
        System.out.println("Started............" + name + " " + getTime());
    }

    public void doStop() throws Exception {
        System.out.println("Stopped............" + name);
    }

    static {
        GBeanInfoBuilder infoFactory = GBeanInfoBuilder.createStatic("ClockGBean", ClockGBean.class);
        infoFactory.addAttribute("name", String.class, true);
        infoFactory.addInterface(Clock.class);
        infoFactory.setConstructor(new String[] {"name"});
        GBEAN_INFO = infoFactory.getBeanInfo();
    }

    public static GBeanInfo getGBeanInfo() {
        return GBEAN_INFO;
    }
}

As you can see, Clock is an interface and ClockGBean is a GBean that implements this interface. The Clock interface exposes the functionality that is provided by the ClockGBean. The doStart(), doStop(), and doFail() methods come from the GBeanLifecycle interface, and provide lifecycle callback functionality. The next step is to run Maven to build this module. Go to the command prompt, and change the directory to the WorldClockModule directory. To build the module, run the following command:

mvn clean install

Once the build completes, you will find WorldClockModule-1.0-SNAPSHOT.jar in the WorldClockModule/target directory. Now change the directory to WorldClock, and open the generated pom.xml file. You will need to uncomment the deploymentConfigs for the gbeanDeployer, and add the following module that you want to include in the plugin:

<module>
  <groupId>com.packt.plugins</groupId>
  <artifactId>WorldClockModule</artifactId>
  <version>1.0</version>
  <type>jar</type>
</module>

You will notice that we are using the car-maven-plugin in the pom.xml file. The car-maven-plugin is used to build Apache Geronimo configuration archives without starting the server. The final step is to create the deployment plan used to deploy the module that we just created into the Apache Geronimo server. This deployment plan will be used by the car-maven-plugin to create the artifacts that would normally be created during deployment to Apache Geronimo.
The deployment plan is shown below:

<module>
  <environment>
    <moduleId>
      <groupId>com.packt.plugins</groupId>
      <artifactId>WorldClock</artifactId>
      <version>1.0</version>
      <type>car</type>
    </moduleId>
    <dependencies/>
    <hidden-classes/>
    <non-overridable-classes/>
    <private-classes/>
  </environment>
  <gbean name="ClockGBean" class="com.packt.plugins.ClockGBean">
    <attribute name="name">ClockGBean</attribute>
  </gbean>
</module>

Once the plan is ready, go to the command prompt and change the directory to the WorldClock directory. Run the following command to build the plugin:

mvn clean install

You will notice that the car-maven-plugin is invoked and a WorldClock-1.0-SNAPSHOT.car file is created in the WorldClock/target directory. We have now completed the steps required to create an Apache Geronimo plugin. In the next section, we will see how to install the plugin in Apache Geronimo.

Installing a plugin

We can install a plugin in three different ways: by using the deploy.bat or deploy.sh script, by using the install-plugin command in GShell, or by using the Administration Console to install a plugin from a plugin repository. We will discuss each of these methods:

Using the deploy.bat or deploy.sh script: The deploy.bat or deploy.sh script is found in the <GERONIMO_HOME>/bin directory. It has an install-plugin option, which can be used to install plugins onto the server. The command syntax is shown below:

deploy install-plugin <path to the plugin car file>

Running this command, and passing the path to the plugin .car archive on disk, will result in the plugin being installed onto the Geronimo server. Once the installation has finished, an Installation Complete message will be displayed, and the command will exit.

Using GShell: Invoke the gsh command from the command prompt, after changing the current directory to <GERONIMO_HOME>/bin. This will bring up the GShell prompt. In the GShell prompt, type the following command to install the plugin:

deploy/install-plugin <path to the plugin car file>

Please note that you should escape special characters in the path with a leading "\" (backslash). Another way to install plugins that are available in a remote plugin repository is by using the list-plugins command. The syntax of this command is given below:

deploy/list-plugins <URI of the remote repository>

If a remote repository is not specified, then the one configured in Geronimo will be used instead. Once this command has been invoked, the list of plugins available in the remote repository is shown, along with their serial numbers, and you will be prompted to enter a comma-separated list of the serial numbers of the plugins that you want to install.

Using the Administration Console: The Administration Console has a Plugins portlet that can be used to list the plugins available in a repository specified by the user. You can use the Administration Console to select and install the plugins that you want from this list. This portlet also has the capability to export applications or services in your server instance as Geronimo plugins, so that they can be installed on other server instances. See the Plugin portlet section for details of the usage of this portlet.

Available plugins

The web site http://geronimoplugins.com/ hosts Apache Geronimo plugins. It has many plugins listed for Apache Geronimo. There are plugins for Quartz, Apache Directory Server, and many other popular software packages.
However, they are not always available for the latest versions of Apache Geronimo. A couple of fairly up-to-date plugins that are available for Apache Geronimo are the Windows Service Wrapper plugin and the Apache Tuscany plugin for Apache Geronimo. The Windows Service Wrapper provides the ability for Apache Geronimo to be registered as a Windows service. The Tuscany plugin is an implementation of the SCA Java EE Integration specification, integrating Apache Tuscany as an Apache Geronimo plugin. Both of these plugins are available from the Apache Geronimo web site.

Pluggable Administration Console

Older versions of Apache Geronimo came with a monolithic Administration Console, even though the server was extensible through plugins. This introduced a problem: how do you administer the new plugins that are added to the server? To resolve this problem, the Apache Geronimo developers rewrote the Administration Console to be extensible through console plugins called Administration Console Extensions. In this section, we will look at how to create an Administration Console portlet for the World Clock plugin that we developed in the previous section.

Architecture

The pluggable Administration Console functionality is based on the support provided by the Apache Pluto portlet container for dynamically adding and removing portlets and pages without requiring a restart. Apache Geronimo exposes this functionality through two GBeans, namely, the Administration Console Extension (ACE) GBean (org.apache.geronimo.pluto.AdminConsoleExtensionGBean) and the Portal Container Services GBean (org.apache.geronimo.pluto.PortalContainerServicesGBean). The PortalContainerServicesGBean exposes the features of the Pluto container for adding and removing portlets and pages at runtime. The ACE GBean invokes these APIs to add and remove the portlets or pages. The ACE GBean should be specified in the Geronimo-specific deployment plan of your web application or plugin, that is, geronimo-web.xml. The architecture is shown in the following figure.

Developing an Administration Console extension

We will now go through the steps to develop an Administration Console Extension for the World Clock plugin that we created in the previous section. We will use the Maven WAR archetype to create a web application project. To create the project, run the following command from the command-line console:

mvn archetype:create -DgroupId=com.packt.plugins -DartifactId=ClockWebApp -DarchetypeArtifactId=maven-archetype-webapp

This will result in a Maven web project named ClockWebApp being created. A default pom.xml will be created. This will need to be edited to add dependencies on the two modules, as shown in the following code snippet:

<dependency>
  <groupId>org.apache.geronimo.framework</groupId>
  <artifactId>geronimo-kernel</artifactId>
  <version>2.1.4</version>
</dependency>
<dependency>
  <groupId>com.packt.plugins</groupId>
  <artifactId>WorldClockModule</artifactId>
  <version>1.0</version>
</dependency>

We add these dependencies because the portlet that we are going to write will use the classes mentioned in the above two modules.

Testing your App

Packt
12 Apr 2013
15 min read
(For more resources related to this topic, see here.)

Types of testing

Testing can happen on many different levels. From the code level to integration and even testing individual functions of the user-facing implementation of an enterprise application, there are numerous tools and techniques to test your application. In particular, we will cover the following:

- Unit testing
- Functional testing
- Browser testing

Black box versus white box testing

Testing is often talked about within the context of black box versus white box testing. This is a useful metaphor for understanding testing at different levels. With black box testing, you look at your application as a black box, knowing nothing of its internals, typically from the perspective of a user of the system. You simply execute functionality of the application and test whether the expected outcomes match the actual outcomes. White box testing differs from black box testing in that you know the internals of the application up front and can thus pinpoint failures directly and test for specific conditions. In this case, you simply feed data into specific parts of the system and test whether the expected output matches the actual output.

Unit testing

The first level of testing is at the code level. When you are testing specific and individual units of code to determine whether they meet their stated goals, you are unit testing. Unit testing is often talked about in conjunction with test-driven development, the practice of writing unit tests first and then writing the minimal amount of code necessary to pass those tests. Having a suite of unit tests for your code and employing test-driven processes, when done right, can keep your code focused and help to ensure the stability of your enterprise application.

Typically, unit tests are set up in a separate folder in your codebase. Each test case is composed of the following parts:

- Setup to build the test conditions under which the code or module is being tested
- An instantiation and invocation of the code or module being tested
- A verification of the results returned

Setting up your unit test

You usually start by setting up your test data. For example, if you are testing a piece of code that requires an authenticated account, you might consider creating a set of test users of your enterprise application. It is advisable that your test data be coupled with your test, so that your tests are not dependent on your system being in a specific state.

Invoking your target

Once you have set up your test data and the conditions in which the code you are testing needs to run, you are ready to invoke it. This can be as simple as invoking a method. Mocking is a very important concept to understand when unit testing. Consider a set of unit tests for a business logic module that has a dependency on some external application programming interface (API). Now imagine the API goes down. The tests would fail. While it is nice to get an indication that the API you depend on is having issues, a unit test failing for this reason is misleading, because the goal of the unit test is to test the business logic rather than the external resources on which it depends. This is where mock objects come into the picture. Mock objects are stubs that replicate the interface of a resource. They are set up to always return the same data the external resource would under normal conditions. This way you isolate your test to just the unit of code you are testing. Mocking employs a pattern called dependency injection or inversion of control.
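Before getting into dependency injection, here is roughly what such a stub might look like in practice. This is an illustrative sketch rather than code from the article; the canned status and body values are made up for the example:

var mockApi = {
    // Replicates the interface of the real API client, but never touches the network;
    // it always returns the same data the external resource would under normal conditions.
    call: function() {
        return { status: 200, body: "canned response" };
    }
};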
Sure, the code you are testing may be dependent on an external resource. But how do you swap in a mock resource? Code that is easy to unit test allows you to pass in, or "inject", these dependencies when invoking it. Dependency injection is a design pattern in which code that depends on an external resource has that dependency passed into it, thereby decoupling the code from the dependency. The following code snippet is difficult to test, since the dependency is encapsulated in the function being tested. We are at an impasse.

var doSomething = function() {
    var api = getApi();
    //A bunch of code
    api.call();
}

var testOfDoSomething = function() {
    var mockApi = getMockApi();
    //What do I do now???
}

The following new code snippet uses dependency injection to circumvent the problem by instantiating the dependency and passing it into the function being tested:

var doSomething = function(api) {
    //A bunch of code
    api.call();
}

var testOfDoSomething = function() {
    var mockApi = getMockApi();
    doSomething(mockApi);
}

In general, this is good practice not just for unit testing, but for keeping your code clean and easy to manage. Instantiating a dependency once and injecting it where it is needed makes it easier to change that dependency if the need arises. There are many mocking frameworks available, including JsMockito (http://jsmockito.org/) for JavaScript and Mockery (https://github.com/padraic/mockery) for PHP.

Verifying the results

Once you have invoked the code being tested, you need to capture the results and verify them. Verification comes in the form of assertions. Every unit testing framework comes with its own set of assertion methods, but the concept is the same: take a result and test it against an expectation. You can assert whether two things are equal. You can assert whether two things are not equal. You can assert whether a result is a valid number or a string. You can assert whether one value is greater than another. The general idea is that you are testing actual data against your hypothesis. Assertions usually bubble up to the framework's reporting module and are manifested as a list of passed or failed tests.

Frameworks and tools

A bevy of tools has arisen in the past few years to aid in the unit testing of JavaScript. What follows is a brief survey of notable frameworks and tools used to unit test JavaScript code.

JsTestDriver

JsTestDriver is a framework built at Google for unit testing. It has a server that runs on multiple browsers on a machine and allows you to execute test cases in the Eclipse IDE. The following screenshot shows the results of JsTestDriver: when run, it executes all tests configured to run and displays the results. More information about JsTestDriver can be found at http://code.google.com/p/js-test-driver/.

QUnit

QUnit is a JavaScript unit testing framework created by John Resig of jQuery fame. To use it, you need only create a test harness web page and include the QUnit library as a script reference. There is even a hosted version of the library. Once it is included, you need only invoke the test method, passing in a function and a set of assertions. QUnit will then generate a nice report. Although QUnit has no dependencies and can test standard JavaScript code, it is oriented around jQuery. More information about QUnit can be found at http://qunitjs.com/.

Sinon.JS

Often coupled with QUnit, Sinon.JS introduces the concept of spying, wherein it records function calls, the arguments passed in, the return value, and even the value of the this object.
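As a rough illustration (this snippet is not from the original article), a Sinon.JS spy could stand in for the api dependency injected into the doSomething function shown earlier, assuming the Sinon.JS library is loaded on the test page:

// Minimal sketch: the spy replaces the real api.call() and records every invocation.
var mockApi = { call: sinon.spy() };

doSomething(mockApi);                  // code under test invokes api.call()

console.log(mockApi.call.calledOnce);  // true if call() was invoked exactly once
console.log(mockApi.call.firstCall ? mockApi.call.firstCall.args : []); // arguments of the first call

Recorded values such as these can then be passed to your assertion methods (for example, QUnit's ok() or equal()) so that the spy's findings show up in the test report.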
You can also create fake objects, such as fake servers and fake timers, to make sure your code is tested in isolation and your tests run as quickly as possible. This is particularly useful when you need to make fake AJAX requests. More information about Sinon.JS can be found at http://sinonjs.org/.

Jasmine

Jasmine is a testing framework based on the concept of behavior-driven development. Much akin to test-driven development, it extends it by infusing domain-driven design principles and seeks to frame unit tests back to user-oriented behavior and business value. Jasmine, as well as other behavior-driven design frameworks, builds test cases, called specs, using as much English as possible, so that when a report is generated, it reads more naturally than a conventional unit test report. More information about Jasmine can be found at http://pivotal.github.com/jasmine/.

Functional testing

Selenium has become the name in website functional testing. Its browser automation capabilities allow you to record test cases in your favorite web browser and run them across multiple browsers. With this in place, you can automate your browser tests, integrate them with your build and continuous integration server, and run them simultaneously to get quicker results when you need them.

Selenium includes the Selenium IDE, a utility for recording and running Selenium scripts. Built as a Firefox add-on, it allows you to create Selenium test cases by loading and clicking on web pages in Firefox. You can easily record what you do in the browser and replay it. You can then add tests to determine whether actual behavior matches expected behavior. It is very useful for quickly creating simple test cases for a web application. Information on installing it can be found at http://seleniumhq.org/docs/02_selenium_ide.html.

The following screenshot shows the Selenium IDE. Click on the red circle graphic on the right-hand side to set it to record, then browse to http://google.com in the browser window and search for "html5". Click on the red circle graphic to stop recording. You can then add assertions to test whether certain properties of the page match expectations. In this case, we assert that the text of the first link in the search results is the Wikipedia page for HTML5. When we run our test, we see that it passes (of course, if the search results for "html5" on Google change, then this particular test will fail).

Selenium includes WebDriver, an API that allows you to drive a browser natively, either locally or remotely. Coupled with its automation capabilities, WebDriver can run tests against browsers on multiple remote machines to achieve greater scale.

For our MovieNow application, we will set up functional testing by using the following components:

- The Selenium standalone server
- The php-webdriver connector from Facebook
- PHPUnit

The Selenium standalone server

The Selenium standalone server routes requests to the HTML5 application. It needs to be started for the tests to run. It can be deployed anywhere, but by default it is accessed at http://localhost:4444/wd/hub. You can download the latest version of the standalone server at http://code.google.com/p/selenium/downloads/list, or you can fire up the version included in the sample code under the test/lib folder. To start the server, execute the following line via the command line (you will need to have Java installed on your machine):

java -jar lib/selenium-server-standalone-#.jar

Here, # indicates the version number.
You should see something akin to the following: at this point, the server is listening for connections. You will see log messages here as you run your tests. Keep this window open.

The php-webdriver connector from Facebook

The php-webdriver connector serves as a library for WebDriver in PHP. It gives you the ability to make and inspect web requests using drivers for all the major web browsers as well as HtmlUnit, thus allowing you to create test cases against any web browser. You can download it at https://github.com/facebook/php-webdriver. We have included the files in the webdriver folder.

PHPUnit

PHPUnit is a unit testing framework that provides the constructs necessary for running our tests. It has the plumbing necessary for building and validating test cases. Any unit testing framework will work with Selenium; we have chosen PHPUnit since it is lightweight and works well with PHP. You can download and install PHPUnit in any number of ways (go to http://www.phpunit.de/manual/current/en/installation.html for more information on installing it). We have included the phpunit.phar file in the test/lib folder for your convenience. You can run it by executing the following via the command line:

php lib/phpunit.phar <your test suite>.php

To begin, we will add some PHP files to the test folder. The first file is webtest.php. Create this file and add the following code:

<?php
require_once "webdriver/__init__.php";

class WebTest extends PHPUnit_Framework_TestCase {
    protected $_session;
    protected $_web_driver;

    public function __construct() {
        parent::__construct();
        $this->_web_driver = new WebDriver();
        $this->_session = $this->_web_driver->session('firefox');
    }

    public function __destruct() {
        $this->_session->close();
        unset($this->_session);
    }
}
?>

The WebTest class integrates WebDriver into PHPUnit via the php-webdriver connector. This will serve as the base class for all of our test cases. As you can see, it starts with the following:

require_once "webdriver/__init__.php";

This is a reference to __init__.php in the php-webdriver files. It brings in all the classes needed for WebDriver. In the constructor, WebTest initializes the driver and session objects used in all test cases. In the destructor, it cleans up its connections.

Now that we have everything set up, we can create our first functional test. Add a file called generictest.php to the test folder. We will import WebTest and extend that class as follows:

<?php
require_once "webtest.php";

class GenericTest extends WebTest {
}
?>

Inside the GenericTest class, add the following test case:

public function testForData() {
    $this->_session->open('http://localhost/html5-book/Chapter%2010/');
    sleep(5); // Wait for AJAX data to load
    $result = $this->_session->element("id", "movies-near-me")->text();
    // May need to change settings to always allow sharing of location
    $this->assertGreaterThan(0, strlen($result));
}

We will open a connection to our application (feel free to change the URL to wherever you are running your HTML5 application), wait 5 seconds for the initial AJAX to load, and then test whether the movies-near-me div is populated with data. To run this test, go to the command line and execute the following lines:

chmod +x lib/phpunit.phar
php lib/phpunit.phar generictest.php

You should see the following output, which indicates that the test passed. Congratulations! Now let us see it fail.
Add the following test case:

public function testForTitle() {
    $this->_session->open('http://localhost/html5-book/Chapter%2010/');
    $result = $this->_session->title();
    $this->assertEquals('Some Title', $result);
}

Rerun PHPUnit and you should see something akin to the following: as you can see, it was expecting 'Some Title' but actually found 'MovieNow'. Now that we have gotten you started, we will let you create your own tests. Refer to http://www.phpunit.de/manual/3.7/en/index.html for guidance on the different assertions you can make using PHPUnit. More information about Selenium can be found at http://seleniumhq.org/.

Browser testing

Testing HTML5 enterprise applications must involve actually looking at the application in different web browsers. Thankfully, many web browsers are offered on multiple platforms. Google Chrome, Mozilla Firefox, and Opera all have versions that install easily on Windows, Mac OS X, and flavors of Linux such as Ubuntu. Safari has versions for Windows and Mac OS X, and there are ways to install it on Linux with some tweaking. Internet Explorer, however, can only run on Windows.

One way to work around this limitation is to install virtualization software. Virtualization allows you to run an entire operating system virtually within a host operating system. It allows you to run Windows applications on Mac OS X, or Linux applications on Windows. There are a number of notable virtualization packages, including VirtualBox, VMware Fusion, Parallels, and Virtual PC. Although Virtual PC runs only on Windows, Microsoft does offer a set of prepackaged virtual hard drives that include specific versions of Internet Explorer for testing purposes. See the following URL for details: http://www.microsoft.com/en-us/download/details.aspx?id=11575.

Another common way to test for compatibility is to use web-based browser virtualization. There are a number of services, such as BrowserStack (http://www.browserstack.com/), CrossBrowserTesting (http://crossbrowsertesting.com/), and Sauce Labs (https://saucelabs.com/), that let you enter a URL and see it rendered in an assortment of web browsers and platforms (including mobile) virtually through the web. Many of them even work through a proxy to allow you to view, test, and debug web applications running on your local machine.

Continuous integration

With any testing solution, it is important to create and deploy your builds and run your tests in an automated fashion. Continuous integration solutions such as Hudson, Jenkins, CruiseControl, and TeamCity allow you to accomplish this. They merge code from multiple developers, and run a number of automated functions, from deploying modules to running tests. They can be invoked to run on a scheduled basis, or can be triggered by events such as a commit of code to a repository via a post-commit hook.

Summary

We covered several types of testing in this article, including unit testing, functional testing, and browser testing. For each type of testing, there are many tools to help you make sure that your enterprise application runs in a stable way, most of which we covered, bar a few. Because every minute change to your application code has the potential to destabilize it, we must assume that every change does. To ensure that your enterprise applications remain stable and have minimal defects, it is essential to have a testing strategy in place, with a rich suite of tests, from unit to functional, combined with a continuous integration server running those tests.
One must, of course, weigh the investment in time for writing and executing tests against the time needed for writing production code, but the savings in long-term maintenance costs can make that investment worthwhile.

Resources for Article:

Further resources on this subject:

- Building HTML5 Pages from Scratch [Article]
- Blocking versus Non blocking scripts [Article]
- HTML5: Generic Containers [Article]