
How-To Tutorials - Programming


10 Minute Guide to the Enterprise Service Bus and the NetBeans SOA Pack

Packt
23 Oct 2009
4 min read
Introduction

When integrating different systems, it can be very easy to use your vendors' APIs and program directly against them. Using that approach, developers can integrate applications quickly, but supporting those applications can become problematic. With only a few systems integrated this way everything is fine, but the more systems we integrate, the more integration code we have, and it rapidly becomes unfeasible to support this point-to-point integration. To overcome this problem, the integration hub was developed. In this scenario, developers write against the API of the integration vendor and only have to learn one API. This is a much better approach than point-to-point integration, but it still has its limitations. There is still a proprietary API to learn (albeit only one this time), and if the integration hub goes down for any reason, the entire enterprise can become unavailable. The Enterprise Service Bus (ESB) overcomes these problems by providing a scalable, standards-based integration architecture. The NetBeans SOA pack includes a copy of OpenESB, which follows the architecture promoted by the Java Business Integration specification (JSR 208).

Workings of an ESB

At the heart of the ESB is the Normalized Message Router (NMR), a pluggable framework that allows Java Business Integration (JBI) components to be plugged into it as required. The NMR is responsible for passing messages between all of the different JBI components that are plugged into it. The two main kinds of JBI component plugged into the NMR are Binding Components and Service Engines. Binding Components are responsible for handling all protocol-specific transport such as HTTP, SOAP, JMS, file system access, and so on. Service Engines, on the other hand, execute business logic: running BPEL processes, executing SQL statements, invoking external Java EE web services, and so on.

There is a clear separation between Binding Components and Service Engines, with protocol-specific transactions being handled by the former and business logic being performed by the latter. This architecture promotes loose coupling in that Service Engines do not communicate directly with each other. All communication between different JBI components is performed through Binding Components by use of normalized messages, as shown in the sequence chart below. In the case of OpenESB, all of these normalized messages are based upon WSDL. If, for example, a BPEL process needs to invoke a web service or send an email, it does not need to know about SOAP or SMTP; that is the responsibility of the Binding Components. For one Service Engine to invoke another, all that is required is a WSDL-based message to be constructed, which can then be routed via the NMR and Binding Components to the destination Service Engine. OpenESB provides many different Binding Components and Service Engines, enabling integration with many different systems.

So, we can see that OpenESB provides us with a standards-based architecture that promotes loose coupling between components. NetBeans 6 provides tight integration with OpenESB, allowing developers to take full advantage of its facilities.

Integrating the NetBeans 6 IDE with OpenESB

Integration with NetBeans comes in two parts. First, NetBeans allows the different JBI components to be managed from within the IDE.
Binding Components and Service Engines can be installed into OpenESB from within NetBeans, and from then on the full lifecycle of the components (start, stop, restart, uninstall) can be controlled directly from within the IDE.

Secondly, and more interestingly, the NetBeans IDE provides full editing support for developing Composite Applications: applications that bring together business logic and data from different sources. Probably the most important feature of Composite Applications is the BPEL editor. This allows BPEL processes to be built up graphically, interacting with different data sources via different partner links, which may be web services, other BPEL processes, or SQL statements.

Once a BPEL process or composite application has been developed, the NetBeans SOA pack provides tools to add different bindings onto the application, depending on the Binding Components installed into OpenESB. So, for example, a file binding could be added to a project that polls the file system periodically looking for input messages to start a BPEL process, the output of which could be saved into a different file or sent directly to an FTP site.

In addition to support for developing Composite Applications, the NetBeans SOA pack provides support for features many Java developers will find useful, namely XML and WSDL editing and validation. XML and WSDL files can be edited within the IDE either as raw text or via graphical editors. If changes are made in the raw text, the graphical editors update accordingly, and vice versa.
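To make the WSDL-centric nature of OpenESB's normalized messages a little more concrete, here is a minimal, hypothetical WSDL interface of the kind that flows through the NMR; the service name, operation, and namespace are invented for illustration and are not part of the NetBeans SOA pack:

    <definitions name="GreetingService"
                 targetNamespace="http://example.com/greeting"
                 xmlns="http://schemas.xmlsoap.org/wsdl/"
                 xmlns:tns="http://example.com/greeting"
                 xmlns:xsd="http://www.w3.org/2001/XMLSchema">

      <!-- Abstract messages: this is the level at which the NMR routes data -->
      <message name="GreetingRequest">
        <part name="name" type="xsd:string"/>
      </message>
      <message name="GreetingResponse">
        <part name="text" type="xsd:string"/>
      </message>

      <!-- Abstract interface; concrete transports (SOAP, file, JMS, and so on)
           are supplied separately by the Binding Components -->
      <portType name="GreetingPortType">
        <operation name="greet">
          <input message="tns:GreetingRequest"/>
          <output message="tns:GreetingResponse"/>
        </operation>
      </portType>
    </definitions>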


Troubleshooting Lotus Notes/Domino 7 applications

Packt
23 Oct 2009
19 min read
Introduction

The major topics that we'll cover in this article are:

- Testing your application (in other words, uncovering problems before your users do it for you)
- Asking the right questions when users do discover problems
- Using logging to help troubleshoot your problems

We'll also examine two important new Notes/Domino 7 features that can be critical for troubleshooting applications:

- Domino Domain Monitoring (DDM)
- Agent Profiler

For more troubleshooting issues visit: TroubleshootingWiki.org

Testing your Application

Testing an application before you roll it out to your users may sound like an obvious thing to do. However, during the life cycle of a project, testing is often not allocated adequate time or money. Proper testing should include the following:

- A meaningful amount of developer testing and bug fixing: This allows you to catch most errors, which saves time and frustration for your user community.
- User representative testing: A user representative, who is knowledgeable about the application and how users use it, can often provide more robust testing than the developer. This also provides early feedback on features.
- Pilot testing: In this phase, the product is assumed to be complete, and a pilot group uses it in production mode. This allows for limited stress testing as well as more thorough testing of the feature set.

In addition to feature testing, you should test the performance of the application. This is the most frequently skipped type of testing, because some consider it too complex and difficult. In fact, it can be difficult to test user load, but in general, it's not difficult to test data load. So, as part of any significant project, it is a good practice to programmatically create the projected number of documents that will exist within the application one or two years after it has been fully deployed, and have a scheduled agent trigger the appropriate number of edits per hour during the early phases of feature testing. Although this will not give a perfect picture of performance, it will certainly help ascertain whether and why the time to create a new document is unacceptable (for example, because the @Db formulas are taking too long, or because the scheduled agent that runs every 15 minutes takes too long due to slow document searches).

Asking the Right Questions

Suppose that you've rolled out your application and people are using it. Then the support desk starts getting calls about a certain problem. Maybe your boss is getting an earful at meetings about sluggish performance or is hearing gripes about error messages whenever users try to click a button to perform some action. In this section, we will discuss a methodology to help you troubleshoot a problem when you don't necessarily have all the information at your disposal. We will include some specific questions that can be asked verbatim for virtually any application.

The first key to success in troubleshooting an application problem is to narrow down where and when it happens. Let's take these two very different problems suggested above (slow performance and error messages), and pose questions that might help unravel them:

Does the problem occur when you take a specific action? If so, what is that action? Your users might say, "It's slow whenever I open the application", or "I get an error when I click this particular button in this particular form".

Does the problem occur for everyone who does this, or just for certain people? If just certain people, what do they have in common?
This is a great way to get your users to help you help them. Let them be a part of the solution, not just "messengers of doom". For example, you might ask questions such as, "Is it slow only for people in your building or your floor? Is it slow only for people accessing the application remotely? Is it slow only for people who have your particular access (for example, SalesRep)?"

Does this problem occur all the time, at random times, or only at certain times? It's helpful to check whether or not the time of day or the day of week/month is relevant. So typical questions might be similar to the following: "Do you get this error every time you click the button or just sometimes? If just sometimes, does it give you the error during the middle of the day, but not if you click it at 7 AM when you first arrive? Do you only get the error on Mondays or some other day of the week? Do you only see the error if the document is in a certain status or has certain data in it? If it just happens for a particular document, please send me a link to that document so that I can inspect it carefully to see if there is invalid or unexpected data."

Logging

Ideally, your questions have narrowed down the type of problem it could be. So at this point, the more technical troubleshooting can start. You will likely need to gather concrete information to confirm or refine what you're hearing from the users. For example, you could put a bit of debugging code into the button that they're clicking so that it gives more informative errors, or sends you an email (or creates a log document) whenever it's clicked or whenever an error occurs. Collecting the following pieces of information might be enough to diagnose the problem very quickly:

- Time/date
- User name
- Document UNID (if the button is pushed in a document)
- Error
- Status or any other likely field that might affect your code

By looking for common denominators (such as the status of the documents in question, or access or roles of the users), you will likely be able to further narrow down the possibilities of why the problem is happening. This doesn't solve your problem of course, but it helps in advancing you a long way towards that goal.

A trickier problem to troubleshoot might be one we mentioned earlier: slow performance. Typically, after you've determined that there is some kind of performance delay, it's a good idea to first collect some server logging data. Set the following Notes.ini variables in the Server Configuration document in your Domino Directory, on the Notes.ini tab:

    Log_Update=1
    Log_AgentManager=1

These variables instruct the server to write output to the log.nsf database in the Miscellaneous Events view. Note that they may already be set in your environment. If not, they're fairly unobtrusive, and shouldn't trouble your administration group. Set them for a 24-hour period during a normal business week, and then examine the results to see if anything pops out as being suspicious.
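The excerpts below show the kinds of lines to look for. When the Miscellaneous Events view contains thousands of entries, culling the interesting ones can be scripted rather than done by hand; here is a minimal sketch, assuming the log view has been exported or copied into a plain-text file. The file name and the search strings are just examples, not anything produced by Domino itself.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    // Minimal sketch: pull the view-indexing and agent-execution lines out of
    // an exported log so they can be examined together in one place.
    public class LogCuller {
        public static void main(String[] args) throws IOException {
            BufferedReader reader = new BufferedReader(new FileReader("domino-log.txt"));
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.contains("Updating views in")
                        || line.contains("Finished updating views in")
                        || line.contains("Start executing agent")) {
                    System.out.println(line);
                }
            }
            reader.close();
        }
    }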
For view indexing, you should look for lines like these in the Miscellaneous Events (Log_Update=1):

    07/01/2006 09:29:57 AM  Updating views in apps\SalesPipeline.nsf
    07/01/2006 09:30:17 AM  Finished updating views in apps\SalesPipeline.nsf
    07/01/2006 09:30:17 AM  Updating views in apps\Tracking.nsf
    07/01/2006 09:30:17 AM  Finished updating views in apps\Tracking.nsf
    07/01/2006 09:30:17 AM  Updating views in apps\ZooSchedule.nsf
    07/01/2006 09:30:18 AM  Finished updating views in apps\ZooSchedule.nsf

And lines like these for agent execution (Log_AgentManager=1):

    06/30/2006 09:43:49 PM  AMgr: Start executing agent 'UpdateTickets' in 'apps\SalesPipeline.nsf' by Executive '1'
    06/30/2006 09:43:52 PM  AMgr: Start executing agent 'ZooUpdate' in 'apps\ZooSchedule.nsf' by Executive '2'
    06/30/2006 09:44:44 PM  AMgr: Start executing agent 'DirSynch' in 'apps\Tracking.nsf' by Executive '1'

Let's examine these lines to see whether or not there is anything we can glean from them. Starting with the Log_Update=1 setting, we see that it gives us the start and stop times for every database that gets indexed. We also see that the database file paths appear alphabetically. This means that, if we search for the text string "updating views" and pull out all these lines covering (for instance) an hour during a busy part of the day, and copy/paste these lines into a text editor so that they're all together, then we should see complete database indexing from A to Z on your server repeating every so often.

In the log.nsf database, there may be many thousands of lines that have nothing to do with your investigation, so culling the important lines is imperative for you to be able to make any sense of what's going on in your environment. You will likely see dozens or even hundreds of databases referenced. If you have hundreds of active databases on your server, then culling all these lines might be impractical, even programmatically. Instead, you might focus on the largest group of databases. You will notice that the same databases are referenced every so often. This is the Update Cycle, or view indexing cycle. It's important to get a sense of how long this cycle takes, so make sure you don't miss any references to your group of databases.

Imagine that SalesPipeline.nsf and Tracking.nsf were the two databases that you wanted to focus on. You might cull the lines out of the log that have "updating views" and which reference these two databases, and come up with something like the following:

    07/01/2006 09:29:57 AM  Updating views in apps\SalesPipeline.nsf
    07/01/2006 09:30:17 AM  Finished updating views in apps\SalesPipeline.nsf
    07/01/2006 09:30:17 AM  Updating views in apps\Tracking.nsf
    07/01/2006 09:30:20 AM  Finished updating views in apps\Tracking.nsf
    07/01/2006 10:15:55 AM  Updating views in apps\SalesPipeline.nsf
    07/01/2006 10:16:33 AM  Finished updating views in apps\SalesPipeline.nsf
    07/01/2006 10:16:33 AM  Updating views in apps\Tracking.nsf
    07/01/2006 10:16:43 AM  Finished updating views in apps\Tracking.nsf
    07/01/2006 11:22:31 AM  Updating views in apps\SalesPipeline.nsf
    07/01/2006 11:23:33 AM  Finished updating views in apps\SalesPipeline.nsf
    07/01/2006 11:23:33 AM  Updating views in apps\Tracking.nsf
    07/01/2006 11:23:44 AM  Finished updating views in apps\Tracking.nsf

This gives us some very important information: the Update task (view indexing) is taking approximately an hour to cycle through the databases on the server; that's too long. The Update task is supposed to run every 15 minutes, and ideally should only run for a few minutes each time it executes.
If the cycle is an hour, then that means update is running full tilt for that hour, and as soon as it stops, it realizes that it's overdue and kicks off again. It's possible that if you examine each line in the log, you'll find that certain databases are taking the bulk of the time, in which case it might be worth examining the design of those databases. But it might be that every database seems to take a long time, which might be more indicative of a general server slowdown. In any case, we haven't solved the problem; but at least we know that the problem is probably server-wide. More complex applications, and newer applications, tend to reflect server performance problems more readily, but that doesn't necessarily mean they carry more responsibility for the problem. In a sense, they are the "canary in the coal mine".

If you suspect the problem is confined to one database (or a few), then you can increase the logging detail by setting Log_Update=2. This will give you the start time for every view in every database that the Update task indexes. If you see particular views taking a long time, then you can examine the design of those views. If no database(s) stand out, then you might want to see if the constant indexing occurs around the clock or just during business hours. If it's around the clock, then this might point to some large quantities of data that are changing in your databases. For example, you may be programmatically synchronizing many gigabytes of data throughout the day, not realizing the cost this brings in terms of indexing. If slow indexing only occurs during business hours, then perhaps the user/data load has not been planned out well for this server. As the community of users ramps up in the morning, the server starts falling behind and never catches up until evening. There are server statistics that can help you determine whether or not this is the case. (These server statistics go beyond the scope of this book, but you can begin your investigation by searching on the various Notes/Domino forums for "server AND performance AND statistics".)

As may be obvious at this point, troubleshooting can be quite time-consuming. The key is to make sure that you think through each step so that it either eliminates something important, or gives you a forward path. Otherwise, you can find yourself still gathering information weeks and months later, with users and management feeling very frustrated.

Before moving on from this section, let's take a quick look at agent logging. Agent Manager can run multiple agents in different databases, as determined by settings in your server document. Typically, production servers only allow two or three concurrent agents to run during business hours, and these are marked in the log as Executive '1', Executive '2', and so on. If your server is often busy with agent execution, then you can track Executive '1' and see how many different agents it runs, and for how long. If there are big gaps between when one agent starts and when the next one does (for Executive '1'), this might raise suspicion that the first agent took that whole time to execute. To verify this, turn up the logging by setting the Notes.ini variable debug_amgr=*. (This will output a fair amount of information into your log, so it's best not to leave it on for too long, but normally one day is not a problem.) Doing this will give you a very important piece of information: the number of "ticks" it took for the agent to run.
One second equals 100 ticks, so if the agent takes 246,379 ticks, this equals 2,463 seconds (about 41 minutes). As a general rule, you want scheduled agents to run in seconds, not minutes; so any agent that is taking this long will require some examination. In the next section, we will talk about some other ways you can identify problematic agents.

Domino Domain Monitoring (DDM)

Every once in a while, a killer feature is introduced: a feature so good, so important, so helpful, that after using it, we just shake our heads and wonder how we ever managed without it for so long. Domino Domain Monitoring (DDM) is just such a feature. DDM is too large to be completely covered in this one section, so we will confine our overview to what it can do in terms of troubleshooting applications. For a more thorough explanation of DDM and all its features, see the book, Upgrading to Lotus Notes and Domino (www.packtpub.com/upgrading_lotus/book).

In the events4.nsf database, you will find a new group of documents you can create for tracking agent or application performance. On Domino 7 servers, a new database is created automatically with the filename ddm.nsf. This stores the DDM output you will examine. For application troubleshooting, some of the most helpful areas to track using DDM are the following:

- Full-text index needs to be built. If you have agents that are creating a full-text index on the fly because the database has no full-text index built, DDM can track that potential problem for you. Especially useful is the fact that DDM compiles the frequency per database, so (for instance) you can see if it happens once per month or once per hour. Creating full-text indexes on the fly can result in a significant demand on server resources, so having this notification is very useful. We discuss an example of this later in this section.
- Agent security warnings. You can manually examine the log to try to find errors about agents not being able to execute due to insufficient access. However, DDM will do this for you, making it much easier to find (and therefore fix) such problems.
- Resource utilization. You can track memory, CPU, and time utilization of your agents as run by Agent Manager or by the HTTP task. This means that at any time you can open the ddm.nsf database and spot the worst offenders in these categories, over your entire server/domain. We will discuss an example of CPU usage later in this section.

The events4.nsf (Monitoring Configuration) database contains a new set of DDM views. Looking at the By Probe Server view after a few document edits, notice that there are many probes included out-of-the-box (identified by the property "author = Lotus Notes Template Development") but set to disabled. In this view, there are three that have been enabled (ones with checkmarks) and were created by one of the authors of this book.

If you edit the probe document Default Application Code/Agents Evaluated By CPU Usage (Agent Manager), the document consists of three sections. The first section is where you choose the type of probe (in this case Application Code) and the subtype (in this case Agents Evaluated By CPU Usage). The second section allows you to choose the servers to run against, and whether you want this probe to run against agents/code executed by Agent Manager or by the HTTP task. This is an important distinction.
For one thing, they are different tasks, and therefore one can hit a limit while the other still has room to "breathe". But perhaps more significantly, if you choose a subtype of Agents Evaluated By Memory Usage, then the algorithms used to evaluate whether or not an agent is using too much memory are very different. Agents run by the HTTP task will be judged much more harshly than those run by the Agent Manager task. This is because with the HTTP task, it is possible to run the same agent with up to hundreds of thousands of concurrent executions. But with Agent Manager, you are effectively limited to ten concurrent instances, and none within the same database.

The third section allows you to set your threshold for when DDM should report the activity. You can select up to four levels of warning: Fatal, Failure, Warning (High), and Warning (Low). Note that you do not have the ability to change the severity labels (which appear as icons in the view). Unless you change the database design of ddm.nsf, the icons displayed in the view and documents are non-configurable. Experiment with these settings until you find the approach that is most useful for your corporation. Typically, customers start by overwhelming themselves with information, and then fine-tune the probes so that much less information is reported. In this example, only two statuses are enabled: one for six seconds, with a label of Warning (High), and one for 60 seconds, with a label of Failure.

In the DDM database itself, notice that there are two Application Code results, one with a status of Failure (because that agent ran for more than 60 seconds), and one with a status of Warning (High) (because that agent ran for more than six seconds but less than 60 seconds). These are the parameters set in the Probe document described previously, which can easily be changed by editing that Probe document. If you want these labels to be different, you must enable different rows in the Probe document.

If you open one of these documents, there are three sections. The top section gives header information about this event, such as the server name, the database and agent name, and so on. The second section includes a table with a tab for the most recent infraction and a tab for previous infractions. This allows you to see how often the problem is occurring, and with what severity. The third section provides some possible solutions, and (if applicable) automation. In our example, you might want to "profile" your agent. (We will profile one of our agents in the final section of this article.)

DDM can capture full-text operations against a database that is not full-text indexed. It tracks the number of times this happens, so you can decide whether to full-text index the database, change the agent, or neither. For a more complete list of the errors and problems that DDM can help resolve, check the Domino 7 online help or the product documentation (www.lotus.com).

Agent Profiler

If any of the troubleshooting tips or techniques we've discussed in this article causes you to look at an agent and think, "I wonder what makes this agent so slow", then the Agent Profiler should be the next tool to consider. Agent Profiler is another new feature introduced in Notes/Domino 7. It gives you a breakdown of many methods/properties in your LotusScript agent, telling you how often each one was executed and how long they took to execute.
In Notes/Domino 7, the second (security) tab of the Agent properties now includes a checkbox labeled Profile this agent. You can select this option if you want an agent to be profiled. The next time the agent runs, a profile document is created in the database and filled with the information from that execution. This document is then updated every time the agent runs. You can view these results from the Agent View by highlighting your agent and selecting Agent | View Profile Results. In our case, we looked at the profile for an agent that performed slow mail searches. Although profiling doesn't completely measure (and certainly does not completely troubleshoot) your agents, it is an important step forward in troubleshooting code. Imagine the alternative: dozens of print statements, and then hours of collating results!

Summary

In closing, we hope that this article has opened your eyes to new possibilities in troubleshooting, both in terms of techniques and new Notes/Domino 7 features. Every environment has applications that users wish ran faster, but with a bit of care, you can troubleshoot your performance problems and find resolutions. After you have your servers running Notes/Domino 7, you can use DDM and Agent Profiler (both exceptionally easy to use) to help nail down poorly performing code in your applications. These tools really open a window on what had previously been a room full of mysterious behavior. Full-text indexing on the fly, code that uses too much memory, and long-running agents are all quickly identified by Domino Domain Monitoring (DDM). Try it!


Interacting with Databases through the Java Persistence API

Packt
23 Oct 2009
17 min read
We will look into:

- Creating our first JPA entity
- Interacting with JPA entities with the entity manager
- Generating forms in JSF pages from JPA entities
- Generating JPA entities from an existing database schema
- JPA named queries and JPQL
- Entity relationships
- Generating complete JSF applications from JPA entities

Creating Our First JPA Entity

JPA entities are Java classes whose fields are persisted to a database by the JPA API. JPA entities are Plain Old Java Objects (POJOs); as such, they don't need to extend any specific parent class or implement any specific interface. A Java class is designated as a JPA entity by decorating it with the @Entity annotation.

In order to create and test our first JPA entity, we will be creating a new web application using the JavaServer Faces framework. In this example we will name our application jpaweb. As with all of our examples, we will be using the bundled GlassFish application server.

To create a new JPA entity, we need to right-click on the project and select New | Entity Class. After doing so, NetBeans presents the New Entity Class wizard. At this point, we should specify the values for the Class Name and Package fields (Customer and com.ensode.jpaweb in our example), then click on the Create Persistence Unit... button.

The Persistence Unit Name field is used to identify the persistence unit that will be generated by the wizard; it will be defined in a JPA configuration file named persistence.xml that NetBeans will automatically generate from the Create Persistence Unit wizard. The Create Persistence Unit wizard will suggest a name for our persistence unit; in most cases the default can be safely accepted.

JPA is a specification for which several implementations exist. NetBeans supports several JPA implementations including TopLink, Hibernate, KODO, and OpenJPA. Since the bundled GlassFish application server includes TopLink as its default JPA implementation, it makes sense to take this default value for the Persistence Provider field when deploying our application to GlassFish.

Before we can interact with a database from any Java EE 5 application, a database connection pool and data source need to be created in the application server. A database connection pool contains connection information that allows us to connect to our database, such as the server name, port, and credentials. The advantage of using a connection pool instead of directly opening a JDBC connection to a database is that database connections in a connection pool are never closed; they are simply allocated to applications as they need them. This results in performance improvements, since the operations of opening and closing database connections are expensive in terms of performance.

Data sources allow us to obtain a connection from a connection pool by obtaining an instance of javax.sql.DataSource via JNDI, then invoking its getConnection() method to obtain a database connection from the pool (a minimal sketch of this pattern appears below). When dealing with JPA, we don't need to directly obtain a reference to a data source; it is all done automatically by the JPA API, but we still need to indicate the data source to use in the application's persistence unit.

NetBeans comes with a few data sources and connection pools pre-configured. We could use one of these pre-configured resources for our application; however, NetBeans also allows creating these resources "on the fly", which is what we will be doing in our example. To create a new data source we need to select the New Data Source... item from the Data Source combo box.
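As a point of reference, this is roughly how non-JPA code obtains a connection from such a data source. The JNDI name jdbc/sampleDS is hypothetical and only for illustration; JPA code never needs to do this, since the persistence provider handles it.

    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    // Minimal sketch of the lookup-and-connect pattern described above.
    public class DataSourceExample {

        public Connection getConnectionFromPool() throws Exception {
            // Look up the data source by its (hypothetical) JNDI name...
            InitialContext ctx = new InitialContext();
            DataSource dataSource = (DataSource) ctx.lookup("jdbc/sampleDS");
            // ...and borrow a connection from the underlying pool.
            return dataSource.getConnection();
        }
    }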
A data source needs to interact with a database connection pool. NetBeans comes pre-configured with a few connection pools out of the box, but just like with data sources, it allows us to create a new connection pool "on demand". In order to do this, we need to select the New Database Connection... item from the Database Connection combo box.

NetBeans includes JDBC drivers for a few Relational Database Management Systems (RDBMS) such as JavaDB, MySQL, and PostgreSQL "out of the box". JavaDB is bundled with both GlassFish and NetBeans, therefore we picked JavaDB for our example. This way we avoid having to install an external RDBMS. For RDBMS systems that are not supported out of the box, we need to obtain a JDBC driver and let NetBeans know of its location by selecting New Driver from the Name combo box. We then need to navigate to the location of a JAR file containing the JDBC driver. Consult your RDBMS documentation for details.

JavaDB is installed on our workstation, therefore the server name to use is localhost. By default, JavaDB listens on port 1527, therefore that is the port we specify in the URL. We wish to connect to a database called jpaintro, therefore we specify it as the database name. Since the jpaintro database does not exist yet, we pass the attribute create=true to JavaDB; this attribute is used to create the database if it doesn't exist yet. Every JavaDB database contains a schema named APP, and each user by default uses a schema named after his/her own login name. The easiest way to get going is to create a user named "APP" and select a password for this user.

Clicking on the Show JDBC URL checkbox reveals the JDBC URL for the connection we are setting up. The New Database Connection wizard warns us of potential security risks when choosing to let NetBeans remember the password for the database connection. Database passwords are scrambled (but not encrypted) and stored in an XML file under the .netbeans/[netbeans version]/config/Databases/Connections directory. If we follow common security practices such as locking our workstation when we walk away from it, the risks of having NetBeans remember database passwords will be minimal.

Once we have created our new data source and connection pool, we can continue configuring our persistence unit. It is a good idea to leave the Use Java Transaction APIs checkbox checked. This will instruct our JPA implementation to use the Java Transaction API (JTA) to allow the application server to manage transactions. If we uncheck this box, we will need to manually write code to manage transactions.

Most JPA implementations allow us to define a table generation strategy. We can instruct our JPA implementation to create tables for our entities when we deploy our application, to drop the tables then regenerate them when our application is deployed, or not create any tables at all. NetBeans allows us to specify the table generation strategy for our application by clicking the appropriate value in the Table Generation Strategy radio button group.

When working with a new application, it is a good idea to select the Drop and Create table generation strategy. This will allow us to add, remove, and rename fields in our JPA entity at will without having to make the same changes in the database schema. When selecting this table generation strategy, tables in the database schema will be dropped and recreated, therefore any data previously persisted will be lost.
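The choices made in the wizard end up in persistence.xml. A rough sketch of what such a file might look like is shown below; the data source name is hypothetical, and toplink.ddl-generation is the TopLink Essentials property corresponding to the Drop and Create strategy (other JPA providers use different property names):

    <?xml version="1.0" encoding="UTF-8"?>
    <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence">
      <persistence-unit name="jpawebPU" transaction-type="JTA">
        <!-- Hypothetical data source name; use the one created above -->
        <jta-data-source>jdbc/sampleDS</jta-data-source>
        <properties>
          <!-- Drop and recreate tables on deployment (TopLink Essentials) -->
          <property name="toplink.ddl-generation" value="drop-and-create-tables"/>
        </properties>
      </persistence-unit>
    </persistence>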
Once we have created our new data source, database connection, and persistence unit, we are ready to create our new JPA entity. We can do so by simply clicking on the Finish button. At this point NetBeans generates the source for our JPA entity.

JPA allows the primary key field of a JPA entity to map to any column type (VARCHAR, NUMBER, and so on). It is best practice to have a numeric surrogate primary key, that is, a primary key that serves only as an identifier and has no business meaning in the application. Selecting the default primary key type of Long will allow a wide range of values to be available for the primary keys of our entities.

    package com.ensode.jpaweb;

    import java.io.Serializable;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;

    @Entity
    public class Customer implements Serializable {

        private static final long serialVersionUID = 1L;
        private Long id;

        public void setId(Long id) {
            this.id = id;
        }

        @Id
        @GeneratedValue(strategy = GenerationType.AUTO)
        public Long getId() {
            return id;
        }

        //Other generated methods (hashCode(), equals() and
        //toString()) omitted for brevity.
    }

As we can see, a JPA entity is a standard Java object. There is no need to extend any special class or implement any special interface. What differentiates a JPA entity from other Java objects are a few JPA-specific annotations.

The @Entity annotation is used to indicate that our class is a JPA entity. Any object we want to persist to a database via JPA must be annotated with this annotation. The @Id annotation is used to indicate which field in our JPA entity is its primary key. The primary key is a unique identifier for our entity. No two entities may have the same value for their primary key field. This annotation can be placed just above the getter method for the primary key field; this is the strategy that the NetBeans wizard follows. It is also correct to specify the annotation right above the field declaration. The @Entity and @Id annotations are the bare minimum that a class needs in order to be considered a JPA entity.

JPA allows primary keys to be automatically generated. In order to take advantage of this functionality, the @GeneratedValue annotation can be used. As we can see, the NetBeans-generated JPA entity uses this annotation. This annotation is used to indicate the strategy to use to generate primary keys. All possible primary key generation strategies are listed below:

- GenerationType.AUTO: Indicates that the persistence provider will automatically select a primary key generation strategy. Used by default if no primary key generation strategy is specified.
- GenerationType.IDENTITY: Indicates that an identity column in the database table the JPA entity maps to must be used to generate the primary key value.
- GenerationType.SEQUENCE: Indicates that a database sequence should be used to generate the entity's primary key value.
- GenerationType.TABLE: Indicates that a database table should be used to generate the entity's primary key value.

In most cases, the GenerationType.AUTO strategy works properly, therefore it is almost always used. For this reason the New Entity Class wizard uses this strategy. When using the sequence or table generation strategies, we might have to indicate the sequence or table used to generate the primary keys. These can be specified by using the @SequenceGenerator and @TableGenerator annotations, respectively.
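As a rough illustration (not part of the NetBeans-generated code), a sequence-based mapping might look like the sketch below. The generator and sequence names are made up for the example and must match a sequence that actually exists in a database that supports sequences.

    import java.io.Serializable;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import javax.persistence.SequenceGenerator;

    @Entity
    public class SequencedCustomer implements Serializable {

        private static final long serialVersionUID = 1L;
        private Long id;

        public void setId(Long id) {
            this.id = id;
        }

        // Primary key values are drawn from the (hypothetical) CUSTOMER_SEQ sequence.
        @Id
        @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "customerSeq")
        @SequenceGenerator(name = "customerSeq", sequenceName = "CUSTOMER_SEQ")
        public Long getId() {
            return id;
        }
    }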
Consult the Java EE 5 JavaDoc at http://java.sun.com/javaee/5/docs/api/ for details. For further knowledge on primary key generation strategies you can refer to EJB 3 Developer Guide by Michael Sikora, another book by Packt Publishing (http://www.packtpub.com/developer-guide-for-ejb3/book).

Adding Persistent Fields to Our Entity

At this point, our JPA entity contains a single field, its primary key. Admittedly not very useful; we need to add a few fields to be persisted to the database.

    package com.ensode.jpaweb;

    import java.io.Serializable;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;

    @Entity
    public class Customer implements Serializable {

        private static final long serialVersionUID = 1L;
        private Long id;
        private String firstName;
        private String lastName;

        public void setId(Long id) {
            this.id = id;
        }

        @Id
        @GeneratedValue(strategy = GenerationType.AUTO)
        public Long getId() {
            return id;
        }

        public String getFirstName() {
            return firstName;
        }

        public void setFirstName(String firstName) {
            this.firstName = firstName;
        }

        public String getLastName() {
            return lastName;
        }

        public void setLastName(String lastName) {
            this.lastName = lastName;
        }

        //Additional methods omitted for brevity
    }

In this modified version of our JPA entity, we added two fields to be persisted to the database; firstName will be used to store the user's first name, and lastName will be used to store the user's last name. JPA entities need to follow standard JavaBean coding conventions. This means that they must have a public constructor that takes no arguments (one is automatically generated by the Java compiler if we don't specify any other constructors), and all fields must be private and accessed through getter and setter methods.

Automatically Generating Getters and Setters

In NetBeans, getter and setter methods can be generated automatically. Simply declare new fields as usual, then use the "insert code" keyboard shortcut (Alt+Insert by default), select Getter and Setter from the resulting pop-up window, click on the check box next to the class name to select all fields, and click on the Generate button.

Before we can use JPA to persist our entity's fields into our database, we need to write some additional code.

Creating a Data Access Object (DAO)

It is a good idea to follow the DAO design pattern whenever we write code that interacts with a database. The DAO design pattern keeps all database access functionality in DAO classes. This has the benefit of creating a clear separation of concerns, leaving other layers in our application, such as the user interface logic and the business logic, free of any persistence logic.

There is no special procedure in NetBeans to create a DAO. We simply follow the standard procedure to create a new class by selecting File | New, then selecting Java as the category and Java Class as the file type, then entering a name and a package for the class. In our example, we will name our class CustomerDAO and place it in the com.ensode.jpaweb package. At this point, NetBeans creates a very simple class containing only the package and class declarations.

To take complete advantage of Java EE features such as dependency injection, we need to make our DAO a JSF managed bean.
This can be accomplished by simply opening faces-config.xml, clicking its XML tab, then right-clicking on it and selecting JavaServer Faces | Add Managed Bean. We get the Add Managed Bean dialog. We need to enter a name, fully qualified name, and scope for our managed bean (which, in our case, is our DAO), then click on the Add button. This action results in our DAO being declared as a managed bean in our application's faces-config.xml configuration file.

    <managed-bean>
        <managed-bean-name>CustomerDAO</managed-bean-name>
        <managed-bean-class>
            com.ensode.jpaweb.CustomerDAO
        </managed-bean-class>
        <managed-bean-scope>session</managed-bean-scope>
    </managed-bean>

We could at this point start writing our JPA code manually, but with NetBeans there is no need to do so; we can simply right-click on our code and select Persistence | Use Entity Manager, and most of the work is automatically done for us. Here is how our code looks after this trivial procedure:

    package com.ensode.jpaweb;

    import javax.annotation.Resource;
    import javax.naming.Context;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @PersistenceContext(name = "persistence/LogicalName", unitName = "jpawebPU")
    public class CustomerDAO {

        @Resource
        private javax.transaction.UserTransaction utx;

        protected void persist(Object object) {
            try {
                Context ctx = (Context) new javax.naming.InitialContext().
                        lookup("java:comp/env");
                utx.begin();
                EntityManager em = (EntityManager)
                        ctx.lookup("persistence/LogicalName");
                em.persist(object);
                utx.commit();
            } catch (Exception e) {
                java.util.logging.Logger.getLogger(
                        getClass().getName()).log(
                        java.util.logging.Level.SEVERE,
                        "exception caught", e);
                throw new RuntimeException(e);
            }
        }
    }

All of this code is automatically generated by NetBeans. The main thing NetBeans does here is add a method that will automatically insert a new row in the database, effectively persisting our entity's properties. As we can see, NetBeans automatically generates all necessary import statements.

Additionally, our new class is automatically decorated with the @PersistenceContext annotation. This annotation allows us to declare that our class depends on an EntityManager (we'll discuss EntityManager in more detail shortly). The value of its name attribute is a logical name we can use when doing a JNDI lookup for our EntityManager. NetBeans by default uses persistence/LogicalName as the value for this property.

The Java Naming and Directory Interface (JNDI) is an API we can use to obtain resources, such as database connections and JMS queues, from a directory service. The value of the unitName attribute of the @PersistenceContext annotation refers to the name we gave our application's persistence unit.

NetBeans also creates a new instance variable of type javax.transaction.UserTransaction. This variable is needed since all JPA code must be executed in a transaction. UserTransaction is part of the Java Transaction API (JTA). This API allows us to write code that is transactional in nature. Notice that the UserTransaction instance variable is decorated with the @Resource annotation. This annotation is used for dependency injection: in this case, an instance of a class of type javax.transaction.UserTransaction will be instantiated automatically at run time, without having to do a JNDI lookup or explicitly instantiate the class. Dependency injection is a new feature of Java EE 5 not present in previous versions of J2EE, but it was available and made popular in the Spring framework.
With standard J2EE code, it was necessary to write boilerplate JNDI lookup code very frequently in order to obtain resources. To alleviate this situation, Java EE 5 made dependency injection part of the standard.

The next thing we see is that NetBeans added a persist method that will persist a JPA entity, automatically inserting a new row containing our entity's fields into the database. As we can see, this method takes an instance of java.lang.Object as its single parameter. The reason for this is that the method can be used to persist any JPA entity (although in our example, we will use it to persist only instances of our Customer entity).

The first thing the generated method does is obtain an instance of javax.naming.InitialContext by doing a JNDI lookup on java:comp/env. This JNDI name is the root context for all Java EE 5 components. The next thing the method does is initiate a transaction by invoking utx.begin(). Notice that since the value of the utx instance variable was injected via dependency injection (by simply decorating its declaration with the @Resource annotation), there is no need to initialize this variable.

Next, the method does a JNDI lookup to obtain an instance of javax.persistence.EntityManager. This class contains a number of methods to interact with the database. Notice that the JNDI name used to obtain an EntityManager matches the value of the name attribute of the @PersistenceContext annotation. Once an instance of EntityManager is obtained from the JNDI lookup, we persist our entity's properties by simply invoking the persist() method on it, passing the entity as a parameter. At this point, the data in our JPA entity is inserted into the database.

In order for our database insert to take effect, we must commit our transaction, which is done by invoking utx.commit(). It is always a good idea to look for exceptions when dealing with JPA code. The generated method does this, and if an exception is caught, it is logged and a RuntimeException is thrown. Throwing a RuntimeException has the effect of rolling back our transaction automatically, while letting the invoking code know that something went wrong in our method. The UserTransaction class has a rollback() method that we can use to roll back our transaction without having to throw a RuntimeException.

At this point we have all the code we need to persist our entity's properties in the database. Now we need to write some additional code for the user interface part of our application. NetBeans can generate a rudimentary JSF page that will help us with this task.
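Before moving on, here is a rough sketch (not NetBeans-generated, and not necessarily how the article proceeds) of how the CustomerDAO managed bean could be given a Customer property and an action method, so that a JSF page can bind input fields to it. The method name and navigation outcome below are made up for illustration.

    // Hypothetical additions to the CustomerDAO class shown above.
    // A JSF page could bind inputs to #{CustomerDAO.customer.firstName}
    // and invoke #{CustomerDAO.saveCustomer} from a command button.
    private Customer customer = new Customer();

    public Customer getCustomer() {
        return customer;
    }

    public void setCustomer(Customer customer) {
        this.customer = customer;
    }

    public String saveCustomer() {
        persist(customer);        // calls the generated persist() method
        return "customer_saved";  // hypothetical navigation outcome
    }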


Building JSF/EJB3 Applications

Packt
22 Oct 2009
15 min read
This practical article shows you how to create a simple data-driven application using JSF and EJB3 technologies. The article also shows you how to effectively use the NetBeans IDE when building enterprise applications.

What We Are Going to Build

The sample application we are building throughout the article is very straightforward. It offers just a few pages. When you click the Ask us a question link on the welcomeJSF.jsp page, you will be taken to a page on which you can submit a question. Once you're done with your question, you click the Submit button. As a result, the application persists your question along with your email in the database, and then displays a confirmation page.

The web tier of the application is built using the JavaServer Faces technology, while EJB is used to implement the database-related code.

Software You Need to Follow the Article Exercise

To build the sample discussed here, you will need the following software components installed on your computer:

- Java Standard Development Kit (JDK) 5.0 or higher
- Sun Java System Application Server Platform Edition 9
- MySQL
- NetBeans IDE 5.5

Setting Up the Database

The first step in building our application is to set up the database to interact with. In fact, you could choose any database you like to be the application's backend database. For the purpose of this article, though, we will discuss how to use MySQL. To keep things simple, let's create a questions table that contains just three columns:

- trackno INTEGER AUTO_INCREMENT PRIMARY KEY: stores a track number generated automatically when a row is inserted
- user_email VARCHAR(50) NOT NULL
- question VARCHAR(2000) NOT NULL

Of course, a real-world questions table would contain a few more columns, for example, dateOfSubmission containing the date and time of submitting the question.

To create the questions table, you first have to create a database and grant the required privileges to the user with which you are going to connect to that database. For example, you might create database my_db and user usr identified by password pswd. To do this, you should issue the following SQL commands from the MySQL Command Line Client:

    CREATE DATABASE my_db;
    GRANT CREATE, DROP, SELECT, INSERT, UPDATE, DELETE ON my_db.* TO 'usr'@'localhost' IDENTIFIED BY 'pswd';

In order to use the newly created database for subsequent statements, you should issue the following statement:

    USE my_db;

Finally, create the questions table in the database as follows:

    CREATE TABLE questions(
        trackno INTEGER AUTO_INCREMENT PRIMARY KEY,
        user_email VARCHAR(50) NOT NULL,
        question VARCHAR(2000) NOT NULL
    ) ENGINE = InnoDB;

Once you're done, you have the database with the questions table required to store incoming users' questions.

Setting Up a Data Source for Your MySQL Database

Since the application we are going to build will interact with MySQL, you need to have an appropriate MySQL driver installed on your application server. For example, you might want to install MySQL Connector/J, which is the official JDBC driver for MySQL. You can pick up this software from the "downloads" page of the MySQL AB website at http://mysql.org/downloads/.
Install the driver on your GlassFish application server as follows:

1. Unpack the downloaded archive containing the driver to any directory on your machine.
2. Add mysql-connector-java-xxx-bin.jar to the CLASSPATH environment variable.
3. Make sure that your GlassFish application server is up and running.
4. Launch the Application Server Admin Console by pointing your browser at http://localhost:4848/.
5. Within the Common Tasks frame, find and double-click the Resources > JDBC > New Connection Pool node.
6. On the New Connection Pool page, click the New… button.
7. On the first page of the New Connection Pool master, set the fields as follows:
   - Name: jdbc/mysqlPool
   - Resource type: javax.sql.DataSource
   - Database Vendor: mysql
8. Click Next to move on to the second page of the master.
9. On the second page of New Connection Pool, set the properties to reflect your database settings:
   - databaseName: my_db
   - serverName: localhost
   - port: 3306
   - user: usr
   - password: pswd
10. Once you are done with setting the properties, click Finish.

The newly created jdbc/mysqlPool connection pool should appear on the list. To check it, you should click its link to open it in a window, and then click the Ping button. If everything is okay, you should see a message telling you Ping succeeded.

Creating the Project

The next step is to create an application project with NetBeans. To do this, follow the steps below:

1. Choose File/New Project and then choose the Enterprise > Enterprise Application template for the project. Click Next.
2. On the Name and Location page of the master, specify the name for the project: JSF_EJB_App. Also make sure that Create EJB Module and Create Web Application Module are checked. Then click Finish.

As a result, NetBeans generates a new enterprise application in a standard project, actually containing two projects: an EJB module project and a Web application project. In this particular example, you will use the first project for EJBs and the second one for JSF pages.

Creating Entity Beans and the Persistence Unit

You create entity beans and the persistence unit in the EJB module project (in this example, the JSF_EJB_App-ejb project). In fact, the sample discussed here will contain only one entity bean: Question. You might automatically generate it and then edit it as needed. To generate it with NetBeans, follow the steps below:

1. Make sure that your Sun Java System Application Server is up and running.
2. In the Project window, right-click the JSF_EJB_App-ejb project, and then choose New/Entity Classes From Database. As a result, you'll be prompted to connect to your Sun Java System Application Server. Do it by entering appropriate credentials.
3. In the New Entity Classes from Database window, select jdbc/mysqlPool from the Data Source combo box. If you recall from the Setting Up a Data Source for Your MySQL Database section discussed earlier in this article, jdbc/mysqlPool is a JDBC connection pool created on your application server.
4. In the Connect dialog that appears, you'll be prompted to connect to your MySQL database. Enter password pswd, set the Remember password during this session checkbox, and then click OK.
5. In the Available Tables listbox, choose questions, and click the Add button to move it to the Selected Tables listbox. After that, click Next.
6. On the next page of the New Entity Classes from Database master, fill in the Package field. For example, you might choose the following name: myappejb.entities. Change the class name from Questions to Question in the Class Names box.
7. Next, click the Create Persistence Unit button.
8. In the Create Persistence Unit window, just click the Create button, leaving the default values of the fields.
9. In the New Entity Classes from Database dialog, click Finish.

As a result, NetBeans will generate the Question entity class, which you should edit so that the resultant class looks like the following:

    package myappejb.entities;

    import java.io.Serializable;
    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Table;

    @Entity
    @Table(name = "questions")
    public class Question implements Serializable {

        @Id
        @Column(name = "trackno")
        private Integer trackno;

        @Column(name = "user_email", nullable = false)
        private String userEmail;

        @Column(name = "question", nullable = false)
        private String question;

        public Question() {
        }

        public Integer getTrackno() {
            return this.trackno;
        }

        public void setTrackno(Integer trackno) {
            this.trackno = trackno;
        }

        public String getUserEmail() {
            return this.userEmail;
        }

        public void setUserEmail(String userEmail) {
            this.userEmail = userEmail;
        }

        public String getQuestion() {
            return this.question;
        }

        public void setQuestion(String question) {
            this.question = question;
        }
    }

Once you're done, make sure to save all the changes made by choosing File/Save All.

Having the above code in hand, you might of course do without first generating the Question entity from the database, but simply create an empty Java file in the myappejb.entities package, and then insert the above code there. Then you could separately create the persistence unit. However, the idea behind building the Question entity with the master here is to show how you can quickly get a required piece of code to be then edited as needed, rather than creating it from scratch.

Creating Session Beans

To finish with the JSF_EJB_App-ejb project, let's proceed to creating the session bean that will be used by the web tier. In particular, you need to create the QuestionSessionBean session bean that will be responsible for persisting the data a user enters on the askquestion page. To generate the bean's frame with a master, follow the steps below:

1. In the Project window, right-click the JSF_EJB_App-ejb project, and then choose New/Session Bean.
2. In the New Session Bean window, enter the EJB name: QuestionSessionBean. Then specify the package: myappejb.ejb. Make sure that the Session Type is set to Stateless and Create Interface is set to Remote. Click Finish.

As a result, NetBeans should generate two Java files: QuestionSessionBean.java and QuestionSessionRemote.java.
You should modify QuestionSessionBean.java so that it contains the following code:   package myappejb.ejb; import javax.annotation.Resource; import javax.ejb.Stateless; import javax.ejb.TransactionManagement; import javax.ejb.TransactionManagementType; import javax.persistence.EntityManager; import javax.persistence.EntityManagerFactory; import javax.persistence.PersistenceUnit; import javax.transaction.UserTransaction; import myappejb.entities.Question; @Stateless <b>@TransactionManagement(TransactionManagementType.BEAN)</b> public class QuestionSessionBean implements myappejb.ejb.QuestionSessionRemote { /** Creates a new instance of QuestionSessionBean */ public QuestionSessionBean() { } @Resource private UserTransaction utx; @PersistenceUnit(unitName = "JSF_EJB_App-ejbPU") private EntityManagerFactory emf; private EntityManager getEntityManager() { return emf.createEntityManager(); } public void save(Question question) throws Exception { EntityManager em = getEntityManager(); try { utx.begin(); em.joinTransaction(); em.persist(question); utx.commit(); } catch (Exception ex) { try { utx.rollback(); throw new Exception(ex.getLocalizedMessage()); } catch (Exception e) { throw new Exception(e.getLocalizedMessage()); } } finally { em.close(); } } }   Next, modify the QuestionSessionRemote.java so that it looks like this:   package myappejb.ejb; import javax.ejb.Remote; import myappejb.entities.Question; @Remote public interface QuestionSessionRemote { void save(Question question) throws Exception; }   Choose File/Save All to save the changes made. That’s it. You just finished with your EJB module project. Adding JSF Framework to the Project Now that you have the entity and session beans created, let’s switch to the JSF_EJB_App-war project, where you’re building the web tier for the application.Before you can proceed to building JSF pages, you need to add the JavaServer Faces framework to the JSF_EJB_App-war project. To do this, follow the steps below: In the Project window, right click JSF_EJB_App-war project, and then choose Properties In the Project Properties window, select Frameworks from Categories, and click Add button. As a result, the Add a Framework dialog should appear In the Add a Framework dialog, choose JavaServer Faces and click OK Then click OK in the Project Properties dialog As a result, NetBeans adds the JavaServer Faces framework to the JSF_EJB_App-war project. Now if you extend the Configuration Files folder under the JSF_EJB_App-war project node in the Project window, you should see, among other configuration files, faces-config.xml there. Also notice the appearance of the welcomeJSF.jsp page in the Web Pages folder Creating JSF Managed Beans The next step is to create managed beans whose methods will be called from within the JSF pages. In this particular example, you need to create only one such bean: let’s call it QuestionController. This can be achieved by following the steps below: In the Project window, right click JSF_EJB_App-war project, and then choose New/Empty Java File In the New Empty Java File window, enter QuestionController as the class name and enter myappjsf.jsf in the Package field. 
Then, click Finish In the generated empty java file, insert the following code:   package myappjsf.jsf; import javax.ejb.EJB; import javax.faces.application.FacesMessage; import javax.faces.context.FacesContext; import myappejb.entities.Question; import myappejb.ejb.QuestionSessionBean; public class QuestionController { @EJB private QuestionSessionBean sbean; private Question question; public QuestionController() { } public Question getQuestion() { return question; } public void setQuestion(Question question) { this.question = question; } public String createSetup() { this.question = new Question(); this.question.setTrackno(null); return "question_create"; } public String create() { try { Integer trck = sbean.save(question); addSuccessMessage("Your question was successfully submitted."); } catch (Exception ex) { addErrorMessage(ex.getLocalizedMessage()); } return "created"; } public static void addErrorMessage(String msg) { FacesMessage facesMsg = new FacesMessage(FacesMessage.SEVERITY_ERROR, msg, msg); FacesContext fc = FacesContext.getCurrentInstance(); fc.addMessage(null, facesMsg); } public static void addSuccessMessage(String msg) { FacesMessage facesMsg = new FacesMessage(FacesMessage.SEVERITY_INFO, msg, msg); FacesContext fc = FacesContext.getCurrentInstance(); fc.addMessage("successInfo", facesMsg); } }   Next, you need to add information about the newly created JSF managed bean to the faces-config.xml configuration file automatically generated when adding the JSF framework to the project. Find this file in the following folder: JSF_EJB_App-warWeb PagesWEB-INF in the Project window, and then insert the following tag between the and tags:   <managed-bean> <managed-bean-name>questionJSFBean</managed-bean-name> <managed-bean-class>myappjsf.jsf.QuestionController</managed-bean-class> <managed-bean-scope>session</managed-bean-scope> </managed-bean>   Finally, make sure to choose File/Save All to save the changes made in faces-config.xml as well as in QuestionController.java. Creating JSF Pages To keep things simple, you create just one more JSF page: askquestion.jsp, where a user can submit a question. First, though, let’s modify the welcomeJSF.jsp page so that you can use it to move on to askquestion.jsp and then return to, once a question has been submitted. To achieve this, modify welcomeJSF.jsp as follows:   <%@page contentType="text/html"%> <%@page pageEncoding="UTF-8"%> <%@taglib prefix="f" uri="http://java.sun.com/jsf/core"%> <%@taglib prefix="h" uri="http://java.sun.com/jsf/html"%> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title>JSP Page</title> </head> <body> <f:view> <h:messages errorStyle="color: red" infoStyle="color: green" layout="table"/> <h:form> <h1><h:outputText value="Ask us a question" /></h1> <h:commandLink action="#{questionJSFBean.createSetup}" value="New question"/> <br> </h:form> </f:view> </body> </html> Now you can move on and create askquestion.jsp. 
To do this, follow the steps below: In the Project window, right click JSF_EJB_App-war project, and then choose New/JSP In the New JSP File window, enter askquestion as the name for the page, and click Finish Modify the newly created askquestion.jsp so that it finally looks like this:   <%@page contentType="text/html"%> <%@page pageEncoding="UTF-8"%> <%@taglib uri="http://java.sun.com/jsf/core" prefix="f" %> <%@taglib uri="http://java.sun.com/jsf/html" prefix="h" %> <html>     <head>         <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />         <title>New Question</title>     </head>     <body>         <f:view>             <h:messages errorStyle="color: red" infoStyle="color: green" layout="table"/>             <h1>Your question, please!</h1>             <h:form>                 <h:panelGrid columns="2">                     <h:outputText value="Your Email:"/>                     <h:inputText id="userEmail" value= "#{questionJSFBean.question.userEmail}" title="Your Email" />                     <h:outputText value="Your Question:"/>                     <h:inputTextarea id="question" value= "#{questionJSFBean.question.question}"  title="Your Question" rows ="5" cols="35" />                 </h:panelGrid>                 <h:commandLink action="#{questionJSFBean.create}" value="Create"/>                 <br>                 <a href="/JSF_EJB_App-war/index.jsp">Back to index</a>             </h:form>         </f:view>     </body> </html>   The next step is to set page navigation. Turning back to the faces-config.xml configuration file, insert the following code there.   <navigation-rule> <navigation-case> <from-outcome>question_create</from-outcome> <to-view-id>/askquestion.jsp</to-view-id> </navigation-case> </navigation-rule> <navigation-rule> <navigation-case> <from-outcome>created</from-outcome> <to-view-id>/welcomeJSF.jsp</to-view-id> </navigation-case> </navigation-rule> Make sure that the above tags are within the <faces-config> and </faces-config> root tags. Check It You are ready now to check the application you just created. To do this, right-click JSF_EJB_App-ejb project in the Project window and choose Deploy Project. After the JSF_EJB_App-ejb project is successfully deployed, right click the JSF_EJB_App-war project and choose Run Project. As a result, the newly created application will run in a browser. As mentioned earlier, the application contains very few pages, actually three ones. For testing purposes, you can submit a question, and then check the questions database table to make sure that everything went as planned. Summary Both JSF and EJB 3 are popular technologies when it comes to building enterprise applications. This simple example illustrates how you can use these technologies together in a complementary way.
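As an aside to the Check It step above, you can also exercise the session bean without going through the JSF pages. The following standalone client is a hypothetical sketch, not part of the original walkthrough: it looks up the remote interface under its fully qualified name, which is the default JNDI binding on Sun's application server, and the tracking number and sample data are assumptions you would adjust for your environment.

import javax.naming.InitialContext;

import myappejb.ejb.QuestionSessionRemote;
import myappejb.entities.Question;

public class QuestionClient {
    public static void main(String[] args) throws Exception {
        // Default JNDI name for an EJB 3 remote business interface on
        // Sun's application server: the fully qualified interface name.
        InitialContext ctx = new InitialContext();
        QuestionSessionRemote bean = (QuestionSessionRemote)
                ctx.lookup("myappejb.ejb.QuestionSessionRemote");

        // Hypothetical sample data; trackno is set explicitly here because
        // the entity does not declare a generated identifier.
        Question q = new Question();
        q.setTrackno(100);
        q.setUserEmail("user@example.com");
        q.setQuestion("Is my question stored correctly?");

        bean.save(q);
        System.out.println("Question persisted; verify it in the questions table.");
    }
}

Running such a client also requires the application server's client libraries on the classpath; the exact JARs depend on the server version you deployed to.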
Oracle Web RowSet - Part1

Packt
22 Oct 2009
6 min read
The ResultSet interface requires a persistent connection with a database to invoke the insert, update, and delete row operations on the database table data. The RowSet interface extends the ResultSet interface and is a container for tabular data that may operate without being connected to the data source. Thus, the RowSet interface reduces the overhead of a persistent connection with the database. In J2SE 5.0, five new implementations of RowSet—JdbcRowSet, CachedRowSet, WebRowSet, FilteredRowSet, and JoinRowSet—were introduced. The WebRowSet interface extends the RowSet interface and is the XML document representation of a RowSet object. A WebRowSet object represents a set of fetched database table rows, which may be modified without being connected to the database.

Support for Oracle Web RowSet is a new feature in the Oracle Database 10g JDBC driver. Oracle Web RowSet removes the requirement for a persistent connection with the database: a connection is required only for retrieving data from the database with a SELECT query and for updating data in the database after all the required row operations on the retrieved data have been performed. Oracle Web RowSet is used for queries and modifications on the data retrieved from the database, and as an XML document representation of a RowSet it also facilitates the transfer of data.

In the Oracle Database 10g and 11g JDBC drivers, Oracle Web RowSet is implemented in the oracle.jdbc.rowset package. The OracleWebRowSet class represents an Oracle Web RowSet. The data in the Web RowSet may be modified without connecting to the database, and the database table may be updated with the OracleWebRowSet class after the modifications to the Web RowSet have been made. A database JDBC connection is required only for retrieving data from the database and for updating the database. An XML document representation of the data in a Web RowSet may be obtained for data exchange.

In this article, the Web RowSet feature of the Oracle Database 10g JDBC driver is implemented in JDeveloper 10g. An example Web RowSet will be created from a database table; the Web RowSet will then be modified and stored back in the database table. In this article, we will learn the following:

Creating an Oracle Web RowSet object
Adding a row to an Oracle Web RowSet
Modifying the database table with a Web RowSet

In the second half of the article, we will cover the following:

Reading a row from an Oracle Web RowSet
Updating a row in an Oracle Web RowSet
Deleting a row from an Oracle Web RowSet
Updating the database table with the modified Oracle Web RowSet

Setting the Environment

We will use an Oracle database to generate an updatable OracleWebRowSet object. Therefore, install Oracle Database 10g, including the sample schemas, and connect to the database with the OE schema:

SQL> CONNECT OE/<password>

Create an example database table, Catalog, with the following SQL script:

CREATE TABLE OE.Catalog(Journal VARCHAR(25), Publisher VARCHAR(25),
  Edition VARCHAR(25), Title VARCHAR(45), Author VARCHAR(25));
INSERT INTO OE.Catalog VALUES('Oracle Magazine', 'Oracle Publishing',
  'July-August 2005', 'Tuning Undo Tablespace', 'Kimberly Floss');
INSERT INTO OE.Catalog VALUES('Oracle Magazine', 'Oracle Publishing',
  'March-April 2005', 'Starting with Oracle ADF', 'Steve Muench');

Next, configure JDeveloper 10g for the Web RowSet implementation. Create a project in JDeveloper: select File | New | General | Application. In the Create Application window specify an Application Name and click on Next. In the Create Project window, specify a Project Name and click on Next.
A project is added in the Applications Navigator. Next, we will set the project libraries. Select Tools | Project Properties and, in the Project Properties window, select Libraries | Add Library to add a library. Add the Oracle JDBC library to the project libraries. If a version of the Oracle JDBC drivers prior to the Oracle Database 10g (R2) JDBC drivers is used, create a library from the Oracle Web RowSet implementation classes JAR file: C:\JDeveloper\10.1.3\jdbc\lib\ocrs12.jar. The ocrs12.jar file is required only for JDBC drivers prior to the Oracle Database 10g (R2) JDBC drivers; in the Oracle Database 10g (R2) JDBC drivers the Oracle RowSet implementation classes are packaged in ojdbc14.jar, and in the Oracle Database 11g JDBC drivers they are packaged in ojdbc5.jar and ojdbc6.jar.

In the Add Library window select the User node and click on New. In the Create Library window specify a Library Name, select the Class Path node, and click on Add Entry. Add an entry for ocrs12.jar. As Web RowSet was introduced in J2SE 5.0, if J2SE 1.4 is being used we also need to add an entry for the RowSet implementations JAR file, rowset.jar. Download the JDBC RowSet Implementations 1.0.1 zip file, jdbc_rowset_tiger-1_0_1-mrel-ri.zip, from http://java.sun.com/products/jdbc/download.html#rowset1_0_1 and extract the zip file to a directory. Click on OK in the Create Library window, and then click on OK in the Add Library window. A library for the Web RowSet application is added.

Now configure an OC4J data source. Select Tools | Embedded OC4J Server Preferences. A data source may be configured globally or for the current workspace. If a global data source is created using Global | Data Sources, the data source is configured in the C:\JDeveloper\10.1.3\jdev\system\oracle.j2ee.10.1.3.36.73\embedded-oc4j\config\data-sources.xml file. If a data source is configured for the current workspace using Current Workspace | Data Sources, the data source is configured in the workspace's data-sources.xml file; for example, the data source file for the WebRowSetApp application is WebRowSetApp-data-sources.xml. In the Embedded OC4J Server Preferences window configure either a global data source or a data source in the current workspace. A global data source definition is available to all applications deployed in the OC4J server instance. A managed-data-source element is added to the data-sources.xml file:

<managed-data-source name='OracleDataSource'
    connection-pool-name='Oracle Connection Pool'
    jndi-name='jdbc/OracleDataSource'/>
<connection-pool name='Oracle Connection Pool'>
  <connection-factory factory-class='oracle.jdbc.pool.OracleDataSource'
      user='OE' password='pw'
      url="jdbc:oracle:thin:@localhost:1521:ORCL">
  </connection-factory>
</connection-pool>

Add a JSP, GenerateWebRowSet.jsp, to the WebRowSet project. Select File | New | Web Tier | JSP | JSP and click on OK. Select J2EE 1.3 or J2EE 1.4 in the Web Application window and click on Next. In the JSP File window specify a File Name and click on Next. Select the default settings in the Error Page Options page and click on Next. Select the default settings in the Tag Libraries window and click on Next. Select the default options in the HTML Options window and click on Next. Click on Finish in the Finish window.
Next, configure the web.xml deployment descriptor to include a reference to the data source resource configured in the data-sources.xml file, as shown in the following listing:

<resource-ref>
  <res-ref-name>jdbc/OracleDataSource</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <res-auth>Container</res-auth>
</resource-ref>
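Before moving on, here is a brief preview of where this setup leads. The following sketch is not taken from the article: it creates an OracleWebRowSet directly against the Catalog table using JDBC connection properties (the URL and credentials are assumptions matching the earlier setup), whereas the GenerateWebRowSet.jsp built in this article obtains its connection through the jdbc/OracleDataSource data source. The methods used come from the standard javax.sql.rowset.WebRowSet contract that OracleWebRowSet implements.

import java.io.FileWriter;

import oracle.jdbc.rowset.OracleWebRowSet;

public class CatalogWebRowSet {
    public static void main(String[] args) throws Exception {
        OracleWebRowSet webRowSet = new OracleWebRowSet();

        // Connection details are needed only while fetching and updating;
        // the values below are assumptions matching the earlier setup.
        webRowSet.setUrl("jdbc:oracle:thin:@localhost:1521:ORCL");
        webRowSet.setUsername("OE");
        webRowSet.setPassword("pw");

        webRowSet.setCommand(
            "SELECT Journal, Publisher, Edition, Title, Author FROM OE.Catalog");
        webRowSet.execute();   // fetches the rows; the RowSet is disconnected afterwards

        // XML document representation of the RowSet, usable for data exchange.
        webRowSet.writeXml(new FileWriter("catalog.xml"));
        webRowSet.close();
    }
}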
Consuming the Adapter from outside BizTalk Server

Packt
22 Oct 2009
3 min read
In addition to infrastructure-related updates such as the aforementioned platform modernization, Windows Server 2008 Hyper-V virtualization support, and additional options for failover clustering, BizTalk Server also includes new core functionality. You will find better EDI and AS2 capabilities for B2B situations and a new platform for mobile development of RFID solutions. One of the benefits of the new WCF SQL Server Adapter that I mentioned earlier was the capability to use this adapter outside of a BizTalk Server solution. Let's take a brief look at three options for using this adapter by itself and without BizTalk as a client or service.

Called directly via WCF service reference

If your service resides on a machine where the WCF SQL Server Adapter (and thus the sqlBinding) is installed, then you may actually add a reference directly to the adapter endpoint. I have a command-line application, which serves as my service client. If we right-click this application, and have the WCF LOB Adapter SDK installed, then Add Adapter Service Reference appears as an option. Choosing this option opens our now-beloved wizard for browsing adapter metadata. As before, we add the necessary connection string details, browse the BatchMaster table, and opt for the Select operation. Unlike the version of this wizard that opens for BizTalk Server projects, notice the Advanced options button at the bottom. This button opens a property window that lets us select a variety of options such as asynchronous messaging support and suppression of an accompanying configuration file.

After the wizard is closed, we end up with a new endpoint and binding in our existing configuration file, and a .NET class containing the data and service contracts necessary to consume the service. We can now call this service as if we were calling any typical WCF service. Because the auto-generated namespace for the data type definition is a bit long, I first added an alias to that namespace. Next, I have a routine which builds up the query message, executes the service, and prints a subset of the response.

using DirectReference = schemas.microsoft.com.Sql._2008._05.Types.Tables.dbo;
…

private static void CallReferencedSqlAdapterService()
{
    Console.WriteLine("Calling referenced adapter service");
    TableOp_dbo_BatchMasterClient client = new TableOp_dbo_BatchMasterClient(
        "SqlAdapterBinding_TableOp_dbo_BatchMaster");

    try
    {
        string columnString = "*";
        string queryString = "WHERE BatchID = 1";
        DirectReference.BatchMaster[] batchResult =
            client.Select(columnString, queryString);

        Console.WriteLine("Batch results ...");
        Console.WriteLine("Batch ID: " + batchResult[0].BatchID.ToString());
        Console.WriteLine("Product: " + batchResult[0].ProductName);
        Console.WriteLine("Manufacturing Stage: " + batchResult[0].ManufStage);

        client.Close();
        Console.ReadLine();
    }
    catch (System.ServiceModel.CommunicationException)
    {
        client.Abort();
    }
    catch (System.TimeoutException)
    {
        client.Abort();
    }
    catch (System.Exception)
    {
        client.Abort();
        throw;
    }
}

Once this quick block of code is executed, I can confirm that my database is accessed and my expected result set returned.
Manual, Generic, and Ordered Tests using Visual Studio 2008

Packt
22 Oct 2009
6 min read
The following screenshot describes a simple web application, which has a page for the new user registration. The user has to provide the necessary field details. After entering the details, the user will click on the Register button provided in the web page to submit all the details so that it gets registered to the site. To confirm this to the user, the system will send a notification with a welcoming email to the registered user. The mail is sent to the email address provided by the user. In the application shown in the above screenshot, the entire registration process cannot be automated for testing. For example, the email verification and checking the confirmation email sent by the system will not be automated as the user has to go manually and check the email. This part of the manual testing process will be explained in detail in this article. Manual tests Manual testing, as described earlier, is the simplest type of testing carried out by the testers without any automation tool. This test may contain a single or multiple tests inside. Manual test type is the best choice to be selected when the test is too difficult or complex to automate, or if the budget allotted for the application is not sufficient for automation. Visual Studio 2008 supports two types of manual tests file types. One as text file and the other as Microsoft Word. Manual test using text format This format helps us to create the test in the text format within Visual Studio IDE. The predefined template is available in Visual Studio for authoring this test. This template provides the structure for creating the tests. This format has the extension of .mtx. Visual Studio servers act as an editor for this test format. For creating this test in Visual Studio, either create a new test project and then add the test or select the menu option Test | New Test... and then choose the option to add the test to a new project. Now create the test using the menu option and select Manual Test (Text Format) from the available list as shown in the screenshot below. You can see the list Add to Test Project drop–down, which lists the different options to add the test to a test project. If you have not yet created the test project and selected the option to create the test, the drop-down option selected will create a new test project for the test to be added. If you have a test project already created, then we can also see that project in the list to get this new test added to the project. We can choose any option as per our need. For this sample, let us create a new test project in C#. So the first option from the drop-down of Add to Test Project would be selected in this case. After selecting the option, provide the name for the new test project the system will ask for. Let us name it TestingAppTest project. Now you can see the project getting created under the solution and the test template is also added to the test project as shown next. The template contains the detailed information for each section. This will help the tester or whoever is writing the test case to write the steps required for this test. Now update the test case template created above with the test steps required for checking the email confirmation message after the registration process. The test document also contains the title for the test, description, and the revision history for the changes made to the test case. 
Before executing the test and looking into the details of the run and the properties of the test, we will create the same test using Microsoft Word format as described in the next section. Manual test using Microsoft Word format This is similar to the manual test that was created using text format, except that the file type is Microsoft Word with extension .mht. While creating the manual test choose the template Manual Test (Word format) instead of the Manual Test (Text Format) as explained in the previous section. This option is available only if Microsoft Word is installed in the system. This will launch the Word template using the MS Word installed (version 2003 or later) in the system for writing the test details as shown in the following screenshot. The Word format helps us to have richer formatting capabilities with different fonts, colors, and styles for the text with graphic images and tables embedded for the test. This document not only provides the template but also the help information for each and every section so that the tester can easily understand the sections and write the test cases. This help information is provided in both the Word and Text format of the manual tests. In the test document seen in previous screenshot, we can fill the Test Details, Test Target, Test Steps, and Revision History similar to the one we did for the text format. The completed test case test document will look like this: Save the test details and close the document. Now we have both formats of manual tests in the project. Open the Test View window or the Test List Editor window to see the list of tests we have in the project. It should list two manual tests with their names and the project to which the tests are associated with. The tests shown in the Test View window looks like the one shown here: The same tests list shown by the Test List Editor would look like the one shown below. The additional properties like test list name, the project name the test belongs to, is also shown in the list editor. There are options for each test either to run or get added to any particular list. Manual tests also have other properties, which we can make use of during testing. These properties can be seen in the Properties window, which can be opened by choosing the manual test either in the Test View or in the Test List Editor windows by right-clicking the test and selecting the Properties option. The same window can also be opened by choosing the menu option View | Properties window. Both formats of manual testing have the same set of properties. Some of these properties are editable while some are read-only, which will be set by the application based on the test type. Some properties are directly related to TFS. The VSTFS is the integrated collaboration server, which combines team portal, work item tracking, build management, process guidance, and version control into a unified server.
Functional Testing with JMeter

Packt
22 Oct 2009
5 min read
JMeter is a 100% pure Java desktop application. JMeter is found to be very useful and convenient in support of functional testing. Although JMeter is known more as a performance testing tool, functional testing elements can be integrated within the Test Plan, which was originally designed to support load testing. Many other load-testing tools provide little or none of this feature, restricting themselves to performance-testing purposes. Besides integrating functional-testing elements along with load-testing elements in the Test Plan, you can also create a Test Plan that runs these exclusively. In other words, aside from creating a Load Test Plan, JMeter also allows you to create a Functional Test Plan. This flexibility is certainly resource-efficient for the testing project. In this article by Emily H. Halili, we will give you a walkthrough on how to create a Test Plan as we incorporate and/or configure JMeter elements to support functional testing. Preparing for Functional Testing JMeter does not have a built-in browser, unlike many functional-test tools. It tests on the protocol layer, not the client layer (i.e. JavaScripts, applets, and many more.) and it does not render the page for viewing. Although, by default that embedded resources can be downloaded, rendering these in the Listener | View Results Tree may not yield a 100% browser-like rendering. In fact, it may not be able to render large HTML files at all. This makes it difficult to test the GUI of an application under testing. However, to compensate for these shortcomings, JMeter allows the tester to create assertions based on the tags and text of the page as the HTML file is received by the client. With some knowledge of HTML tags, you can test and verify any elements as you would expect them in the browser. It is unnecessary to select a specific workload time to perform a functional test. In fact, the application you want to test may even reside locally, with your own machine acting as the "localhost" server for your web application. For this article, we will limit ourselves to selected functional aspects of the page that we seek to verify or assert. Using JMeter Components We will create a Test Plan in order to demonstrate how we can configure the Test Plan to include functional testing capabilities. The modified Test Plan will include these scenarios: 1. Create Account —New Visitor creating an Account 2. Login User —User logging in to an Account Following these scenarios, we will simulate various entries and form submission as a request to a page is made, while checking the correct page response to these user entries. We will add assertions to the samples following these scenarios to verify the 'correctness' of a requested page. In this manner, we can see if the pages responded correctly to invalid data. For example, we would like to check that the page responded with the correct warning message when a user enters an invalid password, or whether a request returns the correct page. First of all, we will create a series of test cases following the various user actions in each scenario. The test cases may be designed as follows: CREATE ACCOUNT Test Steps Data Expected 1 Go to Home page. www.packtpub.com Home page loads and renders with no page error 2 Click Your Account link (top right). User action 1. Your Account page loads and renders with no page error.2. Logout link is not found. 3 No Password: - Enter email address in Email text field.- Click the Create Account and Continue button. email=EMAIL 1. 
Your Account page resets with Warning message-Please enter password.2. Logout link not found. 4 Short Password: - Enter email address in Email text field.- Enter password in Password text field.- Enter password in Confirm Password text field. - Click Create Account and Continue button. email=EMAILpassword=SHORT_PWD confirm password=SHORT_PWD 1. Your Account page resets with Warning message-Your password must be 8 characters or longer.2. Logout link is not found. 5 Unconfirmed Password: - Enter email address in Email text field.- Enter password in Password text field.- Enter password in Confirm Password text field. - Click Create Account and Continue button. email=EMAILpassword=VALID_PWDconfirm password=INVALID_PWD 1. Your Account page resets with Warning messagePassword does not match.2. Logout link is not found. 6 Register Valid User: - Enter email address in Email text field.- Enter password in Password text field.- Enter password in Confirm Password text field. - Click Create Account and Continue button. email=EMAILpassword=VALID_PWDconfirm password=VALID_PWD 1. Logout link is found.2. Page redirects to User Account page.3. Message found: You are registered as: e:<EMAIL>. 7 Click Logout link. User action 1. Logout link is NOT found.     LOGIN USER Test Steps Data Expected 1 Click Home page. User action 1. WELCOME tab is active. 2 Log in Wrong Password: - Enter email in Email text field- Enter password at Password text field.- Click Login button. email=EMAILpassword=INVALID_PWD 1. Logout link is NOT found.2. Page refreshes.3. Warning message-Sorry your password was incorrect appears. 3 Log in Non-Exist Account:- Enter email in Email text field.- Enter password in Password text field.- Click Login button. email=INVALID_EMAILpassword=INVALID_PWD 1. Logout link is NOT found.2. Page refreshes.3. Warning message-Sorry, this does not match any existing accounts. Please check your details and try again or open a new account below appears. 4 Log in Valid Account:- Enter email in Email text field.- Enter password in Password text field.- Click Login-button. email=EMAILpassword=VALID_PWD 1. Logout link is found.2. Page reloads.3. Login successful message-You are logged in as: appears. 5 Click Logout link. User action 1. Logout link is NOT found.    
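One convenient way to express the Expected column of these test cases in JMeter is to attach an assertion to the sampler that requests the page. Alongside the built-in Response Assertion, a BeanShell Assertion can hold the check as a small script. The sketch below is illustrative only: prev, Failure, and FailureMessage are the variables JMeter exposes to BeanShell assertions, and the expected text comes from the Short Password test case above.

// Hypothetical BeanShell Assertion script (Add | Assertions | BeanShell Assertion).
// 'prev' is the previous SampleResult; setting 'Failure' and 'FailureMessage'
// marks the sample as failed in the Listener output.
String body = prev.getResponseDataAsString();
String expected = "Your password must be 8 characters or longer";

if (body.indexOf(expected) == -1) {
    Failure = true;
    FailureMessage = "Expected warning not found: " + expected;
} else {
    Failure = false;
}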
LINQ to Objects

Packt
22 Oct 2009
10 min read
Without LINQ, we would have to go through the values one-by-one and then find the required details. However, using LINQ we can directly query collections and filter the required values without using any looping. LINQ provides powerful filtering, ordering, and grouping capabilities that requires minimum coding. For example, if we want to find out the types stored in an assembly and then filter the required details, we can use LINQ to query the assembly details using System.Reflection classes. The System.Reflection namespace contains types that retrieve information about assemblies, modules, members, parameters, and other entities as collections are managed code, by examining their metadata. Also, files under a directory are a collection of objects that can be queried using LINQ. We shall see some of the examples for querying some collections. Array of Integers The following example shows an integer array that contains a set of integers. We can apply the LINQ queries on the array to fetch the required values.     int[] integers = { 1, 6, 2, 27, 10, 33, 12, 8, 14, 5 };       IEnumerable<int> twoDigits =       from numbers in integers       where numbers >= 10       select numbers;       Console.WriteLine("Integers > 10:");       foreach (var number in twoDigits)       {          Console.WriteLine(number);       } The integers variable contains an array of integers with different values. The variable twoDigits, which is of type IEnumerable, holds the query. To get the actual result, the query has to be executed. The actual query execution happens when the query variable is iterated through the foreach loop by calling GetEnumerator() to enumerate the result. Any variable of type IEnumerable<T>, can be enumerated using the foreach construct. Types that support IEnumerable<T> or a derived interface such as the generic IQueryable<T>, are called queryable types. All collections such as list, dictionary and other classes are queryable. There are some non-generic IEnumerable collections like ArrayList that can also be queried using LINQ. For that, we have to explicitly declare the type of the range variable to the specific type of the objects in the collection, as it is explained in the examples later in this article. The twoDigits variable will hold the query to fetch the values that are greater than or equal to 10. This is used for fetching the numbers one-by-one from the array. The foreach loop will execute the query and then loop through the values retrieved from the integer array, and write it to the console. This is an easy way of getting the required values from the collection. If we want only the first four values from a collection, we can apply the Take() query operator on the collection object. Following is an example which takes the  first four integers from the collection. The four integers in the resultant collection are displayed using the foreach method.    IEnumerable<int> firstFourNumbers = integers.Take(4);   Console.WriteLine("First 4 numbers:");   foreach (var num in firstFourNumbers)   {      Console.WriteLine(num);   } The opposite of Take() operator is Skip() operator, which is used to skip the number of items in the collection and retrieve the rest. The following example skips the first four items in the list and retrieves the remaining.    
IEnumerable<int> skipFirstFourNumbers = integers.Skip(4);   Console.WriteLine("Skip first 4 numbers:");   foreach (var num in skipFirstFourNumbers)   {      Console.WriteLine(num);   } This example shows the way to take or skip the specified number of items from the collection. So what if we want to skip or take the items until we find a match in the list? We have operators to get this. They are TakeWhile() and SkipWhile(). For example, the following code shows how to get the list of numbers from the integers collection until 50 is found. TakeWhile() uses an expression to include the elements in the collection as long as the condition is true and it ignores the other elements in the list. This expression represents the condition to test the elements in the collection for the match.    int[] integers = { 1, 9, 5, 3, 7, 2, 11, 23, 50, 41, 6, 8 };   IEnmerable<int> takeWhileNumber = integers.TakeWhile(num =>      num.CompareTo(50) != 0);   Console.WriteLine("Take while number equals 50");   foreach (int num in takeWhileNumber)      {         Console.WriteLine(num.ToString());      } Similarly, we can skip the items in the collection using SkipWhile(). It uses an expression to bypass the elements in the collection as long as the condition is true. This expression is used to evaluate the condition for each element in the list. The output of the expression is boolean. If the expression returns false, the remaining elements in the collections are returned and the expression will not be executed for the other elements. The first occurrence of the return value as false will stop the expression for the other elements and returns the remaining elements. These operators will provide better results if used against ordered lists as the expression is ignored for the other elements once the first match is found.    IEnumerable<int> skipWhileNumber = integers.SkipWhile(num =>      num.CompareTo(50) != 0);   Console.WriteLine("Skip while number equals 50");   foreach (int num in skipWhileNumber)   {      Console.WriteLine(num.ToString());   } Collection of Objects In this section we will see how we can query a custom built objects collection. Let us take the Icecream object, and build the collection, then we can query the collection. This Icecream class in the following code contains different properties such as Name, Ingredients, TotalFat, and Cholesterol.     public class Icecream    {        public string Name { get; set; }        public string Ingredients { get; set; }        public string TotalFat { get; set; }        public string Cholesterol { get; set; }        public string TotalCarbohydrates { get; set; }        public string Protein { get; set; }        public double Price { get; set; }     } Now build the Icecreams list collection using the class defined perviously.     
List<Icecream> icecreamsList = new List<Icecream>        {            new Icecream {Name="Chocolate Fudge Icecream", Ingredients="cream,                milk, mono and diglycerides...", Cholesterol="50mg",                Protein="4g", TotalCarbohydrates="35g", TotalFat="20g",                Price=10.5        },        new Icecream {Name="Vanilla Icecream", Ingredients="vanilla extract,            guar gum, cream...", Cholesterol="65mg", Protein="4g",            TotalCarbohydrates="26g", TotalFat="16g", Price=9.80 },            new Icecream {Name="Banana Split Icecream", Ingredients="Banana, guar            gum, cream...", Cholesterol="58mg", Protein="6g",            TotalCarbohydrates="24g", TotalFat="13g", Price=7.5 }        }; We have icecreamsList collection which contains three objects with values of the Icecream type. Now let us say we have to retrieve all the ice-creams that cost less. We can use a looping method, where we have to look at the price value of each object in the list one-by-one and then retrieve the objects that have less value for the Price property. Using LINQ, we can avoid looping through all the objects and its properties to find the required ones. We can use LINQ queries to find this out easily. Following is a query that fetches the ice-creams with low prices from the collection. The query uses the where condition, to do this. This is similar to relational database queries. The query gets executed when the variable of type IEnumerable is enumerated when referred to in the foreach loop.     List<Icecream> Icecreams = CreateIcecreamsList();    IEnumerable<Icecream> IcecreamsWithLessPrice =    from ice in Icecreams    where ice.Price < 10    select ice;    Console.WriteLine("Ice Creams with price less than 10:");    foreach (Icecream ice in IcecreamsWithLessPrice)    {        Console.WriteLine("{0} is {1}", ice.Name, ice.Price);     } As we used List<Icecream> objects, we can also use ArrayList to hold the objects, and a LINQ query can be used to retrieve the specific objects from the collection according to our need. For example, following is the code to add the same Icecreams objects to the ArrayList, as we did in the previous example.     ArrayList arrListIcecreams = new ArrayList();    arrListIcecreams.Add( new Icecream {Name="Chocolate Fudge Icecream",        Ingredients="cream, milk, mono and diglycerides...",        Cholesterol="50mg", Protein="4g", TotalCarbohydrates="35g",        TotalFat="20g", Price=10.5 });    arrListIcecreams.Add( new Icecream {Name="Vanilla Icecream",        Ingredients="vanilla extract, guar gum, cream...",        Cholesterol="65mg", Protein="4g", TotalCarbohydrates="26g",        TotalFat="16g", Price=9.80 });    arrListIcecreams.Add( new Icecream {Name="Banana Split Icecream",        Ingredients="Banana, guar gum, cream...", Cholesterol="58mg",        Protein="6g", TotalCarbohydrates="24g", TotalFat="13g", Price=7.5    }); Following is the query to fetch low priced ice-creams from the list.     var queryIcecreanList = from Icecream icecream in arrListIcecreams    where icecream.Price < 10    select icecream; Use the foreach loop, shown as follows, to display the price of the objects retrieved using the above query.     foreach (Icecream ice in queryIcecreanList)    Console.WriteLine("Icecream Price : " + ice.Price);
Aggregate Services in ServiceMix JBI ESB

Packt
22 Oct 2009
10 min read
EAI - The Broader Perspective No one should have (or will) ever dared to build a 'Single System' which will take care of the entire business requirements of an enterprise. Instead, we build few (or many) systems,and each of them takes care of a set of functionalities in a single Line of Business (LOB). There is absolutely nothing wrong here, but the need of the hour is that these systems have to exchange information and interoperate in many new ways which have not been foreseen earlier. Business grows, enterprise boundaries expands and mergers and acquisition are all norms of the day. If IT cannot scale up with these volatile environments, the failure is not far. Let me take a single, but not simple problem that today's Businesses and IT face - Duplicate Data. By Duplicate Data we mean data related to a single entity stored in multiple systems and storage mechanisms, that too in multiple formats and multiple content. I will take the 'Customer' entity as an example so that I can borrow the 'Single Customer View' (SCV) jargon to explain the problem. We gather customer information while he makes a web order entry or when he raises a complaint against the product or service purchased or when we raise a marketing campaign for a new product to be introduced or ... The list continues, and in each of these scenarios we make use of different systems to collect and store the same customer information. 'Same Customer' - is it same? Who can answer this question? Is there a Data Steward who can provide you with the SCV from amongst the many information silos existing in your Organization? To rephrase the question, does your organization at least have a 'Single View of Truth', if it doesn't have a 'Single Source of Truth'? Information locked away inside disparate, monolithic application silos has proven a stubborn obstacle in answering the queries business requires, impeding the opportunities of selling, not to mention cross-selling and up-selling. Yeah, it's time to cleanse and distill each customer's data into a single best-record view that can be used to improve source system data quality. For that, first we need to integrate the many source systems available. Today, companies are even acquiring just to get access to it's invaluable Customer information! This is just one of the highlights of the importance of integration to control Information Entropy in the otherwise complicated IT landscape. Figure 1. The 'Single Customer View' Dilemma So Integration is not an end, but a means to end a full list of problems faced by enterprises today. We have been doing integration for many years. There exist many platforms, technologies and frameworks doing the same thing. Built around that, we have multiple Integration Architectures too, amongst which, the Point to Pont, Hub and Spoke, and the Message Bus are common. Figure 2 represents these integration topologies. Figure 2. EAI Topologies Let us now look at the salient features of these topologies to see if we are self-sufficient or need something more. Point to Point In Point to Point, we define integration solutions for a pair of applications. Thus, we have two end points to be integrated. We can build protocol and/or format adaptors/transformers at one or either end. This is the easiest way to integrate, as long as the volume of integration is low. We normally use technology specific APIs like FTP, IIOP, Remoting or batch interfaces to realize integration. 
The advantage is that between these two points, we have tight coupling, since both ends have knowledge about their peers. The downside is that if there are 6 nodes (systems) to be interconnected, we need at least 30 separate channels for both forward and reverse transport. So think of a mid-sized Enterprise with some 1000 systems to integrate! Hub & Spoke Hub And Spoke Architecture is also called as the Message Broker. It provides a centralized hub (Broker) to which all applications are connected. Each application connects with the central hub through lightweight connectors. The lightweight connectors facilitate application integration with minimum or no changes to the existing applications. Message Transformation and Routing takes place within the Hub. The major drawback of the Hub and Spoke Architecture is that if the Hub fails, the entire Integration topology fails. Enterprise Message Bus An Enterprise Message Bus provides a common communication infrastructure which acts as a platform-neutral and language-neutral adaptor between applications. This communication infrastructure may include a Message Router and/or Publish-Subscribe channels. So applications interact each other through the message bus with the help of Request-Response queues. Sometimes the applications have to use adapters that handle scenarios like invoking CICS transactions. Such adapters may provide connectivity between the applications and the message bus using proprietary bus APIs and application APIs. Service Oriented Integration (SOI) Service Oriented Architecture (SOA) provides us with a set of principles, patterns and practices, to provide and consume services which are orchestrated using open standards so as to remove single vendor lock-into provide an agile infrastructure where services range from business definition to technical implementation. In SOA, we no longer deal with single format and single protocol, instead we accept the fact that heterogeneity exists between applications. And our architecture still needs to ensure interoperability and thus information exchange. To help us do integration in the SOA manner, we require a pluggable service infrastructure where providers, consumers, and middleware services can collaborate in the famous 'Publish -- Find -- Bind' triangle. So, similar to the integration topologies described above, we need a backbone upon which we can build SOA that can provide a collection of middleware services that provides integration capabilities. This is what we mean by Service Oriented Integration (SOI). Gartner originally identified Enterprise Service Bus (ESB) Architecture as a core component in the SOA landscape. ESB provides a technical framework to align your SOA based integration needs. In the rest of the article we will concentrate on ESB. Enterprise Service Bus (ESB) Roy Schutle from Gartner defines an ESB as:"A Web-services-capable middleware infrastructure that supports intelligent program-to-program communication and mediates the relationships among loosely-coupled and uncoupled business components." In the ESB Architecture (Refer Figure 2), applications communicate through an SOA middleware backbone. The most distinguishing feature of the ESB Architecture is the distributed nature of the integration topology. This makes the ESB capabilities to spread out across the bus in a distributed fashion, thus avoiding any single point of failure. Scalability is achieved by distributing the capabilities into separately deployable service containers. 
Smart, intelligent connectors connect the applications to the Bus. Technical services like transformation, routing, security, etc. are provided internally by these connectors. The Bus federates services which are hosted locally or remotely, thus collaborating distributed capabilities. Many ESB solutions are based on Web Services Description Language (WSDL) technologies, and they use Extensible Markup Language (XML) formats for message translation and transformation. The best way to think about an ESB is to imagine the many features which we can provide to the message exchange at a mediation layer (the ESB layer), a few among them is listed below: Addressing & Routing  Synchronous and Asynchronous style invocations  Multiple Transport and protocol bindings  Content transformation and translation  Business Process Orchestration (BPM)  Event processing  Adapters to multiple platforms  etc... Service Aggregation in ESB ESB provides you the best ways of integrating services so that services are not only interoperable but also reusable in the form of aggregating in multiple ways and scenarios. This means, services can be mixed and matched to adapt to multiple protocols and consumer requirements. Let me explain you this concept, as we will explore more into this with the help of sample code too. In code and component reuse, we try to reduce ‘copy and paste’ reuse and encourage inheritance, composition and instance pooling. Similar analogy exists in SOI where services are hosted and pooled for multiple clients through multiple transport channels, and ESB can do this in the best way integration world has ever seen. We call this as the notion of shared services. For example, if a financial organization provides a ‘credit history check service’, an ESB can facilitate reuse of this service by multiple business processes (like a Personal Loan approval process or a Home Mortgage approval process). So, once we create our 'core services', we can then arbitrarily compose these services in a declarative fashion so as to define and publish more and more composite services. Business Process Management (BPM) tools can be integrated over ESB to leverage service aggregation and service collaboration. This facilitates reuse of basic or core (or fine grained) services at Business Process level. So, granularity of services is important which will also decide the level of reusability. Coarse grained or composite services consume fine grained services. Applications that consume  coarse-grained services are not exposed to the fine-grained services they use. Composite services can be assembled from coarse-grained as well as fine-grained services. To make the concept clear, let us take the example of provisioning a new VOIP (Voice Over IP) Service for a new Customer. This is a composite service which in turn calls multiple coarse grained services like 'validateOrder', 'createOrVerifyCustomer', 'checkProductAvailability', etc. Now, the createOrVerifyCustomer coarse grained service in turn call multiple fine grained services like 'validateCustomer', 'createCustomer', 'createBillingAddress', 'createMailingAddress', etc. Figure 3. Service Composition Java Business Integration (JBI) Java Business Integration (JBI) provides a collaboration framework which provides standard interfaces for integration components and protocols to plug into, thus allowing the assembly of Service Oriented Integration (SOI) frameworks. 
JSR 208 is an extension of Java 2 Enterprise Edition (J2EE), but it is specific for Java Business Integration Service Provider Interfaces (SPI). SOA and SOI are the targets of JBI and hence it is built around Web Services Description Language (WSDL). The nerve of the JBI architecture is the NMR (Normalized Message Router). This is a bus through which messages flow in either directions from a source to a destination. You can listen to Ron Ten-Hove, the Co-spec lead for JSR 208 here and he writes more about JBI components in the PDF download titled JBI Components: Part 1. JBI provides the best available, open foundation for structuring applications by composition of services rather than modularized, structured code that we have been doing in traditional programming paradigms. A JBI compliant ESB implementation must support four different service invocations, leading to four corresponding Message Exchange Patterns (MEP):   One-Way (In-Only MEP): Service Consumer issues a request to Service Provider. No error (fault) path is provided.  Reliable One-Way (Robust In-Only MEP): Service Consumer issues a request to Service Provider. Provider may respond with a fault if it fails to process the request.  Request-Response (In-Out MEP): Service Consumer issues a request to Service Provider, with expectation of response. Provider may respond with a fault if it fails to process request.  Request Optional-Response (In Optional-Out MEP): Service Consumer issues a request to Service Provider, which may result in a response. Both Consumer and provider have the option of generating a fault in response to a message received during the interaction.
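To make the MEPs less abstract, the following sketch shows an In-Out exchange sent through the NMR with the ServiceMix client API. It is illustrative only: the class names come from the org.apache.servicemix.client and org.apache.servicemix.jbi packages of ServiceMix 3.x, while the service QName and XML payload are assumptions standing in for the credit history check service mentioned earlier.

import javax.jbi.messaging.InOut;
import javax.xml.namespace.QName;
import javax.xml.transform.Source;

import org.apache.servicemix.client.DefaultServiceMixClient;
import org.apache.servicemix.client.ServiceMixClient;
import org.apache.servicemix.jbi.container.JBIContainer;
import org.apache.servicemix.jbi.jaxp.StringSource;

public class CreditCheckClient {
    public static void main(String[] args) throws Exception {
        // Embedded JBI container; in a real deployment the client would
        // talk to an already running ServiceMix instance with the
        // credit check service deployed.
        JBIContainer container = new JBIContainer();
        container.setEmbedded(true);
        container.init();
        container.start();

        ServiceMixClient client = new DefaultServiceMixClient(container);

        // In-Out MEP: send a request and block until the provider answers.
        InOut exchange = client.createInOutExchange();
        exchange.setService(
            new QName("http://example.com/finance", "creditHistoryCheckService"));
        exchange.getInMessage().setContent(
            new StringSource("<checkCredit><customerId>42</customerId></checkCredit>"));

        client.sendSync(exchange);

        // A real client would transform the Source into a DOM or string here.
        Source response = exchange.getOutMessage().getContent();
        System.out.println("Received response: " + response);

        client.done(exchange);   // complete the exchange
        container.shutDown();
    }
}

The client exposes corresponding factory methods, such as createInOnlyExchange(), for the other patterns in the list above.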
Setting up and Configuring a Liferay Portal

Packt
22 Oct 2009
5 min read
Setting up Liferay Portal

As an administrator at the enterprise, you need to undertake a lot of administration tasks, such as installing Liferay Portal, installing and setting up databases, and so on. You can install Liferay Portal in different ways, based on your specific needs. Normally, there are three main installation options:

Using an open source bundle—This is the easiest and fastest method to install Liferay Portal. Using a Java SE runtime environment with an embedded database, you simply unzip and run the bundle.
Detailed installation procedure—You can install the portal in an existing application server. This option is available for all the supported application servers.
Using the extension environment—You can use a full development environment to extend the functionality.

We will take up the third installation option, "Using the extension environment", in the coming section.

Using Liferay Portal Bundled with Tomcat 5.5 in Windows

First, let's consider a scenario where you, as an administrator, need to install Liferay Portal in Windows with a MySQL database, and your local Java version is Java SE 5.0. Install the Liferay Portal bundle with Tomcat 5.5 in Windows as follows:

Download Liferay Portal bundled with Tomcat for JDK 5.0 from the official Liferay web site and unzip the bundled file.
Create a database and account in MySQL:

create database liferay;
grant all on liferay.* to 'liferay'@'localhost' identified by 'liferay' with grant option;
grant all on liferay.* to 'liferay'@'localhost.localdomain' identified by 'liferay' with grant option;

Copy the MySQL JDBC driver mysql.jar to $TOMCAT_DIR/lib/ext.
Comment the Hypersonic data source (HSQL) configuration and uncomment the MySQL configuration in $TOMCAT_DIR/conf/Catalina/localhost/ROOT.xml:

<!-- Hypersonic -->
<!--
<Resource name="jdbc/LiferayPool" auth="Container"
    type="javax.sql.DataSource"
    driverClassName="org.hsqldb.jdbcDriver"
    url="jdbc:hsqldb:lportal"
    username="sa"
    password=""
    maxActive="20" />
-->
<!-- MySQL -->
<Resource name="jdbc/LiferayPool" auth="Container"
    type="javax.sql.DataSource"
    driverClassName="com.mysql.jdbc.Driver"
    url="jdbc:mysql://localhost/liferay?useUnicode=true&amp;characterEncoding=UTF-8"
    username="liferay"
    password="liferay"
    maxActive="20" />

Run $TOMCAT_DIR/bin/startup.bat.
Open your browser and go to http://localhost:8080 (here we assume that it is a local installation; otherwise use the real host name or IP).
Log in as an administrator—User: test@liferay.com and Password: test.

Note that the bundle comes with an embedded HSQL database loaded with sample data from the public website of Liferay. Do not use Hypersonic in production.

Using Liferay Portal Bundled with Tomcat 6.x in Linux

Now let's consider another scenario where you, as an administrator, need to install Liferay Portal in Linux with a MySQL database, and your local Java version is Java 6.0. Install the Liferay Portal bundle with Tomcat 6.0 in Linux as follows:

Download Liferay Portal bundled with Tomcat 6.0 from the official Liferay web site and unzip the bundled file.
Create a database and account in MySQL (as stated before).
Run $TOMCAT_DIR/bin/startup.sh.
Open your browser and go to http://localhost:8080 (assuming a local installation; otherwise use the real host name or IP).
Log in as an administrator—User: test@liferay.com and Password: test.

Note that Liferay Portal creates the tables it needs, along with example data, the first time it starts.
Using More Options for Liferay Portal Installation

You can use one of the following Servlet container and full Java EE application server bundles to install Liferay Portal:

Geronimo + Tomcat
Glassfish for AIX
Glassfish for Linux
Glassfish for OSX
Glassfish for Solaris
Glassfish for Solaris (x86)
Glassfish for Windows
JBoss + Jetty 4.0
JBoss + Tomcat 4.0
JBoss + Tomcat 4.2
Jetty
JOnAS + Jetty
JOnAS + Tomcat
Pramati
Resin
Tomcat 5.5 for JDK 1.4
Tomcat 5.5 for JDK 5.0
Tomcat 6.0

You can choose the preferred bundle according to your requirements and download it directly from the official download page: simply go to http://www.liferay.com and click on Downloads.

Flexible Deployment Matrix

As an administrator, you can install Liferay Portal on all major application servers, databases, and operating systems. There are over 700 ways to deploy Liferay Portal. Thus, you can reuse your existing resources, stick to your budget, and get an immediate return on your investment that everyone can be happy with. In general, you can install Liferay portal on Linux, UNIX, and Windows with any one of the following application servers (or Servlet containers), and by selecting any one of the following database systems.

The application servers (or Servlet containers) that Liferay Portal can run on include:

Borland ES 6.5
Apache Geronimo 2.x
Sun GlassFish 2 UR1
JBoss 4.0.x, 4.2.x
JOnAS 4.8.x
JRun 4 Updater 3
OracleAS 10.1.3.x
Orion 2.0.7
Pramati 5.0
RexIP 2.5
SUN JSAS 9.1
WebLogic 8.1 SP4, 9.2, 10
WebSphere 5.1, 6.0.x, 6.1.x
Jetty 5.1.10
Resin 3.0.19
Tomcat 5.0.x/5.5.x/6.0.x

Databases that Liferay portal can run on include:

Apache Derby
IBM DB2
Firebird
Hypersonic
Informix
InterBase
JDataStore
MySQL
Oracle
PostgreSQL
SAP
SQL Server
Sybase

Operating systems that Liferay portal can run on include:

Linux (Debian, RedHat, SUSE, Ubuntu, and so on)
UNIX (AIX, FreeBSD, HP-UX, OS X, Solaris, and so on)
Windows
Mac OS X

Pop-up Image Widget using JavaScript, PHP and CSS

Packt
22 Oct 2009
7 min read
If you're a regular blog reader, then it's likely that you've encountered the Recent Visitors widget from MyBlogLog (http://mybloglog.com). This widget displays profile information, such as the name, picture, and sites authored, for members of MyBlogLog who have recently visited your blog. In the MyBlogLog widget, when you move the mouse cursor over a member's picture, you'll see a pop-up displaying a brief description of that member.

A glance at the MyBlogLog widget

The above image is of a MyBlogLog widget. As you can see, in the right part of the widget there is a list of the recent visitors to the blog who are members of MyBlogLog. You may also have noticed that in the left part of the widget there is a pop-up showing the details and an image of the visitor. This pop-up is displayed when the mouse is moved over the image in the widget.

Now, let's look at the code which we got from MyBlogLog to display the above widget.

<script src="http://pub.mybloglog.com/comm3.php?mblID=2007121300465126&r=widget&is=small&o=l&ro=5&cs=black&ww=220&wc=multiple"></script>

In the above script element, the language and type attributes are not specified. Although they are optional attributes in HTML, you must specify a value for the type attribute to make the above syntax valid in an XHTML web page.

If you look closely at the src attribute of the script element, you can see that the source page of the script is a .php file. You can serve JavaScript code from a file with any extension, such as .php, .asp, and so on, but whenever you use such a file in the src attribute, please note that the final output of the file (after being parsed by the server) should be valid JavaScript code.

Creating the pop-up image widget

This pop-up image widget is somewhat similar to the MyBlogLog widget, but it is a simplified version of it. It is a very simple widget which uses JavaScript, PHP, and CSS. Here you'll see four images in the widget, and a pop-up image (corresponding to the chosen image) will be displayed when you move the mouse over it. After getting the core concept, you can extend the functionality to make this look fancier.

Writing Code for the Pop-up Image Widget

As I've already discussed, this widget is going to contain PHP code, JavaScript, and a little bit of CSS as well. For this, you need to write the code in a PHP file with the .php extension.

First of all, declare the variables for storing the current mouse position and a string variable for storing the markup of the widget:

var widget_posx=0;
var widget_posy=0;
var widget_html_css='';

The widget_posx variable holds the x co-ordinate of the mouse position on the screen, whereas the widget_posy variable stores the y co-ordinate. The widget_html_css variable stores the HTML and CSS elements which will be used later in the code.

The (0,0) co-ordinate of an output device such as a monitor is located at the top left. So the mouse position 10,10 will be somewhere near the top left corner of the monitor.

After declaring the variables, let's define an event handler to track the mouse position on the web page:

document.onmousemove=captureMouse;

As you can see above, we've registered the function captureMouse() as the event handler. When the mouse is moved anywhere on the document (web page), captureMouse() is called on the onmousemove event.

The Document object represents the entire HTML document and can be used to access and capture the events of all elements on a page. Each time a user moves the mouse one pixel, a mousemove event occurs.
Processing all mousemove events engages system resources, so use this event carefully!

Now, let's look at the code of the captureMouse() function:

function captureMouse(event)
{
  if (!event) { var event = window.event; }
  if (event.pageX || event.pageY)
  {
    widget_posx = event.pageX;
    widget_posy = event.pageY;
  }
  else if (event.clientX || event.clientY)
  {
    widget_posx = event.clientX;
    widget_posy = event.clientY;
  }
}

As you can see in the above function, the event variable is passed as a function parameter. This event variable is JavaScript's Event object. The Event object keeps track of various events that occur on the page, such as the user moving the mouse or clicking on a link, and allows you to react to them by writing code which is relevant to the event.

if (!event) { var event = window.event; }

The first line of the event handler, shown above, ensures that if the browser doesn't pass the event information to the function, we obtain it from the event property of the window object.

We can track different activity in the document through the event object with the help of its various defined properties. For example, if eventObj is the event object and we have to track whether the Ctrl key is pressed (or not), we can use the following code in JavaScript:

eventObj.ctrlKey

We assign the x and y position of the mouse in the page using the pageX and pageY properties; the same mouse cursor position can also be obtained using the clientX and clientY properties. Most browsers provide both pageX/pageY and clientX/clientY. Internet Explorer is the only current browser that provides clientX/clientY, but not pageX/pageY. To provide cross-browser support, we've used both pageX/pageY and clientX/clientY to get the mouse co-ordinates in the document, and assigned them to the widget_posx and widget_posy variables accordingly.

Now, let's look at the widget_html_css variable, where we're going to store the string which is going to be displayed in the widget:

widget_html_css+='<style type="text/css">';
widget_html_css+='.widgetImageCss';
widget_html_css+='{ margin:2px;border:1px solid #CCCCCC;cursor:pointer}';
widget_html_css+='</style>';

As you can see in the string stored in the above variable, we've added a style for HTML elements with the class name widgetImageCss within the style element. When applied, this class adds a 2-pixel margin and a 1-pixel solid #CCCCCC border to the element. Furthermore, the mouse cursor will be converted into a pointer (a hand), which is defined with the cursor property in CSS.

widget_html_css+='<div id="widget_popup" style="position:absolute;z-index:10; display:none">&nbsp;</div>';

Using the above code, we're adding a division element with the id widget_popup to the DOM. We've also added style to this element using inline styling. The position property of this element is set to absolute so that the element can move freely without disturbing the layout of the document. The z-index property sets the stacking order of the element; in the above element it is set to 10 so that this element will be displayed above all the other elements of the document. Finally, the display property is set to none to hide the element at first. Later, this element will be displayed with the pop-up image using JavaScript.

Elements can have negative stack orders, that is, you can set the z-index to -1 for an element. This will display it underneath the other elements on the page.
Z-index only works on elements that have been positioned using CSS (such as position:absolute).

Now the PHP part of the code comes in. We've used PHP to add the images to the widget_html_css string variable in JavaScript. We've used PHP in this part, rather than JavaScript, to keep the application flexible. JavaScript is a client-side scripting language and can't access the database or do any kind of server-side work. Using PHP, you can extract and display images from the database, which might be an integral part of your desired widget.

Visual Studio 2008 Test Types

Packt
22 Oct 2009
15 min read
Software testing in Visual Studio Team System 2008

Before going into the details of the actual testing using Visual Studio 2008, we need to understand the different tools provided by Visual Studio Team System (VSTS) and their usage. Once we understand how to use these tools, we will be able to perform the different types of testing using VSTS. As we create more and more tests, it becomes difficult to manage them, just as it is difficult to manage code and its different versions during application development. Features such as the Test List Editor, the Test View, and the Team Foundation Server (TFS) are available for managing and maintaining all the tests created using VSTS. Using the Test List Editor, we can group similar tests, create a number of lists, and add or delete tests from a list.

The other aspect of this article is the different file types that get created in Visual Studio during testing. Most of these files are in XML format and are created automatically whenever the corresponding test is created.

Tools such as the Team Explorer, Code Coverage, Test View, and Test Results windows are not new to Visual Studio 2008; they have been available since Visual Studio 2005. While we go through the windows and their purposes, we can check the IDE and the tools' integration into Visual Studio 2008.

Testing as part of the Software Development Life Cycle

The main objective of testing is to find defects early in the SDLC. If a defect is found early, the cost of fixing it is low; if it is found during the production or implementation stage, the cost is much higher. Moreover, testing is carried out to assure the quality and reliability of the software. In order to find defects early, the testing activities should start early, that is, in the Requirements phase of the SDLC, and continue till the end of the SDLC.

Various testing activities take place in the Coding phase. Based on the design, the developers start coding the modules. Static and dynamic testing is carried out by the developers, and code reviews and code walkthroughs are also conducted. Once the coding is completed, the Validation phase begins, where different phases or forms of testing are performed:

Unit Testing: This is the first stage of testing in the SDLC. It is performed by the developer to check whether the developed code meets the stated requirements. If there are any defects, the developer logs them against the code and fixes the code. The code is retested and then moved to the testers after confirming that it is free of defects for that piece of functionality. This phase identifies a lot of defects and also reduces the cost and time involved in testing the application and fixing the code.

Integration Testing: This testing is carried out between two or more modules or functions together with the intent of finding interface defects between them. It is completed as a part of unit or functional testing, and sometimes becomes its own standalone test phase. On a larger level, integration testing can involve putting together groups of modules and functions with the goal of completing and verifying that the system meets the system requirements. Defects found are logged and later fixed by the developers. There are different ways of performing integration testing, such as top-down testing and bottom-up testing:

The Top-Down approach tests the highest-level components first and integrates them to verify the high-level logic and flow. The low-level components are tested later.
The Bottom-Up approach is the exact opposite of the top-down approach. In this case, the low-level functionalities are tested and integrated first, and then the high-level functionalities are tested. The disadvantage of this approach is that the high-level, or most complex, functionalities are tested later.
The Umbrella approach uses both the top-down and bottom-up patterns. The inputs for functions are integrated using the bottom-up approach, and the outputs of the functions are then integrated using the top-down approach.

System Testing: This compares the system specifications against the actual system. The system test design is derived from the system design documents and is used in this phase. Sometimes system testing is automated using testing tools. Once all the modules are integrated, several errors may arise; testing done at this stage is called system testing. Defects found in this testing are logged and fixed by the developers.

Regression Testing: This is not a separate phase of testing, but is carried out once the defects are fixed by the developers. The main objective of this type of testing is to determine whether the bug fixes have been successful and have not created any new problems. It is also done to ensure that no degradation of baseline functionality has occurred and to check whether any new functionality was introduced in the software.

Types of testing

Visual Studio provides a range of testing types and tools for software applications, including the following:

Unit test
Manual test
Web test
Load test
Stress test
Performance test
Capacity Planning test
Generic test
Ordered test

In addition to these types, there are tools provided to manage, order, and execute the tests created in Visual Studio, such as the Test View, the Test List Editor, and the Test Results window. We will look at these testing tools and the supporting tools for managing tests in Visual Studio 2008 in detail later.

Unit test

As soon as developers finish the code, they want to know whether it produces the expected result before getting into any more detailed testing or handing the component over to the testers. The type of testing performed by developers to test their own code is called unit testing. Visual Studio has great support for unit testing.

The main goal of unit testing is to isolate each piece of code or individual functionality and test whether a method returns the expected result for different sets of parameter values. It is extremely important to run unit tests to catch defects at an early stage.

The methods generated by the automated unit testing tool call the methods in the classes from the source code and test the output of each method by comparing it with the expected values. The unit test tool produces a separate set of test code for the source. Using the test code, we can pass parameter values to a method, test the value returned by the method, and compare it with the expected result.

Unit testing code can easily be created by using the code generation feature, which creates the testing source code for the source application code. The generated unit testing code contains several attributes that identify the Test Class, Test Method, and Test Project. These attributes are assigned when the unit test code is generated from the original source code. Using this code, the developer then changes the values and assert methods to compare the expected result from these methods.
The Unit test class is similar to any other class in a project. The good thing here is that we can create new test classes by inheriting from a base test class. The base test class contains the common or reusable testing methods. This is the new unit testing feature that helps us reduce code and reuse existing test classes.

Whenever a code change occurs, it is easy to figure out the fault with the help of unit tests: rerun the tests and check whether the code gives the intended output. This verifies the code change the developer has made and confirms that it does not affect other parts of the application. All the methods and classes generated for automated unit testing are inherited from the namespace Microsoft.VisualStudio.TestTools.UnitTesting.

Manual test

Manual testing is the oldest and simplest type of testing, yet it is still crucial for software testing. It requires a tester to run all the tests without any automation tool. It helps us to validate whether the application meets various standards defined for effective and efficient accessibility and usage. Manual testing comes into play in the following scenarios:

There is not enough budget for automation.
The tests are complicated, or too difficult to convert into automated tests.
The tests are going to be executed only once.
There is not enough time to automate the tests.
Automated tests would be time-consuming to create and run.

Manual tests can be created in either Word document or text format in Visual Studio 2008. A manual test describes the test steps that should be performed by the tester, and each step should also mention the expected result of that step.

Web tests

Web tests are used for testing the functionality of web pages, web applications, web sites, web services, or a combination of these. Web tests can be created by recording the interactions performed in the browser; these recordings can be played back to test the web application. Web tests are normally a series of HTTP requests (GET/POST). They can be used for testing application performance as well as for stress testing. During HTTP requests, the web test takes care of testing web page redirects, validations, viewstate information, authentication, and JavaScript execution.

There are different validation rules and extraction rules used in web testing. The validation rules are used for validating form field names, texts, and tags in the requested web page. We can validate the results or values against the expected results as per business needs. These validation rules are also used for checking the time taken by the HTTP request.

At some point, we may need to extract the data returned by the web pages, either for future use or for testing purposes. In this case, we have to use extraction rules to extract the data returned by the requested page. Using this process, we can extract form fields, texts, or values in the web page and store them in the web test context or collection.

Web tests cannot be performed with the web page alone. We need some data to be populated from the database or some other source to test the web page's functionality and performance. Web tests provide a data binding mechanism for supplying the data required by the requested page; we can bind the data from a database or any other data source.
For example, the web page might be a reporting page that requires some query string parameters, as well as the data to be shown in the page according to the parameters passed. To provide data for this kind of data-driven testing, we have to use data binding with the data source.

Web tests can be classified into simple web tests and coded web tests; both are supported by VSTS. Simple web tests are very easy to create and execute. Once started, the test executes on its own, as per the recording, without any intervention. The disadvantage is that it is not conditional: it is a fixed series of events. Coded web tests are a bit more complex, but provide a lot of flexibility. For example, if we need conditional execution of tests based on some values, we have to depend on coded web tests. These tests are created using either C# or Visual Basic code, and using the generated code we can control the flow of test events. The disadvantage is their higher complexity and maintenance cost.

Load test

Load testing is a method used along with other types of testing; that is, it can be performed together with either web testing or unit testing. The important thing about load testing is that it is all about performance. The main purpose of load testing is to identify the performance of the application in different scenarios.

Much of the time, we can predict the performance of an application we develop if it runs on a single machine or desktop. But in the case of web applications such as online ordering systems, we may know the estimated maximum number of users, but not the connection speeds or the locations from which they will access the web site. For such scenarios, the web application should support all of the end users with good performance, irrespective of the system they use, their Internet connection, their location, and the tool they use to access the web site. So before we release the web site to the customers or the end users, we should check the performance of the application so that it can support the mass of end users. This is where load testing is very useful, testing the application along with a web test or unit test.

When a web test is added to a load test, it simulates multiple users opening simultaneous connections to the same web application and making multiple HTTP requests. Load testing in Visual Studio comes with lots of properties that can be set to test the web application with different browsers, different user profiles, light loads, and heavy loads. The results of different tests can be saved in a repository so that sets of results can be compared and performance improved.

In the case of client-server and multi-tier applications, there are a lot of components that reside on the server and serve the client requests. To measure the performance of these components, we have to use a load test with a set of unit tests. A good example would be testing a data access service component that calls a stored procedure in the backend database and returns the results to the application using the service.

Load tests can be run either from the local machine or by submitting them to a rig, which is a group of computers used for simulating the tests remotely. A rig consists of a single controller and one or more agents.
Load tests can be used in different scenarios of testing:

Stress testing: This checks the functionality of the application under heavy load. The resources provided to the application could vary based on the input file size or the size of the data set, for example, uploading a file that is more than 50 MB in size.
Smoke testing: This checks whether the application performs well for a short duration with a light load.
Performance testing: This checks the responsiveness and throughput of the application with different loads.
Capacity Planning test: This checks the application's performance at various capacities.

Ordered test

As we know, different types of testing are required to build quality software, and we take care of running all these tests for the applications we develop. But there is also an order in which to execute them; for example, the unit tests first, then the integration tests, then the smoke tests, and then the functional tests. We can order the execution of these tests using Visual Studio. Another example would be testing the configuration for the application before actually testing its functionality. If we don't order the tests, we can never be sure whether the end result is correct. Sometimes the tests will not pass at all if they are not run in order.

Ordering of tests is done using the Test View window in Visual Studio. We can list all the available tests in the Test View, choose the tests in the required order using the options provided by Visual Studio, and then run the tests. Visual Studio takes care of running the tests in the order we have chosen. Once we are able to run the tests successfully in order, we can expect the same ordering in the results. Visual Studio presents the result of an ordered test as a single row in the Test Results window. This single row actually contains the results of all the tests run in that order; we can double-click it to get the details of each test run within the ordered test. An ordered test is the best way of controlling tests and running them in a specific order.

Generic test

We have seen different types and ways of testing applications using VSTS. There are situations where we end up having to test applications that were not developed using Visual Studio. We might have only the executables or binaries for those applications, and we may not have a supported testing tool for them. This is where the generic testing method comes in: it is simply a way of testing third-party applications using Visual Studio. Generic tests are used to wrap existing tests; once the wrapping is done, a generic test is just another test in VSTS. Using Visual Studio, we can collect the test results and gather code coverage data too. We can manage and run generic tests in Visual Studio just like any other test.

JBoss jBPM Concepts and jBPM Process Definition Language (jPDL)

Packt
22 Oct 2009
6 min read
JBoss jBPM Concepts

JBoss jBPM is built around the concept of waiting. That may sound strange, given that software is usually about getting things done, but in this case there is a very good reason for waiting. Real-life business processes cut across an organization, involve numerous humans and multiple systems, and happen over a period of time. In regular software, the code that makes up the system is normally built to do all of its tasks as soon as possible. This wouldn't work for a business process, as the people who need to take part in the process won't always want, or be able, to do their task right now. The software needs some way of waiting until the process actor is ready to do their activity. Then, once they have done their activity, the software needs to know what the next activity in the chain is, and then wait for the next process actor to get round to doing their bit. The orchestration of this sequence of "wait, work, wait, work" is handled by the JBoss jBPM engine. The jBPM engine looks up our process definition and works out which way it should direct us through the process. We know the "process definition" better as our graphical process map.

jBPM Process Definition Language—jPDL

We will introduce the key terms and concepts here to get the ball rolling. We won't linger too long over the definitions, as the best way to fix the terminology in the brain is to see it used in context. At this point, we will introduce some core terminology for a better understanding.

The visual process map in the Designer is an example of what the JBoss jBPM project calls "Graph Oriented Programming". Instead of programming our software in code, we are programming our software using a visual process map, referred to as a "directed graph". This directed graph is also defined in the XML representation of the process we saw in the Source view. The graph plus the XML is a notation set, which is properly called jPDL, the "jBPM Process Definition Language".

A process definition specified in jPDL is composed of "nodes", "transitions", and "actions", which together describe how an "instance" of the process should traverse the directed graph. During execution of the process, as the instance moves through the directed graph, it carries a "token", which is a pointer to the node of the graph at which the instance is currently waiting. A "signal" tells the token which "transition" it should take from the node: signals specify which path to take through the process.

Let's break this down a little with some more detail.

Nodes

A node in jPDL is modeled visually as a box, and hence looks very similar to the activity box we are used to from our workflow and activity flow diagrams. The concept of "nodes" does subtly differ from that of activities, however. In designing jPDL, the jBPM team have logically separated the idea of waiting for the result of an action from that of doing an action. They believe that the term "activity" blurs the line between these two ideas, which causes problems when trying to implement the logic behind a business process management system. For example, both "Seek approval" and "Record approval" would be modeled as activities on an activity flow diagram, but the former would be described as a "state" and the latter as an "action" in jPDL: the state element represents the concept of waiting for the action to happen, moving the graph to the next state. "Node" is therefore synonymous with "state" in jPDL.
"Actions" are bits of code that can be added by a developer to tell the business process management system to perform an action that needs to be done by the system: for example, recording the approval of a holiday request in a database. Actions aren't mapped visually, but are recorded in the XML view of the process definition. We'll cover actions a bit later. There are different types of node, and they are used to accomplish different things. Let's quickly go through them so we know how they are used. Tasks A task node represents a task that is to be performed by humans. If we model a task node on our graph, it will result in a task being added to the task list of the person assigned to that task, when the process is executed. The process instance will wait for the person to complete that task and hand back the outcome of the task to the node. State A state node simply tells the process instance to wait, and in contrast to a task node, it doesn't create a task in anybody's task list. A state node would normally be used to model the behavior of waiting for an external system to provide a response. This would typically be done in combination with an Action, which we'll talk about soon. The process instance will resume execution when a signal comes back from the external system. Forks and Joins We can model concurrent paths of execution in jPDL using forks and joins. For example, the changes we made to our model to design our To Be process can be modeled using forks and joins to represent the parallel running of activities. We use a Fork to split the path of execution up, and then join it back together using a Join: the process instance will wait at the Join until the parallel tasks on both sides are completed. The instance can't move on until both chains of activities are finished. jBPM creates multiple child tokens related to the parent token for each path of execution. Decision In modeling our process in jBPM, there are two distinct types of decision with which we need to concern ourselves. Firstly, there is the case where the process definition itself needs to make a decision, based on data at its disposal, and secondly, where a decision made by a human or an external system is an input to the process definition. Where the process definition itself will make the decision, we can use a decision node in the model. Where the outcome of the decision is simply input into the process definition at run time, we should use a state node with multiple exiting transitions representing the possible outcomes of the decision.

Enterprise JavaBeans

Packt
22 Oct 2009
10 min read
Readers familiar with previous versions of J2EE will notice that Entity Beans were not mentioned in the above paragraph. In Java EE 5, Entity Beans have been deprecated in favor of the Java Persistence API (JPA). Entity Beans are still supported for backwards compatibility; however, the preferred way of doing Object Relational Mapping in Java EE 5 is through JPA. Refer to Chapter 4 of the book Java EE 5 Development using GlassFish Application Server for a detailed discussion of JPA.

Session Beans

As we previously mentioned, session beans typically encapsulate business logic. In Java EE 5, only two artifacts need to be created in order to create a session bean: the bean itself and a business interface. These artifacts need to be decorated with the proper annotations to let the EJB container know they are session beans.

Previous versions of J2EE required application developers to create several artifacts in order to create a session bean. These artifacts included the bean itself, a local or remote interface (or both), a local home or a remote home interface (or both), and a deployment descriptor. As we shall see in this article, EJB development has been greatly simplified in Java EE 5.

Simple Session Bean

The following example illustrates a very simple session bean:

package net.ensode.glassfishbook;

import javax.ejb.Stateless;

@Stateless
public class SimpleSessionBean implements SimpleSession
{
    private String message = "If you don't see this, it didn't work!";

    public String getMessage()
    {
        return message;
    }
}

The @Stateless annotation lets the EJB container know that this class is a stateless session bean. There are two types of session beans: stateless and stateful. Before we explain the difference between these two types of session beans, we need to clarify how an instance of an EJB is provided to an EJB client application.

When EJBs (both session beans and message-driven beans) are deployed, the EJB container creates a series of instances of each EJB. This is what is typically referred to as the EJB pool. When an EJB client application obtains an instance of an EJB, one of the instances in the pool is provided to this client application.

The difference between stateful and stateless session beans is that stateful session beans maintain conversational state with the client, whereas stateless session beans do not. In simple terms, what this means is that when an EJB client application obtains an instance of a stateful session bean, the same instance of the EJB is provided for each method invocation; therefore, it is safe to modify any instance variables on a stateful session bean, as they will retain their value for the next method call. The EJB container may provide any instance of an EJB in the pool when an EJB client application requests an instance of a stateless session bean. As we are not guaranteed the same instance for every method call, values set to any instance variables in a stateless session bean may be "lost" (they are not really lost; the modification is in another instance of the EJB in the pool).
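As a point of contrast, here is a minimal sketch, not taken from the book, of what a stateful session bean might look like; the Counter interface and CounterBean class are invented for illustration. Because the container hands the same instance back to a given client for every call, it is safe for the bean to keep the running count in an instance variable.

// Counter.java - a business interface, declared just like SimpleSession.
package net.ensode.glassfishbook;

import javax.ejb.Remote;

@Remote
public interface Counter
{
    public int increment();
}

// CounterBean.java - the stateful bean; the count survives between calls
// because the same instance serves the same client for every invocation.
package net.ensode.glassfishbook;

import javax.ejb.Stateful;

@Stateful
public class CounterBean implements Counter
{
    private int count = 0;

    public int increment()
    {
        return ++count;
    }
}

Had CounterBean been annotated with @Stateless instead, successive calls to increment() could land on different pooled instances, and the count would appear to reset.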
Returning to SimpleSessionBean: other than being decorated with the @Stateless annotation, there is nothing special about this class. Notice that it implements an interface called SimpleSession. This interface is the bean's business interface. The SimpleSession interface is shown next:

package net.ensode.glassfishbook;

import javax.ejb.Remote;

@Remote
public interface SimpleSession
{
    public String getMessage();
}

The only peculiar thing about this interface is that it is decorated with the @Remote annotation. This annotation indicates that this is a remote business interface. What this means is that the interface may be in a different JVM than the client application invoking it. Remote business interfaces may even be invoked across the network.

Business interfaces may also be decorated with the @Local annotation. This annotation indicates that the business interface is a local business interface. Local business interface implementations must be in the same JVM as the client application invoking their methods.

As remote business interfaces can be invoked either from the same JVM or from a different JVM than the client application, at first glance we might be tempted to make all of our business interfaces remote. Before doing so, we must be aware of the fact that the flexibility provided by remote business interfaces comes with a performance penalty, because method invocations are made under the assumption that they will be made across the network. As a matter of fact, most typical Java EE applications consist of web applications acting as client applications for EJBs; in this case, the client application and the EJB are running on the same JVM, therefore, local interfaces are used a lot more frequently than remote business interfaces.
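Since the local case is the more common one, here is a minimal sketch, not from the book, of what a local flavor of the same business interface could look like; the SimpleSessionLocal name is invented for illustration, and only the annotation differs.

package net.ensode.glassfishbook;

import javax.ejb.Local;

// Hypothetical local variant: implementations must run in the same JVM as
// their clients, which avoids the remote-invocation overhead described above.
@Local
public interface SimpleSessionLocal
{
    public String getMessage();
}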
Once we have compiled the session bean and its corresponding business interface, we need to place them in a JAR file and deploy them. Just as with WAR files, the easiest way to deploy an EJB JAR file is to copy it to [glassfish installation directory]/glassfish/domains/domain1/autodeploy.

Now that we have seen the session bean and its corresponding business interface, let's take a look at a sample client application:

package net.ensode.glassfishbook;

import javax.ejb.EJB;

public class SessionBeanClient
{
    @EJB
    private static SimpleSession simpleSession;

    private void invokeSessionBeanMethods()
    {
        System.out.println(simpleSession.getMessage());
        System.out.println("\nSimpleSession is of type: "
                + simpleSession.getClass().getName());
    }

    public static void main(String[] args)
    {
        new SessionBeanClient().invokeSessionBeanMethods();
    }
}

The above code simply declares an instance variable of type net.ensode.glassfishbook.SimpleSession, which is the business interface for our session bean. The instance variable is decorated with the @EJB annotation; this annotation lets the EJB container know that this variable is a business interface for a session bean. The EJB container then injects an implementation of the business interface for the client code to use.

As our client is a stand-alone application (as opposed to a Java EE artifact such as a WAR file), in order for it to be able to access code deployed on the server, it must be placed in a JAR file and executed through the appclient utility. This utility can be found in [glassfish installation directory]/glassfish/bin/. Assuming this path is in the PATH environment variable, and assuming we placed our client code in a JAR file called simplesessionbeanclient.jar, we would execute the above client code by typing the following command at the command line:

appclient -client simplesessionbeanclient.jar

Executing the above command results in the following console output:

If you don't see this, it didn't work!

SimpleSession is of type: net.ensode.glassfishbook._SimpleSession_Wrapper

which is the output of the SessionBeanClient class. The first line of output is simply the return value of the getMessage() method we implemented in the session bean. The second line of output displays the fully qualified class name of the class implementing the business interface. Notice that the class name is not the fully qualified name of the session bean we wrote; instead, what is actually provided is an implementation of the business interface created behind the scenes by the EJB container.

A More Realistic Example

In the previous section, we saw a very simple, "Hello world" type of example. In this section, we will show a more realistic example. Session beans are frequently used as Data Access Objects (DAOs). Sometimes they are used as a wrapper for JDBC calls, other times they are used to wrap calls to obtain or modify JPA entities. In this section, we will take the latter approach.

The following example illustrates how to implement the DAO design pattern in a session bean. Before looking at the bean implementation, let's look at the business interface corresponding to it:

package net.ensode.glassfishbook;

import javax.ejb.Remote;

@Remote
public interface CustomerDao
{
    public void saveCustomer(Customer customer);
    public Customer getCustomer(Long customerId);
    public void deleteCustomer(Customer customer);
}

As we can see, the above is a remote interface declaring three methods: the saveCustomer() method saves customer data to the database, the getCustomer() method obtains data for a customer from the database, and the deleteCustomer() method deletes customer data from the database.

Let's now take a look at the session bean implementing the above business interface. As we are about to see, there are some differences between the way JPA code is implemented in a session bean versus a plain old Java object.

package net.ensode.glassfishbook;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.sql.DataSource;

@Stateless
public class CustomerDaoBean implements CustomerDao
{
    @PersistenceContext
    private EntityManager entityManager;

    @Resource(name = "jdbc/__CustomerDBPool")
    private DataSource dataSource;

    public void saveCustomer(Customer customer)
    {
        if (customer.getCustomerId() == null)
        {
            saveNewCustomer(customer);
        }
        else
        {
            updateCustomer(customer);
        }
    }

    private void saveNewCustomer(Customer customer)
    {
        customer.setCustomerId(getNewCustomerId());
        entityManager.persist(customer);
    }

    private void updateCustomer(Customer customer)
    {
        entityManager.merge(customer);
    }

    public Customer getCustomer(Long customerId)
    {
        Customer customer;
        customer = entityManager.find(Customer.class, customerId);
        return customer;
    }

    public void deleteCustomer(Customer customer)
    {
        entityManager.remove(customer);
    }

    private Long getNewCustomerId()
    {
        Connection connection;
        Long newCustomerId = null;

        try
        {
            connection = dataSource.getConnection();
            PreparedStatement preparedStatement = connection
                    .prepareStatement(
                            "select max(customer_id)+1 as new_customer_id "
                                    + "from customers");

            ResultSet resultSet = preparedStatement.executeQuery();

            if (resultSet != null && resultSet.next())
            {
                newCustomerId = resultSet.getLong("new_customer_id");
            }

            connection.close();
        }
        catch (SQLException e)
        {
            e.printStackTrace();
        }

        return newCustomerId;
    }
}

The first difference we should notice is that an instance of javax.persistence.EntityManager is directly injected into the session bean.
In previous JPA examples, we had to inject an instance of javax.persistence.EntityManagerFactory, and then use the injected EntityManagerFactory instance to obtain an instance of EntityManager. The reason we had to do this was that our previous examples were not thread safe; that is, potentially the same code could be executed concurrently by more than one user. As EntityManager is not designed to be used concurrently by more than one thread, we used an EntityManagerFactory instance to provide each thread with its own instance of EntityManager. Since the EJB container assigns a session bean to a single client at a time, session beans are inherently thread safe; therefore, we can inject an instance of EntityManager directly into a session bean.

The next difference between this session bean and previous JPA examples is that, in the previous examples, JPA calls were wrapped between calls to UserTransaction.begin() and UserTransaction.commit(). The reason we had to do this is that JPA calls are required to be wrapped in a transaction; if they are not in a transaction, most JPA calls will throw a TransactionRequiredException. The reason we don't have to explicitly wrap JPA calls in a transaction here is that session bean methods are implicitly transactional; there is nothing we need to do to make them that way. This default behavior is what is known as Container-Managed Transactions. Container-Managed Transactions are discussed in detail later in this article.

When a JPA entity is retrieved in one transaction and updated in a different transaction, the EntityManager.merge() method needs to be invoked to update the data in the database. Invoking EntityManager.persist() in this case will result in a "Cannot persist detached object" exception.
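As a taste of how that default behavior can be tuned, the fragment below, which is not part of the book's code, shows the saveCustomer() method from the bean above annotated with a transaction attribute. @TransactionAttribute and TransactionAttributeType are the standard javax.ejb annotations used with container-managed transactions; the choice of REQUIRES_NEW here is purely illustrative.

import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

// Hypothetical variation on CustomerDaoBean.saveCustomer(): REQUIRED (the
// default) joins the caller's transaction or starts one if none exists;
// REQUIRES_NEW always suspends the caller's transaction and runs the method
// in its own, so this save commits or rolls back independently of the caller.
@TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
public void saveCustomer(Customer customer)
{
    if (customer.getCustomerId() == null)
    {
        saveNewCustomer(customer);
    }
    else
    {
        updateCustomer(customer);
    }
}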