
How-To Tutorials - Programming

1083 Articles

Manual, Generic, and Ordered Tests using Visual Studio 2008

Packt
22 Oct 2009
6 min read
The following screenshot shows a simple web application with a page for new user registration. The user provides the required field details and clicks the Register button on the web page to submit them and register with the site. To confirm the registration, the system sends a welcome email to the address the user provided.

In the application shown in the screenshot, the entire registration process cannot be automated for testing. For example, verifying the email address and checking the confirmation email sent by the system cannot be automated, because the user has to open the mailbox and check the email manually. This part of the manual testing process is explained in detail in this article.

Manual tests

Manual testing, as described earlier, is the simplest type of testing, carried out by testers without any automation tool. A manual test may contain a single test or multiple tests. The manual test type is the best choice when a test is too difficult or complex to automate, or when the budget allotted for the application is not sufficient for automation. Visual Studio 2008 supports two file types for manual tests: a text file and a Microsoft Word document.

Manual test using text format

This format lets us create the test as plain text within the Visual Studio IDE. Visual Studio provides a predefined template for authoring this test, which supplies the structure for creating tests. Files of this format have the .mtx extension, and Visual Studio itself acts as the editor for them.

To create this test in Visual Studio, either create a new test project and then add the test, or select the menu option Test | New Test... and choose the option to add the test to a new project. Now create the test using the menu option and select Manual Test (Text Format) from the available list, as shown in the screenshot below. The Add to Test Project drop-down lists the different options for adding the test to a test project. If you have not yet created a test project, the selected drop-down option will create a new test project to which the test is added. If a test project already exists, it also appears in the list so the new test can be added to it. We can choose any option as per our need.

For this sample, let us create a new test project in C#, so the first option in the Add to Test Project drop-down is selected. After selecting the option, provide the name the system asks for the new test project; let us name it TestingAppTest. The project is created under the solution and the test template is added to the test project, as shown next. The template contains detailed information for each section, which helps the tester, or whoever writes the test case, record the steps required for the test. Now update the test case template created above with the test steps required for checking the email confirmation message after the registration process. The test document also contains the title of the test, a description, and the revision history of changes made to the test case.
Before executing the test and looking into the details of the run and the properties of the test, we will create the same test using the Microsoft Word format, as described in the next section.

Manual test using Microsoft Word format

This is similar to the manual test created using the text format, except that the file type is a Microsoft Word document with the .mht extension. While creating the manual test, choose the Manual Test (Word Format) template instead of Manual Test (Text Format) as explained in the previous section. This option is available only if Microsoft Word is installed on the system. It launches a Word template using the installed copy of MS Word (version 2003 or later) for writing the test details, as shown in the following screenshot.

The Word format offers richer formatting capabilities, with different fonts, colors, and styles for the text, and with graphic images and tables embedded in the test. The document provides not only the template but also help information for each section, so that the tester can easily understand the sections and write the test cases. This help information is provided in both the Word and the text formats of manual tests. In the test document seen in the previous screenshot, we can fill in the Test Details, Test Target, Test Steps, and Revision History, similar to what we did for the text format. The completed test case document will look like this:

Save the test details and close the document. We now have both formats of manual tests in the project. Open the Test View window or the Test List Editor window to see the list of tests in the project. It should list the two manual tests with their names and the project they are associated with. The tests shown in the Test View window look like the ones shown here:

The same list of tests shown by the Test List Editor would look like the one shown below. The list editor also shows additional properties, such as the test list name and the project the test belongs to. There are options for each test either to run it or to add it to a particular list.

Manual tests also have other properties that we can make use of during testing. These properties can be seen in the Properties window, which can be opened by right-clicking the manual test in either the Test View or the Test List Editor window and selecting the Properties option. The same window can also be opened through the menu option View | Properties Window. Both formats of manual testing have the same set of properties. Some of these properties are editable, while others are read-only and are set by the application based on the test type. Some properties are directly related to TFS. Visual Studio Team Foundation Server (VSTFS) is the integrated collaboration server that combines a team portal, work item tracking, build management, process guidance, and version control into a unified server.


Integrating Zimbra Collaboration Suite with Microsoft Outlook

Packt
21 Oct 2009
11 min read
Introduction

Let's face it: in today's business environment, there is only one email client that truly matters. I am not saying it is the best client, or that it offers the most features, and I certainly am not saying it is the most secure. What I am saying is that you would be hard pressed to walk into an organization of, let's say, more than 10 desktops and not see users checking their email with Microsoft Outlook. Zimbra Collaboration Suite offers strong support for Outlook, including:

Native sync with MAPI
Support for both Online and Offline modes
Cached mode operation
Support for multiple calendars

Here, we will discuss these features and focus on configuring Outlook to work with the Zimbra server. As you will see, with a Zimbra back end and an Outlook client it is transparent to your users whether you are using Microsoft Exchange or Zimbra Collaboration Suite as your back end product. This transparency makes the migration from Exchange to Zimbra that much easier in the eyes of your users, especially when it comes to user training. The ability to seamlessly integrate Zimbra and Outlook is one of Zimbra's strongest assets and one of the strongest arguments for making the transition from Exchange to Zimbra. However, if you want to use the full power of Zimbra (not only the fancy look but also great features such as the searches), you should use the Web Client.

We will take a detailed look at Zimbra integration with Outlook, including:

The Zimbra Connector for Outlook (ZCO)
A look at Zimbra integration
Sharing Outlook folders

Outlook uses the Messaging Application Programming Interface (MAPI) to allow programs to communicate with messaging systems and stores. MAPI is proprietary to Microsoft and is key to Zimbra being able to synchronize and work with Outlook. Zimbra uses a connector, called the Zimbra Connector for Outlook, to facilitate this communication.

The PST Import Wizard

One of the beauties of Outlook's integration with Zimbra is that you won't start from scratch: Zimbra gives you tools that can import your data (emails, calendars, contacts, etc.) either from a competing solution's server (Exchange or Domino) or directly from a PST file (the file used by Outlook to store all its data). We'll have a look at the PST Import Wizard. To download and run it:

1. Log in to the Administration Console at https://zimbra.emailcs.com:7071/.
2. Click on the Downloads tab on the left of the navigation pane.
3. In the Content Pane, click on the PST Import Wizard to download the executable file.
4. Save the file to the local computer, or to a network-accessible shared folder.
5. Double-click the .exe file to launch the wizard (there is no installation process).
6. Click on Next on the presentation page.
7. In the Hostname field enter: zimbra.emailcs.com.
8. In the Port field you can leave the default (80) and leave the Use Secure Connection box unchecked.
9. In the Username field, enter your Zimbra user (worker@emailcs.com) and in the Password field your Zimbra password. Click on the Next button.

You will now have to select the PST file that you want to import into the Zimbra server. The Zimbra Import Wizard helps you by opening the default Outlook PST directory when you click on the Browse button. Once you have selected the PST you want to import and clicked on the Open button, you can click on the Next button of the initial window.
Now you can choose how your data will be imported:

Import Junk-Mail Folder: If you leave this box checked, all your spam will be imported to the server and marked as spam there.
Import Deleted Items Folder: With this, the deleted mails will be imported to the server.
Ignore previously imported items: This option is used when you are importing your data in several attempts. If you leave it checked, mails that were already imported will not be imported again.
Import items received after: Checking this box and choosing a date in the calendar next to it allows you to do a partial import based on date.
Import messages with some headers but no message body: You should leave this one checked, as it imports some badly formatted mails.
Convert meetings, organized by me, from my old address to my new address: You need to check this box (and enter your previous email address) if you are getting a new email address on the Zimbra server. If you don't, the meetings will be imported but you won't be their owner.
Migrate private appointments: This one is your own choice, as Zimbra does not (yet) handle private items; all the imported appointments will be viewable by anyone you share your calendar with.

Once your choices are made, you can click on the Next button.

14. Click OK in the confirmation window.
15. The next window will show you the import progress (number of items and percentage). You can stop the import at any time and start again later. Once the import is finished, a window pops up with a summary of the import session.
16. You can click on the OK button of the summary window and then Finish in the last window.

The Zimbra Connector for Outlook

The Zimbra Connector for Outlook (ZCO) is a downloadable .msi installable file that must be installed on the desktop in order for Outlook and Zimbra to communicate. To download the ZCO to the client:

1. Log in to the Administration Console at https://zimbra.emailcs.com:7071.
2. Click on the Downloads tab on the left of the navigation pane.
3. In the Content Pane, click on the Zimbra Connector for Outlook to download the .msi installable file.
4. Save the file to the local computer, or to a network-accessible shared folder.
5. Double-click the .msi file to start the installation process.
6. When the installation wizard begins, accept the License Agreement and accept all of the defaults. Once complete, click FINISH.

The ZCO creates a brand new profile within Outlook, called Zimbra. If you had previous profiles created on your computer, be sure to choose the Zimbra one. The ZCO is now installed, and the first time we run Outlook on this client, the connector will prompt us for configuration information. Note that Zimbra currently supports Outlook 2003 only.

In the Server Name field, enter: zimbra.emailcs.com. For the port, leave the default of 80. The email address will be the email address for this user; in our case, we will use the Worker Bee with an email address of worker@emailcs.com. For the password, enter the same password Worker would use to log in to the AJAX web client. Once completed, click OK and Outlook will open Worker's email box. The ZCO will now sync the Global Address List and will fetch all the emails from the server locally. This means that if you imported a lot of items previously, it might take some time to get them back into Outlook from the server. Luckily, the sync process happens in the background. As you can see in the following screenshot, the folders we created in the Web Client are now configured in Outlook.
The first time Outlook is opened, it automatically performs a send/receive with the Zimbra server. After this initial synchronization, there is nothing a user needs to do to initiate synchronization with the server. There is also nothing the user needs to set up to let Outlook know whether the user is Online (connected to the server) or Offline (disconnected from the server): Outlook automatically checks the status and acts accordingly. Therefore, a user need not be connected to the server to work with email that has already been received, check the address book, or work with the Calendar. All changes that the user makes in Offline mode will synchronize with the server the next time Outlook is online and connected. At this point, we should take a quick look around Outlook and see how integrated Zimbra really is with Outlook.

A Look at Zimbra Integration

The integration of Zimbra is more than just the ability to send and receive email. Outlook now acts as the front end to create contacts, appointments, and tasks that will be stored on the server. Let's take a moment to look at each one individually.

Contacts

The easiest way to see the integration of contacts between Outlook and Zimbra is to compose a new mail message and, instead of typing in an email address, click on the TO: button and select Global Address List from the Address Book drop-down menu. As you can see, this feature looks exactly the same whether you are using Exchange or Zimbra as your back end collaboration server. Users are familiar with this look and feel, and with the ability to select users within the organization's Global Address List. This list comes directly from the Zimbra server and is maintained there as well. The user also has the option to select their own personal contact list. This list can be created and maintained via the web client, through Outlook directly, or both, as they synchronize together.

Appointments

In most organizations, the ability to create appointments, invite people to attend, and check the invitees' schedules is a key function of Exchange and Outlook. Luckily for us, the same functionality can be used with Zimbra and Outlook. As seen in the following figure, the process for creating an appointment is exactly the same:

1. In the Calendar application, click New --> Appointment.
2. Click on the Invite Attendees button.
3. Click the To button and select the Global Address List from the drop-down menu.
4. Select the CEO from the address list and click OK.
5. Click on the Scheduling tab.

The Calendar is synced with the Zimbra server and is able to check the availability of the users within the organization, a key feature of any collaboration suite. Once you have found a time when all the attendees are free, go back to the Appointment tab, type in a Subject for your appointment, and then click on the Send button. The last feature we will look at is sharing Outlook folders.

Sharing Outlook Folders

Users have the option to share any Outlook folder with users in the Global Address List. Essentially, this is the same capability we covered in an earlier chapter with the Web Client for the Contacts or Calendar; however, here the process is different. Users can be delegated different levels of access to Outlook folders.
These levels include:

Read: View items in the folder only
Edit: Edit any contents in the folder
Create: Create/add items to the folder
Delete: Delete/modify items in the folder
Act on workflow: Respond to meeting and task requests
Administer Folder: Modify the permissions on the folder

There are also predefined roles that users can assign to other users in the Global Address List, including:

Administrator: Has all of the rights to the folder listed above
Delegate: Has access to all rights except Administer Folder
Editor: Access to Read, Edit, Create, and Delete
Author: Access to Read and Create
Reviewer: Read only

To assign roles and rights to the folder:

1. Right-click the folder and click Properties.
2. Click on the Sharing tab.
3. Click Add and select CEO from the Global Address List.
4. With CEO highlighted, select Administrator from the Permission Level drop-down box.
5. With Administrator selected, you should be able to see all of the Permissions selected.
6. Change the Permission Level to Reviewer and you will see that only Read items is selected.
7. Go ahead and play with the various levels so you can get a feel for the permissions associated with each of them.
8. Once complete, click OK.

An email will be sent to the CEO informing him that he now has Administrator access to the Inbox of the Worker Bee. In order for the CEO to work with the new shared folder (the Worker's Inbox in this case), the CEO would simply:

1. Click on File --> Open --> Other User's Mailbox in the Outlook menu bar.
2. Select Worker from the Global Address List.
3. Once the folder is added to Outlook, click on the Send/Receive button to synchronize the folder.

Summary

The goal of this article was to take a brief look at using the Microsoft Outlook client as a front end to the Zimbra Collaboration Suite. In my experience, users do not like change and tend to be comfortable with applications they are familiar with. One of the most common objections to changing email systems is that users rely so heavily on their email and contacts that they do not want to have to learn a whole new system to access them. Hopefully, if I have done my job, you can now see that users need not be afraid of moving to a Zimbra system, because in the end their everyday life and functionality is not going to change much. They can still use the tool they are most familiar with, while gaining the added benefit of the AJAX Web Client when they are on the road or away from their desks.


Configuring JDBC in Oracle JDeveloper

Packt
21 Oct 2009
14 min read
Introduction

Unlike the Eclipse IDE, which requires a plug-in, JDeveloper has built-in provision for establishing a JDBC connection with a database. JDeveloper is the only Java IDE with an embedded application server, the Oracle Containers for J2EE (OC4J); thus, a database-driven web application may run in JDeveloper without requiring a third-party application server. However, JDeveloper also supports third-party application servers. Starting with JDeveloper 11, application developers may point the IDE to the application server instance (or OC4J instance), including a third-party application server, that they want to use for testing during development. JDeveloper provides connection pooling for the efficient use of database connections. A database connection may be used in an ADF BC application or in a JavaEE application.

A database connection in JDeveloper may be configured in the Connections Navigator. A Connections Navigator connection is available as a DataSource registered with a JNDI naming service. The database connection in JDeveloper is a reusable named connection that developers configure once and then use in as many of their projects as they want. Depending on the nature of the project and the database connection, the connection is configured in the bc4j.xcfg file or as a JavaEE data source.

Here, it is necessary to distinguish between a data source and a DataSource. A data source is a source of data; for example, an RDBMS database is a data source. A DataSource is an interface that represents a factory for JDBC Connection objects. JDeveloper uses the term Data Source or data source to refer to a factory for connections. We will also use the term Data Source or data source to refer to a factory for connections, which in the javax.sql package is represented by the DataSource interface. A DataSource object may be created from a data source registered with the JNDI (Java Naming and Directory Interface) naming service using a JNDI lookup, and a JDBC Connection object may be obtained from a DataSource object using the getConnection method. As an alternative to configuring a connection in the Connections Navigator, a data source may also be specified directly in the data source configuration file, data-sources.xml.

In this article we will discuss the procedure to configure a JDBC connection and a JDBC data source in the JDeveloper 10g IDE. We will use the MySQL 5.0 database server and the MySQL Connector/J 5.1 JDBC driver, which supports the JDBC 4.0 specification. In this article you will learn the following:

Creating a database connection in the JDeveloper Connections Navigator
Configuring the data source and connection pool associated with the connection configured in the Connections Navigator
The common JDBC connection errors

Before we create a JDBC connection and a data source, we will discuss connection pooling and DataSource.

Connection Pooling and DataSource

The javax.sql package provides the API for server-side database access. The main interfaces in the javax.sql package are DataSource, ConnectionPoolDataSource, and PooledConnection. The DataSource interface represents a factory for connections to a database and is the preferred method of obtaining a JDBC connection. An object that implements the DataSource interface is typically registered with a Java Naming and Directory API-based naming service. The DataSource interface implementation is driver-vendor specific.
The DataSource interface has three types of implementations:

Basic implementation: There is a 1:1 correspondence between a client's Connection object and the connection with the database, that is, for every Connection object there is a connection with the database. With the basic implementation, the overhead of opening, initiating, and closing a connection is incurred for each client session.

Connection pooling implementation: A pool of Connection objects is available, from which connections are assigned to the different client sessions. A connection pooling manager implements the connection pooling. When a client session does not require a connection, the connection is returned to the connection pool and becomes available to other clients. Thus, the overhead of opening, initiating, and closing connections is reduced.

Distributed transaction implementation: Produces a Connection object that is mostly used for distributed transactions and is always connection pooled. A transaction manager implements the distributed transactions.

An advantage of using a data source is that code accessing a data source does not have to be modified when an application is migrated to a different application server; only the data source properties need to be modified. A JDBC driver that is accessed with a DataSource does not register itself with a DriverManager. A DataSource object is created using a JNDI lookup and subsequently a Connection object is created from the DataSource object. For example, if a data source's JNDI name is jdbc/OracleDS, first create an InitialContext object, then create a DataSource object using the InitialContext lookup method, and finally obtain a Connection object from the DataSource object using the getConnection() method:

InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("jdbc/OracleDS");
Connection conn = ds.getConnection();

The JNDI naming service, which we used to create a DataSource object, is provided by J2EE application servers such as the Oracle Application Server Containers for J2EE (OC4J) embedded in the JDeveloper IDE.

A connection in a pool of connections is represented by the PooledConnection interface, not the Connection interface. The connection pool manager, typically the application server, maintains a pool of PooledConnection objects. When an application requests a connection using the DataSource.getConnection() method, as we did with the jdbc/OracleDS data source example, the connection pool manager returns a Connection object that is actually a handle to an object implementing the PooledConnection interface. A ConnectionPoolDataSource object, which is typically registered with a JNDI naming service, represents a collection of PooledConnection objects. The JDBC driver provides an implementation of the ConnectionPoolDataSource, which is used by the application server to build and manage a connection pool. When an application requests a connection, if a suitable PooledConnection object is available in the connection pool, the connection pool manager returns a handle to it as a Connection object. If a suitable PooledConnection object is not available, the connection pool manager invokes the getPooledConnection() method of the ConnectionPoolDataSource to create a new PooledConnection object.
For example, if connectionPoolDataSource is a ConnectionPoolDataSource object, a new PooledConnection gets created as follows:

PooledConnection pooledConnection = connectionPoolDataSource.getPooledConnection();

The application does not have to invoke the getPooledConnection() method itself; the connection pool manager invokes getPooledConnection(), and the JDBC driver implementing the ConnectionPoolDataSource creates a new PooledConnection and returns a handle to it. The connection pool manager then returns a Connection object, which is a handle to a PooledConnection object, to the application requesting a connection. When an application closes a Connection object using the close() method, as follows, the connection does not actually get closed:

conn.close();

Instead, the connection handle is deactivated when an application closes a Connection object with the close() method; the connection pool manager performs the deactivation. When an application closes a Connection object with the close() method, any client info properties that were set using the setClientInfo method are cleared. The connection pool manager registers itself with a PooledConnection object using the addConnectionEventListener() method. When a connection is closed, the connection pool manager is notified, deactivates the handle to the PooledConnection object, and returns the PooledConnection object to the connection pool to be used by another application. The connection pool manager is also notified if a connection has an error. A PooledConnection object is not closed until the connection pool is being reinitialized, the server is shut down, or a connection becomes unusable.

In addition to connections being pooled, PreparedStatement objects are also pooled by default if the database supports statement pooling. Whether a database supports statement pooling can be discovered using the supportsStatementPooling() method of the DatabaseMetaData interface. PreparedStatement pooling is also managed by the connection pool manager. To be notified of PreparedStatement events, such as a PreparedStatement getting closed or becoming unusable, a connection pool manager registers itself with a PooledConnection using the addStatementEventListener() method, and deregisters itself using the removeStatementEventListener() method. The addStatementEventListener and removeStatementEventListener methods are new in the PooledConnection interface in JDBC 4.0.

Pooling of Statement objects is another new feature in JDBC 4.0. The Statement interface has two new methods for statement pooling: isPoolable() and setPoolable(). The isPoolable method checks whether a Statement object is poolable, and the setPoolable method sets the Statement object to poolable. When an application closes a PreparedStatement object using the close() method, the PreparedStatement object is not actually closed; it is returned to the pool of PreparedStatements. When the connection pool manager closes a PooledConnection object by invoking its close() method, all the associated statements also get closed. Pooling of PreparedStatements provides significant optimization, but if a large number of statements are left open, it may not be an optimal use of resources.

Thus, the following procedure is followed to obtain a connection in an application server using a data source:

1. Create a data source with a JNDI name binding to the JNDI naming service.
2. Create an InitialContext object and look up the JNDI name of the data source using the lookup method to create a DataSource object. If the JDBC driver implements the DataSource as a connection pool, a connection pool becomes available.
3. Request a connection from the connection pool. The connection pool manager checks whether a suitable PooledConnection object is available. If one is available, the connection pool manager returns a handle to it as a Connection object to the application requesting a connection.
4. If a PooledConnection object is not available, the connection pool manager invokes the getPooledConnection() method of the ConnectionPoolDataSource, which is implemented by the JDBC driver. The JDBC driver creates a PooledConnection object and returns a handle to it, and the connection pool manager returns the handle as a Connection object to the application requesting a connection.
5. When an application closes a connection, the connection pool manager deactivates the handle to the PooledConnection object and returns the PooledConnection object to the connection pool.

ConnectionPoolDataSource provides some configuration properties to configure a connection pool. These properties are not set by the JDBC client, but are implemented or augmented by the connection pool, and they can be set in a data source configuration. Therefore, it is not for the application itself to change the settings, but for the administrator of the pool, who sometimes also happens to be the developer, to do so. The connection pool properties supported by ConnectionPoolDataSource are described in the following table:

maxStatements (int): The maximum number of statements the pool should keep open. 0 (zero) indicates that statement caching is not enabled.
initialPoolSize (int): The number of connections the pool should have at the time of creation.
minPoolSize (int): The minimum number of connections in the pool. 0 (zero) indicates that connections are created as required.
maxPoolSize (int): The maximum number of connections in the connection pool. 0 (zero) indicates that there is no maximum limit.
maxIdleTime (int): The maximum duration (in seconds) a connection can be kept open without being used before it is closed. 0 (zero) indicates that there is no limit.
propertyCycle (int): The interval, in seconds, the pool should wait before enforcing the current policy defined by the connection pool properties.

Setting the Environment

Before getting started, we have to install the JDeveloper 10.1.3 IDE and the MySQL 5.0 database. Download JDeveloper from http://www.oracle.com/technology/software/products/jdev/index.html. Download the MySQL Connector/J 5.1 JDBC driver, which supports the JDBC 4.0 specification. To install JDeveloper, extract the JDeveloper ZIP file to a directory. Log in to the MySQL database and set the database to test. Create a database table, Catalog, which we will use in a web application.
The SQL script to create the database table is listed below:

CREATE TABLE Catalog(CatalogId VARCHAR(25) PRIMARY KEY, Journal VARCHAR(25), Publisher VARCHAR(25), Edition VARCHAR(25), Title VARCHAR(45), Author VARCHAR(25));
INSERT INTO Catalog VALUES('catalog1', 'Oracle Magazine', 'Oracle Publishing', 'Nov-Dec 2004', 'Database Resource Manager', 'Kimberly Floss');
INSERT INTO Catalog VALUES('catalog2', 'Oracle Magazine', 'Oracle Publishing', 'Nov-Dec 2004', 'From ADF UIX to JSF', 'Jonas Jacobi');

MySQL does not support ROWID, for which support has been added in JDBC 4.0. Having installed the JDeveloper IDE, we will next configure a JDBC connection in the Connections Navigator:

1. Select the Connections tab and right-click on the Database node to select New Database Connection. Click on Next in the Create Database Connection Wizard.
2. In the Create Database Connection Type window, specify a Connection Name (MySQLConnection, for example) and set Connection Type to Third Party JDBC Driver, because we are using the MySQL database, which is a third-party database for Oracle JDeveloper; then click on Next. If a connection is to be configured with an Oracle database, select Oracle (JDBC) as the Connection Type instead.
3. In the Authentication window, specify the Username as root (a password is not required for the root user by default) and click on Next.
4. In the Connection window, we specify the connection parameters, such as the driver name and connection URL. Click on New to specify a Driver Class.
5. In the Register JDBC Driver window, specify the Driver Class as com.mysql.jdbc.Driver and click on Browse to select a library for the driver class.
6. In the Select Library window, click on New to create a new library for the MySQL Connector/J 5.1 JAR file.
7. In the Create Library window, specify the Library Name as MySQL and click on Add Entry to add a JAR file entry for the MySQL library.
8. In the Select Path Entry window, select mysql-connector-java-5.1.3-rc-bin.jar and click on Select.
9. In the Create Library window, after a Class Path entry gets added to the MySQL library, click on OK. In the Select Library window, select the MySQL library and click on OK.
10. In the Register JDBC Driver window, the MySQL library is now specified in the Library field and mysql-connector-java-5.1.3-rc-bin.jar in the Classpath field. Click on OK. The Driver Class, Library, and Classpath fields get specified in the Connection window.
11. Specify the URL as jdbc:mysql://localhost:3306/test and click on Next.
12. In the Test window, click on Test Connection to test the connection that we have configured. A connection is established and a success message is output in the Status text area. Click on Finish in the Test window.

A connection configuration, MySQLConnection, gets added to the Connections navigator, and the connection parameters are displayed in the structure view. To modify any of the connection settings, double-click on the connection node; the Edit Database Connection window is displayed, where the connection Username, Password, Driver Class, and URL may be modified.

A database connection configured in the Connections navigator has a JNDI name binding in the JNDI naming service provided by OC4J. Using the JNDI name binding, a DataSource object may be created in a J2EE application. To view or modify the configuration settings of the JDBC connection, select Tools | Embedded OC4J Server Preferences in JDeveloper.
In the window displayed, select the Global | Data Sources node and, to update the data-sources.xml file with the connection defined in the Connections navigator, click on the Refresh Now button. Checkboxes may be selected to Create data-source elements where not defined and to Update existing data-source elements. The connection pool and data source associated with the connection configured in the Connections navigator are listed. Select the jdev-connection-pool-MySQLConnection node to list the connection pool properties as Property Set A and Property Set B. The tuning properties of the JDBC connection pool may be set in the Connection Pool window. The different tuning attributes are listed in the following table:
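To tie the pieces together, here is a minimal, hedged sketch of application code that obtains a pooled connection through the data source generated for MySQLConnection and queries the Catalog table created above. It is intended to run inside the OC4J container, where the JNDI binding is available; the JNDI name jdbc/MySQLConnectionDS is an assumed example, so use the name actually registered in data-sources.xml.

// Minimal sketch (not part of the original article): query the Catalog table
// through a DataSource obtained by JNDI lookup.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class CatalogDao {

    public void printOracleMagazineTitles() throws Exception {
        InitialContext ctx = new InitialContext();
        // "jdbc/MySQLConnectionDS" is an assumed JNDI name; check data-sources.xml.
        DataSource ds = (DataSource) ctx.lookup("jdbc/MySQLConnectionDS");
        Connection conn = ds.getConnection();      // handle to a pooled connection
        try {
            PreparedStatement ps = conn.prepareStatement(
                "SELECT Title, Author FROM Catalog WHERE Journal = ?");
            ps.setString(1, "Oracle Magazine");
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getString("Title") + " by " + rs.getString("Author"));
            }
            rs.close();
            ps.close();    // with statement pooling, the PreparedStatement returns to the pool
        } finally {
            conn.close();  // the handle is deactivated and the connection returns to the pool
        }
    }
}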


OSWorkflow and the Quartz Task Scheduler

Packt
21 Oct 2009
10 min read
Task Scheduling with Quartz

Both people-oriented and system-oriented BPM systems need a mechanism to execute tasks within an event or temporal constraint, for example every time a state change occurs or every two weeks. BPM suites address these requirements with a job-scheduling component responsible for executing tasks at a given time. OSWorkflow, the core of our open-source BPM solution, doesn't include these temporal capabilities by default. Thus, we can enhance OSWorkflow by adding the features present in the Quartz open-source project.

What is Quartz?

Quartz is a Java job-scheduling system capable of scheduling and executing jobs in a very flexible manner. The latest stable Quartz version is 1.6. You can download Quartz from http://www.opensymphony.com/quartz/download.action.

Installing

The only file you need in order to use Quartz out of the box is quartz.jar; it contains everything you need for basic usage. Quartz configuration lives in the quartz.properties file, which you must put in your application's classpath.

Basic Concepts

The Quartz API is very simple and easy to use. The first concept you need to be familiar with is the scheduler. The scheduler is the most important part of Quartz, managing, as the word implies, the scheduling and unscheduling of jobs and the firing of triggers. A job is a Java class containing the task to be executed, and the trigger is the temporal specification of when to execute the job. A job is associated with one or more triggers, and when a trigger fires, it executes all its related jobs. That's all you need to know to execute our Hello World job.

Integration with OSWorkflow

By complementing the features of OSWorkflow with the temporal capabilities of Quartz, our open-source BPM solution greatly enhances its usefulness. The Quartz-OSWorkflow integration can be done in two ways: Quartz calling OSWorkflow workflow instances, and OSWorkflow scheduling and unscheduling Quartz jobs. We will cover the former first, by using trigger-functions, and the latter with the ScheduleJob function provider.

Creating a Custom Job

Jobs are built by implementing the org.quartz.Job interface, which declares a single method:

public void execute(JobExecutionContext context) throws JobExecutionException;

The interface is very simple and concise, with just one method to be implemented. The Scheduler will invoke the execute method when the trigger associated with the job fires. The JobExecutionContext object passed as an argument carries all the context and environment data for the job, such as the JobDataMap. The JobDataMap is very similar to a Java map but provides strongly typed put and get methods. The JobDataMap is set in the JobDetail before scheduling the job and can be retrieved later, during the execution of the job, via the JobExecutionContext's getJobDetail().getJobDataMap() method.

Trigger Functions

trigger-functions are a special type of OSWorkflow function designed specifically for job scheduling and external triggering. These functions are executed when the Quartz trigger fires, hence the name. trigger-functions are not associated with an action and they have a unique ID. You shouldn't execute a trigger-function in your own code. To define a trigger-function in the definition, put the trigger-functions declaration before the initial-actions element:
<trigger-functions>
  <trigger-function id="10">
    <function type="beanshell">
      <arg name="script">
        propertySet.setString("triggered", "true");
      </arg>
    </function>
  </trigger-function>
</trigger-functions>
<initial-actions>
...

This XML definition fragment declares a trigger-function (with an ID of 10) that executes a BeanShell script. The script puts a named property into the PropertySet of the instance, but you can define a trigger-function just like any other Java- or BeanShell-based function. To invoke this trigger-function, you will need an OSWorkflow built-in function provider that executes trigger-functions and schedules a custom job: the ScheduleJob FunctionProvider.

More about Triggers

Quartz triggers are of two types: the SimpleTrigger and the CronTrigger. The former, as its name implies, serves very simple purposes, while the latter is more complex and powerful, allowing unlimited flexibility in specifying time periods.

SimpleTrigger

SimpleTrigger is best suited for firing a job at a specific point in time, such as Saturday the 1st at 3.00 PM, or at an exact point in time with the triggering repeated at fixed intervals. The properties for this trigger are shown in the following table:

Start time: The fire time of the trigger.
End time: The end time of the trigger. If it is specified, it overrides the repeat count.
Repeat interval: The interval between repetitions. It can be 0 or a positive integer; if it is 0, the repetitions will happen in parallel.
Repeat count: How many times the trigger will fire. It can be 0, a positive integer, or SimpleTrigger.REPEAT_INDEFINITELY.

CronTrigger

The CronTrigger is based on the concept of the UN*X cron utility. It lets you specify complex schedules, like every Wednesday at 5.00 AM, or every twenty minutes, or every 5 seconds on Monday. Like the SimpleTrigger, the CronTrigger has a start time property and an optional end time. A CronExpression is made up of seven parts, each representing a time component:

1 represents seconds
2 represents minutes
3 represents hours
4 represents the day-of-month
5 represents the month
6 represents the day-of-week
7 represents the year (optional field)

Here are a couple of examples of cron expressions:

0 0 6 ? * MON: This CronExpression means "every Monday at 6 AM".
0 0 6 * * ?: This CronExpression means "every day at 6 AM".

For more information about CronExpressions refer to the following website: http://www.opensymphony.com/quartz/wikidocs/CronTriggers%20Tutorial.html.

Scheduling a Job

We will get a first taste of Quartz by executing a very simple job. The following snippet of code shows how easy it is to schedule a job:

SchedulerFactory schedFact = new org.quartz.impl.StdSchedulerFactory();
Scheduler sched = schedFact.getScheduler();
sched.start();

JobDetail jobDetail = new JobDetail("myJob", null, HelloJob.class);

Trigger trigger = TriggerUtils.makeHourlyTrigger();               // fire every hour
trigger.setStartTime(TriggerUtils.getEvenHourDate(new Date()));   // start on the next even hour
trigger.setName("myTrigger");

sched.scheduleJob(jobDetail, trigger);

The code assumes a HelloJob class exists.
It is a very simple class that implements the Job interface and just prints a message to the console:

package packtpub.osw;

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

/**
 * Hello world job.
 */
public class HelloJob implements Job {
    public void execute(JobExecutionContext ctx) throws JobExecutionException {
        System.out.println("Hello Quartz world.");
    }
}

The first three lines of the scheduling code create a SchedulerFactory, an object that creates Schedulers, and then proceed to create and start a new Scheduler:

SchedulerFactory schedFact = new org.quartz.impl.StdSchedulerFactory();
Scheduler sched = schedFact.getScheduler();
sched.start();

This Scheduler will fire the trigger and subsequently the jobs associated with the trigger. After creating the Scheduler, we must create a JobDetail object that contains information about the job to be executed, the job group to which it belongs, and other administrative data:

JobDetail jobDetail = new JobDetail("myJob", null, HelloJob.class);

This JobDetail tells the Scheduler to instantiate a HelloJob object when appropriate, has a null job group, and has a job name of "myJob". After defining the JobDetail, we must create and define the Trigger, that is, when the job will be executed, how many times, and so on:

Trigger trigger = TriggerUtils.makeHourlyTrigger();               // fire every hour
trigger.setStartTime(TriggerUtils.getEvenHourDate(new Date()));   // start on the next even hour
trigger.setName("myTrigger");

TriggerUtils is a helper object used to simplify the trigger code. With its help, we create a trigger that will fire every hour, starting at the next even hour after the trigger is registered with the Scheduler. The last line names the trigger for housekeeping purposes. Finally, the last line of the snippet associates the trigger with the job and puts them under the control of the Scheduler:

sched.scheduleJob(jobDetail, trigger);

When the next even hour arrives after this line of code is executed, the Scheduler will fire the trigger, which will execute the job by reading the JobDetail and instantiating HelloJob.class. This requires that the class implementing the Job interface has a no-arguments constructor. An alternative method is to use an XML file for declaring the jobs and triggers; this is not covered in the book, but you can find more information about it in the Quartz documentation.

Scheduling from a Workflow Definition

The ScheduleJob FunctionProvider has two modes of operation, depending on whether you specify the jobClass parameter. If you declare the jobClass parameter, ScheduleJob will create a JobDetail with jobClass as the class implementing the Job interface:

<pre-functions>
  <function type="class">
    <arg name="class.name">com.opensymphony.workflow.util.ScheduleJob</arg>
    <arg name="jobName">Scheduler Test</arg>
    <arg name="triggerName">SchedulerTestTrigger</arg>
    <arg name="triggerId">10</arg>
    <arg name="jobClass">packtpub.osw.SendMailIfActive</arg>
    <arg name="schedulerStart">true</arg>
    <arg name="local">true</arg>
  </function>
</pre-functions>

This fragment will schedule a job based on the SendMailIfActive class, with the current time as the start time. ScheduleJob, like any FunctionProvider, can be declared as a pre- or a post-function. On the other hand, if you don't declare the jobClass, ScheduleJob will use the WorkflowJob class as the class implementing the Job interface.
Such a job executes a trigger-function on the instance that scheduled it when it fires:

<pre-functions>
  <function type="class">
    <arg name="class.name">com.opensymphony.workflow.util.ScheduleJob</arg>
    <arg name="jobName">Scheduler Test</arg>
    <arg name="triggerName">SchedulerTestTrigger</arg>
    <arg name="triggerId">10</arg>
    <arg name="schedulerStart">true</arg>
    <arg name="local">true</arg>
  </function>
</pre-functions>

This definition fragment will execute the trigger-function with ID 10 as soon as possible, because no CronExpression or start time arguments have been specified. This FunctionProvider has the arguments shown in the following table:
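As a complement to the hourly SimpleTrigger shown earlier, here is a minimal sketch of scheduling the same HelloJob with a CronTrigger, using the "0 0 6 ? * MON" expression discussed above. This example is illustrative rather than part of the original article; the job and trigger names are arbitrary, and the same org.quartz imports as in the previous snippets are assumed.

SchedulerFactory schedFact = new org.quartz.impl.StdSchedulerFactory();
Scheduler sched = schedFact.getScheduler();
sched.start();

JobDetail jobDetail = new JobDetail("myCronJob", null, HelloJob.class);

// "0 0 6 ? * MON" = every Monday at 6 AM. The CronTrigger constructor throws
// ParseException if the expression is malformed, so declare or handle it.
CronTrigger cronTrigger = new CronTrigger("myCronTrigger", null, "0 0 6 ? * MON");

sched.scheduleJob(jobDetail, cronTrigger);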


Software Documentation with Trac

Packt
21 Oct 2009
4 min read
Documentation: if there is one word that instills fear in most developers, it must be this one. No one in their right mind would argue against the value of documentation, but it is the actual act of writing it that concerns developers so. The secret of creating good documentation is to make the process of doing so as painless as possible, and, if we are lucky, maybe even attractive to the developers. The only practical way to achieve that is to reduce friction. The last thing we need when we are in the middle of fixing a bug is to wrestle with our word processor, or even worse, try to find the right document to update.

What's in a name? Throughout the rest of this article, we will refer to various URLs that point to specific areas of our Trac environment, Subversion repository, or WebDAV folders. Whenever you see servername, replace it with your own server name.

Making Documentation Easy

One of the reasons Trac works so well for managing software development is that it is browser based. Apart from our development environment, the browser, along with our email client, is the application most likely to be installed and running on our computer. If access to our Trac environment is only a click away, it stands to reason that we are more likely to use it. We can refer to Trac as a "wiki on steroids" because of the way the developers have integrated the typical features of a wiki throughout the whole product. However, for all the extra features and integration, at its heart Trac is basically just a wiki, and this is the main reason why it is so useful in helping smooth the documentation process.

A wiki is a web application that allows visitors to create and modify its content. Let's expand on that slightly: as well as letting us view content, like a normal website, a wiki lets us create or edit the content as we desire. This could take the form of creating new content, or simply touching up the spelling of something that already exists. While the general idea of a wiki is that anyone can edit it, in practice this can lead to abuse, vandalism, or spam. The obvious solution is to require people to authenticate before they edit.

Do we really need this security? Yes. Having these security requirements provides us with accountability: we will always be able to see when something was done, and by enforcing security we can also see who did it. While this does cause some administrative overhead to create and maintain authentication details for everyone involved with our development projects, the benefits outweigh the costs.

Accessing Trac

Before we look at how to modify and create pages, let's see how our Trac environment looks to a normal (i.e. unauthenticated) user. To do this, open a web browser, enter the URL http://servername/projects/sandbox into the address bar, and press the Enter key. This takes us to the default page (which is actually called WikiStart). When we access our project as an unauthenticated (or anonymous, in Trac parlance) user, the majority of it looks and acts like a normal website, and the wiki in particular seems just like the usual collection of interlinked pages. However, as soon as we authenticate ourselves to Apache (which passes that information on to Trac), it all changes. If we click the Login link in the top right of the page now, we will be presented with our browser's usual authentication dialog box, as shown in the following screenshot. Input the proper username and password and click OK.
If we enter them correctly, we will be taken back to the same page, but this time there will be two differences. Firstly, instead of the Login link we will see the text "logged in as" followed by the username we used, and a Logout link. Secondly, if we scroll to the bottom of the page, there are some buttons that allow us to modify the page in various ways. Anonymous users have permission only to view wiki pages, while authenticated users have full control. We should try that out now: click the Logout link and scroll down again, and you will see that the buttons are absent.


Binding Web Services in ESB—Web Services Gateway

Packt
21 Oct 2009
6 min read
Web Services

Web services separate the service contract from the service interface. This separation is one of the many characteristics required for an SOA-based architecture. Thus, even though it is not mandatory to use web services to implement an SOA-based architecture, they are clearly a great enabler for SOA.

Web services are hardware, platform, and technology neutral. The producers and/or consumers can be swapped without notifying the other party, yet the information can flow seamlessly. An ESB can play a vital role in providing this separation.

Binding Web Services

A web service's contract is specified by its WSDL, which gives the endpoint details needed to access the service. When we bind the web service to an ESB, the result is a different endpoint, which we can advertise to the consumer. When we do so, it is critical that we don't lose any information from the original web service contract.

Why Another Indirection?

There can be multiple reasons why we require another level of indirection between the consumer and the provider of a web service by binding at an ESB. Systems exist today to support business operations as defined by the business processes; if a system doesn't support a business process of an enterprise, that system is of little use. Business processes are never static: if they remained static, there would be no growth or innovation, and the business would be doomed to fail. Hence, systems and services should facilitate agile business processes. Good architecture and design practices will help to build "services to last", but that doesn't mean our business processes have to be static. Instead, business processes will evolve by leveraging the existing services. Thus, we need a process workbench to assemble and orchestrate services, with which we can "mix and match" the services. The ESB is one of the architectural topologies where we can do this mix and match. To do this, we first bind the existing (and long-lasting) services to the ESB, then leverage ESB services, such as aggregation and translation, to mix and match them and advertise new processes for the business to use. Moreover, there are cross-service concerns, such as versioning, management, and monitoring, which we need to take care of to implement SOA at higher levels of maturity. The ESB is again one way to address these aspects of service orientation.

HTTP

HTTP is the World Wide Web (WWW) protocol for information exchange. HTTP is based on character-oriented streams and is firewall-friendly. Hence, we can also exchange XML streams (which are XML-encoded character streams) over HTTP. In a web service, we exchange XML in the SOAP (Simple Object Access Protocol) format over HTTP, so the HTTP headers exchanged will be slightly different from those of a normal web page interaction. Sample web service request headers are shown as follows:

GET /AxisEndToEnd/services/HelloWebService?WSDL HTTP/1.1
User-Agent: Java/1.6.0-rc
Host: localhost:8080
Accept: text/html, image/gif, image/jpeg, *; q=.2, */*; q=.2
Connection: keep-alive

POST /AxisEndToEnd/services/HelloWebService HTTP/1.0
Content-Type: text/xml; charset=utf-8
Accept: application/soap+xml, application/dime, multipart/related, text/*
User-Agent: Axis/1.4
Host: localhost:8080
Cache-Control: no-cache
Pragma: no-cache
SOAPAction: ""
Content-Length: 507

The first line of each request contains a method, a URI, and an HTTP version, each separated by one or more spaces. The succeeding lines contain more information regarding the web service exchange.
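To make the SOAP-over-HTTP exchange above concrete, here is a minimal sketch that posts a SOAP 1.1 envelope to the HelloWebService endpoint seen in the captured headers, using only java.net.HttpURLConnection. The envelope body (a sayHello element and its namespace) is an illustrative assumption; the real payload is defined by the service's WSDL.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class SoapOverHttpClient {
    public static void main(String[] args) throws Exception {
        // Illustrative SOAP 1.1 envelope; the operation name and namespace are assumptions.
        String envelope =
            "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
            "<soapenv:Body><sayHello xmlns=\"http://example.com/hello\">" +
            "<name>World</name></sayHello></soapenv:Body></soapenv:Envelope>";

        URL url = new URL("http://localhost:8080/AxisEndToEnd/services/HelloWebService");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // The same headers seen in the captured request above.
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        conn.setRequestProperty("SOAPAction", "\"\"");

        OutputStream out = conn.getOutputStream();
        out.write(envelope.getBytes("UTF-8"));   // XML travels as a character stream over HTTP
        out.close();

        BufferedReader in = new BufferedReader(
            new InputStreamReader(conn.getInputStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);            // SOAP response envelope
        }
        in.close();
    }
}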
ESB-based integration heavily leverages the HTTP protocol because of its open nature, maturity, and acceptance. We will now look at the support provided by ServiceMix for using HTTP.

ServiceMix's servicemix-http

Binding external web services at the ESB layer can be done in multiple ways, but the best way is to leverage JBI components such as the servicemix-http component within ServiceMix. We will look in detail at how to bind web services onto the JBI bus.

servicemix-http in Detail

servicemix-http is used for HTTP or SOAP binding of services and components into the ServiceMix NMR. For this, ServiceMix uses an embedded HTTP server based on Jetty. The following are the two ServiceMix components:

- org.apache.servicemix.components.http.HttpInvoker
- org.apache.servicemix.components.http.HttpConnector

As of today, these components are deprecated and their functionality is replaced by the servicemix-http standard JBI component. A few of the features of servicemix-http are as follows:

- Supports SOAP 1.1 and 1.2
- Supports MIME with attachments
- Supports SSL
- Supports WS-Addressing and WS-Security
- Supports WSDL-based and XBean-based deployments
- Support for all MEPs as consumers or providers

Since servicemix-http can function both as a consumer and as a provider, it can effectively replace the previous HttpInvoker and HttpConnector components.

Consumer and Provider Roles

When we speak of the consumer and provider roles for the ServiceMix components, the difference is very subtle at first sight, but very important from a programmer's perspective. The following figure shows the consumer and provider roles in the ServiceMix ESB:

The figure shows two instances of servicemix-http deployed in the ServiceMix ESB, one in a provider role and the other in the consumer role. As is evident, these roles are with respect to the NMR of the ESB. In other words, a consumer role implies that the component is a consumer to the NMR, whereas a provider role implies that the NMR is the consumer to the component. Based on these roles, the NMR takes responsibility for any format or protocol conversions for the interacting components.

Let us also introduce two more parties here to make the roles of consumer and provider clear—a client and a service. In a traditional programming paradigm, the client interacts directly with the server (or service) to avail itself of the functionality. In the ESB model, both the client and the service interact with each other only through the ESB. Hence, the client and the service need peers with their respective roles assigned, which in turn interact with each other. Thus, the ESB consumer and provider roles can be regarded as the peer roles for the client and the service respectively. Any client request will be delegated to the consumer peer, which in turn interacts with the NMR. This is because the client is unaware of the ESB and of the NMR protocol or format; however, the servicemix-http consumer knows how to interact with the NMR. Hence, any request from the client will be translated by the servicemix-http consumer and delivered to the NMR. On the service side, the NMR needs to invoke the service, but the service is neutral of any specific vendor's NMR and doesn't understand the NMR language as such. A peer in the provider role helps here. The provider receives the request from the NMR, translates it into the actual format or protocol of the service, and invokes the service. Any response follows the reverse sequence.
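The role distinction can be easier to see in code. The sketch below is purely conceptual; it is not the ServiceMix or JBI API, and all of the type names (BusMessage, GreetingService, ConsumerPeer, ProviderPeer) are invented for illustration. The consumer peer translates a client call into a bus-style message, and the provider peer translates a bus-style message back into a native call on the target service.

import java.util.HashMap;
import java.util.Map;

// Invented types for illustration only; not the ServiceMix API.
public class ConsumerProviderRolesSketch {

    // A stand-in for the normalized message travelling on the bus (NMR).
    static class BusMessage {
        final Map<String, String> properties = new HashMap<>();
        String payload;
    }

    // A stand-in for the target service, which knows nothing about the bus.
    interface GreetingService {
        String greet(String name);
    }

    // Provider role: the bus is the consumer of this peer; it turns bus
    // messages into native calls on the service.
    static class ProviderPeer {
        private final GreetingService service;
        ProviderPeer(GreetingService service) { this.service = service; }
        BusMessage handle(BusMessage request) {
            BusMessage response = new BusMessage();
            response.payload = service.greet(request.payload);
            return response;
        }
    }

    // Consumer role: this peer is a consumer to the bus; it turns a client
    // call into a bus message and hands it over for routing.
    static class ConsumerPeer {
        private final ProviderPeer route; // the NMR routing is collapsed to a direct call here
        ConsumerPeer(ProviderPeer route) { this.route = route; }
        String call(String clientPayload) {
            BusMessage request = new BusMessage();
            request.payload = clientPayload;
            return route.handle(request).payload;
        }
    }

    public static void main(String[] args) {
        GreetingService service = name -> "Hello " + name;
        ConsumerPeer consumer = new ConsumerPeer(new ProviderPeer(service));
        // The client only ever talks to the consumer peer.
        System.out.println(consumer.call("ServiceMix"));
    }
}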

Change Control for Personal Projects - Subversion Style

Packt
21 Oct 2009
5 min read
Who Should Read This

Read on if you are new to change control, or believe that change control only applies to software, or that it is only meant for large projects. If you are a software pro working on large software projects, you can still read this if you want a gentle introduction to Subversion, or svn as it is called.

Introduction

We have all heard those trite remarks about change -- “... change is the only constant ...”, or similar ones, especially before an unpleasant corporate announcement. These overused remarks about change are unfortunately true. During the course of a day, we make numerous (hopefully!) interrelated changes, updates, or transformations to our work products to reach specific project goals. Needless to say, these changes need to be tracked along with the rationale behind each one if we are to prevent ourselves from repeating mistakes, or simply want to recall why we did what we did one month ago! Note that we are not talking about only code or documents here; your work products could be a portfolio of photographs, animations, or some arbitrary binary format.

A change control discipline also gives you additional advantages, such as being able to develop simultaneous versions of work products for different purposes or clients, rolling back to a previous arbitrary version, or setting up trial development in a so-called branch to bring it back to the main work stream after due review. You also have a running history of how your work product has evolved over time and features. Fetching from a change-managed repository also prevents you from creating those fancifully named multiple copies of a file just to keep track of its versions. To reiterate: we use the words 'work product' and 'development' in the broadest sense, and not just as applied to software. You might as well be creating a banner ad for your client as much as a Firefox plugin. In the rest of this article we will see how to build a simple personal change control discipline for your day-to-day work using a version control tool. As you will note, 'control' and 'management' have been used interchangeably, though a little hair splitting will yield rich dividends in terms of how different these terms are.

Subversion

Subversion is a version control system available on Linux (and similar) platforms. If you are trapped in a proprietary world by choice, circumstance, or compulsion, you should try TortoiseSVN. Here, we confine ourselves to the Linux platform. Subversion works by creating a time line of your work products from their inception (or from the point they are brought under version control) to the present point in time, by capturing snapshots of your work products at discrete points that you decide. Each snapshot is a version. You can traverse this time line and extract specific versions for use. How does Subversion do it? It versions entire directories. A new version of your directory will be created even if you change one file in it. Don't worry; this does not lead to an explosion of file size with each version.

Explaining some terminology, albeit informally, should make the going easier from here. Subversion stores your project(s) in a repository. For the purpose of this article, our repository will stay on the local machine. A revision is nothing but a particular snapshot of the project directory. A working directory is your sandbox. This is where you check out a particular version of your project directory from the repository, make any modifications to it, and then do a check-in back into the repository.
Revision numbers are bumped up with each check-in. You can revert a configuration item, which is like undoing any changes you made. If all this sounds a little abstruse, don't worry, because we will shortly set up our repository so that you can try things out. A commit is when you..., well, commit a change done to a file into the repository. Subversion is mostly bundled with a Linux distribution. Find out if you have it with a 'man svn', 'svn -h', or 'whereis svn' command.

Setting up Your Repository

You can set up your repository in your home directory if you are working in a shared environment. If you have a machine to yourself, you might want to create an 'svn' account with /sbin/nologin (politely refuses logins) as the shell. Your repository might then be '/home/svn/repos'. Subversion is a command line tool, but the only command you will ever issue for the purpose of this article will be to set up your repository:

$ svnadmin create /path/to/your/repository

The rest, as they say, is GUI!

Let Us Get Visual

A GUI for Subversion is a great tool for learning and working, even if you decide to settle for the command line once you get more proficient. eSvn (http://zoneit.free.fr/esvn/) is a Qt-based graphical front end for Subversion. Follow the instructions with the download to compile and install eSvn. Run esvn and this is how it will look with the File | Options... dialog open. Make sure you enter the correct path to svn, if not for the other items.


Data Migration Scenarios in SAP Business ONE Application- part 1

Packt
20 Oct 2009
25 min read
Just recently, I found myself in a data migration project that served as an eye-opener. Our team had to migrate a customer system that utilized Act! and Peachtree. Both systems are not very famous for having good accessibility to their data. In fact, Peachtree is a non-SQL database that does not enforce data consistency. Act! also uses a proprietary table system that is based on a non-SQL database. The general migration logic was rather straightforward. However, our team found that the migration and consolidation of data into the new system posed multiple challenges, not only on the technical front, but also for the customer when it came to verifying the data. We used the on-the-edge tool xFusion Studio for data migration. This tool allows migrating and synchronizing data by using simple and advanced SQL data massaging techniques. The xFusion Studio tool has a graphical representation of how the data flows from the source to the target. When I looked at one section of this graphical representation, I started humming the song Welcome to the Jungle. Take a look at the following screenshot and find out why Guns N' Roses may have provided the soundtrack for this data migration project:

What we learned from the above screenshot is quite obvious, and I have dedicated this article to helping you overcome these potential issues. Keep it simple and focus on information rather than data. You know that just having more data does not always mean you've added more information. Sometimes, it just means a data jungle has been created. Making the right decisions at key milestones during the migration can keep the project simple and guarantee success. Your goal should be to consolidate the islands of data into a more efficient and consistent database that provides real-time information.

What you will learn about data migration

In order to accomplish the task of migrating data from different sources into the SAP Business ONE application, a strategy must be designed that addresses the individual needs of the project at hand. The data migration strategy uses proven processes and templates. The data migration itself can be managed as a mini project, depending on the complexity. During the course of this article, the following key topics will be covered. The goal is to help you make crucial decisions, which will keep a project simple and manageable:

- Position the data migration tasks in the project plan – We will start by positioning the data migration tasks in the project plan. I will further define the required tasks that you need to complete as a part of the data migration.
- Data types and scenarios – With the general project plan structure in place, it is time to cover the common terms related to data migration. I will introduce you to the main aspects, such as master data and transactional data, as well as the impact they have on the complexity of data migration.
- SAP tools available for migration – During the course of our case study, I will introduce you to the data migration tools that come with SAP. However, there are also more advanced tools for complex migrations. You will learn about the main player in this area and how to use it.
- Process of migration – To avoid problems and guarantee success, the data migration project must follow a proven procedure. We will update the project plan to include the procedure and will also use the process during our case study.
- Making decisions about what data to bring – I mentioned that it is important to focus on information versus data.
With the knowledge of the right tools and procedures, it is a good time to summarize the primary known issues and explain how to tackle them.

The project plan

We are still progressing in Phase 2 – Analysis and Design. The data migration is positioned in the Solution Architecture section and is called Review Data Conversion Needs (Amount and Type of Data). A thorough evaluation of the data conversion needs will also cover the next task in the project plan, called Review Integration Points with any 3rd Party Solution. As you can see, the data migration task stands as a small task in the project plan. But as mentioned earlier, it can wind up being a large project depending on the number and size of data sources that need to be migrated. To honor this, we will add some more details to this task. As the task name suggests, we must review data conversion needs and identify the amount and type of data. This simple task must be structured in phases, just like the entire project that is structured in phases. Therefore, data migration needs to go through the following phases to be successful:

1. Design - Identify all of the data sources
2. Extraction of data into Excel or SQL for review and consistency
3. Review of data and verification (via customer feedback)
4. Load into the SAP system and verification

Note that the validation process and the consequential load could be iterative processes. For example, if the validated data has many issues, it only makes sense to perform a load into SAP if an additional verification takes place before the load. You only want to load data into an SAP system for testing if you know the quality of the records to be loaded is good. Therefore, new phases were added in the project plan (seen below). Please do this in your project too, based on the actual complexity and the number of data sources you have. A thorough look at the tasks above will be provided when we talk about the process of migration. Before we do that, the basic terms related to data migration will be covered.

Data sources—where is my data

There is a great variety in the potential types of data sources. We will now identify the most common sources and explain their key characteristics. However, if there is a source that is not mentioned here, you can still migrate the data easily by transitioning it into one of the following formats.

Microsoft Excel and text data migration

The most common format for data migration is Excel, or text-based files. Text-based files are formatted using commas or tabs as field separators. When a comma is used as a field separator, the file format is referred to as Comma Separated Values (CSV). Most of the migration templates and strategies are based on Excel files that have specific columns where you can manually enter data, or copy and paste larger chunks. Therefore, if there is any way for you to extract data from your current system and present it in Excel, you have already done a great deal of data migration work.

Microsoft Access

An Access database is essentially an Excel sheet on a larger scale with added data consistency capability. It is a good idea to consider extracting Access tables to Excel in order to prepare for data migration.

SQL

If you have very large sets of data, then instead of using Excel, we usually employ an SQL database. The database then has a set of tables instead of Excel sheets. Using SQL tables, we can create SQL statements that can verify data and analyze result sets.
Please note that you can use any SQL database, such as Microsoft SQL Server, Oracle, IBM DB, and so on.

SaaS (NetSuite, Salesforce)

SaaS stands for Software as a Service. Essentially, it means you can use software functionality based on a subscription. However, you don't own the solution. All of the hardware and software is installed at the service center, so you don't need to worry about hardware and software maintenance. However, keep in mind that these services don't allow you to manage the service packs according to your requirements. You need to adjust your business to the schedule of the SaaS company. If you are migrating from a modern SaaS solution, such as Salesforce or NetSuite, you will probably know that the data is not at your site, but rather stored at your solution hosting provider. Getting the data out to migrate to another solution may be done by obtaining reports, which could then be saved in an Excel format.

Other legacy data

The term legacy data is often mentioned when evaluating larger old systems. Legacy data basically comprises a large set of data that a company is using on mostly obsolete systems.

AS/400 or Mainframe

The IBM AS/400 is a good example of a legacy data source. Experts who are capable of extracting data from these systems are highly sought after, and so the budget must be on a higher scale. AS/400 data can often be extracted into a text or an Excel format. However, the data may come without headings. The headings are usually documented in a file that describes the data. You need to make sure that you get the file definitions, without which the pure text files may be meaningless. In addition, the media format is worth considering. An older AS/400 system may utilize a backup tape format which is not available on your Intel server.

Peachtree, QuickBooks, and Act!

Another potential source for data migration may be a smaller PC-based system, such as Peachtree, QuickBooks, or Act!. These systems have a different data format, and are based on non-SQL databases. This means the data cannot be accessed via SQL. In order to extract data from those systems, the proprietary API must be used. For example, when Peachtree displays data in the application's forms, it uses the program logic to put the pieces together from different text files. Getting data out of these types of systems is difficult and sometimes impossible. It is recommended to employ the relevant API to access the data in a structured way. You may want to run reports and export the results to text or Excel.

Data classification in SAP Business ONE

There are two main groups of data that we will migrate to the SAP Business ONE application: master data and transaction data.

Master data

Master data is the basic information that SAP Business ONE uses to record transactions (for example, business partner information). In addition, information about your products, such as items, finished goods, and raw materials, is considered master data. Master data should always be migrated if possible. It can easily be verified and structured in an Excel or SQL format. For example, the data could be displayed using Excel sheets. You can then quickly verify that the data is showing up in the correct columns. In addition, you can see if the data is broken down into its required components. For example, each Excel column should represent a target field in SAP Business ONE. You should avoid having a single column in Excel that provides data for more than one target in SAP Business ONE.
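As a small illustration of this verification step, the following Java sketch (not part of the article; the file name, the expected column count, and the assumption that the first column holds the mandatory BP Code are all made up for the example) scans a CSV extract and reports rows that would not map cleanly onto an import template.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class CsvSanityCheck {
    public static void main(String[] args) throws IOException {
        // Hypothetical intermediary file produced from the source system.
        Path csv = Path.of("bp-migration.csv");
        int expectedColumns = 5;          // assumed template width
        int bpCodeColumn = 0;             // assume the first column is the mandatory BP Code

        List<String> lines = Files.readAllLines(csv);
        for (int i = 0; i < lines.size(); i++) {
            String line = lines.get(i);
            if (line.isBlank()) {
                continue;
            }
            // The -1 limit keeps trailing empty fields so the column count is honest.
            String[] fields = line.split(",", -1);
            if (fields.length != expectedColumns) {
                System.out.printf("Row %d: expected %d columns but found %d%n",
                        i + 1, expectedColumns, fields.length);
            } else if (fields[bpCodeColumn].trim().isEmpty()) {
                System.out.printf("Row %d: mandatory BP Code is empty%n", i + 1);
            }
        }
    }
}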
Transaction data

Transaction data consists of proposals, orders, invoices, deliveries, and other similar information that combines master data to create a unique business document. Customers often want to migrate historical transactions from older systems. However, the consequences of doing this may have a landslide effect. For example, inventory is valuated based on specific settings in the finance section of a system. If these settings are not identical in the new system, transactions may look different in the old and the new system. This makes the migration very risky, as the data verification becomes difficult on the customer side. I recommend making historical transactions available via a reporting database. For example, sales history must often be available when migrating data. You can create a reporting database that provides sales history information. The user can use this data via reports within the SAP Business ONE application. Therefore, closed transactions should be migrated via a reporting database. Closed transactions are all of the business-related activities that were fully completed in the old system. Open transactions, on the other hand, are all of the business-related activities that are currently not completed. It makes sense that the open transactions be migrated directly to SAP, and not to a history database, because they will be completed within the new SAP system. As a result of the data migration, you would be able to access sales history information from within SAP by accessing a reporting database. Open transactions will be completed within SAP, and then consequently lead to new transactions in SAP. Create a history database for sales history and manually enter open transactions.

SAP DI-API

Now that we know the main data types for an SAP migration, and the most common sources, we can take a brief look at the way the data is inserted into the SAP system. Based on the SAP guidelines, you are not allowed to insert data directly into the underlying SQL tables. The reason for that is that it can cause inconsistencies. When SAP works with the database, multiple tables are often updated. If you manually update a table to insert data, there is a good chance that another table has a link that also requires updating. Therefore, unless you know the exact table structure for the data you are trying to update, don't mess with the SAP SQL tables. If you carefully read this and understand the table structure, you will know that there may be situations where you decide to access the tables directly. If you decide to insert data directly into the SAP database tables, you run the risk of losing your warranty.

Migration scenarios and key decisions

Data migration not only takes place as a part of a new SAP implementation, but also when you have a running system and you want to import leads or a list of new items. Therefore, it is a good idea to learn about the scenarios that you may come across and be able to select the right migration and integration tools. As outlined before, data can be divided into two groups: master data and transaction data. It is important that you separate the two, and structure each data migration accordingly. Master data is an essential component for manifesting transactions. Therefore, even if you need to bring over transactional data, the master data must already be in place. Always start with the master data alongside a verification procedure, and then continue with the relevant transaction data.
Let's now briefly look at the most common situations where you may require the evaluation of potential data migration options.

New company (start-up)

In this setup, you may not have extensive amounts of existing data to migrate. However, you may want to bring over lead lists or lists of items. During the course of this article, we will import a list of leads into SAP using the Excel Import functionality. Many new companies require the capability to easily import data into SAP. As you already know by now, the import of leads and item information is considered importing master data. Working with this master data, by entering sales orders and so forth, constitutes transaction data. Transaction data is considered closed if all of the relevant actions are performed. For example, a sales order is considered closed if the items are delivered, invoiced, and paid for. If the chain of events is not complete, the transaction is open.

Islands of data scenario

This is the classic situation for an SAP implementation. You will first need to identify the available data sources and their formats. Then, you select the master data you want to bring over. With multiple islands of data, an SAP master record may result from more than one source. A business partner record may come, in part, from an existing accounting system, such as QuickBooks or Peachtree, whereas other parts may come from a CRM system, such as Act!. For example, the billing information may be retrieved from the finance system and the relevant lead and sales information, such as specific contacts and notes, may come from the CRM system. In such a case, you need to merge this information into a new consistent master record in SAP. For this situation, first manually put the pieces together. Once the manual process works, you can attempt to automate the process. Don't try to directly import all of the data. You should always establish an intermediary level that allows for data verification. Only then import the data into SAP. For example, if you have QuickBooks and Act!, first merge the information into Excel for verification, and then import it into SAP. If the amount of data is large, you can also establish an SQL database. In that case, the Excel sheets would be replaced by SQL tables.

IBM legacy data migration

The migration of IBM legacy data is potentially the most challenging because the IBM systems are not directly compatible with Windows-based systems. Therefore, almost naturally, you will establish a text-based, or an Excel-formatted, representation of the IBM data. You can then proceed with verifying the information.

SQL migration

The easiest migration type is obviously the one where all of the data is already structured and consistent. However, you will not always have documentation of the table structure where the data resides. In this case, you need to create queries against the SQL tables to verify the data. The queries can then be saved as views. The views you create should always represent a consistent set of information that you can migrate. For example, if you have one table with address information, and another table with customer ID fields, you can create a view that consolidates this information into a single consistent set.

Process of migration for your project

I briefly touched upon the most common data migration scenarios so you can get a feel for the process. As you can see, whatever the source of data is, we always attempt to create an intermediary platform that allows the data to be verified.
This intermediary platform is most commonly Excel or an SQL database. The process of data migration has the following subtasks:

- Identify available data sources
- Structure data into master data and transaction data
- Establish an intermediary platform with Excel or SQL
- Verify data
- Match data columns with Excel templates
- Run migration based on templates and verify data

Based on this procedure, I have added more detail to the project plan. As you can see in this example, based on the required level of detail, we can make adjustments to the project plan to address the requirements:

SAP standard import features

Let's take a look at the available data exchange features in SAP. SAP provides two main tools for data migration. The first option is to use the available menu in the SAP Business ONE client interface to exchange data. The other option is to use the Data Transfer Workbench (DTW).

Standard import/export features—walk-through

You can reach the Import from Excel form via Administration | Data Import/Export. As you can see in the following screenshot, on the top right section of the form, the type of import is a drop-down selection. The options are BP and Items. In the screenshot, we have selected BP, which allows business partner information to be imported. There are drop-down fields that you can select based on the data you want to import. However, keep in mind that certain fields are mandatory, such as the BP Code field, whereas others are optional. The fields you select are associated with a column, as you can see here:

If you want to find out whether a field is mandatory or not, simply open SAP and attempt to enter the data directly in the relevant SAP form. For example, if you are trying to import business partner information, enter the fields you want to import and see if the record can be saved. If you are missing any mandatory fields, SAP will provide an error message. You can modify the data that you are planning to import based on that. When you click on the OK button in the Import from Excel form (seen above), the Excel sheet with all of the data needs to be selected. In the following screenshot, you can see how the Excel sheet in our example looks. For example, column A has all of the BP Codes. This is in line with the mapping of columns to fields that we can see on the Import from Excel form. Please note that the file we select must be in a .txt format. For this example, I used the Save As feature in Excel (seen in the following screenshot) to save the file in the Text MS-DOS (*.txt) format. I was then able to select the BP Migration.txt file. This is actually a good thing because it points to the fact that you can use any application that can save data in the .txt format as the data source. The following screenshot shows the Save As screen:

I imported the file and a success message confirms that the records were imported into SAP:

A subsequent check in SAP confirms that the BP records that I had in the text file are now available in SAP:

In the example, we only used two records. It is recommended to start out with a limited number of records to verify that the import is working. Therefore, you may start by reducing your import file to five records. This has the advantage that the import does not take a long time and you can immediately verify the result. See the following screenshot:

Sometimes, it is not clear what kind of information SAP expects when importing. For example, at first Lead, Customer, and Vendor were used in Column C to indicate the type of BP that was to be imported.
However, this resulted in an error message upon completion of the import. Therefore, system information was activated to check what information SAP requires for the BP Type representation. As you can see in the screenshot of the Excel sheet you get when you click on the OK button in the Import from Excel form, the BP Type information is indicated by only one letter: L, C, or V. In the example screenshot above, you can clearly see L in the lower left section. The same thing is done for Country in the Addresses section. You can check that by navigating to Administration | Sales | Countries, and then hovering over the country you will be importing. In my example, USA was internally represented by SAP as US. It is a minor issue. However, when importing data, all of these issues need to be addressed. Please note that the file you are trying to import should not be open in Excel at the same time, as this may trigger an error. The Excel or text file does not have a header with a description of the data.

Standard import/export features for your own project

SAP's standard import functionality for business partners and items is very straightforward. For your own project, you can prepare an Excel sheet for business partners and items. If you need to import BP or item information from another system, you can get this done quickly. If you get an error during the import process, try to manually enter the data in SAP. In addition, you can use the System Information feature to identify how SAP stores information in the database. I recommend you first create an Excel sheet with a maximum of two records to see if the basic information and data format is correct. Once you have this running, you can add all of the data you want to import. Overall, this functionality is a quick way to get your own data into the system. This feature can also be used if you regularly receive address information. For example, if you have salespeople visiting trade fairs, you can provide them with the Excel sheet that you may have prepared for BP import. The salespeople can directly add their information there. Once they return from the trade fair with the Excel files, you can easily import the information into SAP and schedule follow-up activities using the Opportunity Management System. The item import is useful if you work with a vendor who updates his or her price lists and item information on a monthly basis. You can prepare an Excel template where the item information will regularly be entered, and you can easily import the updates into SAP.

Data Transfer Workbench (DTW)

The SAP standard import/export features are straightforward, but may not address the full complexity of the data that you need to import. For this situation, you may want to evaluate the SAP Data Transfer Workbench (DTW). The functionality of this tool provides a greater level of detail to address the potential data structures that you want to import. To understand the basic concept of the DTW, it is a good idea to look at the different master data sections in SAP as business objects. A business object groups related information together. For example, BP information can have much more detail than what was previously shown in the standard import.

The DTW templates and business objects

To better understand the business object metaphor, you need to navigate to the DTW directory and evaluate the Templates folder. The templates are organized by business objects.
The oBusinessPartners business object is represented by the folder with the same name (seen below). In this folder, you can find Excel template files that can be used to provide information for this type of business object. The following objects are available as Excel templates:

- BPAccountReceivables
- BPAddresses
- BPBankAccounts
- BPPaymentDates
- BPPaymentMethods
- BPWithholdingTax
- BusinessPartners
- ContactEmployees

Please notice that these templates are Excel .xlt files, which is the Excel template extension. It is a good idea to browse through the list of templates and see the relevant templates. In a nutshell, you essentially add your own data to the templates and use DTW to import the data.

Connecting to DTW

In order to work with DTW, you need to connect to your SAP system using the DTW interface. The following screenshot shows the parameters I used to connect to the Lemonade Stand database:

Once you are connected, a wizard-type interface walks you through the required steps to get started. Look at the next screenshot:

The DTW examples and templates

There is also an example folder in the DTW installation location on your system. This example folder has information about how to add information to your Excel templates. The following screenshot shows an example for business partner migration. You can see that the Excel template does have a header line on top that explains the content in the particular column. The actual template files also have comments in the header file, which provide information about the data format expected, such as String, Date, and so on. See the example in this screenshot:

The actual template is empty and you need to add your information as shown here:

DTW for your own project

If you realize that the basic import features in SAP are not sufficient, and your requirements are more challenging, evaluate DTW. Think of the data you want to import as business objects where information is logically grouped. If you are able to group your data together, you can modify the Excel templates with your own information. The DTW example folder provides working examples that you can use to get started. Please note that you should establish a test database before you start importing data this way. This is because once new data arrives in SAP, you need to verify the results based on the procedure discussed earlier. In addition, be prepared to fine-tune the import. Often, an import and data verification process takes four attempts of importing and verification.

Summary

In this article, we covered the tasks related to data migration. This also included some practical examples for simple data imports related to business partner information and items. In addition, more advanced topics were covered by introducing the SAP DTW (Data Transfer Workbench) and the related aspects to get you started. During the course of this article, we positioned the data migration task in the project plan. The project plan was then fine-tuned with more detail to do some justice to the potential complexity of a data migration project. The data migration tasks established a process, from design to data mapping and verification of the data. Notably, the establishment of an intermediary data platform was recommended for your projects. This will help you verify data at each step of the migration. The key message of keeping it simple will be the basis for every migration project. The data verification task ensures simplicity and the quality of your data.
If you have read this article, you may be interested to view:

- Competitive Service and Contract Management in SAP Business ONE Implementation: Part 1
- Competitive Service and Contract Management in SAP Business ONE Implementation: Part 2
- Data Migration Scenarios in SAP Business ONE Application - part 2


SOA—Service Oriented Architecture

Packt
20 Oct 2009
17 min read
What is SOA?

SOA is the acronym for Service Oriented Architecture. As it has come to be known, SOA is an architectural design pattern in which several guiding principles determine the nature of the design. Basically, SOA states that every component of a system should be a service, and the system should be composed of several loosely-coupled services. A service here means a unit of a program that serves a business process. "Loosely-coupled" here means that these services should be independent of each other, so that changing one of them does not affect any other service. SOA is not a specific technology, nor a specific language. It is just a blueprint, or a system design approach. It is an architecture model that aims to enhance the efficiency, agility, and productivity of an enterprise system. The key concepts of SOA are services, high interoperability, and loose coupling.

Several other architectures/technologies, such as RPC, DCOM, and CORBA, have existed for a long time and attempted to address the client/server communication problems. The difference between SOA and these other approaches is that SOA tries to address the problem from the client side, and not from the server side. It tries to decouple the client side from the server side, instead of bundling them, to make the client-side application much easier to develop and maintain. This is exactly what happened when object-oriented programming (OOP) came into play 20 years ago. Prior to object-oriented programming, most designs were procedure-oriented, meaning the developer had to control the process of an application. Without OOP, in order to finish a block of work, the developer had to be aware of the sequence that the code would follow. This sequence was then hard-coded into the program, and any change to this sequence would result in a code change. With OOP, an object simply supplied certain operations; it was up to the caller of the object to decide the sequence of those operations. The caller could mash up all of the operations, and finish the job in whatever order needed. There was a paradigm shift from the object side to the caller side.

This same paradigm shift is happening today. Without SOA, every application is a bundled, tightly coupled solution. The client-side application is often compiled and deployed along with the server-side applications, making it impossible to quickly change anything on the server side. DCOM and CORBA were on the right track to ease this problem by making the server-side components reside on remote machines. The client application could directly call a method on a remote object, without knowing that this object was actually far away, just like calling a method on a local object. However, the client-side applications remain tightly coupled with these remote objects, and any change to the remote object still results in recompiling or redeploying the client application. Now, with SOA, the remote objects are truly treated as remote objects. To the client applications, they are no longer objects; they are services. The client application is unaware of how the service is implemented, or of the signature that should be used when interacting with those services. The client application interacts with these services by exchanging messages. What a client application knows now is only the interfaces, or protocols, of the services, such as the format of the messages to be passed in to the service, and the format of the expected returning messages from the service.
Historically, there have been many other architectural design approaches, technologies, and methodologies to integrate existing applications. EAI (Enterprise Application Integration) is just one of them. Often, organizations have many different applications, such as order management systems, accounts receivable systems, and customer relationship management systems. Each application has been designed and developed by different people using different tools and technologies at different times, and to serve different purposes. However, between these applications, there are no standard common ways to communicate. EAI is the process of linking these applications and others in order to realize financial and operational competitive advantages. It may seem that SOA is just an extension of EAI. The similarity is that they are both designed to connect different pieces of applications in order to build an enterprise-level system for business. But fundamentally, they are quite different. EAI attempts to connect legacy applications without modifying any of the applications, while SOA is a fresh approach to solving the same problem.

Why SOA?

So why do we need SOA now? The answer is in one word—agility. Business requirements change frequently, as they always have. The IT department has to respond more quickly and cost-effectively to those changes. With a traditional architecture, all components are bundled together with each other. Thus, even a small change to one component will require a large number of other components to be recompiled and redeployed. Quality assurance (QA) effort is also huge for any code change. The processes of gathering requirements, designing, development, QA, and deployment are too long for businesses to wait for, and become actual bottlenecks. To complicate matters further, some business processes are no longer static. Requirements change on an ad-hoc basis, and a business needs to be able to dynamically define its own processes whenever it wants. A business needs a system that is agile enough for its day-to-day work. This is very hard, if not impossible, with existing traditional infrastructure and systems.

This is where SOA comes into play. SOA's basic unit is a service. These services are building blocks that business users can use to define their own processes. Services are designed and implemented so that they can serve different purposes or processes, and not just specific ones. No matter what new processes a business needs to build or what existing processes a business needs to modify, the business users should always be able to use the existing service blocks, in order to compete with others according to current market conditions. Also, if necessary, some new service blocks can be used. These services are also designed and implemented so that they are loosely coupled, and independent of one another. A change to one service does not affect any other service. Also, the deployment of a new service does not affect any existing service. This greatly eases release management and makes agility possible. For example, a GetBalance service can be designed to retrieve the balance for a loan. When a borrower calls in to query the status of a specific loan, this GetBalance service may be called by the application that is used by the customer service representatives. When a borrower makes a payment online, this service can also be called to get the balance of the loan, so that the borrower will know the balance of his or her loan after the payment.
Yet in the payment posting process, this service can still be used to calculate the accrued interest for a loan, by multiplying the balance by the interest rate. Even further, a new process can be created by business users to utilize this service whenever a loan balance needs to be retrieved. The GetBalance service is developed and deployed independently from all of the above processes. Actually, the service exists without even knowing who the clients will be, or even how many clients there will be. All of the client applications communicate with this service through its interface, and its interface will remain stable once it is in production. If we have to change the implementation of this service, for example by fixing a bug, or changing an algorithm inside a method of the service, all of the client applications can still work without any change.

When combined with the more mature Business Process Management (BPM) technology, SOA plays an even more important role in an organization's efforts to achieve agility. Business users can create and maintain processes within BPM, and through SOA they can plug a service into any of the processes. The front-end BPM application is loosely coupled to the back-end SOA system. This combination of BPM and SOA will give an organization much greater flexibility in order to achieve agility.

How do we implement SOA?

Now that we've established why SOA is needed by the business, the question becomes—how do we implement SOA? To implement SOA in an organization, three key elements have to be evaluated—people, process, and technology. Firstly, the people in the organization must be ready to adopt SOA. Secondly, the organization must know the processes that the SOA approach will include, including their definition, scope, and priority. Finally, the organization should choose the right technology to implement it. Note that people and processes take precedence over technology in an SOA implementation, but they are outside the scope of this article. In this article, we will assume people and processes are all ready for an organization to adopt SOA.

Technically, there are many SOA approaches. To a certain degree, traditional technologies such as RPC, DCOM, and CORBA, or more modern technologies such as IBM WebSphere MQ, Java RMI, and .NET Remoting, could all be categorized as service-oriented, and can be used to implement SOA for an organization. However, all of these technologies have limitations, such as language or platform specifications, complexity of implementation, or the ability to support binary transports only. The most important shortcoming of these approaches is that the server-side applications are tightly coupled with the client-side applications, which is against the SOA principle. Today, with the emergence of web service technologies, SOA becomes a reality. Thanks to the dramatic increase in network bandwidth, and given the maturity of web service standards such as WS-Security and WS-AtomicTransaction, an SOA back-end can now be implemented as a real system.

SOA from different users' perspectives

However, as we said earlier, SOA is not a technology, but only a style of architecture, or an approach to building software products. Different people view SOA in different ways. In fact, many companies now have their own definitions for SOA. Many companies claim they can offer an SOA solution, while they are really just trying to sell their products. The key point here is—SOA is not a solution. SOA alone can't solve any problem.
It has to be implemented with a specific approach to become a real solution. You can't buy an SOA solution. You may be able to buy some kinds of products to help you realize your own SOA, but this SOA should be customized to your specific environment, for your specific needs. Even within the same organization, different players will think about SOA in quite different ways. What follows are just some examples of how different players in an organization judge the success of an SOA initiative using different criteria. [Gartner, Twelve Common SOA Mistakes and How to Avoid Them, Publication Date: 26 October 2007, ID Number: G00152446]

To a programmer, SOA is a form of distributed computing in which the building blocks (services) may come from other applications or be offered to them. SOA increases the scope of a programmer's product and adds to his or her resources, while also closely resembling familiar modular software design principles.

To a software architect, SOA translates to the disappearance of fences between applications. Architects turn to the design of business functions rather than to self-contained and isolated applications. The software architect becomes interested in collaboration with a business analyst to get a clear picture of the business functionality and scope of the application. SOA turns software architects into integration architects and business experts.

For Chief Information Officers (CIOs), SOA is an investment in the future. Expensive in the short term, its long-term promises are lower costs and greater flexibility in meeting new business requirements. Re-use is the primary anticipated benefit, as a means to reduce the cost and time of new application development.

For business analysts, SOA is the bridge between them and the IT organization. It carries the promise that IT designers will understand them better, because the services in SOA reflect the business functions in business process models.

For CEOs, SOA is expected to help IT become more responsive to business needs and facilitate competitive business change.

Complexities in SOA implementation

Although SOA makes it possible for business parties to achieve agility, SOA itself is technically not simple to implement. In some cases, it even makes software development more complex than ever, because with SOA you are building for unknown problems. On one hand, you have to make sure that the SOA blocks you are building are useful blocks. On the other, you need a framework within which you can assemble those blocks to perform business activities. The technology issues associated with SOA are more challenging than vendors would like users to believe. Web services technology has turned SOA into an affordable proposition for most large organizations by providing a universally-accepted, standard foundation. However, web services play a technology role only for the SOA backplane, which is the software infrastructure that enables SOA-related interoperability and integration. The following figure shows the technical complexity of SOA. It has been taken from Gartner, Twelve Common SOA Mistakes and How to Avoid Them, Publication Date: 26 October 2007, ID Number: G00152446.

As Gartner says, users must understand the complex world of middleware, and use point-to-point web service connections only for small-scale, experimental SOA projects. If the number of services deployed grows to more than 20 or 30, then use a middleware-based intermediary—the SOA backplane.
The SOA backplane could be an Enterprise Service Bus (ESB), a Message-Oriented Middleware (MOM), or an Object Request Broker (ORB). However, in this article, we will not cover it. We will build only point-to-point services using WCF.

Web services

There are many approaches to realizing SOA, but the most popular and practical one is using web services.

What is a web service?

A web service is a software system designed to support interoperable machine-to-machine interaction over a network. A web service is typically hosted on a remote machine (provider), and called by a client application (consumer) over a network. After the provider of a web service publishes the service, the client can discover it and invoke it. The communications between a web service and a client application use XML messages. A web service is hosted within a web server, and HTTP is used as the transport protocol between the server and the client applications. The following diagram shows the interaction of web services:

Web services were invented to solve the interoperability problem between applications. In the early 90s, along with LAN/WAN/Internet development, it became a big problem to integrate different applications. An application might have been developed using C++ or Java, and run on a Unix box, a Windows PC, or even a mainframe computer. There was no easy way for it to communicate with other applications. It was the development of XML that made it possible to share data between applications across hardware boundaries and networks, or even over the Internet.

For example, a Windows application might need to display the price of a particular stock. With a web service, this application can make a request to a URL, and/or pass an XML string such as <QuoteRequest><GetPrice Symbol='XYZ'/></QuoteRequest>. The requested URL is actually the Internet address of a web service, which, upon receiving the above quote request, gives a response, <QuoteResponse><QuotePrice Symbol='XYZ'>51.22</QuotePrice></QuoteResponse>. The Windows application then uses an XML parser to interpret the response package, and display the price on the screen.

The reason it is called a web service is that it is designed to be hosted in a web server, such as Microsoft Internet Information Server, and called over the Internet, typically via the HTTP or HTTPS protocols. This is to ensure that a web service can be called by any application, using any programming language, and under any operating system, as long as there is an active Internet connection, and of course, an open HTTP/HTTPS port, which is true for almost every computer on the Internet. Each web service has a unique URL, and contains various methods. When calling a web service, you have to specify which method you want to call, and pass the required parameters to the web service method. Each web service method will also give a response package to tell the caller the execution results. Besides new applications being developed specifically as web services, legacy applications can also be wrapped up and exposed as web services. So, an IBM mainframe accounting system might be able to provide external customers with a link to check the balance of an account.

Web service WSDL

In order to be called by other applications, each web service has to supply a description of itself, so that other applications will know how to call it. This description is provided in a language called WSDL. WSDL stands for Web Services Description Language.
It is an XML format that defines and describes the functionalities of the web service, including the method names, parameter names and types, and return data types of the web service. For a Microsoft ASMX web service, you can get the WSDL by adding ?WSDL to the end of the web service URL, say http://localhost/MyService/MyService.asmx?WSDL.

Web service proxy

A client application calls a web service through a proxy. A web service proxy is a stub class between a web service and a client. It is normally auto-generated by a tool, such as the Visual Studio IDE, according to the WSDL of the web service. It can be re-used by any client application. The proxy contains stub methods mimicking all of the methods of the web service, so that a client application can call each method of the web service through these stub methods. It also contains other necessary information required by the client to call the web service, such as custom exceptions, custom data and class types, and so on. The address of the web service can be embedded within the proxy class, or it can be placed inside a configuration file. A proxy class is always for a specific language. For each web service, there could be a proxy class for Java clients, a proxy class for C# clients, and yet another proxy class for COBOL clients. To call a web service from a client application, the proper proxy class first has to be added to the client project. Then, with an optional configuration file, the address of the web service can be defined. Within the client application, a web service object can be instantiated, and its methods can be called just as for any other normal method.

SOAP

There are many standards for web services. SOAP is one of them. SOAP was originally an acronym for Simple Object Access Protocol, and was designed by Microsoft. As this protocol became popular with the spread of web services, and its original meaning was misleading, the original acronym was dropped with version 1.2 of the standard. It is now merely a protocol, maintained by the W3C. SOAP, now, is a protocol for exchanging XML-based messages over computer networks. It is widely used by web services and has become their de-facto protocol. With SOAP, the client application can send a request in XML format to a server application, and the server application will send back a response in XML format. The transport for SOAP is normally HTTP/HTTPS, and the wide acceptance of HTTP is one of the reasons why SOAP is widely accepted today.
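As a small illustration of the stock-quote exchange described earlier, the following Java sketch parses the hypothetical QuoteResponse XML with the standard JAXP parser and extracts the price. The element and attribute names are the made-up ones from the example above, not those of a real quote service.

import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class QuoteResponseParser {
    public static void main(String[] args) throws Exception {
        // In a real client, this XML would come back from the web service call.
        String response =
            "<QuoteResponse><QuotePrice Symbol='XYZ'>51.22</QuotePrice></QuoteResponse>";

        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(
            new ByteArrayInputStream(response.getBytes(StandardCharsets.UTF_8)));

        // Pull out the first (and only) QuotePrice element and read its text content.
        Element price = (Element) doc.getElementsByTagName("QuotePrice").item(0);
        String symbol = price.getAttribute("Symbol");
        double value = Double.parseDouble(price.getTextContent());

        System.out.println(symbol + " is trading at " + value);
    }
}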
Human-readable Rules with Drools JBoss Rules 5.0 (Part 1)

Packt
20 Oct 2009
6 min read
Domain Specific Language The domain in this sense represents the business area (for example, life insurance or billing). Rules are expressed with the terminology of the problem domain. This means that domain experts can understand, validate, and modify these rules more easily. You can think of a DSL as a translator. It defines how to translate sentences from the problem-specific terminology into rules. The translation process is defined in a .dsl file. The sentences themselves are stored in a .dslr file. The result of this process must be a valid .drl file. A simple DSL might look like this:

[condition][]There is a Customer with firstName {name}=$customer : Customer(firstName == {name})
[consequence][]Greet Customer=System.out.println("Hello " + $customer.getFirstName());

Code listing 1: Simple DSL file, simple.dsl. The code listing above contains only two lines (each begins with [). However, because the lines are too long, they are wrapped, effectively creating four lines. This will be the case in most of the code listings. When you are using the Drools Eclipse plugin to write this DSL, enter the text before the first equal sign into the field called Language expression, the text after the equal sign into Rule mapping, leave the object field blank, and select the correct scope. The previous DSL defines two DSL mappings. They map a DSLR sentence to a DRL rule. The first one translates to a condition that matches a Customer object with the specified first name. The first name is captured into a variable called name. This variable is then used in the rule condition. The second line translates to a greeting message that is printed on the console. The following .dslr file can be written based on the previous DSL:

package droolsbook.dsl;
import droolsbook.bank.model.*;
expander simple.dsl

rule "hello rule"
  when
    There is a Customer with firstName "David"
  then
    Greet Customer
end

Code listing 2: Simple .dslr file (simple.dslr) with a rule that greets a customer named David. As can be seen, the structure of a .dslr file is the same as the structure of a .drl file. Only the rule conditions and consequences are different. Another thing to note is the line containing expander simple.dsl. It informs Drools how to translate sentences in this file into valid rules. Drools reads the simple.dslr file and tries to translate/expand each line by applying all mappings from the simple.dsl file (it does this in a single pass, line by line from top to bottom). The order of lines is important in a .dsl file. Please note that one condition/consequence must be written on one line, otherwise the expansion won't work (for example, the condition after the when clause, from the rule above, must be on one line). When you are writing .dslr files, consider using the Drools Eclipse plugin. It provides a special editor for .dslr files that has an editing mode and a read-only mode for viewing the resulting .drl file. A simple DSL editor is provided as well. The result of the translation process will look like the following screenshot: This translation process happens in memory and no .drl file is physically stored. We can now run this example. First of all, a knowledge base must be created from the simple.dsl and simple.dslr files. The process of creating a package using a DSL is as follows (only the package creation is shown). KnowledgeBuilder acts as the translator. It takes the .dslr file, and based on the .dsl file, creates the DRL. This DRL is then used as normal (we don't see it; it's internal to KnowledgeBuilder). 
The implementation is as follows:

private KnowledgeBase createKnowledgeBaseFromDSL() throws Exception {
  KnowledgeBuilder builder = KnowledgeBuilderFactory.newKnowledgeBuilder();
  builder.add(ResourceFactory.newClassPathResource("simple.dsl"), ResourceType.DSL);
  builder.add(ResourceFactory.newClassPathResource("simple.dslr"), ResourceType.DSLR);
  if (builder.hasErrors()) {
    throw new RuntimeException(builder.getErrors().toString());
  }
  KnowledgeBase knowledgeBase = KnowledgeBaseFactory.newKnowledgeBase();
  knowledgeBase.addKnowledgePackages(builder.getKnowledgePackages());
  return knowledgeBase;
}

Code listing 3: Creating a knowledge base from the .dsl and .dslr files. The .dsl and subsequently the .dslr files are passed into KnowledgeBuilder. The rest is similar to what we've seen before. DSL as an interface DSLs can also be looked at as another level of indirection between your .drl files and business requirements. It works as shown in the following figure: The figure above shows DSL as an interface (dependency diagram). At the top are the business requirements as defined by the business analyst. These requirements are represented as DSL sentences (.dslr file). The DSL then represents the interface between the DSL sentences, the rule implementation (.drl file), and the domain model. For example, we can change the transformation to make the resulting rules more efficient without changing the language. Further, we can change the language, for example, to make it more user friendly, without changing the rules. All of this can be done just by changing the .dsl file. DSL for validation rules We'll rewrite the three usually implemented object/field required rules as follows: If the Customer does not have an address, then Display warning message. If the Customer does not have a phone number or it is blank, then Display error message. If the Account does not have an owner, then Display error message for Account. We can clearly see that all of them operate on some object (Customer/Account), test its property (address/phone/owner), and display a message (warning/error), possibly with some context (account). Our validation.dslr file might look like the following code:

expander validation.dsl

rule "address is required"
  when
    The Customer does not have address
  then
    Display warning
end

rule "phone number is required"
  when
    The Customer does not have phone number or it is blank
  then
    Display error
end

rule "account owner is required"
  when
    The Account does not have owner
  then
    Display error for Account
end

Code listing 4: First DSL approach at defining the required object/field rules (validation.dslr file). The conditions could be mapped like this:

[condition][]The {object} does not have {field}=${object} : {object}( {field} == null )

Code listing 5: validation.dsl. This covers the address and account conditions completely. For the phone number rule, we have to add the following mapping at the beginning of the validation.dsl file:

[condition][] or it is blank = == "" ||

Code listing 6: Mapping that checks for a blank phone number. As it stands, the phone number condition will be expanded to:

$Customer : Customer( phone number == "" || == null )

Code listing 7: Unfinished phone number condition. To correct it, phone number has to be mapped to phoneNumber. This can be done by adding the following at the end of the validation.dsl file:

[condition][]phone number=phoneNumber

Code listing 8: Phone number mapping. The conditions are working. Now, let's focus on the consequences. 
The following mapping will do the job:

[consequence][]Display {message_type} for {object}={message_type}( kcontext, ${object} );
[consequence][]Display {message_type}={message_type}( kcontext );

Code listing 9: Consequence mappings. The three validation rules are now expanded to their .drl representation.
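To make the round trip concrete, here is a rough sketch (not from the article) of how the DSL-built knowledge base could be exercised. It reuses the loading pattern from Code listing 3, but points it at validation.dsl and validation.dslr. The droolsbook.bank.model.Customer class is assumed to have a no-arg constructor and a setPhoneNumber() setter, and the warning()/error() functions called by the consequences are assumed to be defined elsewhere (for example, as DRL functions or via globals).

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

import droolsbook.bank.model.Customer; // assumed domain class from the book's model

public class ValidationDslExample {

    public static void main(String[] args) {
        // Build the knowledge base from the DSL and DSLR resources,
        // exactly as in Code listing 3 but with the validation files.
        KnowledgeBuilder builder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        builder.add(ResourceFactory.newClassPathResource("validation.dsl"), ResourceType.DSL);
        builder.add(ResourceFactory.newClassPathResource("validation.dslr"), ResourceType.DSLR);
        if (builder.hasErrors()) {
            throw new RuntimeException(builder.getErrors().toString());
        }
        KnowledgeBase knowledgeBase = KnowledgeBaseFactory.newKnowledgeBase();
        knowledgeBase.addKnowledgePackages(builder.getKnowledgePackages());

        // Insert a customer with no address and a blank phone number; both the
        // "address is required" and "phone number is required" rules should fire.
        StatefulKnowledgeSession session = knowledgeBase.newStatefulKnowledgeSession();
        try {
            Customer customer = new Customer();
            customer.setPhoneNumber("");
            session.insert(customer);
            session.fireAllRules();
        } finally {
            session.dispose();
        }
    }
}

Because the expansion happens inside KnowledgeBuilder, the client code is the same as it would be for a plain .drl file; only the resource types passed to the builder differ.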
Primitive Data Types, Variables, and Operators in Object-Oriented JavaScript

Packt
20 Oct 2009
10 min read
Let's get started. Variables Variables are used to store data. When writing programs, it is convenient to use variables instead of the actual data, as it's much easier to write pi instead of 3.141592653589793 especially when it happens several times inside your program. The data stored in a variable can be changed after it was initially assigned, hence the name "variable". Variables are also useful for storing data that is unknown to the programmer when the code is written, such as the result of later operations. There are two steps required in order to use a variable. You need to: Declare the variable Initialize it, that is, give it a value In order to declare a variable, you use the var statement, like this: var a;var thisIsAVariable;var _and_this_too;var mix12three; For the names of the variables, you can use any combination of letters, numbers, and the underscore character. However, you can't start with a number, which means that this is invalid: var 2three4five; To initialize a variable means to give it a value for the first (initial) time. You have two ways to do so: Declare the variable first, then initialize it, or Declare and initialize with a single statement An example of the latter is: var a = 1; Now the variable named a contains the value 1. You can declare (and optionally initialize) several variables with a single var statement; just separate the declarations with a comma: var v1, v2, v3 = 'hello', v4 = 4, v5; Variables are Case Sensitive Variable names are case-sensitive. You can verify this statement using the Firebug console. Try typing this, pressing Enter after each line: var case_matters = 'lower';var CASE_MATTERS = 'upper';case_mattersCASE_MATTERS To save keystrokes, when you enter the third line, you can only type ca and press the Tab key. The console will auto-complete the variable name to case_matters. Similarly, for the last line—type CA and press Tab. The end result is shown on the following figure. Throughout the rest of this article series, only the code for the examples will be given, instead of a screenshot: >>> var case_matters = 'lower';>>> var CASE_MATTERS = 'upper';>>> case_matters"lower">>> CASE_MATTERS"upper" The three consecutive greater-than signs (>>>) show the code that you type, the rest is the result, as printed in the console. Again, remember that when you see such code examples, you're strongly encouraged to type in the code yourself and experiment tweaking it a little here and there, so that you get a better feeling of how it works exactly. Operators Operators take one or two values (or variables), perform an operation, and return a value. Let's check out a simple example of using an operator, just to clarify the terminology. >>> 1 + 23 In this code: + is the operator The operation is addition The input values are 1 and 2 (the input values are also called operands) The result value is 3 Instead of using the values 1 and 2 directly in the operation, you can use variables. You can also use a variable to store the result of the operation, as the following example demonstrates: >>> var a = 1;>>> var b = 2;>>> a + 12>>> b + 24>>> a + b3>>> var c = a + b;>>> c3 The following table lists the basic arithmetic operators: Operator symbol Operation Example + Addition >>> 1 + 2 3 - Subtraction >>> 99.99 - 11 88.99 * Multiplication >>> 2 * 3 6 / Division >>> 6 / 4 1.5 % Modulo, the reminder of a division >>> 6 % 3 0 >>> 5 % 3 2 It's sometimes useful to test if a number is even or odd. Using the modulo operator it's easy. 
All odd numbers will return 1 when divided by 2, while all even numbers will return 0. >>> 4 % 2 0 >>> 5 % 2 1 ++ Increment a value by 1 Post-increment is when the input value is incremented after it's returned. >>> var a = 123; var b = a++; >>> b 123 >>> a 124 The opposite is pre-increment; the input value is first incremented by 1 and then returned. >>> var a = 123; var b = ++a; >>> b 124 >>> a 124 -- Decrement a value by 1 Post-decrement >>> var a = 123; var b = a--; >>> b 123 >>> a 122 Pre-decrement >>> var a = 123; var b = --a; >>> b 122 >>> a 122 When you type var a = 1; this is also an operation; it's the simple assignment operation and = is the simple assignment operator. There is also a family of operators that are a combination of an assignment and an arithmetic operator. These are called compound operators. They can make your code more compact. Let's see some of them with examples. >>> var a = 5;>>> a += 3;8 In this example a += 3; is just a shorter way of doing a = a + 3; >>> a -= 3;5 Here a -= 3; is the same as a = a - 3; Similarly: >>> a *= 2;10>>> a /= 5;2>>> a %= 2;0 In addition to the arithmetic and assignment operators discussed above, there are other types of operators, as you'll see later in this article series.   Primitive Data Types Any value that you use is of a certain type. In JavaScript, there are the following primitive data types: Number—this includes floating point numbers as well as integers, for example 1, 100, 3.14. String—any number of characters, for example "a", "one", "one 2 three". Boolean—can be either true or false. Undefined—when you try to access a variable that doesn't exist, you get the special value undefined. The same will happen when you have declared a variable, but not given it a value yet. JavaScript will initialize it behind the scenes, with the value undefined. Null—this is another special data type that can have only one value, the null value. It means no value, an empty value, nothing. The difference with undefined is that if a variable has a value null, it is still defined, it only happens that its value is nothing. You'll see some examples shortly. Any value that doesn't belong to one of the five primitive types listed above is an object. Even null is considered an object, which is a little awkward—having an object (something) that is actually nothing. The data types in JavaScript the data types are either: Primitive (the five types listed above), or Non-primitive (objects) Finding out the Value Type —the typeof Operator If you want to know the data type of a variable or a value, you can use the special typeof operator. This operator returns a string that represents the data type. The return values of using typeof can be one of the following—"number", "string", "boolean", "undefined", "object", or "function". In the next few sections, you'll see typeof in action using examples of each of the five primitive data types. Numbers The simplest number is an integer. If you assign 1 to a variable and then use the typeof operator, it will return the string "number". In the following example you can also see that the second time we set a variable's value, we don't need the var statement. >>> var n = 1;>>> typeof n;"number">>> n = 1234;>>> typeof n;"number" Numbers can also be floating point (decimals): >>> var n2 = 1.23;>>> typeof n;"number" You can call typeof directly on the value, without assigning it to a variable first: >>> typeof 123;"number" Octal and Hexadecimal Numbers When a number starts with a 0, it's considered an octal number. 
For example, the octal 0377 is the decimal 255. >>> var n3 = 0377;>>> typeof n3;"number">>> n3;255 The last line in the example above prints the decimal representation of the octal value. While you may not be very familiar with octal numbers, you've probably used hexadecimal values to define, for example, colors in CSS stylesheets. In CSS, you have several options to define a color, two of them being: Using decimal values to specify the amount of R (red), G (green) and B (blue) ranging from 0 to 255. For example rgb(0, 0, 0) is black and rgb(255, 0, 0) is red (maximum amount of red and no green or blue). Using hexadecimals, specifying two characters for each R, G and B. For example, #000000 is black and #ff0000 is red. This is because ff is the hexadecimal for 255. In JavaScript, you put 0x before a hexadecimal value (also called hex for short). >>> var n4 = 0x00;>>> typeof n4;"number">>> n4;0>>> var n5 = 0xff;>>> typeof n5;"number">>> n5;255 Exponent Literals 1e1 (can also be written as 1e+1 or 1E1 or 1E+1) represents the number one with one zero after it, or in other words 10. Similarly, 2e+3 means the number 2 with 3 zeros after it, or 2000. >>> 1e110>>> 1e+110>>> 2e+32000>>> typeof 2e+3;"number" 2e+3 means moving the decimal point 3 digits to the right of the number 2. There's also 2e-3 meaning you move the decimal point 3 digits to the left of the number 2. >>> 2e-30.002>>> 123.456E-30.123456>>> typeof 2e-3"number" Infinity There is a special value in JavaScript called Infinity. It represents a number too big for JavaScript to handle. Infinity is indeed a number, as typing typeof Infinity in the console will confirm. You can also quickly check that a number with 308 zeros is ok, but 309 zeros is too much. To be precise, the biggest number JavaScript can handle is 1.7976931348623157e+308 while the smallest is 5e-324. >>> InfinityInfinity>>> typeof Infinity"number">>> 1e309Infinity>>> 1e3081e+308 Dividing by 0 will give you infinity. >>> var a = 6 / 0;>>> aInfinity Infinity is the biggest number (or rather a little bigger than the biggest), but how about the smallest? It's infinity with a minus sign in front of it, minus infinity. >>> var i = -Infinity;>>> i-Infinity>>> typeof i"number" Does this mean you can have something that's exactly twice as big as Infinity—from 0 up to infinity and then from 0 down to minus infinity? Well, this is purely for amusement and there's no practical value to it. When you sum infinity and minus infinity, you don't get 0, but something that is called NaN (Not A Number). >>> Infinity - InfinityNaN>>> -Infinity + InfinityNaN Any other arithmetic operation with Infinity as one of the operands will give you Infinity: >>> Infinity - 20Infinity>>> -Infinity * 3-Infinity>>> Infinity / 2Infinity>>> Infinity - 99999999999999999Infinity NaN What was this NaN you saw in the example above? It turns out that despite its name, "Not A Number", NaN is a special value that is also a number. >>> typeof NaN"number">>> var a = NaN;>>> aNaN You get NaN when you try to perform an operation that assumes numbers but the operation fails. For example, if you try to multiply 10 by the character "f", the result is NaN, because "f" is obviously not a valid operand for a multiplication. >>> var a = 10 * "f";>>> aNaN NaN is contagious, so if you have even only one NaN in your arithmetic operation, the whole result goes down the drain. >>> 1 + 2 + NaNNaN
DWR Java AJAX User Interface: Basic Elements (Part 2)

Packt
20 Oct 2009
21 min read
Implementing Tables and Lists The first actual sample is very common in applications: tables and lists. In this sample, the table is populated using the DWR utility functions, and a remoted Java class. The sample code also shows how DWR is used to do inline table editing. When a table cell is double-clicked, an edit box opens, and it is used to save new cell data. The sample will have country data in a CSV file: country Name, Long Name, two-letter Code, Capital, and user-defined Notes. The user interface for the table sample appears as shown in the following screenshot: Server Code for Tables and Lists The first thing to do is to get the country data. Country data is in a CSV file (named countries.csv and located in the samples Java package). The following is an excerpt of the content of the CSV file (data is from http://www.state.gov ). Short-form name,Long-form name,FIPS Code,CapitalAfghanistan,Islamic Republic of Afghanistan,AF,KabulAlbania,Republic of Albania,AL,TiranaAlgeria,People's Democratic Republic of Algeria,AG,AlgiersAndorra,Principality of Andorra,AN,Andorra la VellaAngola,Republic of Angola,AO,LuandaAntigua andBarbuda,(no long-form name),AC,Saint John'sArgentina,Argentine Republic,AR,Buenos AiresArmenia,Republic of Armenia,AM,Yerevan The CSV file is read each time a client requests country data. Although this is not very efficient, it is good enough here. Other alternatives include an in-memory cache or a real database such as Apache Derby or IBM DB2. As an example, we have created a CountryDB class that is used to read and write the country CSV. We also have another class, DBUtils, which has some helper methods. The DBUtils code is as follows: package samples;import java.io.BufferedReader;import java.io.File;import java.io.FileReader;import java.io.FileWriter;import java.io.IOException;import java.io.InputStream;import java.io.InputStreamReader;import java.io.PrintWriter;import java.util.List;import java.util.Vector;public class DBUtils { private String fileName=null; public void initFileDB(String fileName) { this.fileName=fileName; // copy csv file to bin-directory, for easy // file access File countriesFile = new File(fileName); if (!countriesFile.exists()) { try { List<String> countries = getCSVStrings(null); PrintWriter pw; pw = new PrintWriter(new FileWriter(countriesFile)); for (String country : countries) { pw.println(country); } pw.close(); } catch (IOException e) { e.printStackTrace(); } } } protected List<String> getCSVStrings(String letter) { List<String> csvData = new Vector<String>(); try { File csvFile = new File(fileName); BufferedReader br = null; if(csvFile.exists()) { br=new BufferedReader(new FileReader(csvFile)); } else { InputStream is = this.getClass().getClassLoader() .getResourceAsStream("samples/"+fileName); br=new BufferedReader(new InputStreamReader(is)); br.readLine(); } for (String line = br.readLine(); line != null; line = br.readLine()) { if (letter == null || (letter != null && line.startsWith(letter))) { csvData.add(line); } } br.close(); } catch (IOException ioe) { ioe.printStackTrace(); } return csvData; }} The DBUtils class is a straightforward utility class that returns CSV content as a List of Strings. It also copies the original CSV file to the runtime directory of any application server we might be running. This may not be the best practice, but it makes it easier to manipulate the CSV file, and we always have the original CSV file untouched if and when we need to go back to the original version. 
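As noted above, re-reading the CSV file on every request is not very efficient. A minimal sketch of the in-memory cache alternative mentioned there is shown next; the CachedCountryData class name and its API are illustrative assumptions rather than part of the sample application, and it simply wraps the DBUtils class from the previous listing.

package samples;

import java.util.List;

// Illustrative sketch only: caches the CSV lines in memory so repeated requests
// do not hit the file system, and drops the cache when the data changes.
public class CachedCountryData {

    private final DBUtils dbUtils = new DBUtils();
    private List<String> cachedLines;

    public CachedCountryData(String fileName) {
        dbUtils.initFileDB(fileName);
    }

    // Returns all CSV lines, reading the file only on the first call.
    public synchronized List<String> getAllLines() {
        if (cachedLines == null) {
            cachedLines = dbUtils.getCSVStrings(null);
        }
        return cachedLines;
    }

    // Call after the CSV file has been modified (for example, when notes are saved).
    public synchronized void invalidate() {
        cachedLines = null;
    }
}

CountryDB could delegate to such a cache instead of calling DBUtils directly, invoking invalidate() at the end of saveCountryNotes(); for this sample, though, reading the file each time keeps the code simpler.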
The code for CountryDB is given here: package samples;import java.io.FileWriter;import java.io.IOException;import java.io.PrintWriter;import java.util.Arrays;import java.util.List;import java.util.Vector;public class CountryDB { private DBUtils dbUtils = new DBUtils(); private String fileName = "countries.csv"; public CountryDB() { dbUtils.initFileDB(fileName); } public String[] getCountryData(String ccode) { List<String> countries = dbUtils.getCSVStrings(null); for (String country : countries) { if (country.indexOf("," + ccode + ",") > -1) { return country.split(","); } } return new String[0]; } public List<List<String>> getCountries(String startLetter) { List<List<String>> allCountryData = new Vector<List<String>>(); List<String> countryData = dbUtils.getCSVStrings(startLetter); for (String country : countryData) { String[] data = country.split(","); allCountryData.add(Arrays.asList(data)); } return allCountryData; } public String[] saveCountryNotes(String ccode, String notes) { List<String> countries = dbUtils.getCSVStrings(null); try { PrintWriter pw = new PrintWriter(new FileWriter(fileName)); for (String country : countries) { if (country.indexOf("," + ccode + ",") > -1) { if (country.split(",").length == 4) { // no existing notes country = country + "," + notes; } else { if (notes.length() == 0) { country = country.substring(0, country .lastIndexOf(",")); } else { country = country.substring(0, country .lastIndexOf(",")) + "," + notes; } } } pw.println(country); } pw.close(); } catch (IOException ioe) { ioe.printStackTrace(); } String[] rv = new String[2]; rv[0] = ccode; rv[1] = notes; return rv; }} The CountryDB class is a remoted class. The getCountryData() method returns country data as an array of strings based on the country code. The getCountries() method returns all the countries that start with the specified parameter, and saveCountryNotes() saves user added notes to the country specified by the country code. In order to use CountryDB, the following script element must be added to the index.jsp file together with other JavaScript elements. <script type='text/javascript' src='/DWREasyAjax/dwr/interface/CountryDB.js'></script> There is one other Java class that we need to create and remote. That is the AppContent class that was already present in the JavaScript functions of the home page. The AppContent class is responsible for reading the content of the HTML file and parses the possible JavaScript function out of it, so it can become usable by the existing JavaScript functions in index.jsp file. 
package samples;import java.io.ByteArrayOutputStream;import java.io.IOException;import java.io.InputStream;import java.util.List;import java.util.Vector;public class AppContent { public AppContent() { } public List<String> getContent(String contentId) { InputStream is = this.getClass().getClassLoader().getResourceAsStream( "samples/"+contentId+".html"); String content=streamToString(is); List<String> contentList=new Vector<String>(); //Javascript within script tag will be extracted and sent separately to client for(String script=getScript(content);!script.equals("");script=getScript(content)) { contentList.add(script); content=removeScript(content); } //content list will have all the javascript //functions, last element is executed last //and all other before html content if(contentList.size()>1) { contentList.add(contentList.size()-1, content); } else { contentList.add(content); } return contentList; } public List<String> getLetters() { List<String> letters=new Vector<String>(); char[] l=new char[1]; for(int i=65;i<91;i++) { l[0]=(char)i; letters.add(new String(l)); } return letters; } public String removeScript(String html) { //removes first script element int sIndex=html.toLowerCase().indexOf("<script "); if(sIndex==-1) { return html; } int eIndex=html.toLowerCase().indexOf("</script>")+9; return html.substring(0, sIndex)+html.substring(eIndex); } public String getScript(String html) { //returns first script element int sIndex=html.toLowerCase().indexOf("<script "); if(sIndex==-1) { return ""; } int eIndex=html.toLowerCase().indexOf("</script>")+9; return html.substring(sIndex, eIndex); } public String streamToString(InputStream is) { String content=""; try { ByteArrayOutputStream baos=new ByteArrayOutputStream(); for(int b=is.read();b!=-1;b=is.read()) { baos.write(b); } content=baos.toString(); } catch(IOException ioe) { content=ioe.toString(); } return content; }} The getContent() method reads the HTML code from a file based on the contentId. ContentId was specified in the dwrapplication.properties file, and the HTML is just contentId plus the extension .html in the package directory. There is also a getLetters() method that simply lists letters from A to Z and returns a list of letters to the browser. If we test the application now, we will get an error as shown in the following screenshot: We know why the AppContent is not defined error occurs, so let's fix it by adding AppContent to the allow element in the dwr.xml file. We also add CountryDB to the allow element. The first thing we do is to add required elements to the dwr.xml file. We add the following creators within the allow element in the dwr.xml file. <create creator="new" javascript="AppContent"> <param name="class" value="samples.AppContent" /> <include method="getContent" /> <include method="getLetters" /> </create> <create creator="new" javascript="CountryDB"> <param name="class" value="samples.CountryDB" /> <include method="getCountries" /> <include method="saveCountryNotes" /> <include method="getCountryData" /></create> We explicitly define the methods we are remoting using the include elements. This is a good practice, as we don't accidentally allow access to any methods that are not meant to be remoted. Client Code for Tables and Lists We also need to add a JavaScript interface to the index.jsp page. Add the following with the rest of the scripts in the index.jsp file. <script type='text/javascript' src='/DWREasyAjax/dwr/interface/AppContent.js'></script> Before testing, we need the sample HTML for the content area. 
The following HTML is in the TablesAndLists.html file under the samples directory: <h3>Countries</h3><p>Show countries starting with <select id="letters" onchange="selectLetter(this);return false;"> </select><br/>Doubleclick "Notes"-cell to add notes to country.</p><table border="1"> <thead> <tr> <th>Name</th> <th>Long name</th> <th>Code</th> <th>Capital</th> <th>Notes</th> </tr> </thead> <tbody id="countryData"> </tbody></table><script type='text/javascript'>//TO BE EVALEDAppContent.getLetters(addLetters);</script> The script element at the end is extracted by our Java class, and it is then evaluated by the browser when the client-side JavaScript receives the HTML. There is the select element, and its onchange event calls the selectLetter() JavaScript function. We will implement the selectLetter() function shortly. JavaScript functions are added in the index.jsp file, and within the head element. Functions could be in separate JavaScript files, but the embedded script is just fine here. function selectLetter(selectElement){ var selectedIndex = selectElement.selectedIndex; var selectedLetter= selectElement.options[selectedIndex ].value; CountryDB.getCountries(selectedLetter,setCountryRows);}function addLetters(letters){dwr.util.addOptions('letters',['letter...']);dwr.util.addOptions('letters',letters);}function setCountryRows(countryData){var cellFuncs = [ function(data) { return data[0]; }, function(data) { return data[1]; }, function(data) { return data[2]; }, function(data) { return data[3]; }, function(data) { return data[4]; }];dwr.util.removeAllRows('countryData');dwr.util.addRows( 'countryData',countryData,cellFuncs, { cellCreator:function(options) { var td = document.createElement("td"); if(options.cellNum==4) { var notes=options.rowData[4]; if(notes==undefined) { notes='&nbsp;';// + options.rowData[2]+'notes'; } var ccode=options.rowData[2]; var divId=ccode+'_Notes'; var tdId=divId+'Cell'; td.setAttribute('id',tdId); var html=getNotesHtml(ccode,notes); td.innerHTML=html; options.data=html; } return td; }, escapeHtml:false });}function getNotesHtml(ccode,notes){ var divId=ccode+'_Notes'; return "<div onDblClick="editCountryNotes('"+divId+"','"+ccode+"');" id=""+divId+"">"+notes+"</div>";}function editCountryNotes(id,ccode){ var notesElement=dwr.util.byId(id); var tdId=id+'Cell'; var notes=notesElement.innerHTML; if(notes=='&nbsp;') { notes=''; } var editBox='<input id="'+ccode+'NotesEditBox" type="text" value="'+notes+'"/><br/>'; editBox+="<input type='button' id='"+ccode+"SaveNotesButton' value='Save' onclick='saveCountryNotes(""+ccode+"");'/>"; editBox+="<input type='button' id='"+ccode+"CancelNotesButton' value='Cancel' onclick='cancelEditNotes(""+ccode+"");'/>"; tdElement=dwr.util.byId(tdId); tdElement.innerHTML=editBox; dwr.util.byId(ccode+'NotesEditBox').focus();}function cancelEditNotes(ccode){ var countryData=CountryDB.getCountryData(ccode, { callback:function(data) { var notes=data[4]; if(notes==undefined) { notes='&nbsp;'; } var html=getNotesHtml(ccode,notes); var tdId=ccode+'_NotesCell'; var td=dwr.util.byId(tdId); td.innerHTML=html; } });}function saveCountryNotes(ccode){ var editBox=dwr.util.byId(ccode+'NotesEditBox'); var newNotes=editBox.value; CountryDB.saveCountryNotes(ccode,newNotes, { callback:function(newNotes) { var ccode=newNotes[0]; var notes=newNotes[1]; var notesHtml=getNotesHtml(ccode,notes); var td=dwr.util.byId(ccode+"_NotesCell"); td.innerHTML=notesHtml; } });} There are lots of functions for table samples, and we go through each one of them. 
The first is the selectLetter() function. This function gets the selected letter from the select element and calls the CountryDB.getCountries() remoted Java method. The callback function is setCountryRows. This function receives the return value from the Java getCountries() method, that is List<List<String>>, a List of Lists of Strings. The second function is addLetters(letters), and it is a callback function for theAppContent.getLetters() method, which simply returns letters from A to Z. The addLetters() function uses the DWR utility functions to populate the letter list. Then there is a callback function for the CountryDB.getCountries() method. The parameter for the function is an array of countries that begin with a specified letter. Each array element has a format: Name, Long name, (country code) Code, Capital, Notes. The purpose of this function is to populate the table with country data; and let's see how it is done. The variable, cellFuncs, holds functions for retrieving data for each cell in a column. The parameter named data is an array of country data that was returned from the Java class. The table is populated using the DWR utility function, addRows(). The cellFuncs variable is used to get the correct data for the table cell. The cellCreator function is used to create custom HTML for the table cell. Default implementation generates just a td element, but our custom implementation generates the td-element with the div placeholder for user notes. The getNotesHtml() function is used to generate the div element with the event listener for double-click. The editCountryNotes() function is called when the table cell is double-clicked. The function creates input fields for editing notes with the Save and Cancel buttons. The cancelEditNotes() and saveCountryNotes() functions cancel the editing of new notes, or saves them by calling the CountryDB.saveCountryNotes() Java method. The following screenshot shows what the sample looks like with the populated table: Now that we have added necessary functions to the web page we can test the application. Testing Tables and Lists The application should be ready for testing if we have had the test environment running during development. Eclipse automatically deploys our new code to the server whenever something changes. So we can go right away to the test page http://127.0.0.1:8080/DWREasyAjax. On clicking Tables and lists we can see the page we have developed. By selecting some letter, for example "I" we get a list of all the countries that start with letter "I" (as shown in the previous screenshot). Now we can add notes to countries. We can double-click any table cell under Notes. For example, if we want to enter notes to Iceland, we double-click the Notes cell in Iceland's table row, and we get the edit box for the notes as shown in the following screenshot: The edit box is a simple text input field. We didn't use any forms. Saving and canceling editing is done using JavaScript and DWR. If we press Cancel, we get the original notes from the CountryDB Java class using DWR and saving also uses DWR to save data. CountryDB.saveCountryNotes() takes the country code and the notes that the user entered in the edit box and saves them to the CSV file. When notes are available, the application will show them in the country table together with other country information as shown in the following screenshot: Afterword The sample in this section uses DWR features to get data for the table and list from the server. 
We developed the application so that most of the application logic is written in JavaScript and Java beans that are remoted. In principle, the application logic can be thought of as being fully browser based, with some extensions in the server. Implementing Field Completion Nowadays, field completion is typical of many web pages. A typical use case is getting a stock quote, and field completion shows matching symbols as users type letters. Many Internet sites use this feature. Our sample here is a simple license text finder. We enter the license name in the input text field, and we use DWR to show the license names that start with the typed text. A list of possible completions is shown below the input field. The following is a screenshot of the field completion in action: Selected license content is shown in an iframe element from http://www.opensource.org. Server Code for Field Completion We will re-use some of the classes we developed in the last section. AppContent is used to load the sample page, and the DBUtils class is used in the LicenseDB class. The LicenseDB class is shown here: package samples;import java.util.List;import java.util.Vector;public class LicenseDB{ private DBUtils dbUtils=new DBUtils(); public LicenseDB() { dbUtils.initFileDB("licenses.csv"); } public List<String> getLicensesStartingWith(String startLetters) { List<String> list=new Vector<String>(); List<String> licenses=dbUtils.getCSVStrings(startLetters); for(String license : licenses) { list.add(license.split(",")[0]); } return list; } public String getLicenseContentUrl(String licenseName) { List<String> licenses=dbUtils.getCSVStrings(licenseName); if(licenses.size()>0) { return licenses.get(0).split(",")[1]; } return ""; }} The getLicenseStartingWith() method goes through the license names and returns valid license names and their URLs. Similar to the data in the previous section, license data is in a CSV file named licenses.csv in the package directory. The following is an excerpt of the file content: Academic Free License, http://opensource.org/licenses/afl-3.0.phpAdaptive Public License, http://opensource.org/licenses/apl1.0.phpApache Software License, http://opensource.org/licenses/apachepl-1.1.phpApache License, http://opensource.org/licenses/apache2.0.phpApple Public Source License, http://opensource.org/licenses/apsl-2.0.phpArtistic license, http://opensource.org/licenses/artistic-license-1.0.php... There are quite a few open-source licenses. Some are more popular than others (like the Apache Software License) and some cannot be re-used (like the IBM Public License). We want to remote the LicenseDB class, so we add the following to the dwr.xml file. <create creator="new" javascript="LicenseDB"> <param name="class" value="samples.LicenseDB"/> <include method="getLicensesStartingWith"/> <include method="getLicenseContentUrl"/></create> Client Code for Field Completion The following script element will go in the index.jsp page. 
<script type='text/javascript' src='/DWREasyAjax/dwr/interface/LicenseDB.js'></script> The HTML for the field completion is as follows: <h3>Field completion</h3><p>Enter Open Source license name to see it's contents.</p><input type="text" id="licenseNameEditBox" value="" onkeyup="showPopupMenu()" size="40"/><input type="button" id="showLicenseTextButton" value="Show license text" onclick="showLicenseText()"/><div id="completionMenuPopup"></div><div id="licenseContent"></div> The input element, where we enter the license name, listens to the onkeyup event which calls the showPopupMenu() JavaScript function. Clicking the Input button calls the showLicenseText() function (the JavaScript functions are explained shortly). Finally, the two div elements are place holders for the pop-up menu and the iframe element that shows license content. For the pop-up box functionality, we use existing code and modify it for our purpose (many thanks to http://www.jtricks.com). The following is the popup.js file, which is located under the WebContent | js directory. //<script type="text/javascript"><!--/* Original script by: www.jtricks.com * Version: 20070301 * Latest version: * www.jtricks.com/javascript/window/box.html * * Modified by Sami Salkosuo. */// Moves the box object to be directly beneath an object.function move_box(an, box){ var cleft = 0; var ctop = 0; var obj = an; while (obj.offsetParent) { cleft += obj.offsetLeft; ctop += obj.offsetTop; obj = obj.offsetParent; } box.style.left = cleft + 'px'; ctop += an.offsetHeight + 8; // Handle Internet Explorer body margins, // which affect normal document, but not // absolute-positioned stuff. if (document.body.currentStyle && document.body.currentStyle['marginTop']) { ctop += parseInt( document.body.currentStyle['marginTop']); } box.style.top = ctop + 'px';}var popupMenuInitialised=false;// Shows a box if it wasn't shown yet or is hidden// or hides it if it is currently shownfunction show_box(html, width, height, borderStyle,id){ // Create box object through DOM var boxdiv = document.getElementById(id); boxdiv.style.display='block'; if(popupMenuInitialised==false) { //boxdiv = document.createElement('div'); boxdiv.setAttribute('id', id); boxdiv.style.display = 'block'; boxdiv.style.position = 'absolute'; boxdiv.style.width = width + 'px'; boxdiv.style.height = height + 'px'; boxdiv.style.border = borderStyle; boxdiv.style.textAlign = 'right'; boxdiv.style.padding = '4px'; boxdiv.style.background = '#FFFFFF'; boxdiv.style.zIndex='99'; popupMenuInitialised=true; //document.body.appendChild(boxdiv); } var contentId=id+'Content'; var contents = document.getElementById(contentId); if(contents==null) { contents = document.createElement('div'); contents.setAttribute('id', id+'Content'); contents.style.textAlign= 'left'; boxdiv.contents = contents; boxdiv.appendChild(contents); } move_box(html, boxdiv); contents.innerHTML= html; return false;}function hide_box(id){ document.getElementById(id).style.display='none'; var boxdiv = document.getElementById(id+'Content'); if(boxdiv!=null) { boxdiv.parentNode.removeChild(boxdiv); } return false;}//--></script> Functions in the popup.js file are used as menu options directly below the edit box. The show_box() function takes the following arguments: HTML code for the pop-up, position of the pop-up window, and the "parent" element (to which the pop-up box is related). The function then creates a pop-up window using DOM. 
The move_box() function is used to move the pop-up window to its correct place under the edit box and the hide_box() function hides the pop-up window by removing the pop-up window from the DOM tree. In order to use functions in popup.js, we need to add the following script-element to the index.jsp file: <script type='text/javascript' src='js/popup.js'></script> Our own JavaScript code for the field completion is in the index.jsp file. The following are the JavaScript functions, and an explanation follows the code: function showPopupMenu(){ var licenseNameEditBox=dwr.util.byId('licenseNameEditBox'); var startLetters=licenseNameEditBox.value; LicenseDB.getLicensesStartingWith(startLetters, { callback:function(licenses) { var html=""; if(licenses.length==0) { return; } if(licenses.length==1) { hidePopupMenu(); licenseNameEditBox.value=licenses[0]; } else { for (index in licenses) { var licenseName=licenses[index];//.split(",")[0]; licenseName=licenseName.replace(/"/g,"&quot;"); html+="<div style="border:1px solid #777777;margin-bottom:5;" onclick="completeEditBox('"+licenseName+"');">"+licenseName+"</div>"; } show_box(html, 200, 270, '1px solid','completionMenuPopup'); } } });}function hidePopupMenu(){ hide_box('completionMenuPopup');}function completeEditBox(licenseName){ var licenseNameEditBox=dwr.util.byId('licenseNameEditBox'); licenseNameEditBox.value=licenseName; hidePopupMenu(); dwr.util.byId('showLicenseTextButton').focus();}function showLicenseText(){ var licenseNameEditBox=dwr.util.byId('licenseNameEditBox'); licenseName=licenseNameEditBox.value; LicenseDB.getLicenseContentUrl(licenseName,{ callback:function(licenseUrl) { var html='<iframe src="'+licenseUrl+'" width="100%" height="600"></iframe>'; var content=dwr.util.byId('licenseContent'); content.style.zIndex="1"; content.innerHTML=html; } });} The showPopupMenu() function is called each time a user enters a letter in the input box. The function gets the value of the input field and calls the LicenseDB. getLicensesStartingWith() method. The callback function is specified in the function parameters. The callback function gets all the licenses that match the parameter, and based on the length of the parameter (which is an array), it either shows a pop-up box with all the matching license names, or, if the array length is one, hides the pop-up box and inserts the full license name in the text field. In the pop up box, the license names are wrapped within the div element that has an onclick event listener that calls the completeEditBox() function. The hidePopupMenu() function just closes the pop-up menu and the competeEditBox() function inserts the clicked license text in the input box and moves the focus to the button. The showLicenseText() function is called when we click the Show license text button. The function calls the LicenseDB. getLicenseContentUrl() method and the callback function creates an iframe element to show the license content directly from http://www.opensource.org, as shown in the following screenshot: Afterword Field completion improves user experience in web pages and the sample code in this section showed one way of doing it using DWR. It should be noted that the sample for field completion presented here is only for demonstration purposes.
DWR Java AJAX User Interface: Basic Elements (Part 1)

Packt
20 Oct 2009
16 min read
  Creating a Dynamic User Interface The idea behind a dynamic user interface is to have a common "framework" for all samples. We will create a new web application and then add new features to the application as we go on. The user interface will look something like the following figure: The user interface has three main areas: the title/logo that is static, the tabs that are dynamic, and the content area that shows the actual content. The idea behind this implementation is to use DWR functionality to generate tabs and to get content for the tab pages. The tabbed user interface is created using a CSS template from the Dynamic Drive CSS Library (http://dynamicdrive.com/style/csslibrary/item/css-tabs-menu). Tabs are read from a properties file, so it is possible to dynamically add new tabs to the web page. The following screenshot shows the user interface. The following sequence diagram shows the application flow from the logical perspective. Because of the built-in DWR features we don't need to worry very much about how asynchronous AJAX "stuff" works. This is, of course, a Good Thing. Now we will develop the application using the Eclipse IDE and the Geronimo test environment Creating a New Web Project First, we will create a new web project. Using the Eclipse IDE we do the following: select the menu File | New | Dynamic Web Project. This opens the New Dynamic Web Project dialog; enter the project name DWREasyAjax and click Next, and accept the defaults on all the pages till the last page, where Geronimo Deployment Plan is created as shown in the following screenshot: Enter easyajax as Group Id and DWREasyAjax as Artifact Id. On clicking Finish, Eclipse creates a new web project. The following screen shot shows the generated project and the directory hierarchy. Before starting to do anything else, we need to copy DWR to our web application. All DWR functionality is present in the dwr.jar file, and we just copy that to the WEB-INF | lib directory. A couple of files are noteworthy: web.xml and geronimo-web.xml. The latter is generated for the Geronimo application server, and we can leave it as it is. Eclipse has an editor to show the contents of geronimo-web.xml when we double-click the file. Configuring the Web Application The context root is worth noting (visible in the screenshot above). We will need it when we test the application. The other XML file, web.xml, is very important as we all know. This XML will hold the DWR servlet definition and other possible initialization parameters. 
The following code shows the full contents of the web.xml file that we will use: <?xml version="1.0" encoding="UTF-8"?> <web-app xsi_schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web- app_2_5.xsd" id="WebApp_ID" version="2.5"> <display-name>DWREasyAjax</display-name> <servlet> <display-name>DWR Servlet</display-name> <servlet-name>dwr-invoker</servlet-name> <servlet-class> org.directwebremoting.servlet.DwrServlet </servlet-class> <init-param> <param-name>debug</param-name> <param-value>true</param-value> </init-param> </servlet> <servlet-mapping> <servlet-name>dwr-invoker</servlet-name> <url-pattern>/dwr/*</url-pattern> </servlet-mapping> <welcome-file-list> <welcome-file>index.html</welcome-file> <welcome-file>index.htm</welcome-file> <welcome-file>index.jsp</welcome-file> <welcome-file>default.html</welcome-file> <welcome-file>default.htm</welcome-file> <welcome-file>default.jsp</welcome-file> </welcome-file-list> </web-app> DWR cannot function without the dwr.xml configuration file. So we need to create the configuration file. We use Eclipse to create a new XML file in the WEB-INF directory. The following is required for the user interface skeleton. It already includes the allow-element for our DWR based menu. <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE dwr PUBLIC "-//GetAhead Limited//DTD Direct Web Remoting 2.0//EN" "http://getahead.org/dwr/dwr20.dtd"> <dwr> <allow> <create creator="new" javascript="HorizontalMenu"> <param name="class" value="samples.HorizontalMenu" /> </create> </allow> </dwr> In the allow element, there is a creator for the horizontal menu Java class that we are going to implement here. The creator that we use here is the new creator, which means that DWR will use an empty constructor to create Java objects for clients. The parameter named class holds the fully qualified class name. Developing the Web Application Since we have already defined the name of the Java class that will be used for creating the menu, the next thing we do is implement it. The idea behind the HorizontalMenu class is that it is used to read a properties file that holds the menus that are going to be on the web page. We add properties to a file named dwrapplication.properties, and we create it in the same samples-package as the HorizontalMenu-class. The properties file for the menu items is as follows: menu.1=Tables and lists,TablesAndLists menu.2=Field completion,FieldCompletion The syntax for the menu property is that it contains two elements separated by a comma. The first element is the name of the menu item. This is visible to user. The second is the name of HTML template file that will hold the page content of the menu item. The class contains just one method, which is used from JavaScript and via DWR to retrieve the menu items. The full class implementation is shown here: package samples; import java.io.IOException; import java.io.InputStream; import java.util.List; import java.util.Properties; import java.util.Vector; public class HorizontalMenu { public HorizontalMenu() { } public List<String> getMenuItems() throws IOException { List<String> menuItems = new Vector<String>(); InputStream is = this.getClass().getClassLoader().getResourceAsStream( "samples/dwrapplication.properties"); Properties appProps = new Properties(); appProps.load(is); is.close(); for (int menuCount = 1; true; menuCount++) { String menuItem = appProps.getProperty("menu." 
+ menuCount); if (menuItem == null) { break; } menuItems.add(menuItem); } return menuItems; } } The implementation is straightforward. The getMenuItems() method loads properties using the ClassLoader.getResourceAsStream() method, which searches the class path for the specified resource. Then, after loading properties, a for loop is used to loop through menu items and then a List of String-objects is returned to the client. The client is the JavaScript callback function that we will see later. DWR automatically converts the List of String objects to JavaScript arrays, so we don't have to worry about that. Testing the Web Application We haven't completed any client-side code now, but let's test the code anyway. Testing uses the Geronimo test environment. The Project context menu has the Run As menu that we use to test the application as shown in the following screenshot: Run on Server opens a wizard to define a new server runtime. The following screenshot shows that the Geronimo test environment has already been set up, and we just click Finish to run the application. If the test environment is not set up, we can manually define a new one in this dialog: After we click Finish, Eclipse starts the Geronimo test environment and our application with it. When the server starts, the Console tab in Eclipse informs us that it's been started. The Servers tab shows that the server is started and all the code has been synchronized, that is, the code is the most recent (Synchronization happens whenever we save changes on some deployed file.) The Servers tab also has a list of deployed applications under the server. Just the one application that we are testing here is visible in the Servers tab. Now comes the interesting part—what are we going to test if we haven't really implemented anything? If we take a look at the web.xml file, we will find that we have defined one initialization parameter. The Debug parameter is true, which means that DWR generates test pages for our remoted Java classes. We just point the browser (Firefox in our case) to the URL http://127.0.0.1:8080/DWREasyAjax/dwr and the following page opens up: This page will show a list of all the classes that we allow to be remoted. When we click the class name, a test page opens as in the following screenshot: This is an interesting page. We see all the allowed methods, in this case, all public class methods since we didn't specifically include or exclude anything. The most important ones are the script elements, which we need to include in our HTML pages. DWR does not automatically know what we want in our web pages, so we must add the script includes in each page where we are using DWR and a remoted functionality. Then there is the possibility of testing remoted methods. When we test our own method, getMenuItems(), we see a response in an alert box: The array in the alert box in the screenshot is the JavaScript array that DWR returns from our method. Developing Web Pages The next step is to add the web pages. Note that we can leave the test environment running. Whenever we change the application code, it is automatically published to test the environment, so we don't need to stop and start the server each time we make some changes and want to test the application. The CSS style sheet is from the Dynamic Drive CSS Library. The file is named styles.css, and it is in the WebContent directory in Eclipse IDE. 
The CSS code is as shown:

/*URL: http://www.dynamicdrive.com/style/ */
.basictab{
padding: 3px 0;
margin-left: 0;
font: bold 12px Verdana;
border-bottom: 1px solid gray;
list-style-type: none;
text-align: left; /*set to left, center, or right to align the menu as desired*/
}
.basictab li{
display: inline;
margin: 0;
}
.basictab li a{
text-decoration: none;
padding: 3px 7px;
margin-right: 3px;
border: 1px solid gray;
border-bottom: none;
background-color: #f6ffd5;
color: #2d2b2b;
}
.basictab li a:visited{
color: #2d2b2b;
}
.basictab li a:hover{
background-color: #DBFF6C;
color: black;
}
.basictab li a:active{
color: black;
}
.basictab li.selected a{ /*selected tab effect*/
position: relative;
top: 1px;
padding-top: 4px;
background-color: #DBFF6C;
color: black;
}

This CSS is shown for the sake of completeness, and we will not go into the details of CSS style sheets. It is sufficient to say that CSS provides an excellent method to create websites with good presentation.

The next step is the actual web page. We create an index.jsp page, in the WebContent directory, which will have the menu and also the JavaScript functions for our samples. It should be noted that although all JavaScript code is added to a single JSP page here in this sample, in "real" projects it would probably be more useful to create a separate file for JavaScript functions and include the JavaScript file in the HTML/JSP page using a code snippet such as this: <script type="text/javascript" src="myjavascriptcode/HorizontalMenu.js"/>. We will add JavaScript functions later for each sample.

The following is the JSP code that shows the menu using the remoted HorizontalMenu class.

<%@ page language="java" contentType="text/html; charset=ISO-8859-1"
    pageEncoding="ISO-8859-1"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
    "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<link href="styles.css" rel="stylesheet" type="text/css"/>
<script type='text/javascript' src='/DWREasyAjax/dwr/engine.js'></script>
<script type='text/javascript' src='/DWREasyAjax/dwr/util.js'></script>
<script type='text/javascript' src='/DWREasyAjax/dwr/interface/HorizontalMenu.js'></script>
<title>DWR samples</title>
<script type="text/javascript">
function loadMenuItems()
{
  HorizontalMenu.getMenuItems(setMenuItems);
}
function getContent(contentId)
{
  AppContent.getContent(contentId,setContent);
}
function menuItemFormatter(item)
{
  elements=item.split(',');
  return '<li><a href="#" onclick="getContent(\''+elements[1]+'\');return false;">'+elements[0]+'</a></li>';
}
function setMenuItems(menuItems)
{
  menu=dwr.util.byId("dwrMenu");
  menuItemsHtml='';
  for(var i=0;i<menuItems.length;i++)
  {
    menuItemsHtml=menuItemsHtml+menuItemFormatter(menuItems[i]);
  }
  menu.innerHTML=menuItemsHtml;
}
function setContent(htmlArray)
{
  var contentFunctions='';
  var scriptToBeEvaled='';
  var contentHtml='';
  for(var i=0;i<htmlArray.length;i++)
  {
    var html=htmlArray[i];
    if(html.toLowerCase().indexOf('<script')>-1)
    {
      if(html.indexOf('TO BE EVALED')>-1)
      {
        scriptToBeEvaled=html.substring(html.indexOf('>')+1,html.indexOf('</'));
      }
      else
      {
        eval(html.substring(html.indexOf('>')+1,html.indexOf('</')));
        contentFunctions+=html;
      }
    }
    else
    {
      contentHtml+=html;
    }
  }
  contentScriptArea=dwr.util.byId("contentAreaFunctions");
  contentScriptArea.innerHTML=contentFunctions;
  contentArea=dwr.util.byId("contentArea");
  contentArea.innerHTML=contentHtml;
  if(scriptToBeEvaled!='')
  {
    eval(scriptToBeEvaled);
  }
}
</script>
</head>
<body onload="loadMenuItems()">
<h1>DWR Easy Java Ajax Applications</h1>
<ul class="basictab" id="dwrMenu">
</ul>
<div id="contentAreaFunctions">
</div>
<div id="contentArea">
</div>
</body>
</html>

This JSP is our user interface. The HTML is just normal HTML with a head element and a body element. The head includes references to a style sheet and to the DWR JavaScript files: engine.js, util.js, and our own HorizontalMenu.js. The util.js file is optional, but as it contains very useful functions, it could be included in all the web pages where we use its functions. The body element has a contentArea placeholder for the content pages just below the menu. It also contains a content area for the JavaScript functions of a particular content page. The body element's onload event executes the loadMenuItems() function when the page is loaded.

The loadMenuItems() function calls the remoted method of the HorizontalMenu Java class. The parameter of the HorizontalMenu.getMenuItems() JavaScript function is the callback function that is called by DWR when the Java method has been executed and has returned the menu items.

The setMenuItems() function is the callback function for the loadMenuItems() function mentioned in the previous paragraph. While loading menu items, the HorizontalMenu.getMenuItems() remoted method returns the menu items as a List of Strings, which is passed as a parameter to the setMenuItems() function. The menu items are formatted using the menuItemFormatter() helper function. The menuItemFormatter() function creates li elements from the menu texts. Menus are formatted as links (a href) and they have an onclick event that calls the getContent() function, which in turn calls the AppContent.getContent() function.

AppContent is a remoted Java class, which we haven't implemented yet, and its purpose is to read the HTML from a file based on the menu item that the user clicked. Implementation of AppContent and the content pages are described in the next section.
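Before we get there, the following sketch gives a rough idea of the shape such a remoted class can take. The content IDs, file layout, and error handling here are illustrative assumptions made for the example; they are not the implementation from the next section.

// Hypothetical sketch of a remoted content provider.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class AppContent
{
  // Called from JavaScript as AppContent.getContent(contentId, setContent).
  // Returns the lines of an HTML fragment; DWR converts the List into a
  // JavaScript array that is passed to the setContent callback.
  public List<String> getContent(String contentId) throws IOException
  {
    // Assumption: content pages are stored as files named after the content ID.
    String fileName = "content/" + contentId + ".html";
    List<String> html = new ArrayList<String>();
    BufferedReader reader = new BufferedReader(new FileReader(fileName));
    try
    {
      String line;
      while ((line = reader.readLine()) != null)
      {
        html.add(line);
      }
    }
    finally
    {
      reader.close();
    }
    return html;
  }
}

Like HorizontalMenu, such a class would also have to be declared as remoted in the DWR configuration, and its generated interface script (/DWREasyAjax/dwr/interface/AppContent.js) would have to be included in the page before AppContent.getContent() can be called from JavaScript.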
The setContent() function sets the HTML content into the content area and also evaluates any JavaScript that is included in the content to be inserted (this is not used very much, but it is there for those who need it).

Our dynamic user interface looks like this:

Note the Firebug window at the bottom of the browser screen. The Firebug console in the screenshot shows one POST request to our HorizontalMenu.getMenuItems() method. Other Firebug features are extremely useful as well, and we find it useful to keep Firebug enabled throughout development.

Callback Functions

We saw our first callback function as a parameter in the HorizontalMenu.getMenuItems(setMenuItems) function, and since callbacks are an important concept in DWR, it is worth discussing them a little more now that we have seen their first usage. Callbacks are used to operate on the data that was returned from a remoted method. As DWR and AJAX are asynchronous, typical return values of RPCs (Remote Procedure Calls), as in Java calls, do not work. DWR hides the details of calling the callback functions and handles everything internally, from the moment the remoted Java method returns a value to the moment the returned value reaches the callback function.

Two methods are recommended while using callback functions. We have already seen the first method in the HorizontalMenu.getMenuItems(setMenuItems) function call. Remember that there are no parameters in the getMenuItems() Java method, but in the JavaScript call, we added the callback function name at the end of the parameter list. If the Java method has parameters, then the JavaScript call is similar to CountryDB.getCountries(selectedLetters,setCountryRows), where selectedLetters is the input parameter for the Java method and setCountryRows is the name of the callback function (we see the implementation later on).

The second method to use callbacks is a meta-data object in the remote JavaScript call. An example (a full implementation is shown later in this article) is shown here:

CountryDB.saveCountryNotes(ccode,newNotes,
{
  callback:function(newNotes)
  {
    //function body here
  }
});

Here, the function is anonymous and its implementation is included in the JavaScript call to the remoted Java method. One advantage here is that the code is easy to read, and the code is executed immediately after we get the return value from the Java method. The other advantage is that we can add extra options to the call. Extra options include a timeout and an error handler, as shown in the following example:

CountryDB.saveCountryNotes(ccode,newNotes,
{
  callback:function(newNotes)
  {
    //function body here
  },
  timeout:10000,
  errorHandler:function(errorMsg) { alert(errorMsg); }
});

It is also possible to add a callback function to those Java methods that do not return a value. Adding a callback to methods with no return value is useful for getting a notification when the remote call has been completed.

Afterword

Our first sample is ready, and it is also the basis for the following samples. We also looked at how applications are tested in the Eclipse environment. Using DWR, we can look at JavaScript code on the browser and Java code on the server as one. It may take a while to get used to it, but it will change the way we develop web applications. Logically, there is no longer a client and a server, but just a single runtime platform that happens to be physically separate. In practice, of course, applications using DWR, with JavaScript on the client and Java on the server, still use the typical client-server interaction. This should be remembered when writing applications for this logically single runtime platform.

Drools JBoss Rules 5.0 Flow (Part 1)

Packt
16 Oct 2009
10 min read
Loan approval service

Loan approval is a complex process starting with the customer requesting a loan. This request comes with information such as the amount to be borrowed, the duration of the loan, and the destination account to which the borrowed amount will be transferred. Only existing customers can apply for a loan. The process starts with validating the request. Upon successful validation, a customer rating is calculated. Only customers with a certain rating are allowed to have loans. The loan is processed by a bank employee. As soon as an approved event is received from a supervisor, the loan is approved and money can be transferred to the destination account. An email is sent to inform the customer about the outcome.

Model

If we look at this process from the domain modeling perspective, in addition to the model that we already have, we'll need a Loan class. An instance of this class will be a part of the context of this process.

The screenshot above shows the Java Bean, Loan, for holding loan-related information. The Loan bean defines three properties: amount (which is of type BigDecimal), destinationAccount (which is of type Account; if the loan is approved, the amount will be transferred to this account), and durationYears (which represents the period over which the customer will be repaying this loan).

Loan approval ruleflow

We'll now represent this process as a ruleflow. It is shown in the following figure. Try to remember this figure because we'll be referring back to it throughout this article.

The preceding figure shows the loan approval process (the loanApproval.rf file). You can use the Ruleflow Editor that comes with the Drools Eclipse plugin to create this ruleflow. The rest of the article will be a walk through this ruleflow, explaining each node in more detail.

The process starts with the Validate Loan ruleflow group. Rules in this group will check the loan for missing required values and do other, more complex validation. Each validation rule simply inserts a Message into the knowledge session. The next node, called Validated?, is an XOR-type split node. The ruleflow will continue through the no errors branch if there are no error or warning messages in the knowledge session; the split node constraint for this branch says:

not Message()

Code listing 1: Validated? split node no errors branch constraint (loanApproval.rf file).

For this to work, we need to import the Message type into the ruleflow. This can be done from the Constraint editor; just click on the Imports... button. The import statements are common for the whole ruleflow. Whenever we use a new type in the ruleflow (constraints, actions, and so on), it needs to be imported.

The otherwise branch is a "catch all" type branch (it is set to 'always true'). It has a higher priority number, which means that it will be checked after the no errors branch.

The .rf files are pure XML files that conform to a well-formed XSD schema. They can be edited with any XML editor.

Invalid loan application form

If the validation didn't pass, an email is sent to the customer and the loan approval process finishes as Not Valid. This can be seen in the otherwise branch. There are two nodes: Email and Not Valid. Email is a special ruleflow node called a work item.

Email work item

A work item is a node that encapsulates some piece of work. This can be an interaction with another system or some logic that is easier to write using standard Java. Each work item represents a piece of logic that can be reused in many systems.

We can also look at work items as a ruleflow alternative to DSLs. By default, Drools Flow comes with various generic work items, for example, Email (for sending emails), Log (for logging messages), Finder (for finding files on a file system), Archive (for archiving files), and Exec (for executing programs/system commands). In a real application, you'd probably want to use a different work item than a generic one for sending an email, for example, a custom work item that inserts a record into your loan repository.
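On the Java side, each work item is backed by a work item handler that is registered with the session. As an illustration of what such a loan-repository handler could look like, here is a minimal sketch against the Drools 5 knowledge API; the "loan" parameter name and the persistence step are assumptions made for the example, not code from this article.

import java.util.HashMap;

import org.drools.runtime.process.WorkItem;
import org.drools.runtime.process.WorkItemHandler;
import org.drools.runtime.process.WorkItemManager;

import droolsbook.bank.model.Loan;

public class LoanRepositoryWorkItemHandler implements WorkItemHandler {

  public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
    // Work item parameters are set in the ruleflow editor or at runtime;
    // the "loan" parameter name is an assumption made for this sketch.
    Loan loan = (Loan) workItem.getParameter("loan");

    // A real handler would insert a record into the loan repository here;
    // the sketch just reports what it received.
    System.out.println("Storing loan of amount " + loan.getAmount());

    // Completing the work item lets the ruleflow continue past the node.
    manager.completeWorkItem(workItem.getId(), new HashMap<String, Object>());
  }

  public void abortWorkItem(WorkItem workItem, WorkItemManager manager) {
    // Nothing to clean up in this sketch.
  }
}

A handler like this would then be registered with the session under the work item's name, in the same way the Email handler is registered in the test setup shown later in this article.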
Each work item can take multiple parameters. In the case of email, these are: From, To, Subject, Text, and others. Values for these parameters can be specified at ruleflow creation time or at runtime. By double-clicking on the Email node in the ruleflow, the Custom Work Editor is opened (see the following screenshot). Please note that not all work items have a custom editor.

In the first tab (not visible), we can specify the recipients and the source email address. In the second tab (visible), we can specify the email's subject and body. If you look closer at the body of the email, you'll notice two placeholders. They have the following syntax: #{placeholder}. A placeholder can contain any MVEL code and has access to all of the ruleflow variables (we'll learn more about ruleflow variables later in this article). This allows us to customize the work item parameters based on runtime conditions. As can be seen from the screenshot above, we use two placeholders: customer.firstName and errorList. customer and errorList are ruleflow variables. The first one represents the current Customer object and the second one is the ValidationReport. When the ruleflow execution reaches this email work item, these placeholders are evaluated and replaced with the actual values (by calling the toString method on the result).

Fault node

The second node in the otherwise branch of the loan approval process ruleflow is a fault node. A fault node is similar to an end node. It accepts one incoming connection and has no outgoing connections. When the execution reaches this node, a fault is thrown with the given name. We could, for example, register a fault handler that will generate a record in our reporting database. However, we won't register a fault handler, and in that case, it will simply indicate that this ruleflow finished with an error.

Test setup

We'll now write a test for the otherwise branch. First, let's set up the test environment. A new session is created in the setup method, along with some test data: a valid Customer with one Account is requesting a Loan. The setup method will create a valid loan configuration, and the individual tests can then change this configuration in order to test various exceptional cases.
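The knowledgeBase field used in the setup method below is assumed to have been built beforehand, typically once per test class, from the ruleflow definition and the rule files. With the Drools 5 API, that construction usually looks something like the following sketch; the loanApproval.drl file name is an assumption made for the example.

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;

// Typically done once, for example in a @BeforeClass method.
KnowledgeBuilder builder = KnowledgeBuilderFactory.newKnowledgeBuilder();
// The ruleflow definition (DRF) and the validation/rating rules (DRL)
// are compiled together into one knowledge base.
builder.add(ResourceFactory.newClassPathResource("loanApproval.rf"), ResourceType.DRF);
builder.add(ResourceFactory.newClassPathResource("loanApproval.drl"), ResourceType.DRL);
if (builder.hasErrors()) {
  throw new IllegalStateException(builder.getErrors().toString());
}
KnowledgeBase knowledgeBase = KnowledgeBaseFactory.newKnowledgeBase();
knowledgeBase.addKnowledgePackages(builder.getKnowledgePackages());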
@Before
public void setUp() throws Exception {
  session = knowledgeBase.newStatefulKnowledgeSession();
  trackingProcessEventListener = new TrackingProcessEventListener();
  session.addEventListener(trackingProcessEventListener);
  session.getWorkItemManager().registerWorkItemHandler(
      "Email", new SystemOutWorkItemHandler());
  loanSourceAccount = new Account();

  customer = new Customer();
  customer.setFirstName("Bob");
  customer.setLastName("Green");
  customer.setEmail("bob.green@mail.com");
  Account account = new Account();
  account.setNumber(123456789l);
  customer.addAccount(account);
  account.setOwner(customer);

  loan = new Loan();
  loan.setDestinationAccount(account);
  loan.setAmount(BigDecimal.valueOf(4000.0));
  loan.setDurationYears(2);
}

Code listing 2: Test setup method called before every test execution (DefaultLoanApprovalServiceTest.java file).

A tracking ruleflow event listener is created and added to the knowledge session. This event listener will record the execution path of a ruleflow, that is, store all of the executed ruleflow nodes in a list. TrackingProcessEventListener overrides the beforeNodeTriggered method and gets the node to be executed by calling event.getNodeInstance().

loanSourceAccount represents the bank's account for sourcing loans.

The setup method also registers an Email work item handler. A work item handler is responsible for the execution of the work item (in this case, connecting to the mail server and sending out emails). However, the SystemOutWorkItemHandler implementation that we've used is only a dummy implementation that writes some information to the console. It is useful for our testing purposes.

Testing the 'otherwise' branch of 'Validated?' node

We'll now test the otherwise branch, which sends an email informing the applicant about missing data and ends with a fault. Our test (the following code) will set up a loan request that will fail the validation. It will then verify that the fault node was executed and that the ruleflow process was aborted.

@Test
public void notValid() {
  session.insert(new DefaultMessage());
  startProcess();

  assertTrue(trackingProcessEventListener.isNodeTriggered(
      PROCESS_LOAN_APPROVAL, NODE_FAULT_NOT_VALID));
  assertEquals(ProcessInstance.STATE_ABORTED,
      processInstance.getState());
}

Code listing 3: Test method for testing Validated? node's otherwise branch (DefaultLoanApprovalServiceTest.java file).

By inserting a message into the session, we're simulating a validation error. The ruleflow should end up in the otherwise branch. Next, the test above calls the startProcess method. Its implementation is as follows:

private void startProcess() {
  Map<String, Object> parameterMap = new HashMap<String, Object>();
  parameterMap.put("loanSourceAccount", loanSourceAccount);
  parameterMap.put("customer", customer);
  parameterMap.put("loan", loan);
  processInstance = session.startProcess(
      PROCESS_LOAN_APPROVAL, parameterMap);
  session.insert(processInstance);
  session.fireAllRules();
}

Code listing 4: Utility method for starting the ruleflow (DefaultLoanApprovalServiceTest.java file).

The startProcess method starts the loan approval process. It also sets loanSourceAccount, loan, and customer as ruleflow variables. The resulting process instance is, in turn, inserted into the knowledge session. This will enable our rules to make more sophisticated decisions based on the state of the current process instance. Finally, all of the rules are fired.

We're already supplying three variables to the ruleflow; however, we haven't declared them yet. Let's fix this.
Ruleflow variables can be added through Eclipse's Properties editor, as can be seen in the following screenshot (just click on the ruleflow canvas; this should give the focus to the ruleflow itself). Each variable needs a name, a type and, optionally, a value. The preceding screenshot shows how to set the loan ruleflow variable. Its Type is set to Object and ClassName is set to the full type name droolsbook.bank.model.Loan. The other two variables are set in a similar manner.

Now back to the test from code listing 3. It verifies that the correct nodes were triggered and that the process ended in the aborted state. The isNodeTriggered method takes the process ID, which is stored in a constant called PROCESS_LOAN_APPROVAL. The method also takes the node ID as a second argument. This node ID can be found in the properties view after clicking on the fault node. The node ID, NODE_FAULT_NOT_VALID, is a constant of type long defined as a property of this test class.

static final long NODE_FAULT_NOT_VALID = 21;
static final long NODE_SPLIT_VALIDATED = 20;

Code listing 5: Constants that hold the fault and Validated? node's IDs (DefaultLoanApprovalServiceTest.java file).

By using the node ID, we can change the node's name and other properties without breaking this test (the node ID is least likely to change). Also, if we're performing bigger refactorings involving node ID changes, we have only one place to update: the test's constants.

Ruleflow unit testing
Drools Flow support for unit testing isn't the best. With every test, we have to run the full process from start to end. We'll make it easier with some helper methods that will set up a state that exercises different parts of the flow, for example, a loan with a high amount to borrow or a customer with a low rating.
Ideally, we should be able to test each node in isolation: simply start the ruleflow at a particular node, set the parameters needed for a particular test, and verify that the node executed as expected.
Drools support for snapshots may resolve some of these issues; however, we'd have to first create all of the snapshots that we need before executing the individual test methods. Another alternative is to dig deeper into the Drools internal API, but this is not recommended. The internal API can change in the next release without any notice.
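For completeness, the TrackingProcessEventListener used by these tests can be written as a small process event listener. A minimal version along the lines described above might look like the following sketch, assuming the DefaultProcessEventListener base class from the Drools 5 event API; it is not necessarily the exact implementation used in the book's source code.

import java.util.ArrayList;
import java.util.List;

import org.drools.event.process.DefaultProcessEventListener;
import org.drools.event.process.ProcessNodeTriggeredEvent;

public class TrackingProcessEventListener extends DefaultProcessEventListener {

  // Records the ID of every node that is about to be executed.
  private final List<Long> triggeredNodeIds = new ArrayList<Long>();

  @Override
  public void beforeNodeTriggered(ProcessNodeTriggeredEvent event) {
    triggeredNodeIds.add(event.getNodeInstance().getNodeId());
  }

  // Returns true if the node with the given ID was triggered. The processId
  // parameter matches the test's isNodeTriggered(processId, nodeId) calls;
  // in this sketch it is not used for filtering.
  public boolean isNodeTriggered(String processId, long nodeId) {
    return triggeredNodeIds.contains(nodeId);
  }
}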

Flex 101 with Flash Builder 4: Part 1

Packt
16 Oct 2009
11 min read
The article is intended for developers who have never used Flex before and would like to work through a "Hello World" kind of tutorial. The article does not aim to cover Flex and FB4 in detail, but rather focuses on the mechanics of FB4 and getting an application running with minimal effort. For developers familiar with Flex and the predecessor to Flash Builder 4 (Flex Builder 2 or 3), it contains an introduction to FB4 and some differences in the way you go about building Flex applications using FB4. Even if you have not programmed before and want to understand how to make a start in developing applications, this would serve as a good starting point.

The Flex Ecosystem

The Flex ecosystem is a set of libraries, tools, languages, and a deployment runtime that provides an end-to-end framework for designing, developing, and deploying RIAs. All these together are branded as part of the Flash platform. In its latest release, Flex 4, special effort has been put into the designer-to-developer workflow: graphic designers address layout, skinning, effects, and the general look and feel of your application, and developers then take over to address the application logic, events, and so on.

To understand this at a high level, take a look at the diagram shown below. This is a very simplified diagram and the intention is to give a 10,000 ft view of the development, compilation, and execution process. Let us understand the diagram now:

The developer will typically work in the Flash Builder application. Flash Builder is the Integrated Development Environment (IDE) that provides an environment for coding, compiling, and running/debugging your Flex-based applications.

Your Flex application will typically consist of MXML and ActionScript code. ActionScript is an ECMAScript-compatible object-oriented language, whereas MXML is an XML-based markup language. Using MXML you can define and lay out your visual components like buttons, combo boxes, data grids, and others. Your application logic will typically be coded inside ActionScript classes/methods.

While coding your Flex application, you will make use of the Flex framework classes that provide most of the core functionality. Additional libraries like the Flex Charting libraries and 3rd party components can be used in your application too.

Flash Builder compiles all of this into object byte code that can be executed inside the Flash Player. Flash Player is the runtime host that executes your application.

This is a high-level introduction to the ecosystem, and as we work through the samples later on in the article, things will start falling into place.

Flash Builder 4

Flash Builder is the new name for the development IDE previously known as Flex Builder. The latest release is 4 and it is currently in public beta. Flash Builder 4 is based on the Eclipse IDE, so if you are familiar with Eclipse-based tools, you will be able to navigate your way quite easily. Flash Builder 4, like Flex Builder 3 before it, is a commercial product and you need to purchase a development license. FB4 is currently in public beta and is available as a 30-day evaluation. Through the rest of the article, we will make use of FB4 and will be focused completely on that to build and run the sample applications. Let us now take a look at setting up FB4.

Setting up your Development Environment

To set up Flash Builder 4, follow these steps:

The first step should be installing Flash Player 10 on your system.
We will be developing with the Flex 4 SDK that comes along with Flash Builder 4, and it requires Flash Player 10. You can download the latest version of Flash Player from here: http://www.adobe.com/products/flashplayer/

Download Flash Builder 4 Public Beta from http://labs.adobe.com/technologies/flashbuilder4/. The page is shown below:

After you download, run the installer program and proceed with the rest of the installation.

Launch the Adobe Flash Builder Beta. It will prompt first with a message that it is a Trial version as shown below:

To continue in evaluation mode, select the option highlighted above and click Next. This will launch the Flash Builder IDE.

Let us start coding with the Flash Builder 4 IDE. We will stick to tradition and write the "Hello World" application.

Hello World using Flash Builder 4

In this section, we will be developing a basic Hello World application. While the application does not do much, it will help you get comfortable with the Flash Builder IDE.

Launch the Flash Builder IDE. We will be creating a Flex Project. Flash Builder will help us create the project that will contain all our files. To create a new Flex Project, click on File → New → Flex Project as shown below:

This will bring up a dialog in which you will need to specify more details about the Flex Project that you plan to develop. The dialog is shown below:

You will need to provide at least the following information:

Project Name: This is the name of your project. Enter a name that you want over here. In our case, we have named our project MyFirstFB4App.

Application Type: We can develop both a web version and a desktop version of our application using Flash Builder. The web application will run inside a web browser and execute within the Flash Player plug-in. We will go with the Web option over here. The desktop application runs inside the Adobe Integrated Runtime environment and can have more desktop-like features. We will skip that option for now.

We will let the other options remain as is. We will use the Flex 4.0 SDK, and since we are currently not integrating with any server-side layer, we will leave that option as None/Other. Click on Finish at this point to create your Flex Project.

This will create a main application file called MyFirstFB4App.mxml as shown below. We will come back to our coding a little later, but first we must familiarize ourselves with the Flash Builder IDE.

Let us first look at the Package Explorer to understand the files that have been created for the Flex Project. The screenshot is shown below:

It consists of the main source file MyFirstFB4App.mxml. This is the main application file or, in other words, the bootstrap. All your source files (MXML and ActionScript code, along with assets like images and others) should go under the src folder. They can optionally be placed in packages too.

The Flex 4.0 framework consists of several libraries that you compile your code against. You would end up using its framework code, components (visual and non-visual), and other classes. These classes are packaged in library files with the extension .swc. A list of library files is shown above. You do not typically need to do anything with it. Optionally, you can also use 3rd party components written by other companies and developers that are not part of the Flex framework. These libraries are packaged as .SWC files too, and they can be placed in the libs folder as shown in the previous screenshot.

The typical step is to write and compile your code, that is, build your project.
If your build is successful, the object code is generated in the bin-debug folder. When you deploy your application to a web server, you will need to pick up the contents from this folder. We will come to that a little later.

The html-template folder contains some boilerplate code, including the container HTML from which your object code will be referenced. It is possible to customize this, but for now, we will not discuss that.

Double-click the MyFirstFB4App.mxml file. This is our main application file. The code listing is given below (the namespace declarations are shown here for completeness; Flash Builder generates them for you):

<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               minWidth="1024" minHeight="768">
</s:Application>

As discussed before, you will typically write one or more MXML files that will contain your visual components (although there can be non-visual components also). By visual components, we mean controls like button, combobox, list, tree, and others. They can also contain layout components and containers that help you lay out your design as per the application screen design.

To view what components you can place on the main application canvas, select the Design View as shown below:

Have a look at the lower half of the left pane. You will see the Components tab as shown below, which would address most needs of your application's visual design. Click on the Controls tree node as shown below. You will see several controls that you can use, and from these we will use the Button control for this application.

Simply select the Button control and drag it to the Design View canvas as shown below:

This will drop an instance of the Button control on the Design View as shown below:

Select the Button to see its Properties panel as shown below. The Properties panel is where you can set several attributes at design time for the control. In case the Properties panel is not visible, you can get to it by selecting Window → Properties from the main menu.

In the Properties panel, we can change several key attributes. All controls can be uniquely identified and addressed in your code via the ID attribute. This is a unique name that you need to provide. Go ahead and give it some meaningful name. In our case, we name it btnSayHello. Next, we can change the label so that instead of Button, it displays a message, for example, Say Hello.

Finally, we want to wire some code such that if the button is clicked, we can perform some action like displaying a message box saying Hello World. To do that, click the icon next to the On click edit field as shown below. It will provide you two options. Select the option for Generate Event Handler. This will generate the code and switch to the Source view. The code is listed below for your reference.

<?xml version="1.0" encoding="utf-8"?>
<s:Application xmlns:fx="http://ns.adobe.com/mxml/2009"
               xmlns:s="library://ns.adobe.com/flex/spark"
               minWidth="1024" minHeight="768">
    <fx:Script>
        <![CDATA[
            protected function btnSayHello_clickHandler(event:MouseEvent):void
            {
                // TODO Auto-generated method stub
            }
        ]]>
    </fx:Script>
    <s:Button x="17" y="14" label="Button" id="btnSayHello"
              click="btnSayHello_clickHandler(event)"/>
</s:Application>

There are a few things to note here. As mentioned, most of your application logic will be written in ActionScript, and that is exactly what Flash Builder has generated for you. All such code is typically added inside a scripting block marked with the <fx:Script> tag. You can place your ActionScript methods over here so that they can be used by the rest of the application. When we clicked on Generate Event Handler, Flash Builder generated the event handler code.
This code is in ActionScript and was appropriately placed inside the <fx:Script> block for us. If you look at the code, you can see that it has added a function that is invoked when the click event is fired on the button. The method is btnSayHello_clickHandler and, if you notice, it is an empty method, that is, it has no implementation yet.

Let us run the application now to see what it looks like. To run the application, click on the Run icon in the main toolbar of Flash Builder. This will launch the web application as shown below. Clicking the Say Hello button will not do anything at this point, since there is no code written inside the handler, as we saw above.

To display the message box, we add the code shown below (only the Script section is shown):

<fx:Script>
    <![CDATA[
        import mx.controls.Alert;

        protected function btnSayHello_clickHandler(event:MouseEvent):void
        {
            Alert.show("Hello World");
        }
    ]]>
</fx:Script>

We use one of the classes (called Alert) from the Flex framework. Like any other language, we need to specify which package we are using the class from so that the compiler can understand it. The Alert class belongs to the mx.controls package and it has a static method called show(), which in its simplest form takes a single parameter of type String. This String parameter is the message to be displayed, and in our case it is "Hello World".

To run this, press Ctrl+S to save your file, or select File → Save from the main menu, and click on the Run icon in the main toolbar. This will launch the application, and on clicking the Say Hello button, you will see the Hello World Alert window as shown below.