
How-To Tutorials - Application Development

357 Articles

Creating a Spring Application

Packt · 25 May 2015 · 18 min read
In this article by Jérôme Jaglale, author of the book Spring Cookbook, we will cover the following recipes:

- Installing Java, Maven, Tomcat, and Eclipse on Mac OS
- Installing Java, Maven, Tomcat, and Eclipse on Ubuntu
- Installing Java, Maven, Tomcat, and Eclipse on Windows
- Creating a Spring web application
- Running a Spring web application
- Using Spring in a standard Java application

Introduction

In this article, we will first cover the installation of some of the tools for Spring development:

- Java: Spring is a Java framework.
- Maven: This is a build tool similar to Ant. It makes it easy to add Spring libraries to a project. Gradle is another option as a build tool.
- Tomcat: This is a web server for Java web applications. You can also use JBoss, Jetty, GlassFish, or WebSphere.
- Eclipse: This is an IDE. You can also use NetBeans, IntelliJ IDEA, and so on.

Then, we will build a Spring web application and run it with Tomcat. Finally, we'll see how Spring can also be used in a standard Java application (not a web application).

Installing Java, Maven, Tomcat, and Eclipse on Mac OS

We will first install Java 8 because it's not installed by default on Mac OS 10.9 or higher. Then, we will install Maven 3, a build tool similar to Ant, to manage the external Java libraries that we will use (Spring, Hibernate, and so on). Maven 3 also compiles source files and generates JAR and WAR files. We will also install Tomcat 8, a popular web server for Java web applications, which we will use throughout this book. JBoss, Jetty, GlassFish, or WebSphere could be used instead. Finally, we will install the Eclipse IDE, but you could also use NetBeans, IntelliJ IDEA, and so on.

How to do it…

Install Java first, then Maven, Tomcat, and Eclipse.

Installing Java

1. Download Java from the Oracle website http://oracle.com. In the Java SE downloads section, choose the Java SE 8 SDK. Select Accept the License Agreement and download the Mac OS X x64 package. The direct link to the page is http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html.
2. Open the downloaded file, launch it, and complete the installation.
3. In your ~/.bash_profile file, set the JAVA_HOME environment variable. Change jdk1.8.0_40.jdk to the actual folder name on your system (this depends on the version of Java you are using, which is updated regularly):

   export JAVA_HOME="/Library/Java/JavaVirtualMachines/jdk1.8.0_40.jdk/Contents/Home"

4. Open a new terminal and test whether it's working:

   $ java -version
   java version "1.8.0_40"
   Java(TM) SE Runtime Environment (build 1.8.0_40-b26)
   Java HotSpot(TM) 64-Bit Server VM (build 25.40-b25, mixed mode)

Installing Maven

1. Download Maven from the Apache website http://maven.apache.org/download.cgi. Choose the binary zip file of the current stable version.
2. Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin).
3. In your ~/.bash_profile file, add a MAVEN_HOME environment variable pointing to that folder. For example:

   export MAVEN_HOME=~/bin/apache-maven-3.3.1

4. Add the bin subfolder to your PATH environment variable:

   export PATH=$PATH:$MAVEN_HOME/bin

5. Open a new terminal and test whether it's working:

   $ mvn -v
   Apache Maven 3.3.1 (12a6b3...
   Maven home: /Users/jerome/bin/apache-maven-3.3.1
   Java version: 1.8.0_40, vendor: Oracle Corporation
   Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_...
   Default locale: en_US, platform encoding: UTF-8
   OS name: "mac os x", version: "10.9.5", arch...
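If you prefer to double-check from code which JDK the java command actually resolves to, a throwaway class like the following works too. This is just a quick sketch, not part of the book's recipe:

public class EnvCheck {

    // Prints the version and installation directory of the JVM running this class.
    public static void main(String[] args) {
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.home    = " + System.getProperty("java.home"));
    }
}

Compile and run it with javac EnvCheck.java followed by java EnvCheck; the reported version should match the JDK you just installed (1.8.0_x), and java.home should point under the JAVA_HOME you set.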
Installing Tomcat

1. Download Tomcat from the Apache website http://tomcat.apache.org/download-80.cgi and choose the Core binary distribution.
2. Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin).
3. Make the scripts in the bin subfolder executable:

   chmod +x bin/*.sh

4. Launch Tomcat using the catalina.sh script:

   $ bin/catalina.sh run
   Using CATALINA_BASE:   /Users/jerome/bin/apache-tomcat-7.0.54
   ...
   INFO: Server startup in 852 ms

5. Tomcat runs on port 8080 by default. In a web browser, go to http://localhost:8080/ to check whether it's working.

Installing Eclipse

1. Download Eclipse from http://www.eclipse.org/downloads/. Choose the Mac OS X 64 Bit version of Eclipse IDE for Java EE Developers.
2. Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin).
3. Launch Eclipse by executing the eclipse binary:

   ./eclipse

There's more…

Tomcat can be run as a background process using these two scripts:

   bin/startup.sh
   bin/shutdown.sh

On a development machine, it's convenient to put Tomcat's folder somewhere in the home directory (for example, ~/bin) so that its contents can be updated without root privileges.

Installing Java, Maven, Tomcat, and Eclipse on Ubuntu

We will first install Java 8. Then, we will install Maven 3, a build tool similar to Ant, to manage the external Java libraries that we will use (Spring, Hibernate, and so on). Maven 3 also compiles source files and generates JAR and WAR files. We will also install Tomcat 8, a popular web server for Java web applications, which we will use throughout this book. JBoss, Jetty, GlassFish, or WebSphere could be used instead. Finally, we will install the Eclipse IDE, but you could also use NetBeans, IntelliJ IDEA, and so on.

How to do it…

Install Java first, then Maven, Tomcat, and Eclipse.

Installing Java

1. Add this PPA (Personal Package Archive):

   sudo add-apt-repository -y ppa:webupd8team/java

2. Refresh the list of the available packages:

   sudo apt-get update

3. Download and install Java 8:

   sudo apt-get install -y oracle-java8-installer

4. Test whether it's working:

   $ java -version
   java version "1.8.0_40"
   Java(TM) SE Runtime Environment (build 1.8.0_40-b25)...
   Java HotSpot(TM) 64-Bit Server VM (build 25.40-b25...

Installing Maven

1. Download Maven from the Apache website http://maven.apache.org/download.cgi. Choose the binary zip file of the current stable version.
2. Uncompress the downloaded file and move the resulting folder to a convenient location (for example, ~/bin).
3. In your ~/.bash_profile file, add a MAVEN_HOME environment variable pointing to that folder. For example:

   export MAVEN_HOME=~/bin/apache-maven-3.3.1

4. Add the bin subfolder to your PATH environment variable:

   export PATH=$PATH:$MAVEN_HOME/bin

5. Open a new terminal and test whether it's working:

   $ mvn -v
   Apache Maven 3.3.1 (12a6b3...
   Maven home: /home/jerome/bin/apache-maven-3.3.1
   Java version: 1.8.0_40, vendor: Oracle Corporation...

Installing Tomcat

1. Download Tomcat from the Apache website http://tomcat.apache.org/download-80.cgi and choose the Core binary distribution.
2. Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin).
3. Make the scripts in the bin subfolder executable:

   chmod +x bin/*.sh

4. Launch Tomcat using the catalina.sh script:

   $ bin/catalina.sh run
   Using CATALINA_BASE:   /Users/jerome/bin/apache-tomcat-7.0.54
   ...
   INFO: Server startup in 852 ms

5. Tomcat runs on port 8080 by default. Go to http://localhost:8080/ to check whether it's working.
Installing Eclipse

1. Download Eclipse from http://www.eclipse.org/downloads/. Choose the Linux 64 Bit version of Eclipse IDE for Java EE Developers.
2. Uncompress the downloaded file and move the extracted folder to a convenient location (for example, ~/bin).
3. Launch Eclipse by executing the eclipse binary:

   ./eclipse

There's more…

Tomcat can be run as a background process using these two scripts:

   bin/startup.sh
   bin/shutdown.sh

On a development machine, it's convenient to put Tomcat's folder somewhere in the home directory (for example, ~/bin) so that its contents can be updated without root privileges.

Installing Java, Maven, Tomcat, and Eclipse on Windows

We will first install Java 8. Then, we will install Maven 3, a build tool similar to Ant, to manage the external Java libraries that we will use (Spring, Hibernate, and so on). Maven 3 also compiles source files and generates JAR and WAR files. We will also install Tomcat 8, a popular web server for Java web applications, which we will use throughout this book. JBoss, Jetty, GlassFish, or WebSphere could be used instead. Finally, we will install the Eclipse IDE, but you could also use NetBeans, IntelliJ IDEA, and so on.

How to do it…

Install Java first, then Maven, Tomcat, and Eclipse.

Installing Java

1. Download Java from the Oracle website http://oracle.com. In the Java SE downloads section, choose the Java SE 8 SDK. Select Accept the License Agreement and download the Windows x64 package. The direct link to the page is http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html.
2. Open the downloaded file, launch it, and complete the installation.
3. Navigate to Control Panel | System and Security | System | Advanced system settings | Environment Variables…. Add a JAVA_HOME system variable with the value C:\Program Files\Java\jdk1.8.0_40. Change jdk1.8.0_40 to the actual folder name on your system (this depends on the version of Java, which is updated regularly).
4. Test whether it's working by opening Command Prompt and entering java -version.

Installing Maven

1. Download Maven from the Apache website http://maven.apache.org/download.cgi. Choose the binary zip file of the current stable version.
2. Uncompress the downloaded file. Create a Programs folder in your user folder and move the extracted folder to it.
3. Navigate to Control Panel | System and Security | System | Advanced system settings | Environment Variables…. Add a MAVEN_HOME system variable with the path to the Maven folder, for example, C:\Users\jerome\Programs\apache-maven-3.2.1.
4. Open the Path system variable and append ;%MAVEN_HOME%\bin to it.
5. Test whether it's working by opening a Command Prompt and entering mvn -v.

Installing Tomcat

1. Download Tomcat from the Apache website http://tomcat.apache.org/download-80.cgi and choose the 32-bit/64-bit Windows Service Installer binary distribution.
2. Launch and complete the installation.
3. Tomcat runs on port 8080 by default. Go to http://localhost:8080/ to check whether it's working.

Installing Eclipse

1. Download Eclipse from http://www.eclipse.org/downloads/. Choose the Windows 64 Bit version of Eclipse IDE for Java EE Developers.
2. Uncompress the downloaded file.
3. Launch the eclipse program.

Creating a Spring web application

In this recipe, we will build a simple Spring web application with Eclipse. We will:

- Create a new Maven project
- Add Spring to it
- Add two Java classes to configure Spring
- Create a "Hello World" web page

In the next recipe, we will compile and run this web application.
How to do it…

In this section, we will create a Spring web application in Eclipse.

Creating a new Maven project in Eclipse

1. In Eclipse, in the File menu, select New | Project….
2. Under Maven, select Maven Project and click on Next >.
3. Select the Create a simple project (skip archetype selection) checkbox and click on Next >.
4. For the Group Id field, enter com.springcookbook. For the Artifact Id field, enter springwebapp. For Packaging, select war and click on Finish.

Adding Spring to the project using Maven

Open Maven's pom.xml configuration file at the root of the project. Select the pom.xml tab to edit the XML source code directly. Under the project XML node, define the versions for Java and Spring. Also add the Servlet API, Spring Core, and Spring MVC dependencies:

<properties>
  <java.version>1.8</java.version>
  <spring.version>4.1.5.RELEASE</spring.version>
</properties>

<dependencies>
  <!-- Servlet API -->
  <dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>3.1.0</version>
    <scope>provided</scope>
  </dependency>

  <!-- Spring Core -->
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>${spring.version}</version>
  </dependency>

  <!-- Spring MVC -->
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>${spring.version}</version>
  </dependency>
</dependencies>

Creating the configuration classes for Spring

1. Create the Java packages com.springcookbook.config and com.springcookbook.controller; in the left-hand side pane, Package Explorer, right-click on the project folder and select New | Package….
2. In the com.springcookbook.config package, create the AppConfig class. In the Source menu, select Organize Imports to add the needed import declarations:

package com.springcookbook.config;

@Configuration
@EnableWebMvc
@ComponentScan(basePackages = {"com.springcookbook.controller"})
public class AppConfig {
}

3. Still in the com.springcookbook.config package, create the ServletInitializer class. Add the needed import declarations similarly:

package com.springcookbook.config;

public class ServletInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {

    @Override
    protected Class<?>[] getRootConfigClasses() {
        return new Class<?>[0];
    }

    @Override
    protected Class<?>[] getServletConfigClasses() {
        return new Class<?>[]{AppConfig.class};
    }

    @Override
    protected String[] getServletMappings() {
        return new String[]{"/"};
    }
}

Creating a "Hello World" web page

In the com.springcookbook.controller package, create the HelloController class and its hi() method:

@Controller
public class HelloController {

    @RequestMapping("hi")
    @ResponseBody
    public String hi() {
        return "Hello, world.";
    }
}

How it works…

This section will give you more details of what happened at each step.

Creating a new Maven project in Eclipse

The generated Maven project is a pom.xml configuration file along with a hierarchy of empty directories:

pom.xml
src
|- main
|  |- java
|  |- resources
|  |- webapp
|- test
   |- java
   |- resources

Adding Spring to the project using Maven

The declared Maven libraries and their dependencies are automatically downloaded in the background by Eclipse. They are listed under Maven Dependencies in the left-hand side pane, Package Explorer.

Tomcat provides the Servlet API dependency, but we still declared it because our code needs it to compile. Maven will not include it in the generated .war file because of the <scope>provided</scope> declaration.

Creating the configuration classes for Spring

AppConfig is a Spring configuration class. It is a standard Java class annotated with:

- @Configuration: This declares it as a Spring configuration class.
- @EnableWebMvc: This enables Spring's ability to receive and process web requests.
- @ComponentScan(basePackages = {"com.springcookbook.controller"}): This scans the com.springcookbook.controller package for Spring components.

ServletInitializer is a configuration class for Spring's servlet; it replaces the standard web.xml file. It will be detected automatically by SpringServletContainerInitializer, which is automatically called by any Servlet 3 compatible container. ServletInitializer extends the AbstractAnnotationConfigDispatcherServletInitializer abstract class and implements the required methods:

- getServletMappings(): This declares the servlet root URI.
- getServletConfigClasses(): This declares the Spring configuration classes. Here, we declared the AppConfig class that was previously defined.

Creating a "Hello World" web page

We created a controller class in the com.springcookbook.controller package, which we declared in AppConfig. When navigating to http://localhost:8080/hi, the hi() method will be called and Hello, world. will be displayed in the browser.

Running a Spring web application

In this recipe, we will use the Spring web application from the previous recipe. We will compile it with Maven and run it with Tomcat.

How to do it…

Here are the steps to compile and run a Spring web application:

1. In pom.xml, add this boilerplate code under the project XML node. It allows Maven to generate .war files without requiring a web.xml file:

<build>
  <finalName>springwebapp</finalName>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-war-plugin</artifactId>
      <version>2.5</version>
      <configuration>
        <failOnMissingWebXml>false</failOnMissingWebXml>
      </configuration>
    </plugin>
  </plugins>
</build>

2. In Eclipse, in the left-hand side pane, Package Explorer, select the springwebapp project folder. In the Run menu, select Run and choose Maven install, or execute mvn clean install in a terminal at the root of the project folder. In both cases, a target folder will be generated with the springwebapp.war file in it.
3. Copy the target/springwebapp.war file to Tomcat's webapps folder.
4. Launch Tomcat.
5. In a web browser, go to http://localhost:8080/springwebapp/hi to check whether it's working.

How it works…

In pom.xml, the boilerplate code prevents Maven from throwing an error because there's no web.xml file. A web.xml file used to be required in Java web applications; however, since the Servlet 3.0 specification (implemented in Tomcat 7 and higher versions), it's not required anymore.

There's more…

On Mac OS and Linux, you can create a symbolic link in Tomcat's webapps folder pointing to the .war file in your project folder. For example:

ln -s ~/eclipse_workspace/spring_webapp/target/springwebapp.war ~/bin/apache-tomcat/webapps/springwebapp.war

So, when the .war file is updated in your project folder, Tomcat will detect that it has been modified and will reload the application automatically.
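Beyond checking the page in a browser, you can also smoke-test the deployed endpoint from code. The following throwaway client is not part of the recipe, just a sketch assuming the defaults used above (Tomcat on port 8080, the springwebapp context, and the /hi mapping):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SmokeTest {

    public static void main(String[] args) throws Exception {
        URL url = new URL("http://localhost:8080/springwebapp/hi");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        System.out.println("HTTP status: " + conn.getResponseCode());
        // The controller's hi() method should answer with "Hello, world."
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            System.out.println(in.readLine());
        }
        conn.disconnect();
    }
}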
Using Spring in a standard Java application

In this recipe, we will build a standard Java application (not a web application) using Spring. We will:

- Create a new Maven project
- Add Spring to it
- Add a class to configure Spring
- Add a User class
- Define a User singleton in the Spring configuration class
- Use the User singleton in the main() method

How to do it…

In this section, we will cover the steps to use Spring in a standard (not web) Java application.

Creating a new Maven project in Eclipse

1. In Eclipse, in the File menu, select New | Project….
2. Under Maven, select Maven Project and click on Next >.
3. Select the Create a simple project (skip archetype selection) checkbox and click on Next >.
4. For the Group Id field, enter com.springcookbook. For the Artifact Id field, enter springapp. Click on Finish.

Adding Spring to the project using Maven

Open Maven's pom.xml configuration file at the root of the project. Select the pom.xml tab to edit the XML source code directly. Under the project XML node, define the Java and Spring versions and add the Spring Core dependency:

<properties>
  <java.version>1.8</java.version>
  <spring.version>4.1.5.RELEASE</spring.version>
</properties>

<dependencies>
  <!-- Spring Core -->
  <dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>${spring.version}</version>
  </dependency>
</dependencies>

Creating a configuration class for Spring

1. Create the com.springcookbook.config Java package; in the left-hand side pane, Package Explorer, right-click on the project and select New | Package….
2. In the com.springcookbook.config package, create the AppConfig class. In the Source menu, select Organize Imports to add the needed import declarations:

@Configuration
public class AppConfig {
}

Creating the User class

Create a User Java class with two String fields:

public class User {

    private String name;
    private String skill;

    public String getName() {
        return name;
    }
    public void setName(String name) {
        this.name = name;
    }
    public String getSkill() {
        return skill;
    }
    public void setSkill(String skill) {
        this.skill = skill;
    }
}

Defining a User singleton in the Spring configuration class

In the AppConfig class, define a User bean:

@Bean
public User admin() {
    User u = new User();
    u.setName("Merlin");
    u.setSkill("Magic");
    return u;
}

Using the User singleton in the main() method

1. Create the com.springcookbook.main package with the Main class containing the main() method:

package com.springcookbook.main;

public class Main {
    public static void main(String[] args) {
    }
}

2. In the main() method, retrieve the User singleton and print its properties:

AnnotationConfigApplicationContext springContext =
    new AnnotationConfigApplicationContext(AppConfig.class);

User admin = (User) springContext.getBean("admin");

System.out.println("admin name: " + admin.getName());
System.out.println("admin skill: " + admin.getSkill());

springContext.close();

3. Test whether it's working; in the Run menu, select Run.

How it works...

We created a Java project to which we added Spring. We defined a User bean called admin (the bean name is, by default, the bean method's name). In the Main class, we created a Spring context object from the AppConfig class and retrieved the admin bean from it. We used the bean and, finally, closed the Spring context.
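For convenience, here are the two main() steps above combined into one compilable class. The imports are spelled out; the import for User assumes a com.springcookbook.model package, which the recipe leaves unspecified, so adjust it to wherever you created the class:

package com.springcookbook.main;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;

import com.springcookbook.config.AppConfig;
import com.springcookbook.model.User; // hypothetical package; use your own

public class Main {

    public static void main(String[] args) {
        // Bootstrap Spring from the annotated AppConfig class
        AnnotationConfigApplicationContext springContext =
            new AnnotationConfigApplicationContext(AppConfig.class);

        // The bean name "admin" defaults to the @Bean method's name
        User admin = (User) springContext.getBean("admin");

        System.out.println("admin name: " + admin.getName());
        System.out.println("admin skill: " + admin.getSkill());

        springContext.close();
    }
}

Running it should print Merlin and Magic, the values set in the admin bean definition.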
Summary

In this article, we have learned how to install some of the tools for Spring development. Then, we learned how to build a Spring web application and run it with Tomcat. Finally, we saw how Spring can also be used in a standard Java application.


Getting Started with Scratch 1.4 (Part 2)

Packt · 16 Oct 2009 · 7 min read
Add sprites to the stage

In the first part we learned that if we want something done in Scratch, we tell a sprite by using blocks in the scripts area. A single sprite can't be responsible for carrying out all our actions, which means we'll often need to add sprites to accomplish our goals. We can add sprites to the stage in one of the following four ways: paint new sprite, choose new sprite from file, get a surprise sprite, or duplicate a sprite. Duplicating a sprite is not in the scope of this article. The buttons to insert a new sprite using the other three methods are directly above the sprites list.

Let's be surprised. Click on get surprise sprite (the button with the "?" on it). If the second sprite covers up the first sprite, grab one of them with your mouse and drag it around the screen to reposition it. If you don't like the sprite that popped up, delete it by selecting the scissors from the toolbar and clicking on the sprite. Then click on get surprise sprite again.

Each sprite has a name that displays beneath its icon. Right now, our sprites are cleverly named Sprite1 and Sprite2.

Get new sprites

The paint new sprite option allows you to draw a sprite using the Paint Editor when you need a sprite that you can't find anywhere else. You can also create sprites using third-party graphics programs, such as Adobe Photoshop, GIMP, and Tux Paint. If you create a sprite in a different program, then you need to import the sprite using the choose new sprite from file option.

Scratch also bundles many sprites with the installation, and the choose new sprite from file option will allow you to select one of the included files. The bundled sprites are categorized into Animals, Fantasy, Letters, People, Things, and Transportation. If you look at the folder path carefully, you'll notice it lists Costumes, not sprites. A costume is really a sprite.

If you want to be surprised, then use the get surprise sprite option to add a sprite to the project. This option picks a random entry from the gallery of bundled sprites.

We can also add a new sprite by duplicating a sprite that's already in the project, by right-clicking (control + click on a Mac) on the sprite in the sprites list and choosing duplicate. As the name implies, this creates a clone of the sprite. The method we use to add a new sprite depends on what we are trying to do and what we need for our project.

Time for action – spin sprite spin

Let's get our sprites spinning:

1. To start, click on Sprite1 in the sprites list. This will let us edit the script for Sprite1.
2. From the Motion palette, drag the turn clockwise 15 degrees block into the script for Sprite1 and snap it in place after the if on edge, bounce block. Change the value on the turn block to 5.
3. From the sprites list, click on Sprite2.
4. From the Motion palette, drag the turn clockwise 15 degrees block into the scripts area.
5. Find the repeat 10 block in the Control palette and snap it around the turn clockwise 15 degrees block.
6. Wrap the script in the forever block.
7. Place the when space key pressed block on top of the entire stack of blocks.
8. From the Looks palette, snap the say hello for 2 secs block onto the bottom of the repeat block and above the forever block.
9. Change the value on the repeat block to 100.
10. Change the value on the turn clockwise 15 degrees block to 270.
11. Change the value on the say block to I'm getting dizzy!
12. Press the Space bar and watch the second sprite spin.
13. Click the flag and set the second sprite on a trip around the stage.

What just happened?

We have two sprites on the screen acting independently of each other. It seems simple enough, but let's step through our script.

Our cat got bored bouncing in a straight line across the stage, so we introduced some rotation. Now as the cat walked, it turned five degrees each time the blocks in the forever loop ran. This caused the cat to walk in an arc. As the cat bounced off the stage, it got a new trajectory.

We told Sprite2 to turn 270 degrees for 100 consecutive times. Then the sprite stopped for two seconds and displayed the message, "I'm getting dizzy!" Because the script was wrapped in a forever block, Sprite2 started tumbling again. We used the space bar as the control to set Sprite2 in motion. However, you noticed that Sprite1 did not start until we clicked the flag. That's because we programmed Sprite1 to start when the flag was clicked.

Have a go hero

Make Sprite2 less spastic. Instead of turning 270 degrees, try a smaller value, such as 5.

Sometimes we need inspiration

So far, we've had a cursory introduction to Scratch, and we've created a few animations to illustrate some basic concepts. However, now is a good time to pause and talk about inspiration. Sometimes we learn by examining the work of other people and adapting that work to create something new that leads to creative solutions. When we want to see what other people are doing with Scratch, we have two places to turn. First, our Scratch installation contains dozens of sample projects. Second, the Scratch website at http://scratch.mit.edu maintains a thriving community of Scratchers.

Browse Scratch's projects

Scratch includes several categories of projects: Animation, Games, Greetings, Interactive Art, Lists, Music and Dance, Names, Simulations, Speak Up, and Stories.

Time for action – spinner

Let's dive right in:

1. From the Scratch interface, click the Open button to display the Open Project dialog box.
2. Click on the Examples button.
3. Select Simulations and click OK.
4. Select Spinner and click OK to load the Spinner project.
5. Follow the instructions on the screen and spin the arrow by clicking on the arrow.
6. We're going to edit the spinner wheel. From the sprites list, click on Stage.
7. From the scripts area, click the Backgrounds tab.
8. Click Edit on background number 1 to open the Paint Editor.
9. Select a unique color from the color palette, such as purple.
10. Click on the paint bucket from the toolbar, then click on one of the triangles in the circle to change its color.
11. Click OK to return to our project.

What just happened?

We opened a community project called Spinner that came bundled with Scratch. When we clicked on the arrow, it spun and randomly selected a color from the wheel.

We got our first look at a project that uses a background for the stage, and we modified the background using Scratch's built-in image editor. The Paint Editor in Scratch provides a basic but functional image-editing environment. Using the Paint Editor, we can create a new sprite or background and modify an existing one. This can be useful if we are working with a sprite or background that someone else has created.

Costume versus background

A costume defines the look of a sprite, while a background defines the look of the stage. A sprite may have multiple costumes, just as the stage can have multiple backgrounds. When we want to work with the backgrounds on the stage, we use the switch to background and next background blocks. We use the switch to costume and next costume blocks when we want to manipulate a sprite's costume. Actually, if you look closely at the available Looks blocks when you're working with a sprite, you'll realize that you can't select the backgrounds. Likewise, if you're working with the stage, you can't select costumes.


Ext.NET – Understanding Direct Methods and Direct Events

Packt · 08 Aug 2013 · 4 min read
How to do it...

The steps to handle events raised by different controls are as follows:

1. Open the Packt.Ext2.Examples solution.
2. Press F5 or click on the Start button to run the solution.
3. Click on the Direct Methods & Events hyperlink. This will run the example code for this recipe.
4. Familiarize yourself with the code-behind and the client-side markup.

How it works...

Applying the [DirectMethod(namespace="ExtNetExample")] attribute to the server-side method GetDateTime(int timeDiff) has exposed this method to our client-side code under the ExtNetExample namespace, which we prepend to the method name when calling it on the client side. As we can see in the example code, we call this server method in the markup using the Ext.NET button btnDateTime and the code ExtNetExample.GetDateTime(3). When the call hits the server, we update the text property of the Ext.NET control lblDateTime, which updates the control bound to that property.

Adding namespace="ExtNetExample" allows us to neatly group server-side methods and the JavaScript calls in our code. A good notation is CompanyName.ProjectName.BusinessDomain.MethodName. Without the namespace attribute, we would access our server-side method using the default namespace of App.direct. So, to call the GetDateTime method without the namespace attribute, we would use App.direct.GetDateTime(3).

We can also see how to return a response from a Direct Method to the client-side JavaScript. If a Direct Method returns a value, it is sent back to the success function defined in a configuration object. This configuration object contains a number of functions, properties, and objects; we have dealt with the two most common functions in our example, the success and failure responses.

The server-side method GetCar() returns a custom object called Car. If the btnReturnResponse button is clicked and GetCar() successfully returns a response, we can access the value when Ext.NET calls the JavaScript function named in the success configuration object, CarResponseSuccess. This JavaScript function accepts the response parameter from the method, and we can process it accordingly. The response parameter is serialized into JSON, so object values can be accessed using the JavaScript object notation of object.propertyValue. Note that we alert the FirstRegistered property of the Car object returned. Likewise, if a failure response is received, we call the client-side method CarResponseFailure, alerting the response, which is a string value.

There are a number of other properties that form a part of the configuration object, which can be accessed as part of the callback, for example, failure to return a response. Please refer to the Direct Methods Overview on the Ext.NET examples website (http://examples.ext.net/#/Events/DirectMethods/Overview/).

To demonstrate DirectEvent in action, we've declared a button called btnFireEvent and, secondly, a checkbox called chkFireEvent. Note that each control points to the same DirectEvent method, called WhoFiredMe. You'll notice that in the markup we declare the WhoFiredMe method using the OnEvent property of the controls. This means that when the Click event is fired on the btnFireEvent button or the Change event is fired on the chkFireEvent checkbox, a request to the server is made where we call the WhoFiredMe method. From this, we can get the control that invoked the request via the object sender parameter and the arguments of the event via the DirectEventArgs e parameter.

Note that we don't have to decorate the DirectEvent method, WhoFiredMe, with any attributes. Ext.NET takes care of all the plumbing. We just need to specify the method that needs to be called on the server.

There's more...

Direct Methods are far more flexible in terms of being able to specify the parameters you want to send to the server. You also have the ability to send control objects to the server or to client-side functions using the #{controlId} notation. It is generally not a good idea, though, to send a whole control to the server from a Direct Method, as Ext.NET controls can contain references to themselves. When Ext.NET encodes such a control, it can end up in an infinite loop, and you will end up breaking your code.

With a DirectEvent method, you can send extra parameters to the server using the ExtraParams property inside the control's event element. These can then be accessed using the e parameter on the server.

Summary

In this article we discussed how to connect client-side and server-side code.

Further resources on this subject:

- Working with Microsoft Dynamics AX and .NET: Part 1
- Working with Microsoft Dynamics AX and .NET: Part 2
- Dynamically enable a control (Become an expert)


trixbox CE Functions and Features

Packt · 15 Oct 2009 · 6 min read
Standard features

The following sections will break down the list of available features by category. While the codes listed are the default settings, they can be modified in the PBX Configuration tool using the Feature Codes module. These features are invoked by dialing the code from a registered SIP or IAX endpoint, or via an analog extension plugged into an FXS port. Some of the following features require the appropriate PBX Configuration tool module to be installed.

Call forwarding

The call forwarding mechanism is both powerful and flexible. With the different options, you can perform a number of different functions, or even create a basic find-me/follow-me setup when using a feature like call forward on no answer, or send callers to your assistant if you are on a call using call forward on busy.

   Call Forward All Activate: *72
   Call Forward All Deactivate: *73
   Call Forward All Prompting: *74
   Call Forward Busy Activate: *90
   Call Forward Busy Deactivate: *91
   Call Forward Busy Prompting Deactivate: *92
   Call Forward No Answer/Unavailable Activate: *52
   Call Forward No Answer/Unavailable Deactivate: *53

Call waiting

The call waiting setting determines whether a call will be put through to your phone if you are already on a call. This can be useful in some call center environments where you don't want agents to be disturbed by other calls when they are working with clients.

   Call Waiting Activate: *70
   Call Waiting Deactivate: *71

Core features

The core features control basic functions such as transfers and testing inbound calls. Simulating an inbound call is useful for testing a system without having to call into it. If you don't have any trunks hooked up, it is the easiest way to check your call flow. Once you have telephone circuits connected, you can still use the function to test your call flow without having to take up any of your circuits.

   Call Pickup: **
   Dial System FAX: 666
   Simulate Incoming Call: 7777

Active call codes

These codes are active during a call for features like transferring and recording calls. While some phones have some of these features built into the device itself, others are only available via feature codes. For example, you can easily do call transfers using most modern SIP phones, like Aastra's or Polycom's, by hitting the transfer button during a call.

   In-Call Asterisk Attended Transfer: *2
   In-Call Asterisk Blind Transfer: ##
   Transfer call directly to extension's mailbox: * + extension
   Begin recording current call: *1
   End recording current call: *2
   Park current call: #70

Agent features

The agent features are used most often in a call center environment to monitor different calls and for agents to log in and log out of queues.

   Agent Logoff: *12
   Agent Logon: *11
   ChanSpy (monitor different channels): 555
   ZapBarge (monitor Zap channels): 888

Blacklisting

If you have the PBX Configuration tool Blacklist module installed, then you have the ability to blacklist callers from being able to call into the system. This is great for blocking telemarketers, bill collectors, ex-girl/boyfriends, and your mother-in-law.

   Blacklist a number: *30
   Blacklist the last caller: *32
   Remove a number from the blacklist: *31

Day / Night mode

If you have the PBX Configuration tool Day/Night mode module installed, then you can use a simple key command to switch between day and night IVR recordings. This is great for companies that don't work off a set schedule every day but want to manually turn an off-hours greeting on and off.
   Toggle Day / Night Mode: *28

Do not disturb

Usually, do-not-disturb functions are handled at the phone level. If you do not have phones with a DND button on them, then you can install this module to enable key commands that toggle Do Not Disturb on and off.

   DND Activate: *78
   DND Deactivate: *79

Info services

The info services are some basic functions that provide information back to you without changing any settings. These are most often used for testing and debugging purposes.

   Call Trace: *69
   Directory: #
   Echo Test: *43
   Speak your extension number: *65
   Speaking Clock: *60

Intercom

If you have a supported model of phone, then you can install the PBX Configuration tool module to enable paging and intercom via the telephones' speakerphones.

   Intercom Prefix: *80
   User Allow Intercom: *54
   User Disallow Intercom: *55

Voicemail

If you want to access your voicemail from any extension, then you need to choose 'Dial Voicemail System'; otherwise, 'Dial My Voicemail' will use the extension number you are calling from and only prompt for the password.

   Dial Voicemail System: *98
   Dial My Voicemail: *97

Adding new features

The ability to add new features is built into the system. One common thing to do is to redirect 411 calls to a free directory service like Google's. The following steps will walk you through how to add a custom feature like this to your system:

1. Begin by going to the Misc Destination module and enter a Description of the destination you want to create.
2. Next, go to Misc Application to create the application. Here we will enter another Description and the number we want to use to dial the application, make sure the feature is enabled, and then point to the destination that we created in the previous step.

As you can see, any code can be assigned to any destination, and a custom destination can consist of anything you can dial. This allows you to create many different types of custom features within your system.

Voicemail features

trixbox CE comes with the Asterisk Mail voicemail system. Asterisk Mail is a fairly robust and useful voicemail system, and it can be accessed by any internal extension or by dialing into the main IVR system. As we saw earlier in this article, there are two ways of accessing the voicemail system: 'Dial Voicemail' and 'Dial My Voicemail'. To access the main voicemail system, we can dial *98 from any extension; we will then be prompted for our extension and our voicemail password. If we dial *97 for the 'My Voicemail' feature, the system will use the extension number you dialed in from and only prompt you for your voicemail password.

The following tables show the basic structure of the voicemail menu system.

Voicemail main menu options. Press:

   1 to Listen to (New) Messages
   2 to Change Folders
   0 for Mailbox Options
   * for Help
   # to Exit

Listen to messages. Press:

   5 to Repeat Message
   6 to Play Next Message
   7 to Delete Message
   8 to Forward to another user (enter the extension and press #):
      1 to Prepend a Message to the forwarded message
      2 to Forward without prepending
   9 to Save Message:
      0 for New Messages
      1 for Old Messages
      2 for Work Messages
      3 for Family Messages
      4 for Friends Messages
   * for Help
   # to Cancel/Exit to Main Menu

Change folders. Press:

   0 for New Messages
   1 for Old Messages
   2 for Work Messages
   3 for Family Messages
   4 for Friends' Messages
   # to Cancel/Exit to Main Menu

Mailbox options. Press:

   1 to Record your Unavailable Message
   2 to Record your Busy Message
   3 to Record your Name
   4 to Change your Password
   # to Cancel/Exit to Main Menu


Testing with Groovy

Packt · 18 Oct 2013 · 24 min read
This article is completely devoted to testing in Groovy. Testing is probably the most important activity that allows us to produce better software and make our users happier. The Java space has countless tools and frameworks that can be used for testing our software. In this article, we will direct our focus on some of those frameworks and how they can be integrated with Groovy. We will discuss not only unit testing techniques, but also integration and load testing strategies.

Starting from the king of all testing frameworks, JUnit, and its seamless Groovy integration, we move on to explore how to test:

- SOAP and REST web services
- Code that interacts with databases
- The web application interface, using Selenium

The article also covers Behavior Driven Development (BDD) with Spock, advanced web service testing using soapUI, and load testing using JMeter.

Unit testing Java code with Groovy

One of the ways developers start looking into the Groovy language and actually using it is by writing unit tests. Testing Java code with Groovy makes the tests less verbose, and it's easier for developers to clearly express the intent of each test method. Thanks to the Java/Groovy interoperability, it is possible to use any available testing or mocking framework from Groovy, but it's just simpler to use the integrated JUnit-based test framework that comes with Groovy. In this recipe, we are going to look at how to test Java code with Groovy.

Getting ready

This recipe requires a new Groovy project that we will use again in other recipes of this article. The project is built using Gradle and contains all the test cases required by each recipe. Let's create a new folder called groovy-test and add a build.gradle file to it. The build file will be very simple:

apply plugin: 'groovy'
apply plugin: 'java'

repositories {
    mavenCentral()
    maven {
        url 'https://oss.sonatype.org' +
            '/content/repositories/snapshots'
    }
}

dependencies {
    compile 'org.codehaus.groovy:groovy-all:2.1.6'
    testCompile 'junit:junit:4.+'
}

The standard source folder structure has to be created for the project. You can use the following commands to achieve this:

mkdir -p src/main/groovy
mkdir -p src/test/groovy
mkdir -p src/main/java

Verify that the project builds without errors by typing the following command in your shell:

gradle clean build

How to do it...

To show you how to test Java code from Groovy, we need to have some Java code first! So, this recipe's first step is to create a simple Java class called StringUtil, which we will test in Groovy.

1. The class is fairly trivial, and it exposes only one method, which concatenates Strings passed in as a List:

package org.groovy.cookbook;

import java.util.List;

public class StringUtil {

    public String concat(List<String> strings, String separator) {
        StringBuilder sb = new StringBuilder();
        String sep = "";
        for (String s : strings) {
            sb.append(sep).append(s);
            sep = separator;
        }
        return sb.toString();
    }
}

Note that the class has a specific package, so don't forget to create the appropriate folder structure for the package when placing the class in the src/main/java folder.

2. Run the build again to be sure that the code compiles.
3. Now, add a new test case in the src/test/groovy folder:

package org.groovy.cookbook.javatesting

import org.groovy.cookbook.StringUtil

class JavaTest extends GroovyTestCase {

    def stringUtil = new StringUtil()

    void testConcatenation() {
        def result = stringUtil.concat(['Luke', 'John'], '-')
        assertToString('Luke-John', result)
    }

    void testConcatenationWithEmptyList() {
        def result = stringUtil.concat([], ',')
        assertEquals('', result)
    }
}

Again, pay attention to the package when creating the test case class, and create the package folder structure.

4. Run the test by executing the following Gradle command from your shell:

gradle clean build

Gradle should complete successfully with the BUILD SUCCESSFUL message.

5. Now, add a new test method and run the build again:

void testConcatenationWithNullShouldReturnEmptyString() {
    def result = stringUtil.concat(null, ',')
    assertEquals('', result)
}

This time the build should fail:

3 tests completed, 1 failed
:test FAILED

FAILURE: Build failed with an exception.

6. Fix the test by adding a null check to the Java code as the first statement of the concat method:

if (strings == null) {
    return "";
}

7. Run the build again to verify that it is now successful.

How it works...

The test case shown at step 3 requires some comments. The class extends GroovyTestCase, a base test case class that facilitates the writing of unit tests by adding several helper methods to the classes extending it. When a test case extends GroovyTestCase, each test method name must start with test and the return type must be void. It is possible to use the JUnit 4.x @Test annotation, but, in that case, you don't have to extend GroovyTestCase.

The standard JUnit assertions (such as assertEquals and assertNull) are directly available in the test case (without an explicit import), and some additional assertion methods are added by the superclass. The test case at step 3 uses assertToString to verify that a String matches the expected result. There are other assertions added by GroovyTestCase, such as assertArrayEquals, to check that two arrays contain the same values, or assertContains, to assert that an array contains a given element.

There's more...

The GroovyTestCase class also offers an elegant method to test for expected exceptions. Let's add the following rule to the concat method, just after the null check for the List:

if (separator.length() != 1) {
    throw new IllegalArgumentException(
        "The separator must be one char long");
}

Add the following new test method to the test case:

void testVerifyExceptionOnWrongSeparator() {
    shouldFail IllegalArgumentException, {
        stringUtil.concat(['a', 'b'], ',,')
    }
    shouldFail IllegalArgumentException, {
        stringUtil.concat(['c', 'd'], '')
    }
}

The shouldFail method takes a closure that is executed in the context of a try-catch block. We can also specify the expected exception in the shouldFail method. The shouldFail method makes testing for exceptions very elegant and simple to read.
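For comparison, the same checks written against StringUtil in plain Java with JUnit 4 would look roughly like this. This is a sketch, not from the book, but it makes the relative terseness of the Groovy version easy to see:

package org.groovy.cookbook;

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collections;

import org.junit.Test;

public class StringUtilJavaTest {

    private final StringUtil stringUtil = new StringUtil();

    @Test
    public void concatenatesWithSeparator() {
        assertEquals("Luke-John",
            stringUtil.concat(Arrays.asList("Luke", "John"), "-"));
    }

    @Test
    public void emptyListYieldsEmptyString() {
        assertEquals("", stringUtil.concat(Collections.<String>emptyList(), ","));
    }

    // JUnit 4's expected attribute replaces Groovy's shouldFail closure
    @Test(expected = IllegalArgumentException.class)
    public void rejectsMultiCharSeparator() {
        stringUtil.concat(Arrays.asList("a", "b"), ",,");
    }
}

The @Test(expected = ...) annotation takes the place of shouldFail, and the Groovy list literals give way to Arrays.asList calls.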
See also

- http://junit.org/
- http://groovy.codehaus.org/api/groovy/util/GroovyTestCase.html

Testing SOAP web services

This recipe shows you how to use the JUnit 4 annotations instead of the JUnit 3 API offered by GroovyTestCase.

Getting ready

For this recipe, we are going to use a publicly available web service: the US holiday date web service hosted at http://www.holidaywebservice.com. The WSDL of the service can be found on the /Holidays/US/Dates/USHolidayDates.asmx?WSDL path. We have already encountered this service in the Issuing a SOAP request and parsing a response recipe in Article 8, Working with Web Services in Groovy. Each service operation simply returns the date of a given US holiday, such as Easter or Christmas.

How to do it...

We start from the Gradle build that we created in the Unit testing Java code with Groovy recipe.

1. Add the following dependency to the dependencies section of the build.gradle file:

testCompile 'com.github.groovy-wslite:groovy-wslite:0.8.0'

2. Let's create a new unit test for verifying one of the web service operations. As usual, the test case is created in the src/test/groovy/org/groovy/cookbook folder:

package org.groovy.cookbook.soap

import static org.junit.Assert.*

import org.junit.Test
import wslite.soap.*

class SoapTest {
    ...
}

3. Add a new test method to the body of the class:

@Test
void testMLKDay() {
    def baseUrl = 'http://www.holidaywebservice.com'
    def service = '/Holidays/US/Dates/USHolidayDates.asmx?WSDL'
    def client = new SOAPClient("${baseUrl}${service}")
    def baseNS = 'http://www.27seconds.com/Holidays/US/Dates/'
    def action = "${baseNS}GetMartinLutherKingDay"
    def response = client.send(SOAPAction: action) {
        body {
            GetMartinLutherKingDay('xmlns': baseNS) {
                year(2013)
            }
        }
    }
    assertTrue(
        "${response.GetMartinLutherKingDayResponse.GetMartinLutherKingDayResult}"
            .startsWith('2013-01-21'))
}

4. Run the test from the command line:

gradle -Dtest.single=SoapTest clean test

How it works...

The test code creates a new SOAPClient with the URI of the target web service. The request is created using Groovy's MarkupBuilder. The body closure (and, if needed, also the header closure) is passed to the MarkupBuilder for the SOAP message creation. The assertion code gets the result from the response, which is automatically parsed by XMLSlurper, allowing easy access to elements of the response such as the header or the body. In the previous test, we simply check that the returned Martin Luther King Day matches the expected one for the year 2013.

There's more...

If you require more control over the content of the SOAP request, the SOAPClient also supports sending the SOAP envelope as a String, as in this example:

def response = client.send(
    """<?xml version='1.0' encoding='UTF-8'?>
    <soapenv:Envelope
        xmlns:soapenv='http://schemas.xmlsoap.org/soap/envelope/'
        xmlns:dat='http://www.27seconds.com/Holidays/US/Dates/'>
      <soapenv:Header/>
      <soapenv:Body>
        <dat:GetMartinLutherKingDay>
          <dat:year>2013</dat:year>
        </dat:GetMartinLutherKingDay>
      </soapenv:Body>
    </soapenv:Envelope>
    """
)

Replace the call to the send method in step 3 with the one above and run your test again.

See also

- https://github.com/jwagenleitner/groovy-wslite
- http://www.holidaywebservice.com

Testing RESTful services

This recipe is very similar to the previous recipe, Testing SOAP web services, except that it shows how to test a RESTful service using Groovy and JUnit.

Getting ready

For this recipe, we are going to use a test framework aptly named Rest-Assured. This framework is a simple DSL for testing and validating REST services returning either JSON or XML.

Before we delve into the recipe, we need to start a simple REST service for testing purposes. We are going to use the Ratpack framework. The test REST service exposes three APIs to fetch, add, and delete books from a database, using JSON as the lingua franca. For the sake of brevity, the code for the setup of this recipe is available in the rest-test folder of the companion code for this article. The code contains the Ratpack server, the domain objects, the Gradle build, and the actual test case that we are going to analyze in the next section.

How to do it...

The test case takes care of starting the Ratpack server and executing the REST requests.
1. Here is the RestTest class, located in the src/test/groovy/org/groovy/cookbook/rest folder:

package org.groovy.cookbook.rest

import static com.jayway.restassured.RestAssured.*
import static com.jayway.restassured.matcher.RestAssuredMatchers.*
import static org.hamcrest.Matchers.*
import static org.junit.Assert.*

import groovy.json.JsonBuilder

import org.groovy.cookbook.server.*
import org.junit.AfterClass
import org.junit.BeforeClass
import org.junit.Test

class RestTest {

    static server
    final static HOST = 'http://localhost:5050'

    @BeforeClass
    static void setUp() {
        server = App.init()
        server.startAndWait()
    }

    @AfterClass
    static void tearDown() {
        if (server.isRunning()) {
            server.stop()
        }
    }

    @Test
    void testGetBooks() {
        expect().
            body('author', hasItems('Ian Bogost', 'Nate Silver')).
            when().get("${HOST}/api/books")
    }

    @Test
    void testGetBook() {
        expect().
            body('author', is('Steven Levy')).
            when().get("${HOST}/api/books/5")
    }

    @Test
    void testPostBook() {
        def book = new Book()
        book.author = 'Haruki Murakami'
        book.date = '2012-05-14'
        book.title = 'Kafka on the shore'
        JsonBuilder jb = new JsonBuilder()
        jb.content = book
        given().
            content(jb.toString()).
            expect().body('id', is(6)).
            when().post("${HOST}/api/books/new")
    }

    @Test
    void testDeleteBook() {
        expect().statusCode(200).
            when().delete("${HOST}/api/books/1")
        expect().body('id', not(hasValue(1))).
            when().get("${HOST}/api/books")
    }
}

2. Build the code and execute the test from the command line by typing:

gradle clean test

How it works...

The JUnit test has a @BeforeClass annotated method, executed at the beginning of the unit test, that starts the Ratpack server and the associated REST services. The @AfterClass annotated method, on the contrary, shuts down the server when the test is over.

The unit test has four test methods. The first one, testGetBooks, executes a GET request against the server and retrieves all the books. The rather readable DSL offered by the Rest-Assured framework should be easy to follow: the expect method starts building the response expectation returned from the get method. The actual assertion of the test is implemented via a Hamcrest matcher (hence the static org.hamcrest.Matchers.* import in the test). The test asserts that the body of the response contains two books whose authors are Ian Bogost and Nate Silver. The get method hits the URL of the embedded Ratpack server, started at the beginning of the test.

The testGetBook method is rather similar to the previous one, except that it uses the is matcher to assert the presence of an author on the returned JSON message.

The testPostBook method tests that the creation of a new book is successful. First, a new book object is created and transformed into a JSON object using JsonBuilder. Instead of the expect method, we use the given method to prepare the POST request. The given method returns a RequestSpecification to which we assign the newly created book and, finally, we invoke the post method to execute the operation on the server. As the biggest identifier in our book database is 5, the new book should get the id 6, which we assert in the test.

The last test method, testDeleteBook, verifies that a book can be deleted. Again we use the expect method to prepare the response, but this time we verify that the returned HTTP status code is 200 upon deleting the book with the id 1. The same test also double-checks that fetching the full list of books no longer returns a book with an id equal to 1.
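Because Rest-Assured is first and foremost a Java library, the same expectations can be written in plain Java with barely any changes. A sketch (assuming the Ratpack server from the recipe is already running on port 5050 rather than being started from the test):

import static com.jayway.restassured.RestAssured.expect;
import static org.hamcrest.Matchers.hasItems;
import static org.hamcrest.Matchers.is;

import org.junit.Test;

public class BooksApiJavaTest {

    private static final String HOST = "http://localhost:5050";

    // Same expectation as testGetBooks: the author array of the JSON
    // response must contain both authors.
    @Test
    public void returnsTheExpectedAuthors() {
        expect().body("author", hasItems("Ian Bogost", "Nate Silver"))
                .when().get(HOST + "/api/books");
    }

    // Same expectation as testGetBook: book 5 is by Steven Levy.
    @Test
    public void returnsASingleBook() {
        expect().body("author", is("Steven Levy"))
                .when().get(HOST + "/api/books/5");
    }
}

The only Groovy conveniences lost are the string interpolation and the implicit typing; the DSL itself is identical.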
See also

- https://code.google.com/p/rest-assured/
- https://code.google.com/p/hamcrest/
- https://github.com/ratpack/ratpack

Writing functional tests for web applications

If you are developing web applications, it is of the utmost importance that you thoroughly test them before allowing user access. Web GUI testing can require long hours of very expensive human resources to repeatedly exercise the application against a varied list of input conditions. Selenium, a browser-based testing and automation framework, aims to solve these problems for software developers, and it has become the de facto standard for web interface integration and functional testing. Selenium is not just a single tool but a suite of software components, each catering to different testing needs of an organization. It has four main parts:

- Selenium Integrated Development Environment (IDE): A Firefox add-on that you can only use for creating relatively simple test cases and test suites.
- Selenium Remote Control (RC): Also known as Selenium 1, this is the first Selenium tool that allowed users to use programming languages for creating complex tests.
- WebDriver: The newest Selenium implementation, which allows your test scripts to communicate directly with the browser, thereby controlling it from the OS level.
- Selenium Grid: A tool that is used with Selenium RC to execute parallel tests across different browsers and operating systems.

Since 2008, Selenium RC and WebDriver have been merged into a single framework to form Selenium 2; Selenium 1 refers to Selenium RC.

This recipe will show you how to write a Selenium 2 based test using HtmlUnitDriver. HtmlUnitDriver is the fastest and most lightweight implementation of WebDriver at the moment. As the name suggests, it is based on HtmlUnit, a relatively old framework for testing web applications. The main disadvantage of using this driver instead of a WebDriver implementation that "drives" a real browser is the JavaScript support: none of the popular browsers use the JavaScript engine used by HtmlUnit (Rhino), so if you test JavaScript using HtmlUnit, the results may diverge considerably from those browsers. Still, WebDriver and HtmlUnit can be used for fast-paced testing against a web interface, leaving more JavaScript-intensive tests to other, longer-running WebDriver implementations that use specific browsers.

Getting ready

Due to the relative complexity of the setup required to demonstrate the steps of this recipe, it is recommended that the reader use the code that comes bundled with it, located in the selenium-test folder of the code directory for this article. The source code, as with other recipes in this article, is built using Gradle and has a standard structure containing application code and test code. The web application under test is very simple: it is composed of a welcome page and a single-field test form page. The Ratpack framework is utilized to run the fictional web application and serve the HTML pages, along with some JavaScript and CSS.

How to do it...

The following steps will describe the salient points of Selenium testing with Groovy.

1. Let's open the build.gradle file.
We are interested in the dependencies required to execute the tests:

testCompile group: 'org.seleniumhq.selenium', name: 'selenium-htmlunit-driver', version: '2.32.0'
testCompile group: 'org.seleniumhq.selenium', name: 'selenium-support', version: '2.9.0'

Let's open the test case, SeleniumTest.groovy, located in test/groovy/org/groovy/cookbook/selenium:

package org.groovy.cookbook.selenium

import static org.junit.Assert.*

import org.groovy.cookbook.server.*
import org.junit.AfterClass
import org.junit.BeforeClass
import org.junit.Test
import org.openqa.selenium.By
import org.openqa.selenium.WebDriver
import org.openqa.selenium.WebElement
import org.openqa.selenium.htmlunit.HtmlUnitDriver
import org.openqa.selenium.support.ui.ExpectedConditions
import org.openqa.selenium.support.ui.WebDriverWait

import com.google.common.base.Function

class SeleniumTest {

    static server
    static final HOST = 'http://localhost:5050'
    static HtmlUnitDriver driver

    @BeforeClass
    static void setUp() {
        server = App.init()
        server.startAndWait()
        driver = new HtmlUnitDriver(true)
    }

    @AfterClass
    static void tearDown() {
        if (server.isRunning()) {
            server.stop()
        }
    }

    @Test
    void testWelcomePage() {
        driver.get(HOST)
        assertEquals('welcome', driver.title)
    }

    @Test
    void testFormPost() {
        driver.get("${HOST}/form")
        assertEquals('test form', driver.title)
        WebElement input = driver.findElement(By.name('username'))
        input.sendKeys('test')
        input.submit()
        WebDriverWait wait = new WebDriverWait(driver, 4)
        wait.until ExpectedConditions.presenceOfElementLocated(By.className('hello'))
        assertEquals('oh hello,test', driver.title)
    }
}

How it works...

The test case initializes the Ratpack server and the HtmlUnit driver by passing true to the HtmlUnitDriver instance. The boolean parameter in the constructor indicates whether the driver should support JavaScript. The first test, testWelcomePage, simply verifies that the title of the website's welcome page is as expected. The get method executes an HTTP GET request against the URL specified in the method, the Ratpack server in our test. The second test, testFormPost, involves the DOM manipulation of a form, its submission, and waiting for an answer from the server. The Selenium API should be fairly readable. For a start, the test checks that the page containing the form has the expected title. Then the element named username (a form field) is selected, populated, and finally submitted. This is how the HTML looks for the form field:

<input type="text" name="username" placeholder="Your username">

The test uses the findElement method to select the input field. The method expects a By object, which is essentially a mechanism to locate elements within a document. Elements can be identified by name, id, text link, tag name, CSS selector, or XPath expression. The form is submitted via AJAX. Here is part of the JavaScript activated by the form submission:

complete: function(xhr, status) {
    if (status === 'error' || !xhr.responseText) {
        alert('error')
    }
    else {
        document.title = xhr.responseText
        jQuery(e.target).replaceWith('<p class="hello">' + xhr.responseText + '</p>')
    }
}

After the form submission, the DOM of the page is manipulated to change the page title of the form page and replace the form DOM element with a message wrapped in a paragraph element. To verify that the DOM changes have been applied, the test uses the WebDriverWait class to wait until the DOM is actually modified and the element with the class hello appears on the page. The WebDriverWait is instantiated with a four-second timeout.
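By offers several locator strategies besides the By.name and By.className used above; which one to pick depends on the markup. A few alternative lookups for the same input field are sketched below. The CSS and XPath selectors are our own, and the By.id variant assumes the element has an id attribute, which the sample form does not actually define:

driver.findElement(By.cssSelector('input[name="username"]'))   // CSS selector
driver.findElement(By.xpath('//input[@name="username"]'))      // XPath expression
driver.findElement(By.id('username'))                          // only if the input had id="username"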
This recipe only scratches the surface of the Selenium 2 framework's capabilities, but it should get you started on implementing your own integration and functional tests.

See also
http://docs.seleniumhq.org/
http://htmlunit.sourceforge.net/

Writing behavior-driven tests with Groovy

Behavior Driven Development, or simply BDD, is a methodology in which QA, business analysts, and marketing people can get involved in defining the requirements of a process in a common language. It can be considered an extension of Test Driven Development, although it is not a replacement. The initial motivation for BDD stems from the perplexity of business people (analysts, domain experts, and so on) when dealing with "tests", as these seem too technical. The use of the word "behaviors" in the conversation is a way to engage the whole team. BDD states that software tests should be specified in terms of the desired behavior of the unit. The behavior is expressed in a semi-formal format, borrowed from user story specifications, a format popularized by agile methodologies. For a deeper insight into the BDD rationale, it is highly recommended to read the original paper from Dan North, available at http://dannorth.net/introducing-bdd/. Spock is one of the most widely used frameworks in the Groovy and Java ecosystem that allows the creation of BDD tests in a very intuitive language and facilitates some common tasks such as mocking and extensibility. What makes it stand out from the crowd is its beautiful and highly expressive specification language. Thanks to its JUnit runner, Spock is compatible with most IDEs, build tools, and continuous integration servers. In this recipe, we are going to look at how to implement both a unit test and a web integration test using Spock.

Getting ready

This recipe has a slightly different setup than most of the recipes in this book, as it relies on the source code of an existing web application, freely available on the Internet. This is the Spring Petclinic application provided by SpringSource as a demonstration of the latest Spring framework features and configurations. The web application works as a pet hospital, and most of the interactions are typical CRUD operations against certain entities (veterinarians, pets, pet owners). The Petclinic application is available in the groovy-test/spock/web folder of the companion code for this article. All of Petclinic's original tests have been converted to Spock. Additionally, we created a simple integration test that uses Spock and Selenium to showcase the possibilities offered by the integration of the two frameworks. As usual, the recipe uses Gradle to build the reference web application and the tests. The Petclinic web application can be started by launching the following Gradle command in your shell from the groovy-test/spock/web folder:

gradle tomcatRunWar

If the application starts without errors, the shell should eventually display the following message:

The Server is running at http://localhost:8080/petclinic

Take some time to familiarize yourself with the Petclinic application by directing your browser to http://localhost:8080/petclinic and browsing around the website:

How to do it...
The following steps will describe the key concepts for writing your own behavior-driven unit and integration tests:

Let's start by taking a look at the dependencies required to implement a Spock-based test suite:

testCompile 'org.spockframework:spock-core:0.7-groovy-2.0'
testCompile group: 'org.seleniumhq.selenium', name: 'selenium-java', version: '2.16.1'
testCompile group: 'junit', name: 'junit', version: '4.10'
testCompile 'org.hamcrest:hamcrest-core:1.2'
testRuntime 'cglib:cglib-nodep:2.2'
testRuntime 'org.objenesis:objenesis:1.2'

This is what a BDD unit test for the application's business logic looks like:

package org.springframework.samples.petclinic.model

import spock.lang.*

class OwnerTest extends Specification {

    def "test pet and owner"() {
        given:
        def p = new Pet()
        def o = new Owner()

        when:
        p.setName("fido")

        then:
        o.getPet("fido") == null
        o.getPet("Fido") == null

        when:
        o.addPet(p)

        then:
        o.getPet("fido").equals(p)
        o.getPet("Fido").equals(p)
    }
}

The test is named OwnerTest.groovy, and it is available in the spock/web/src/test folder of the main groovy-test project that comes with this article.

The third test in this recipe mixes Spock and Selenium, the web testing framework already discussed in the Writing functional tests for web applications recipe:

package org.cookbook.groovy.spock

import static java.util.concurrent.TimeUnit.SECONDS

import org.openqa.selenium.By
import org.openqa.selenium.WebElement
import org.openqa.selenium.htmlunit.HtmlUnitDriver

import spock.lang.Shared
import spock.lang.Specification

class HomeSpecification extends Specification {

    static final HOME = 'http://localhost:9966/petclinic'

    @Shared
    def driver = new HtmlUnitDriver(true)

    def setup() {
        driver.manage().timeouts().implicitlyWait 10, SECONDS
    }

    def 'user enters home page'() {
        when:
        driver.get(HOME)

        then:
        driver.title == 'PetClinic :: ' + 'a Spring Framework demonstration'
    }

    def 'user clicks on menus'() {
        when:
        driver.get(HOME)
        def vets = driver.findElement(By.linkText('Veterinarians'))
        vets.click()

        then:
        driver.currentUrl == 'http://localhost:9966/petclinic/vets.html'
    }
}

The test above is available in the spock/specs/src/test folder of the accompanying project.

How it works...

The first step of this recipe lays out the dependencies required to set up a Spock-based BDD test suite. Spock requires Java 5 or higher, and it is pretty picky with regard to the matching Groovy version to use. In the case of this recipe, as we are using Groovy 2.x, we set the dependency to the 0.7-groovy-2.0 version of the Spock framework. The full build.gradle file is located in the spock/specs folder of the recipe's code. The first test case demonstrated in the recipe is a direct conversion of a JUnit test written by Spring for the Petclinic application. This is the original test written in Java:

public class OwnerTests {

    @Test
    public void testHasPet() {
        Owner owner = new Owner();
        Pet fido = new Pet();
        fido.setName("Fido");
        assertNull(owner.getPet("Fido"));
        assertNull(owner.getPet("fido"));
        owner.addPet(fido);
        assertEquals(fido, owner.getPet("Fido"));
        assertEquals(fido, owner.getPet("fido"));
    }
}

All we need to import in the Spock test is spock.lang.*, which contains the most important types for writing specifications. A Spock test extends from spock.lang.Specification. The name of a specification normally relates to the system or system operation under test.
In the case of the Groovy test at step 2, we reused the original Java test name, but it would have been better to rename it to something more meaningful for a specification, such as OwnerPetSpec. The Specification class exposes a number of practical methods for implementing specifications. Additionally, it tells JUnit to run the specification with Sputnik, Spock's own JUnit runner. Thanks to Sputnik, Spock specifications can be executed by all IDEs and build tools. Following the class definition, we have the feature method:

def 'test pet and owner'() {
    ...
}

Feature methods lie at the core of a specification. They contain the description of the features (properties, aspects) that define the system that is under specification test. Feature methods are conventionally named with String literals: it's a good idea to choose meaningful names for feature methods. In the test above, we are testing that, given two entities, pet and owner, the getPet method of the owner instance will return null until the pet is assigned to the owner, and that the getPet method will accept both "fido" and "Fido" in order to verify the ownership.

Conceptually, a feature method consists of four phases:
- Set up the feature's fixture
- Provide an input to the system under specification (stimulus)
- Describe the response expected from the system
- Clean up

Whereas the first and last phases are optional, the stimulus and response phases are always present and may occur more than once. Each phase is defined by blocks: blocks are defined by a label and extend to the beginning of the next block or the end of the method. In the test at step 2, we can see three types of blocks:
- In the given block, data gets initialized
- The when and then blocks always occur together. They describe a stimulus and the expected response. Whereas when blocks may contain arbitrary code, then blocks are restricted to conditions, exception checking, interactions, and variable definitions.

The first test case has two when/then pairs. A pet is assigned the name "fido", and the test verifies that calling getPet on an owner object only returns something if the pet is actually "owned" by the owner. The second test is slightly more complex because it employs the Selenium framework to execute a web integration test with a BDD flavor. The test is located in the groovy-test/spock/specs/src/test folder. You can launch it by typing gradle test from the groovy-test/spock/specs folder. The test takes care of starting the web container and running the application under test, Petclinic. The test starts by defining a shared Selenium driver, marked with the @Shared annotation, which is visible to all the feature methods. The first feature method simply opens the Petclinic main page and checks that the title matches the specification. The second feature method uses the Selenium API to select a link, click on it, and verify that the link brings the user to the right page. The verification is performed against the currentUrl of the browser, which is expected to match the URL of the link we clicked on.

See also
http://dannorth.net/introducing-bdd/
http://en.wikipedia.org/wiki/Behavior-driven_development
https://code.google.com/p/spock/
https://github.com/spring-projects/spring-petclinic/

Metaprogramming and the Groovy MOP

Packt
31 May 2010
6 min read
(For more resources on Groovy DSL, see here.) In a nutshell, the term metaprogramming refers to writing code that can dynamically change its behavior at runtime. A Meta-Object Protocol (MOP) refers to the capabilities in a dynamic language that enable metaprogramming. In Groovy, the MOP consists of four distinct capabilities within the language: reflection, metaclasses, categories, and expandos. The MOP is at the core of what makes Groovy so useful for defining DSLs. The MOP is what allows us to bend the language in different ways in order to meet our needs, by changing the behavior of classes on the fly. This section will guide you through the capabilities of the MOP.

Reflection

To use Java reflection, we first need to access the Class object for any Java object in which we are interested, through its getClass() method. Using the returned Class object, we can query everything from the list of methods or fields of the class to the modifiers that the class was declared with. Below, we see some of the ways that we can access a Class object in Java and the methods we can use to inspect the class at runtime:

import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class Reflection {
    public static void main(String[] args) {
        String s = new String();
        Class sClazz = s.getClass();
        Package _package = sClazz.getPackage();
        System.out.println("Package for String class: ");
        System.out.println("  " + _package.getName());

        Class oClazz = Object.class;
        System.out.println("All methods of Object class:");
        Method[] methods = oClazz.getMethods();
        for (int i = 0; i < methods.length; i++)
            System.out.println("  " + methods[i].getName());

        try {
            Class iClazz = Class.forName("java.lang.Integer");
            Field[] fields = iClazz.getDeclaredFields();
            System.out.println("All fields of Integer class:");
            for (int i = 0; i < fields.length; i++)
                System.out.println("  " + fields[i].getName());
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }
}

We can access the Class object from an instance by calling its Object.getClass() method. If we don't have an instance of the class to hand, we can get the Class object by using .class after the class name, for example, String.class. Alternatively, we can call the static Class.forName, passing to it a fully-qualified class name. Class has numerous methods, such as getPackage(), getMethods(), and getDeclaredFields(), that allow us to interrogate the Class object for details about the Java class under inspection. The preceding example will output various details about the String, Object, and Integer classes.

Groovy Reflection shortcuts

Groovy, as we would expect by now, provides shortcuts that let us reflect classes easily. In Groovy, we can shortcut the getClass() method as a property access .class, so we can access the class object in the same way whether we are using the class name or an instance. We can treat the .class as a String, and print it directly without calling Class.getName(), similar to the following:

def greeting = "Hello"
println greeting.class    // prints: class java.lang.String

def clazz = String
println clazz             // prints: class java.lang.String

The variable greeting is declared with a dynamic type, but has the type java.lang.String after the "Hello" String is assigned to it. Classes are first-class objects in Groovy, so we can assign String to a variable. When we do this, the object that is assigned is of type java.lang.Class. However, it describes the String class itself, so printing will report java.lang.String. Groovy also provides shortcuts for accessing packages, methods, fields, and just about all other reflection details that we need from a class.
We can access these straight off the class identifier, as follows:

println "Package for String class"
println "  " + String.package
println "All methods of Object class:"
Object.methods.each { println "  " + it }
println "All fields of Integer class:"
Integer.fields.each { println "  " + it }

Incredibly, these six lines of code do all of the same work as the 30 lines in our Java example. If we look at the preceding code, it contains nothing that is more complicated than it needs to be. Referencing String.package to get the Java package of a class is as succinct as you can make it. As usual, String.methods and String.fields return Groovy collections, so we can apply a closure to each element with the each method. What's more, the Groovy version outputs a lot more useful detail about the package, methods, and fields. When using an instance of an object, we can use the same shortcuts through the class field of the instance:

def greeting = "Hello"
assert greeting.class.package == String.package

Expandos

An Expando is a dynamic representation of a typical Groovy bean. Expandos support typical get and set style bean access, but in addition to this, they will accept gets and sets to arbitrary properties. If we try to access a non-existing property, the Expando does not mind; instead of causing an exception, it will return null. If we set a non-existent property, the Expando will add that property and set the value. In order to create an Expando, we instantiate an object of class groovy.util.Expando:

def customer = new Expando()
assert customer.properties == [:]
assert customer.id == null
assert customer.properties == [:]

customer.id = 1001
customer.firstName = "Fred"
customer.surname = "Flintstone"
customer.street = "1 Rock Road"

assert customer.id == 1001
assert customer.properties == [
    id:1001, firstName:'Fred',
    surname:'Flintstone', street:'1 Rock Road']

customer.properties.each { println it }

The id field of customer is accessible on the Expando shown in the preceding example even when it does not exist as a property of the bean. Once a property has been set, it can be accessed by using the normal field getter: for example, customer.id. Expandos are a useful extension to normal beans where we need to be able to dump arbitrary properties into a bag and we don't want to write a custom class to do so. A neat trick with Expandos is what happens when we store a closure in a property. As we would expect, an Expando closure property is accessible in the same way as a normal property. However, because it is a closure, we can apply function call syntax to it to invoke the closure. This has the effect of seeming to add a new method on the fly to the Expando:

customer.prettyPrint = {
    println "Customer has following properties"
    customer.properties.each {
        if (it.key != 'prettyPrint')
            println "  " + it.key + ": " + it.value
    }
}
customer.prettyPrint()

Here we appear to be able to add a prettyPrint() method to the customer object, which outputs to the console:

Customer has following properties
  surname: Flintstone
  street: 1 Rock Road
  firstName: Fred
  id: 1001
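Closure properties on an Expando can also take parameters, which makes the "methods" we attach feel even more like regular methods. The following is a small sketch building on the customer Expando from above; the greet property is our own invention, not part of the original example:

// a closure property with a parameter behaves like a one-argument method
customer.greet = { name -> "Hello ${name}, from ${customer.firstName}" }
assert customer.greet('Barney') == 'Hello Barney, from Fred'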

Binding MS Chart Control to LINQ Data Source Control

Packt
03 Sep 2009
4 min read
Introduction

LINQ, short for Language Integrated Query, provides an object-oriented approach to querying not only relational databases but also any other kind of source, such as XML, collections of objects, and so on. LINQ to SQL provides object-relational (O/R) mapping, and the Visual Studio 2008 IDE provides an (O/R) designer. Visual Studio 2008 also has a web server control called the LinqDataSource control. This control requires a DataContext, which is provided by the LINQ-to-SQL classes, a class generator that maps SQL objects to the object model. Without this control, one may have to generate the classes from scratch, or by using the SQLMetal.exe utility, which generates classes for the tables and columns. The readers may benefit by reading my previous article on Displaying SQL Server Data using a Linq data source. We will continue to use the PrincetonTemp table used in the previous article on MySQL Data Transfer using Sql Server Integration Services. We will map this table to LINQ via the LinqDataSource control and then use it as the source of data for the chart.

Create a Framework 3.5 Web Site project

Open Visual Studio 2008 from its shortcut on the desktop. Click File | New | Web Site... (or Shift+Alt+N) to open the New Web Site window. Change the default name of the site to a name of your choice (herein PieChartWithLinq) on your local web server, as shown. Make sure you are creating a .NET Framework 3.5 web site, as shown here. This is very similar to creating a web site that can reside on the file system.

Add a LinqDataSource control and provide the data context

Drag and drop a LinqDataSource control from Toolbox | Data, shown in the next figure, onto the Default.aspx page. This creates an instance of the control, LinqDataSource1, as shown. The source page Default.aspx will display the control as shown in the next figure. When you try to configure a data source for this control using the smart tasks (see Figure 3), the program would take you to a window that does not allow you to create a new data context. But if there are existing items, the program allows you to choose one.

Create a data context for the LinqDataSource control

We will add a new data context. Right-click the web site in Solution Explorer and pick Add New Item... from the drop-down menu. This opens the Add New Item window, as shown. Highlight the item Linq to SQL Classes in the Add New Item window. Replace the default name DataClasses.dbml with one of your own. Herein, it is replaced with Princeton.dbml. When you click OK after renaming, you will get a Microsoft Visual Studio warning, as shown. Read the message on this window. Click Yes to add Princeton.dbml, consisting of two items, to the App_Code folder of the web site project, as shown. One is a layout designer and the other a code page. The (O/R) designer containing two panes is shown in the next figure. In the left pane, you drag and drop table(s) from the server explorer, assuming you have made a connection with a database. To the right, you can add a stored procedure from the server explorer. Read the instructions on this page. Click (View | Server Explorer) to display the connections in the server explorer, as shown. In the next figure, you can see the expanded node of TestNorthwind displaying the table PrincetonTemp in the SQL Server 2008 instance Hodentek2Mysorian. The database and the server names are the ones used for this article, and you may have different names on your machine. The connection used is already present, but you can create a new connection in the server explorer window if you choose to do so.
Drag and drop the PrincetonTemp table to the left pane of the (O/R) designer, as shown. Build the project. This will make the objects available to the LinqDataSource1 control on the default page.
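Once the data context is in place, the LinqDataSource control on Default.aspx can be pointed at it declaratively. The markup below is only a sketch of what the configured control might look like; the ContextTypeName and TableName values are assumptions based on the Princeton.dbml name used above (the designer typically derives a PrincetonDataContext class and a pluralized table property), so verify them against your generated code:

<asp:LinqDataSource ID="LinqDataSource1" runat="server"
    ContextTypeName="PrincetonDataContext"
    TableName="PrincetonTemps">
</asp:LinqDataSource>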

Making Your Code Better

Packt
14 Feb 2014
8 min read
(For more resources related to this topic, see here.)

Code quality analysis

The fact that you can compile your code does not mean your code is good. It does not even mean it will work. There are many things that can easily break your code. A good example is an unhandled NullReferenceException. You will be able to compile your code, you will be able to run your application, but there will be a problem. ReSharper v8 comes with more than 1400 code analysis rules and more than 700 quick fixes, which allow you to fix detected problems. What is really cool is that ReSharper provides you with code inspection rules for all supported languages. This means that ReSharper not only improves your C# or VB.NET code, but also HTML, JavaScript, CSS, XAML, XML, ASP.NET, ASP.NET MVC, and TypeScript. Apart from finding possible errors, code quality analysis rules can also improve the readability of your code. ReSharper can detect code which is unused and mark it as grayed out, prompt you when you could use auto properties or object and collection initializers, or use the var keyword instead of an explicit type name. ReSharper provides you with five severity levels for rules and allows you to configure them according to your preference. Code inspection rules can be configured in ReSharper's Options window. A sample view of code inspection rules with the list of available severity levels is shown in the following screenshot:

Background analysis

One of the best features in terms of code quality in ReSharper is background analysis. This means that all the rules are checked as you are writing your code. You do not need to compile your project to see the results of the analysis. ReSharper will display appropriate messages in real time.

Solution-wide inspections

By default, the described rules are checked locally, which means that they are checked within the current class. Because of this, ReSharper can mark some code as unused if it is used only locally; for example, there can be an unused private method or some part of code inside your method. These two cases are shown in the following screenshot: In addition to local analysis, ReSharper can check some rules across your entire project. To do this, you need to enable Solution-wide inspections. The easiest way to enable Solution-wide inspections is to double-click the circle icon in the bottom-right corner of Visual Studio, as shown in the following screenshot: With Solution-wide inspections enabled, ReSharper can mark public methods or returned values that are unused. Please note that running Solution-wide inspections can hurt Visual Studio's performance in big projects. In such cases, it is better to disable this feature.

Disabling code inspections

With ReSharper v8, you can easily mark some part of your code as code that should not be checked by ReSharper. You can do this by adding the following comments:

// ReSharper disable all
// [your code]
// ReSharper restore all

All code between these two comments will be skipped by ReSharper in code inspections. Of course, instead of the word all, you can use the name of any ReSharper rule, such as UseObjectOrCollectionInitializer. You can also disable ReSharper analysis for a single line with the following comment:

// ReSharper disable once UseObjectOrCollectionInitializer

ReSharper can generate these comments for you.
If ReSharper highlights some issue, then just press Alt + Enter and select Options for "YOUR_RULE" inspection, as shown in the following screenshot:

Code Issues

You can also run an ad hoc code analysis. An ad hoc analysis can be run at the solution or project level. To run it, just navigate to RESHARPER | Inspect | Code Issues in Solution or RESHARPER | Inspect | Code Issues in Current Project from the Visual Studio toolbar. This will display a dialog box that shows us the progress of the analysis and will finally display the results in the Inspection Results window. You can filter and group the displayed issues as and when you need to. You can also quickly go to the place where an issue occurs just by double-clicking on it. A sample report is shown in the following screenshot:

Eliminating errors and code smells

We think you will agree that the code analysis provided by ReSharper is really cool and helps create better code. What is even cooler is that ReSharper provides you with features that can fix some issues automatically.

Quick fixes

Most errors and issues found by ReSharper can be fixed just by pressing Alt + Enter. This will display a list of the available solutions and let you select the best one for you.

Fix in scope

The quick fixes described above allow you to fix issues in one particular place. However, sometimes there are issues that you would like to fix in every file in your project or solution. A great example is removing unused using statements or the this keyword. With ReSharper v8, you do not need to fix such issues manually. Instead, you can use a new feature called Fix in scope. You start as usual by pressing Alt + Enter, but instead of just selecting some solution, you can select more options by clicking the small arrow on the right of the available options. A sample usage of the Fix in scope feature is shown in the following screenshot: This will allow you to fix the selected issue with just one click!

Structural Search and Replace

Even though ReSharper contains a lot of built-in analyses, it also allows you to create your own. You can create your own patterns that will be used to search for certain structures in your code. This feature is called Structural Search and Replace (SSR). To open the Search with Pattern window, navigate to RESHARPER | Find | Search with Pattern…. A sample window is shown in the following screenshot: You can see two things here:

- On the left, there is a place to write your pattern
- On the right, there is a place to define placeholders

In the preceding example, we were looking for if statements that compare some expression with false. You can now simply click on the Find button, and ReSharper will display every piece of code that matches this pattern. Of course, you can also save your patterns. You can create new search patterns from the code editor. Just select some code, click the right mouse button, and select Find Similar Code…. This will automatically generate the pattern for this code, which you can easily adjust to your needs. SSR allows you not only to find code based on defined patterns, but also to replace it with different code. Click on the Replace button available at the top in the preceding screenshot. This will display a new section on the left called Replace pattern. There, you can write code that will be placed instead of code that matches the defined pattern. For the pattern shown, you can write the following code:

if (false == $value$) { $statement$ }

This will simply change the order of expressions inside the if statement.
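To make the effect concrete, here is a hypothetical before/after pair for that replace pattern; the isValid flag and the Save() call are invented purely for illustration:

// matched by the search pattern
if (isValid == false)
{
    Save();
}

// the same code after applying the replace pattern
if (false == isValid)
{
    Save();
}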
The saved patterns can also be presented as quick fixes. Simply navigate to RESHARPER | Options | Code Inspection | Custom Patterns and set the proper severity for your pattern, as shown in the following screenshot: This will allow you to define patterns in the code editor, which is shown in the following screenshot:

Code Cleanup

ReSharper also allows you to fix more than one issue in one run. Navigate to RESHARPER | Tools | Cleanup Code… from the Visual Studio toolbar or just press Ctrl + E, Ctrl + C. This will display the Code Cleanup window, which is shown in the following screenshot: By clicking on the Run button, ReSharper will fix all issues configured in the selected profile. By default, there are two profiles:

- Full Cleanup
- Reformat Code

You can add your own profile by clicking on the Edit Profiles button.

Summary

Code quality analysis is a very powerful feature in ReSharper. As we have described in this article, ReSharper not only prompts you when something is wrong or can be written better, but also allows you to quickly fix these issues. If you do not agree with all the rules provided by ReSharper, you can easily configure them to meet your needs. There are many rules that will open your eyes and show you that you can write better code. With ReSharper, writing better, cleaner code is as easy as just pressing Alt + Enter.

Resources for Article:

Further resources on this subject:
- Ensuring Quality for Unit Testing with Microsoft Visual Studio 2010 [Article]
- Getting Started with Code::Blocks [Article]
- Core .NET Recipes [Article]

Tcl/Tk: Handling String Expressions

Packt
02 Mar 2011
11 min read
Tcl/Tk 8.5 Programming Cookbook

Over 100 great recipes to effectively learn Tcl/Tk 8.5:
- The quickest way to solve your problems with Tcl/Tk 8.5
- Understand the basics and fundamentals of the Tcl/Tk 8.5 programming language
- Learn graphical User Interface development with the Tcl/Tk 8.5 Widget set
- Get a thorough and detailed understanding of the concepts with a real-world address book application
- Each recipe is a carefully organized sequence of instructions to efficiently learn the features and capabilities of the Tcl/Tk 8.5 language

When I first started using Tcl, everything I read or researched stressed the mantra "Everything is a string". Coming from a hard-typed coding environment, I was used to declaring variable types, and in Tcl this was not needed. A set command could (and still does) create the variable and assign the type on the fly. For example, set variable "7" and set variable 7 will both create a variable containing 7. However, with Tcl, you can still print the variable containing a numeric 7 and add 1 to the variable containing a string representation of 7. It still holds true today that everything in Tcl is a string. When we explore the Tk Toolkit and widget creation, you will rapidly see that widgets themselves have a set of string values that determine their appearance and/or behavior. As a pre-requisite for the recipes in this article, launch the Tcl shell as appropriate for your operating system. You can access Tcl from the command line to execute the commands. As with everything else we have seen, Tcl provides a full suite of commands to assist in handling string expressions. However, due to the sheer number of commands and subsets, I won't be listing every item individually in the following section. Instead, we will be creating numerous recipes and examples to explore in the following sections. A general list of the commands is as follows:

Command   Description
string    Contains multiple keywords allowing for manipulation and data-gathering functions
append    Appends to a string variable
format    Formats a string in the same manner as C sprintf
regexp    Regular expression matching
regsub    Performs substitution, based on regular expression matching
scan      Parses a string using conversion specifiers in the same manner as C sscanf
subst     Performs backslash, command, and variable substitution on a string

Using the commands listed in the table, a developer can address all their needs as applies to strings. In the following sections, we will explore these commands as well as many subsets of the string command.

Appending to a string

Creating a string in Tcl using the set command is the starting point for all string commands. This will be the first command for most, if not all, of the following recipes. As we have seen previously, entering a set variable value on the command line does this. However, to fully implement strings within a Tcl script, we need to interact with these strings from time to time, for example, with an open channel to a file or HTTP pipe. To accomplish this, we will need to read from the channel and append to the original string. To accomplish appending to a string, Tcl provides the append command. The append command is used as follows:

append variable value value value...

How to do it…

In the following example, we will create a string of comma-delimited numbers using the for control construct. Return values from the commands are provided for clarity.
Enter the following command:

% set var 0
0
% for {set x 1} {$x <= 10} {incr x} { append var , $x }
% puts $var
0,1,2,3,4,5,6,7,8,9,10

How it works…

The append command accepts a named variable to contain the resulting string and a space-delimited list of strings to append. As you can see, the append command accepted our variable argument and a string containing the comma. These values were used to append to the original variable (containing a starting value of 0). The resulting string output with the puts command displays our newly appended variable, complete with commas.

Formatting a string

Strings, as we all know, are our primary way of interacting with the end user. Whether presented in a message box or simply directed to the Tcl shell, they need to be as fluid as possible in the values they present. To accomplish this, Tcl provides the format command. This command allows us to format a string with variable substitution in the same manner as the ANSI C sprintf procedure. The format command is used as follows:

format string argument argument argument...

The format command accepts a string containing the value to be formatted as well as % conversion specifiers. The arguments contain the values to be substituted into the final string. Each conversion specifier may contain up to six (6) sections: an XPG2 position specifier, a set of flags, a minimum field width, a numeric precision specifier, a size modifier, and a conversion character. The conversion specifiers are as follows:

Specifier   Description
d or i      For converting an integer to a signed decimal string
u           For converting an integer to an unsigned decimal string
o           For converting an integer to an unsigned octal string
x or X      For converting an integer to an unsigned hexadecimal string; the lowercase x is used for lowercase hexadecimal notation, while the uppercase X produces uppercase hexadecimal notation
c           For converting an integer to the Unicode character it represents
s           No conversion is performed
f           For converting the number provided to a signed decimal string of the form xxx.yyy, where the number of y's is determined by the precision of 6 decimal places (by default)
e or E      If the uppercase E is used, it is utilized in the string in place of the lowercase e
g or G      If the exponent is less than -4 or greater than or equal to the precision, then this is used for converting the number utilized for the %e or %E; otherwise for converting in the same manner as %f
%           The % sign performs no conversion; it merely inserts a % character into the string

There are three differences between the Tcl format and the ANSI C sprintf procedure:
- The %p and %n conversion switches are not supported
- The % conversion for %c only accepts an integer value
- Size modifiers are ignored for formatting of floating-point values

How to do it…

In the following example, we format a long date string for output on the command line. Return values from the commands are provided for clarity. Enter the following command:

% set month May
May
% set weekday Friday
Friday
% set day 5
5
% set extension th
th
% set year 2010
2010
% puts [format "Today is %s, %s %d%s %d" $weekday $month $day $extension $year]
Today is Friday, May 5th 2010

How it works…

The format command successfully replaced the desired conversion flag delimited regions with the variables assigned.

Matching a regular expression within a string

Regular expressions provide us with a powerful method to locate an arbitrarily complex pattern within a string. The regexp command is similar to a Find function in a text editor.
You search through a defined string for the character or the pattern of characters you are looking for, and it returns a Boolean value that indicates success or failure and populates a list of optional variables with any matched strings. The -indices and -inline options modify this behavior. But it doesn't stop there; by providing switches, you can control the behavior of regexp. The switches are as follows:

Switch       Behavior
-about       No actual matching is made. Instead, regexp returns a list containing information about the regular expression, where the first element is a subexpression count and the second is a list of property names describing various attributes about the expression
-expanded    Allows the use of expanded regular expressions, wherein whitespace and comments are ignored
-indices     Returns a list of two decimal strings, containing the indices in the string to match for the first and last characters in the range
-line        Enables newline-sensitive matching, similar to passing the -linestop and -lineanchor switches
-linestop    Changes the behavior of [^] bracket expressions and the "." character so that they stop at newline characters
-lineanchor  Changes the behavior of ^ and $ (anchors) so that they match both the beginning and end of a line
-nocase      Treats uppercase characters in the search string as lowercase
-all         Causes the command to match as many times as possible and returns the count of the matches found
-inline      Causes regexp to return a list of the data that would otherwise have been placed in match variables; match variables may NOT be used if -inline is specified
-start       Allows us to specify a character index from which searching should start
--           Denotes the end of switches being passed to regexp; any argument following this switch will be treated as an expression, even if it starts with a "-"

Now that we have a background in switches, let's look at the command itself:

regexp switches expression string submatchvar submatchvar...

The regexp command determines if the expression matches part or all of the string and returns a 1 if the match exists or a 0 if it is not found. If variable names (submatchvar, for example, myNumber or myData) are passed after the string, they are used to store the returned submatches. Keep in mind that if the -inline switch has been passed, no return variables should be included in the command.

Getting ready

To complete the following example, we will need to create a Tcl script file in your working directory. Open the text editor of your choice and follow the next set of instructions.

How to do it…

A common use for regexp is to accept a string containing multiple words and to split it into its constituent parts. In the following example, we will create a string containing an IP address and assign the values to the named variables. Enter the following commands:

% set ip 192.168.1.65
192.168.1.65
% regexp {([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})\.([0-9]{1,3})} $ip all first second third fourth
1
% puts "$all \n$first \n$second \n$third \n$fourth"
192.168.1.65
192
168
1
65

How it works…

As you can see, the IP address has been split into its individual octet values. What regexp has done is match the groupings of decimal characters [0-9] of a varying length of 1 to 3 characters {1,3}, delimited by a "." character. The original IP address is assigned to the first variable (all), while the octet values are assigned to the remaining variables (first, second, third, and fourth).
Performing character substitution on a string

If regexp is a Find function, then regsub is the equivalent of Find and Replace. The regsub command accepts a string and, using regular expression pattern matching, it locates and, if desired, replaces the pattern with the desired value. The syntax of regsub is similar to regexp, as are the switches. However, additional control over the substitution is added. The switches are as listed next:

Switch       Description
-all         Causes the command to perform substitution for each match found; the & and \n sequences are handled for each substitution
-expanded    Allows use of expanded regular expressions, wherein whitespace and comments are ignored
-line        Enables newline-sensitive matching, similar to passing the -linestop and -lineanchor switches
-linestop    Changes the behavior of [^] bracket expressions so that they stop at newline characters
-lineanchor  Changes the behavior of ^ and $ (anchors) so that they match both the beginning and end of a line
-nocase      Treats uppercase characters in the search string as lowercase
-start       Allows specification of a character offset in the string from which to start matching

Now that we have a background in the switches as they apply to the regsub command, let's look at the command:

regsub switches expression string substitution variable

The regsub command matches the expression against the string provided and either copies the string to the variable or returns the string if a variable is not provided. If a match is located, the portion of the string that matched is replaced by substitution. Whenever the substitution contains an & or a \0 sequence, it is replaced with the portion of the string that matches the expression. If the substitution contains \n (where n represents a numeric value between 1 and 9), it is replaced with the portion of the string that matches the nth sub-expression of the expression. Additional backslashes may be used in the substitution to prevent interpretation of the &, \0, and \n sequences, and of the backslashes themselves. As both the regsub command and the Tcl interpreter perform backslash substitution, you should enclose the substitution in curly braces to prevent unintended substitution.

How to do it…

In the following example, we will substitute every instance of the word one, where it is a word by itself, with the word three. Return values from the commands are provided for clarity. Enter the following command:

% set original "one two one two one two"
one two one two one two
% regsub -all {one} $original three new
3
% puts $new
three two three two three two

How it works…

As you can see, the value returned from the regsub command lists the number of matches found. The string original has been copied into the string new, with the substitutions completed. With the addition of further switches, you can easily parse a lengthy string variable and perform bulk updates. I have used this to rapidly parse a large text file prior to importing data into a database.
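The \n back-references make regsub handy for reordering captured text, not just replacing it. As a quick sketch (this example is our own, not taken from the recipe above), the following rewrites a date into day/month/year order:

% set date "2010-05-05"
2010-05-05
% regsub {(\d{4})-(\d{2})-(\d{2})} $date {\3/\2/\1} formatted
1
% puts $formatted
05/05/2010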

Using XML Facade for DOM

Packt
13 Sep 2013
25 min read
(For more resources related to this topic, see here.) The Business Process Execution Language (BPEL) is based on XML, which means that all of its internal variables and data are represented as XML. As BPEL and Java are complementary technologies, we seek ways to ease their integration. In order to handle the XML content of BPEL variables in Java resources (classes), we have a couple of possibilities:

Use the DOM (Document Object Model) API for Java, where we handle the XML content directly through API calls. An example of such a call would be reading from the input variable:

oracle.xml.parser.v2.XMLElement input_cf =
    (oracle.xml.parser.v2.XMLElement)getVariableData("inputVariable", "payload", "/client:Cashflows");

We receive the XMLElement class, which we need to handle further, whether by assignment, reading of content, iteration, or something else.

As an alternative, we can use an XML facade through Java Architecture for XML Binding (JAXB). JAXB provides a convenient way of transforming XML to Java and vice versa. The creation of an XML facade is supported through the xjc utility and, of course, via the JDeveloper IDE. Example code for accessing XML through an XML facade is:

java.util.List<org.packt.cashflow.facade.PrincipalExchange> princEx =
    cf.getPrincipalExchange();

We can see that there is neither XML content nor the DOM API anymore. Furthermore, we have access to the whole XML structure represented by Java classes. The latest specification of JAXB at the time of writing is 2.2.7, and its specification can be found at the following location: https://jaxb.java.net/. The purpose of XML facade operations is the marshalling and un-marshalling of Java classes. When the original content is presented in XML, we use un-marshalling methods in order to generate the corresponding Java classes. In cases where we have content stored in Java classes and we want to present the content as XML, we use the marshalling methods. JAXB provides the ability to create XML facade from an XML schema definition or from a WSDL (Web Service Definition/Description Language) document. The latter method provides a useful approach as we, in most cases, orchestrate web services whose operations are defined in WSDL documents. Throughout this article, we will work on a sample from the banking world. On top of this sample, we will show how to build the XML facade. The sample contains simple XML types, complex types, elements, and cardinality, so we cover all the essential elements of functionality in XML facade.

Setting up an XML facade project

We start generating XML facade by setting up a project in a JDeveloper environment, which provides convenient tools for building XML facades. This recipe will describe how to set up a JDeveloper project in order to build XML facade.

Getting ready

To complete the recipe, we need the XML schema of the BPEL process variables, based on which we build XML facade. Explore the XML schema of our banking BPEL process.
We are interested in the structure of the BPEL request message:

<xsd:complexType name="PrincipalExchange">
  <xsd:sequence>
    <xsd:element minOccurs="0" name="unadjustedPrincipalExchangeDate" type="xsd:date"/>
    <xsd:element minOccurs="0" name="adjustedPrincipalExchangeDate" type="xsd:date"/>
    <xsd:element minOccurs="0" name="principalExchangeAmount" type="xsd:decimal"/>
    <xsd:element minOccurs="0" name="discountFactor" type="xsd:decimal"/>
  </xsd:sequence>
  <xsd:attribute name="id" type="xsd:int"/>
</xsd:complexType>

<xsd:complexType name="CashflowsType">
  <xsd:sequence>
    <xsd:element maxOccurs="unbounded" minOccurs="0" name="principalExchange" type="prc:PrincipalExchange"/>
  </xsd:sequence>
</xsd:complexType>

<xsd:element name="Cashflows" type="prc:CashflowsType"/>

The request message structure presents just a small fragment of the cash flows modeled in banks. The concrete definition of a cash flow is much more complex. However, our definition contains all the right elements, so that we can show the advantages of using XML facade in a BPEL process.

How to do it...

The steps involved in setting up a JDeveloper project for XML façade are as follows:

We start by opening a new Java Project in JDeveloper and naming it CashflowFacade. Click on Next. In the next window of the Create Java Project wizard, we select the default package name org.packt.cashflow.facade. Click on Finish. We now have the following project structure in JDeveloper: We have created a project that is ready for XML facade creation.

How it works...

After the wizard has finished, we can see the project structure created in JDeveloper. Also, the corresponding file structure is created in the filesystem.

Generating XML facade using ANT

This recipe explains how to generate XML facade with the use of the Apache ANT utility. We use ANT scripts when we want to build or rebuild the XML facade in many iterations, for example, every time during nightly builds. Using ANT to build XML façade is very useful when the XML definitions are constantly changing during development. With ANT, we can ensure continuous synchronization between the XML and the generated Java code. The official ANT homepage, along with detailed information on how to use it, can be found at the following URL: http://ant.apache.org/.

Getting ready

By completing our previous recipe, we built up a JDeveloper project ready to create XML facade out of an XML schema. To complete this recipe, we need to add ANT project technology to the project. We achieve this through the Project Properties dialog:

How to do it...

The following are the steps we need to take to create a project in JDeveloper for building XML façade with ANT:

Create a new ANT build file by right-clicking on the CashflowFacade project node; select New, and choose Buildfile from Project (Ant): The ANT build file is generated and added into the project under the Resources folder. Now we need to amend the build.xml file with the code to build XML facade. We will first define the properties for our XML facade:

<property name="schema_file" location="../Banking_BPEL/xsd/Derivative_Cashflow.xsd"/>
<property name="dest_dir" location="./src"/>
<property name="package" value="org.packt.cashflow.facade"/>

We define the location of the source XML schema (it is located in the BPEL process). Next, we define the destination of the generated Java files and the name of the package. Now, we define the ANT target in order to build XML facade classes. The ANT target presents one closed unit of ANT work.
We define the build task for the XML façade as follows:

<target name="xjc">
  <delete dir="src"/>
  <mkdir dir="src"/>
  <echo message="Compiling the schema..." />
  <exec executable="xjc">
    <arg value="-xmlschema"/>
    <arg value="${schema_file}"/>
    <arg value="-d"/>
    <arg value="${dest_dir}"/>
    <arg value="-p"/>
    <arg value="${package}"/>
  </exec>
</target>

Now we have XML facade packaged and ready to be used in BPEL processes.

How it works…

ANT is used as a build tool and performs various tasks. As such, we can easily use it to build XML facade. Java Architecture for XML Binding provides the xjc utility, which can help us in building XML facade. We have provided the following parameters to the xjc utility:

- xmlschema: This treats the input schema as XML schema
- d: This specifies the destination directory of the generated classes
- p: This specifies the package name of the generated classes

There are a number of other parameters; however, we will not go into detail about them here. Based on the parameters we provided to the xjc utility, the Java representation of the XML schema is generated. If we examine the generated classes, we can see that there exists a Java class for every type defined in the XML schema. Also, we can see that the ObjectFactory class is generated, which eases the creation of Java class instances.

There's more...

There is a difference in creating XML facade between Versions 10g and 11g of Oracle SOA Suite. In Oracle SOA Suite 10g, there was a convenient utility named schemac, which was used for building XML facade. However, in Oracle SOA Suite 11g, the schemac utility is not available anymore. To provide a similar solution, we create a template class, which is later copied to a real code package when needed to provide functionality for XML facade. We create a new class Facade in the facade package. The only method in the class is static and serves as a creation point of the facade:

public static Object createFacade(String context, XMLElement doc) throws Exception {
    JAXBContext jaxbContext;
    Object zz = null;
    try {
        jaxbContext = JAXBContext.newInstance(context);
        Unmarshaller unmarshaller = jaxbContext.createUnmarshaller();
        zz = unmarshaller.unmarshal(doc);
        return zz;
    } catch (JAXBException e) {
        throw new Exception("Cannot create facade from the XML content. " + e.getMessage());
    }
}

The class code implementation is simple and consists of creating the JAXB context. Further, we un-marshall the content and return the resulting class to the client. In case of problems, we either throw an exception or return a null object. Now the calling code is trivial. For example, to create an XML facade for the XML content, we call:

Object zz = facade.Facade.createFacade("org.packt.cashflow.facade", document.getSrcRoot());

Creating XML facade from XSD

This recipe describes how to create XML facade classes from XSD. Usually, the necessity to access XML content out of Java classes comes from already defined XML schemas in BPEL processes.

How to do it...

We have already defined the BPEL process and the XML schema (Derivative_Cashflow.xsd) in the project. The following steps will show you how to create the XML facade from the XML schema:

Select the CashflowFacade project, right-click on it, and select New. Select JAXB 2.0 Content Model from XML Schema. Select the schema file from the Banking_BPEL project. Select the Package Name for Generated Classes checkbox and click on the OK button. The corresponding Java classes for the XML schema were generated.

How it works...
Now compare the classes generated via the ANT utility in the Generating XML facade using ANT recipe with this one. In essence, the generated files are the same. However, we see the additional file jaxb.properties, which holds the configuration of the JAXB factory used for the generation of Java classes. It is recommended to create the same access class (Facade.java) in order to simplify further access to XML facade.

Creating XML facade from WSDL

It is possible to include the definitions of schema elements in WSDL. To avoid extracting the XML schema content from the WSDL document, we can take the WSDL document itself and create XML facade for it. This recipe explains how to create XML facade out of a WSDL document.

Getting ready

To complete the recipe, we need a WSDL document with an XML schema definition. Luckily, we already have one automatically generated WSDL document, which we received during the Banking_BPEL project creation. We will amend the already created project, so it is recommended to complete the Generating XML facade using ANT recipe before continuing with this one.

How to do it...

The following are the steps involved in creating XML façade from WSDL:

Open the ANT configuration file (build.xml) in JDeveloper. We first define the property which identifies the location of the WSDL document:

<property name="wsdl_file" location="../Banking_BPEL/Derivative_Cashflow.wsdl"/>

Continue with the definition of a new target inside the ANT configuration file in order to generate Java classes from the WSDL document:

<target name="xjc_wsdl">
  <delete dir="src/org"/>
  <mkdir dir="src/org"/>
  <echo message="Compiling the schema..." />
  <exec executable="xjc">
    <arg value="-wsdl"/>
    <arg value="${wsdl_file}"/>
    <arg value="-d"/>
    <arg value="${dest_dir}"/>
    <arg value="-p"/>
    <arg value="${package}"/>
  </exec>
</target>

From the configuration point of view, this step completes the recipe. To run the newly defined ANT task, we select the build.xml file in the Projects pane. Then, we select the xjc_wsdl task in the Structure pane of JDeveloper, right-click on it, and select Run Target "xjc_wsdl":

How it works...

The generation of Java representation classes from WSDL content works similarly to the generation of Java classes from XSD content. Only the source of the XML input content supplied to the xjc utility is different. In case we execute the ANT task against the wrong XML or WSDL content, we receive a kind notification from the xjc utility. For example, if we run the xjc utility with the parameter -xmlschema over a WSDL document, we get a warning that we should use different parameters for generating XML façade from WSDL. Note that generation of Java classes from a WSDL document via JAXB is only available through an ANT task definition or the xjc utility. If we try the same procedure with JDeveloper, an error is reported.

Packaging XML facade into JAR

This recipe explains how to prepare a package containing XML facade to be used in BPEL processes and in Java applications in general.

Getting ready

To complete this recipe, we need the XML facade created out of the XML schema. Also, the generated Java classes need to be compiled.

How to do it...

The steps involved in packaging XML façade into a JAR are as follows:

We open the Project Properties by right-clicking on the CashflowFacade root node. From the left-hand side tree, select Deployment and click on the New button. The Create Deployment Profile window opens, where we set the name of the archive. Click on the OK button.
3. The Edit JAR Deployment Profile Properties dialog opens, where you can configure what goes into the JAR archive. We confirm the dialog and the deployment profile as they are, since we don't need any special configuration.
4. Now, we right-click on the project root node (CashflowFacade), then select Deploy and CFacade. The window requesting the deployment action appears; we simply confirm it by pressing the Finish button.

As a result, we can see the generated JAR file created in the deploy folder of the project.

There's more...

In this article, we also cover building the XML facade with the ANT tool. To support an automatic build process, we can define an ANT target to build the JAR file as well. We open the build.xml file and define a new target for packaging purposes. With this target, we first recreate the deploy directory and then prepare the package to be utilized in the BPEL process:

<target name="pack" depends="compile">
  <delete dir="deploy"/>
  <mkdir dir="deploy"/>
  <jar destfile="deploy/CFacade.jar"
       basedir="./classes"
       excludes="**/*data*"/>
</target>

To automate the process even further, we define a target that copies the generated JAR file to the location of the BPEL process. Usually, this means copying the JAR file to the SCA-INF/lib directory:

<target name="copyLib" depends="pack">
  <copy file="deploy/CFacade.jar" todir="../Banking_BPEL/SCA-INF/lib"/>
</target>

The task depends on the successful creation of the JAR package; once the package is created, it is copied over to the BPEL process library folder.

Generating Java documents for XML facade

Well-prepared documentation is an important aspect of further XML facade integration. Suppose we only receive the JAR package containing the XML facade: it is virtually impossible to use it if we don't know what the purpose of each data type is and how to utilize it. With documentation, we receive a well-defined XML facade capable of bridging the XML and Java worlds. This recipe explains how to document the Java classes generated for the XML facade.

Getting ready

To complete this recipe, we only need the XML schema defined. We already have the XML schema in the Banking_BPEL project (Derivative_Cashflow.xsd).

How to do it...

The following are the steps we need to take in order to generate Java documentation for the XML facade:

1. We open the Derivative_Cashflow.xsd XML schema file. Initially, we need to add the JAXB binding declarations to the schema definition (the targetNamespace value is abbreviated here):

<xsd:schema attributeFormDefault="unqualified"
            elementFormDefault="qualified"
            targetNamespace="http://..."
            xmlns:jxb="http://java.sun.com/xml/ns/jaxb"
            jxb:version="2.1">
</xsd:schema>

2. In order to put documentation at the package level, we put the following code immediately after the <xsd:schema> tag in the XML schema file:

<xsd:annotation>
  <xsd:appinfo>
    <jxb:schemaBindings>
      <jxb:package name="org.packt.cashflow.facade">
        <jxb:javadoc>This package represents the XML facade of the cashflows in the financial derivatives structure.</jxb:javadoc>
      </jxb:package>
    </jxb:schemaBindings>
  </xsd:appinfo>
</xsd:annotation>

3. In order to add documentation at the complexType level, we need to put the following lines into the XML schema file, immediately after the complexType definition:

<xsd:annotation>
  <xsd:appinfo>
    <jxb:class>
      <jxb:javadoc>This class defines the data for the events, when principal exchange occurs.</jxb:javadoc>
    </jxb:class>
  </xsd:appinfo>
</xsd:annotation>

The elements of the complexType definition are annotated in a similar way.
4. We put the annotation data immediately after the element definition in the XML schema file:

<xsd:annotation>
  <xsd:appinfo>
    <jxb:property>
      <jxb:javadoc>Raw principal exchange date.</jxb:javadoc>
    </jxb:property>
  </xsd:appinfo>
</xsd:annotation>

In JDeveloper, we are now ready to build the javadoc documentation. Select the CashflowFacade project root node and then, from the main menu, select Build | Javadoc CashflowFacade.jpr. The javadoc content will be built in the javadoc directory of the project.

How it works...

During the conversion from XML schema to Java classes, JAXB also processes any annotations inside the XML schema file. When the conversion utility (xjc, or the same process run through JDeveloper) finds an annotation in the XML schema file, it decorates the generated Java classes according to the specification. The XML schema file must contain the following declarations. In the <xsd:schema> element, the following declaration of the JAXB version must exist:

jxb:version="2.1"

Note that the jxb:version attribute is where the version of the JAXB specification is defined. The most common version declarations are 1.0, 2.0, and 2.1. The actual javadoc definition resides within the <xsd:annotation> and <xsd:appinfo> blocks. To annotate at package level, we use the following code:

<jxb:schemaBindings>
  <jxb:package name="PKG_NAME">
    <jxb:javadoc>TEXT</jxb:javadoc>
  </jxb:package>
</jxb:schemaBindings>

We define the package name to annotate and a javadoc text containing the documentation for the package level. The javadoc annotation at class or attribute level is similar to the following code:

<jxb:class|property>
  <jxb:javadoc>TEXT</jxb:javadoc>
</jxb:class|property>

If we want to annotate the XML schema at complexType level, we use the <jxb:class> element. To annotate the XML schema at element level, we use the <jxb:property> element.

There's more...

In many cases, annotating the XML schema file directly is not practical. XML schemas defined by other vendors are often generated automatically, so we would have to re-apply our annotations every time we wanted to generate Java classes from them; this would mean additional work just to maintain the annotations. In such situations, we can separate the annotations from the XML schema into their own file. With this approach, we keep the annotations apart from the XML schema content itself, over which we usually have no control. For this purpose, we create a binding file in our CashflowFacade project and name it extBinding.xjb. We put the annotation documentation into this file and remove it from the original XML schema. We start by defining the binding file header declaration:

<jxb:bindings version="1.0"
              xmlns:jxb="http://java.sun.com/xml/ns/jaxb"
              xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <jxb:bindings schemaLocation="file:/D:/delo/source_code/Banking_BPEL/xsd/Derivative_Cashflow.xsd"
                node="/xs:schema">

We need to specify the location of the schema file and the root node of the XML schema to which our mapping corresponds. We continue by declaring the package level annotation:

<jxb:schemaBindings>
  <jxb:package name="org.packt.cashflow.facade">
    <jxb:javadoc><![CDATA[<body>This package represents the XML facade of the cashflows in the financial derivatives structure.</body>]]></jxb:javadoc>
  </jxb:package>
  <jxb:nameXmlTransform>
    <jxb:elementName suffix="Element"/>
  </jxb:nameXmlTransform>
</jxb:schemaBindings>

We notice that the structure of the package level annotation is identical to the inline XML schema annotation. The command line for feeding this binding file to xjc is shown below; the class-level annotations continue after it.
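When the annotations live in an external binding file, the xjc utility picks them up through its -b switch. The following is a hypothetical invocation reusing the file names from this recipe; since the package name now comes from the binding file itself, no -p switch is needed:

xjc -xmlschema Derivative_Cashflow.xsd -b extBinding.xjb -d src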
To annotate the class and its attribute, we use the following declaration: <jxb:bindings node="//xs:complexType[@name='CashflowsType']"><jxb:class><jxb:javadoc><![CDATA[This class defines the data for the events, whenprincipal exchange occurs.]]></jxb:javadoc></jxb:class><jxb:bindingsnode=".//xs:element[@name='principalExchange']"><jxb:property><jxb:javadoc>TEST prop</jxb:javadoc></jxb:property></jxb:bindings></jxb:bindings> Notice the indent annotation of attributes inside the class annotation that naturally correlates to the object programming paradigm. Now that we have the external binding file, we can regenerate the XML facade. Note that external binding files are not used only for the creation of javadoc. Inside the external binding file, we can include various rules to be followed during conversion. One such rule is aimed at data type mapping; that is, which Java data type will match the XML data type. In JDeveloper, if we are building XML facade for the first time, we follow either the Creating XML facade from XSD or the Creating XML facade from WSDL recipe. To rebuild XML facade, we use the following procedure: Select the XML schema file (Cashflow_Facade.xsd) in the CashflowFacade project. Right-click on it and select the Generate JAXB 2.0 Content Model option. The configuration dialog opens with some already pre-filled fields. We enter the location of the JAXB Customization File (in our case, the location of the extBinding.xjb file) and click on the OK button. Next, we build the javadoc part to get the documentation. Now, if we open the generated documentation in the web browser, we can see our documentation lines inside. Invoking XML facade from BPEL processes This recipe explains how to use XML facade inside BPEL processes. We can use XML façade to simplify access of XML content from Java code. When using XML façade, the XML content is exposed over Java code. Getting ready To complete the recipe, there are no special prerequisites. Remember that in the Packaging XML facade into JAR recipe, we defined the ANT task to copy XML facade to the BPEL process library directory. This task basically presents all the prerequisites for XML facade utilization. How to do it... Open a BPEL process (Derivative_Cashflow.bpel) in JDeveloper and insert the Java Embedding activity into it: We first insert a code snippet. The whole code snippet is enclosed by a try catch block: try { Read the input cashflow variable data: oracle.xml.parser.v2.XMLElement input_cf= (oracle.xml.parser.v2.XMLElement)getVariableData("inputVariable","payload","/client:Cashflows"); Un-marshall the XML content through the XML facade: Object obj_cf = facade.Facade.createFacade("org.packt.cashflow.facade", input_cf); We must cast the serialized object to the XML facade class: javax.xml.bind.JAXBElement<org.packt.cashflow.facade.CashflowsType> cfs = (javax.xml.bind.JAXBElement<org.packt.cashflow.facade.CashflowsType>)obj_cf; Retrieve the Java class out of the JAXBElement content class: org.packt.cashflow.facade.CashflowsType cf= cfs.getValue(); Finally, we close the try block and handle any exceptions that may occur during processing: } catch (Exception e) {e.printStackTrace();addAuditTrailEntry("Error in XML facade occurred: " +e.getMessage());} We close the Java Embedding activity dialog. Now, we are ready to deploy the BPEL process and test the XML facade. Actually, the execution of the BPEL process will not produce any output, since we have no output lines defined. 
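Assembled into one piece, the Java Embedding body from the steps above reads as follows. This is just a consolidated sketch of the snippets already shown; getVariableData and addAuditTrailEntry are supplied by the BPEL Java Embedding context:

try {
    // 1. Read the input cashflow variable from the BPEL scope
    oracle.xml.parser.v2.XMLElement input_cf = (oracle.xml.parser.v2.XMLElement)
        getVariableData("inputVariable", "payload", "/client:Cashflows");

    // 2. Unmarshal the XML content through the XML facade
    Object obj_cf = facade.Facade.createFacade("org.packt.cashflow.facade", input_cf);

    // 3. Cast the result to the facade wrapper type
    javax.xml.bind.JAXBElement<org.packt.cashflow.facade.CashflowsType> cfs =
        (javax.xml.bind.JAXBElement<org.packt.cashflow.facade.CashflowsType>) obj_cf;

    // 4. Retrieve the plain Java object from the JAXBElement wrapper
    org.packt.cashflow.facade.CashflowsType cf = cfs.getValue();
} catch (Exception e) {
    e.printStackTrace();
    addAuditTrailEntry("Error in XML facade occurred: " + e.getMessage());
}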
If an exception occurs, we will receive information about it in the audit trail as well as on the BPEL server console.

How it works...

We add the XML facade JAR file to the BPEL process library directory (<BPEL_process_home>/SCA-INF/lib). Before we are able to access the XML facade classes, we need to extract the XML content from the BPEL process. To create the Java representation classes, we transform the XML content through the JAXB context. As a result, we receive an unmarshalled Java object ready to be used further in Java code.

Accessing complex types through XML facade

The advantage of using the XML facade is the ability to access the XML content via Java classes and methods. This recipe explains how to access complex types through the XML facade.

Getting ready

To complete the recipe, we will amend the example BPEL process from the Invoking XML facade from BPEL processes recipe.

How to do it...

The steps involved in accessing complex types through the XML facade are as follows:

1. Open the Banking_BPEL process and double-click on the XML_facade_node Java Embedding activity.
2. We amend the code snippet with the following code to access the complex type:

java.util.List<org.packt.cashflow.facade.PrincipalExchange> princEx = cf.getPrincipalExchange();

We receive a list of principal exchange cashflows that contain various data.

How it works...

In the previous example, we received a list of cashflows. The corresponding XML content definition states:

<xsd:complexType name="PrincipalExchange">
  <xsd:sequence>
  </xsd:sequence>
  <xsd:attribute name="id" type="xsd:int"/>
</xsd:complexType>

We can conclude that each of the principal exchange cashflows is modeled as an individual Java class. Depending on its level in the hierarchy, a complex type is modeled either as a Java class or as a Java class member. Complex types are organized in the Java object hierarchy according to the XML schema definition; most complex types are modeled as a Java class and, at the same time, as a member of another Java class.

Accessing simple types through XML facade

This recipe explains how to access simple types through the XML facade.

Getting ready

To complete the recipe, we will amend the example BPEL process from our previous recipe, Accessing complex types through XML facade.

How to do it...

1. Open the Banking_BPEL process and double-click on the XML_facade_node Java Embedding activity.
2. We amend the code snippet with the code to access the XML simple types:

for (org.packt.cashflow.facade.PrincipalExchange pe : princEx) {
  addAuditTrailEntry("Received cashflow with id: " + pe.getId() + "\n" +
      " Unadj. Principal Exch. Date ...: " + pe.getUnadjustedPrincipalExchangeDate() + "\n" +
      " Adj. Principal Exch. Date .....: " + pe.getAdjustedPrincipalExchangeDate() + "\n" +
      " Discount factor ...............: " + pe.getDiscountFactor() + "\n" +
      " Principal Exch. Amount ........: " + pe.getPrincipalExchangeAmount() + "\n");
}

With the preceding code, we output all the Java class members to the audit trail. If we now run the BPEL process, we can see the corresponding output in the BPEL flow trace.

How it works...

The XML schema simple types are mapped to Java classes as members.
If we check our example, we have three simple types in the XML schema:

<xsd:complexType name="PrincipalExchange">
  <xsd:sequence>
    <xsd:element minOccurs="0" name="unadjustedPrincipalExchangeDate" type="xsd:date"/>
    <xsd:element minOccurs="0" name="adjustedPrincipalExchangeDate" type="xsd:date"/>
    <xsd:element minOccurs="0" name="principalExchangeAmount" type="xsd:decimal"/>
    <xsd:element minOccurs="0" name="discountFactor" type="xsd:decimal"/>
  </xsd:sequence>
  <xsd:attribute name="id" type="xsd:int"/>
</xsd:complexType>

The simple types defined in the XML schema are <xsd:date>, <xsd:decimal>, and <xsd:int>. Let us find the corresponding Java class member definitions. If we open the PrincipalExchange.java file, we can see the following members:

@XmlSchemaType(name = "date")
protected XMLGregorianCalendar unadjustedPrincipalExchangeDate;
@XmlSchemaType(name = "date")
protected XMLGregorianCalendar adjustedPrincipalExchangeDate;
protected BigDecimal principalExchangeAmount;
protected BigDecimal discountFactor;
@XmlAttribute
protected Integer id;

We can see that the mapping between the XML content and the Java classes was performed as shown in the following table:

XML schema simple type    Java class member type
<xsd:date>                javax.xml.datatype.XMLGregorianCalendar
<xsd:decimal>             java.math.BigDecimal
<xsd:int>                 java.lang.Integer

We can also see that XML simple type definitions, as well as XML attributes, are always mapped as members of the corresponding Java class representations.

Summary

In this article, we learned how to set up an XML facade project, generate an XML facade using ANT, create an XML facade from XSD and WSDL, package an XML facade into a JAR file, generate Java documentation for an XML facade, invoke an XML facade from BPEL processes, and access complex and simple types through an XML facade.

Resources for Article:

Further resources on this subject:
BPEL Process Monitoring [Article]
Human Interactions in BPEL [Article]
Business Processes with BPEL [Article]

StyleCop analysis

Packt
12 Sep 2013
6 min read
(For more resources related to this topic, see here.)

Integrating StyleCop analysis results in Jenkins/Hudson (Intermediate)

In this article we will see how to build and display StyleCop errors in Jenkins/Hudson jobs. To do so, we will configure a Jenkins job that runs a full analysis of the C# files in order to display the technical debt of the project. As we want that debt to diminish, we will also have the job automatically record the latest number of violations. Finally, we will make the build fail if any violations were added compared to the previous build.

Getting ready

For this article you will need to have:

- StyleCop 4.7 installed with the MSBuild integration option checked
- A Subversion server
- A working Jenkins server including:
  - The MSBuild plugin for Jenkins
  - The Violations plugin for Jenkins
- A C# project tracked in a Subversion repository

How to do it...

The first step is to build a working build script for your project. All solutions have their advantages and drawbacks; I will use MSBuild in this article. The only difference here is that I won't separate files on a per-project basis, but will scan the "whole" solution:

<?xml version="1.0" encoding="utf-8" ?>
<Project DefaultTargets="StyleCop" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="StyleCopTask"
             AssemblyFile="$(MSBuildExtensionsPath)\..\StyleCop 4.7\StyleCop.dll" />
  <PropertyGroup>
    <!-- Set a default value of 1000000 as maximum StyleCop violations found -->
    <StyleCopMaxViolationCount>1000000</StyleCopMaxViolationCount>
  </PropertyGroup>
  <Target Name="StyleCop">
    <!-- Get last violation count from file if it exists -->
    <ReadLinesFromFile Condition="Exists('violationCount.txt')" File="violationCount.txt">
      <Output TaskParameter="Lines" PropertyName="StyleCopMaxViolationCount" />
    </ReadLinesFromFile>
    <!-- Create a collection of files to scan -->
    <CreateItem Include=".\**\*.cs">
      <Output TaskParameter="Include" ItemName="StyleCopFiles" />
    </CreateItem>
    <!-- Launch the StyleCop task itself -->
    <StyleCopTask ProjectFullPath="$(MSBuildProjectFile)"
                  SourceFiles="@(StyleCopFiles)"
                  ForceFullAnalysis="true"
                  TreatErrorsAsWarnings="true"
                  OutputFile="StyleCopReport.xml"
                  CacheResults="true"
                  OverrideSettingsFile="StylecopCustomRuleSettings.Stylecop"
                  MaxViolationCount="$(StyleCopMaxViolationCount)">
      <!-- Set the returned number of violations -->
      <Output TaskParameter="ViolationCount" PropertyName="StyleCopViolationCount" />
    </StyleCopTask>
    <!-- Write the number of violations found in the last build -->
    <WriteLinesToFile File="violationCount.txt" Lines="$(StyleCopViolationCount)" Overwrite="true" />
  </Target>
</Project>

We first prepare the collection of files that will be scanned by the StyleCop engine, and then launch the StyleCop task on them. We redirect the resulting number of violations to the StyleCopViolationCount property. Finally, we write the result to the violationCount.txt file to keep track of the remaining technical debt; this is done with the WriteLinesToFile element. Now that we have our build script for the job, let's see how to use it with Jenkins. First, we have to create the Jenkins job itself. We will create a "Build a free-style software project" job. After that, we have to set how the Subversion repository will be accessed. We also set it to check for changes on the Subversion repository every 15 minutes. Then, we have to launch our MSBuild script using the MSBuild task.
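Before configuring the Jenkins task, it is worth checking that the script runs locally. Assuming the script above is saved as Stylecop.proj next to your solution (the file name is our own choice, not something StyleCop mandates), a run from a Visual Studio command prompt would look like this:

msbuild Stylecop.proj /t:StyleCop

If StyleCopReport.xml and violationCount.txt appear afterwards, the Jenkins job only has to repeat the same call.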
The task is quite simple to configure and lets you fill in three fields:

- MSBuild Version: You need to select one of the MSBuild versions you configured in Jenkins (Jenkins | Manage Jenkins | Configure System)
- MSBuild Build File: Here we provide the Stylecop.proj file we previously made
- Command Line Arguments: In our case, we don't have any to provide, but this is useful when you have multiple targets in your MSBuild file

Finally, we have to configure the display of StyleCop errors. This is where we use the Violations plugin of Jenkins, which permits displaying the results of multiple quality tools on the same graph. In order to make it work, you have to provide an XML file containing the violations. The plugin is again quite simple to configure: after providing the XML filename for StyleCop, you have to set the thresholds for build health and the maximum number of violations you want displayed in the detail screen of each file in violation.

How it works...

In the first part of the How to do it… section, we presented a build script. Let's explain what it does. First, as we don't use the premade MSBuild integration, we have to declare which assembly defines the StyleCop task and how we will call it. This is achieved through the UsingTask element. Then we try to retrieve the previous violation count and set the maximum number of violations that are acceptable at this stage of our project. This is the role of the ReadLinesFromFile element, which reads the content of a file. As we added a condition on the existence of the violationCount.txt file, it will only be executed if the file exists. We redirect the output to the StyleCopMaxViolationCount property. After that, we configured the Jenkins job to follow our project with StyleCop. We set some strict rules to ensure nobody adds new violations over time, and with the Violations plugin and the way we invoked StyleCop, we are able to follow the technical debt of the project on the Violations page. A summary of each file is also present, and if we click on one of them, we can inspect the violations in that file.

How to address multiple projects with their own StyleCop settings

As far as I know, this is the limit of the MSBuild StyleCop task. When I need to address multiple projects with their own settings, I generally switch to StyleCopCmd using NAnt or a simple batch script, and process the stylecop-report.violations.xml file with an XSLT to get the number of violations.

Summary

This article covered integrating StyleCop analysis into Jenkins/Hudson and walked through building a StyleCop analysis job for our project.

Resources for Article:

Further resources on this subject:
Organizing, Clarifying and Communicating the R Data Analyses [Article]
Generating Reports in Notebooks in RStudio [Article]
Data Profiling with IBM Information Analyzer [Article]


How to Build a Koa Web Application - Part 1

Christoffer Hallas
15 Dec 2014
8 min read
You may be a seasoned or novice web developer, but no matter your level of experience, you must always be able to set up a basic MVC application. This two-part series will briefly show you how to use Koa, a bleeding-edge Node.js web application framework, to create a web application using MongoDB as its database. Koa has a low footprint and tries to be as unbiased as possible. For this series, we will also use Jade and Mongel, two Node.js libraries that provide HTML template rendering and MongoDB model interfacing, respectively. Note that this series requires you to use Node.js version 0.11+. At the end of the series, we will have a small and basic app where you can create pages with a title and content, list your pages, and view them. Let's get going!

Using NPM and Node.js

If you do not already have Node.js installed, you can download installation packages from the official Node.js website, http://nodejs.org. I strongly suggest that you install Node.js in order to code along with the article. Once installed, Node.js adds two new programs to your computer that you can access from your terminal: node and npm. The first is the main Node.js program and is used to run Node.js applications; the second is the Node Package Manager and is used to install Node.js packages. For this application we start out in an empty folder by using npm to install four libraries:

$ npm install koa jade mongel co-body

Once this is done, open your favorite text editor and create an index.js file in the folder; this is where we will now start creating our application. We start by using the require function to load the four libraries we just installed:

var koa = require('koa');
var jade = require('jade');
var mongel = require('mongel');
var parse = require('co-body');

This simply loads the functionality of the libraries into the respective variables. This lets us create our Page model and our Koa app variables:

var Page = mongel('pages', 'mongodb://localhost/app');
var app = koa();

As you can see, we now use the mongel and koa variables that we previously loaded into our program using require. To create a model with mongel, all we have to do is give the name of our MongoDB collection and a MongoDB connection URI that represents the network location of the database; in this case we're using a local installation of MongoDB and a database called app. It's simple to create a basic Koa application: as seen in the code above, all we do is create a new variable called app that is the result of calling the Koa library function.

Middleware, generators, and JavaScript

Koa uses a new feature in JavaScript called generators. Generators are not widely available in browsers yet, except in some versions of Google Chrome, but since Node.js is built on the same JavaScript engine as Google Chrome, it can use generators. A generator function is much like a regular JavaScript function, but it has the special ability to yield several values in addition to the normal ability of returning a single value. Some expert JavaScript programmers used this to create a new and improved way of writing asynchronous code in JavaScript, which is required when building a networked application such as a web application. Generators are a complex subject and we won't cover them in detail; we'll just show you how to use them in our small and basic app. In Koa, generators are used as something called middleware, a concept that may be familiar to you from other languages such as Ruby and Python.
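Before we look at middleware, here is what a bare generator does on its own. This little example is purely illustrative and has nothing to do with Koa itself:

// A generator can hand back several values, one per call to next()
function* numbers() {
  yield 1;
  yield 2;
  return 3; // the final, regular return value
}

var it = numbers();
console.log(it.next()); // { value: 1, done: false }
console.log(it.next()); // { value: 2, done: false }
console.log(it.next()); // { value: 3, done: true }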
Think of middleware as a stack of functions through which an HTTP request must travel in order to produce an appropriate response. Middleware should be written so that the functionality of a given middleware is encapsulated together. In our case, this means we'll be creating two pieces of middleware: one to create pages and one to list pages or show a page. Let's create our first middleware:

app.use(function* (next) {
  ...
});

As you can see, we start by calling the app.use function, which takes a generator as its argument; this effectively pushes the generator onto the stack. To create a generator, we use a special function syntax with an added asterisk, as seen in the previous code snippet. We let our generator take a single argument called next, which represents the next middleware in the stack, if any. From here on, it is simply a matter of checking and responding to the parameters of the HTTP request, which are accessible to us in the Koa context. This is also the function context, which in JavaScript is the keyword this, similar to the keyword self in other languages:

if (this.path != '/create') {
  yield next;
  return
}

Since we're creating middleware that helps us create pages, we make sure that this request is for the right path, in our case /create; if not, we use the yield keyword and the next argument to pass control of the program to the next middleware. Please note the return keyword that we also use; this is very important here, as the middleware would otherwise continue executing while also passing control to the next middleware. This is not something you want to happen unless the middleware you're in will not modify the Koa context or HTTP response, because subsequent middleware will always expect to be in control. Now that we have checked that the path is correct, we still have to check the method to see if we're just showing the form to create a page, or if we should actually create a page in the database:

if (this.method == 'POST') {
  var body = yield parse.form(this);
  var page = yield Page.createOne({
    title: body.title,
    contents: body.contents
  });
  this.redirect('/' + page._id);
  return
} else if (this.method != 'GET') {
  this.status = 405;
  this.body = 'Method Not Allowed';
  return
}

To check the method, we use the Koa context again and its method attribute. If we're handling a POST request, we now know how to create a page, but this also means that we must extract extra information from the request. Koa does not process the body of a request, only the headers, so we use the co-body library that we downloaded earlier and loaded in as the parse variable. Notice how we yield on the parse.form function; this is because it is an asynchronous function and we have to wait until it is done before we continue the program. Then we proceed to use our mongel model Page to create a page using the data we found in the body of the request; again, this is an asynchronous function, so we yield to wait before we finally redirect the request using the page's database id. If it turns out the method was not POST, we still want this middleware to show the form that is used to issue the request in the first place. That means we have to make sure that the method is GET, so we added an else if statement to the original check; if the request is neither POST nor GET, we respond with an HTTP status 405 and the message Method Not Allowed, which is the appropriate response for this case. For GET requests, the middleware falls through to render the form template, a sketch of which follows.
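The article does not show the create.jade template itself, so the following is one hypothetical version that would satisfy the middleware above: a plain HTML form that POSTs the title and contents fields back to /create:

doctype html
html
  head
    title Create a page
  body
    form(method="POST", action="/create")
      input(name="title", placeholder="Title")
      textarea(name="contents", placeholder="Contents")
      button(type="submit") Create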
Notice how we don’t yield next; this is because the middleware was able to determine a satisfying response for the request and it requires no further processing. Finally, if the method was actually POST, we use the Jade library that we also installed using npm to render a create.jade template in HTML: var html = jade.renderFile('create.jade'); this.body = html; Notice how we set the Koa context’s body attribute to the rendered HTML from Jade; all this does is tell Koa that we want to send that back to the browser that sent the request. Wrapping up You are well on your way to creating your Koa app. In Part 2 we will implement Jade templates and list and view pages. Ready for the next step? Read Part 2 here. Explore all of our top Node.js content in one place - visit our Node.js page today! About the author Christoffer Hallas is a software developer and entrepreneur from Copenhagen, Denmark. He is a computer polyglot and contributes to and maintains a number of open source projects. When not contemplating his next grand idea (which remains an idea) he enjoys music, sports, and design of all kinds. Christoffer can be found on GitHub as hallas and at Twitter as @hamderhallas.


The ADF Proof of Concept

Packt
10 Jun 2011
12 min read
Oracle ADF Enterprise Application Development—Made Simple
Successfully plan, develop, test and deploy enterprise applications with Oracle ADF

You can compare the situation at the start of a project to standing in front of a mountain with the task of excavating a tunnel. The mountainsides are almost vertical, and there is no way for you to climb the mountain to figure out how wide it is. You can take two approaches:

- You can start blasting and drilling at the full width of the tunnel you need
- You can start drilling a very small pilot tunnel all the way through the mountain, and then expand it to full width later

It's probably more efficient to build at the full width of the tunnel straight from the beginning, but this approach has some serious disadvantages as well. You don't know how wide the mountain is, so you can't tell how long it will take to build the tunnel. In addition, you don't know what kind of surprises might lurk in the mountain—porous rock, aquifers, or any number of other obstacles to your tunnel building. That's why you should build the pilot tunnel first—so you know the size of the task and have an idea of the obstacles you might meet on the way. The Proof of Concept is that pilot tunnel.

The very brief ADF primer

Since you have decided to evaluate ADF for your enterprise application, you probably already have a pretty good idea of its architecture and capabilities. Therefore, this section gives only a very brief overview of ADF—there are many whitepapers, tutorials, and demonstrations available on the Oracle Technology Network website. Your starting point for ADF information is http://otn.oracle.com/developer-tools/jdev/overview.

Enterprise architecture

A modern enterprise application typically consists of a frontend, user-facing part and a backend business service part.

Frontend

The frontend part is constructed from several layers. In a web-based application, these are normally arranged in the common Model-View-Controller (MVC) pattern, as illustrated next. The View layer interacts with the user, displaying data as well as receiving updates and user actions. The Controller layer is in charge of interpreting user actions and deciding which screens are presented to the user in which order. And the Model layer represents the backend business services to the View and Controller, hiding the complexity of storing and retrieving data. This architecture implements a clean separation of duties—the page doesn't have to worry about where to go next, because that is the task of the controller. And the controller doesn't have to worry about how to store data in the data service, because that is the task of the model.

Other frontends

An enterprise application could also have a desktop application frontend, and might have additional frontends for mobile users, or even use existing desktop applications like Microsoft Excel to interact with data. In the ADF technology stack, all of these alternative frontends interact with the same model, making it easy to develop multiple frontend applications against the same data services.

Backend

The backend part consists of a business service layer that implements the business logic and provides some way of accessing the underlying data services. Business services can be implemented as API code written in Java, PL/SQL, or other languages, as web services, or using a business service framework such as ADF Business Components. Under the business service layer there will be a data service layer actually storing persistent data.
Typically, this is based on relational tables, but it could also be XML files in a file system or data in other systems accessed through an interface.

ADF architecture

There are many different ways of building applications with Oracle Application Development Framework, but Oracle has chosen a modern SOA-based architecture for Oracle Fusion Applications. This brand new product has been built from the ground up as the successor to Oracle E-Business Suite, Siebel, PeopleSoft, J.D. Edwards, and many other applications Oracle has acquired over the last couple of years. If it is good enough for Oracle Fusion Applications, arguably the biggest enterprise application development effort ever undertaken by mankind, it is probably good enough for you, too. Oracle Fusion Applications use the following parts of the ADF framework:

- ADF Faces Rich Client (ADFv): a very rich set of user interface components implementing advanced functionality in a web application.
- ADF Controller (ADFc): implements the features of a normal JSF controller, extended with the possibility to define modular, reusable page flows. ADFc also allows you to declare transaction boundaries, so one database transaction can span many pages.
- ADF binding layer (ADFm): a standard defining a common backend model that the user interface can communicate with.
- ADF Business Components (ADFbc): a highly productive, declarative way of defining business services based on relational tables.

You can see all of these in the following figure. There are many ways of getting from A to B—this article is about travelling the straight and well-paved road Oracle has built for Fusion Applications. However, other routes might be appropriate in some situations: you could build the user interface as a desktop application using ADF Swing components, you could use ADF for a mobile device, or you could use ADF Desktop Integration to access your data directly from within Microsoft Excel. Your business services could be based on web services, EJBs, or many other technologies, using the ADF binding layer to connect them to the user interface.

Entity objects and associations

Entity objects (EOs) take care of object-relational mapping: making your relational tables available to the application as Java objects. Entity objects are the base that view objects are built on, and all data modifications go through the entity object. You will normally have one entity object for every database table or database view your application uses, and this object is responsible for producing the correct SQL statements to insert, update, or delete rows in the underlying relational tables. Entity objects help you build scalable and well-performing applications by intelligently caching records on the application server in order to minimize the load the application places on the database. Just as entity objects are the middle-tier reflection of database tables and database views, associations are the reflection of foreign key relationships between tables. An association represents a connection between two entity objects and allows ADF to relate data in one entity object with data in another. JDeveloper is normally able to create these automatically by simply inspecting the database, but in case your database does not contain foreign keys, you can build associations by hand to tell ADF about the relationships in your data.
View objects and View Links

While you do not really need to make any major decisions when building the entity objects for the Proof of Concept, you do need to consider the consumers of your business services when you start building view objects—for example, what information you would display on a screen. View objects are typically based on entity objects, and you will be using them for two purposes:

- To provide data for your screens
- To provide data for lists of values (LOVs)

The data-handling view objects are normally specific to each screen or business service. One screen can use multiple view objects—in general, you need to create one view object for each master-detail level you wish to display on your screen. One view object can pull together data from several entity objects, so if you just need to retrieve a reference value from another table, you do not need to create a separate view object for this. The LOV view objects are used for drop-down lists and other selections in your user interface. They will typically be defined as read-only, and because they are reusable, you will define them once and re-use them everywhere you need a drop-down list on a specific data set. View Links are used to define the relationships between the view objects and are typically based on associations (again, often based on foreign keys in the database). The following figure shows an example of two ways to display the data from the familiar EMP and DEPT tables. The left-hand illustration shows a situation where you wish to display a department with all the employees of the department in a master-detail screen; in this case, you create two view objects connected by a view link. The right-hand illustration shows a situation where you wish to display all employees together with the name of the department where they work; in this case, you only need one view object, pulling together data from both the EMP and DEPT tables through the entity objects.

Application modules

Application modules encapsulate the view object instances and business service methods necessary to perform a unit of work. Each application module has its own transactional context and holds its own database connection. This means that all of the work a user performs using view objects from one application module is part of one database transaction. Application modules can have different granularity, but typically, you will have one application module for each major piece of functionality. If your requirements are specified with use cases, there will often be one application module for each major use case. However, multiple use cases can also be grouped together into one application module—indeed, it is possible to build a small application using just one application module.

Application modules for Oracle Forms

If you come from an Oracle Forms background and are developing a replacement for an Oracle Forms application, your application will often have a relatively small number of complex, major Forms, and a larger number of simple data maintenance Forms. You will often create one application module per major Form, and a few application modules that each provide data for a number of simple Forms.

If you wish, you can combine multiple application modules inside one root application module. This is called nesting, and it allows several application modules to participate in the transaction of the root application module. This also saves database connections, because only the root application module needs a connection.
The ADF user interface

The preferred way to build the user interface in an ADF enterprise application is with JavaServer Faces (JSF). JSF is a component-based framework for building web-based user interfaces that overcomes many of the limitations of earlier technologies like JavaServer Pages (JSP). In a JSF application, the user interface does not contain any code; instead, it is built from configurable components from a component library. For your application, you will want to use the sophisticated ADF 11g JSF component library, known as the ADF Faces Rich Client. There are other JSF component libraries—for example, the previous version of the ADF Faces components (version 10g) has been released by Oracle as open source and is now part of the Apache MyFaces Trinidad project. But for a modern enterprise application, use ADF Faces Rich Client.

ADF Task Flows

One of the great improvements in ADF 11g was the addition of ADF Task Flows. It had long been clear to web developers that in a web application, you cannot just let each page decide where to go next—you need the controller from the MVC architecture. Various frameworks and technologies have implemented controllers (both the popular Struts framework and JSF have one), but the controller in ADF Task Flows is the first controller capable of handling large enterprise applications. An ADF web application has one Unbounded Task Flow, where you place all the publicly accessible pages and define the navigation between them. This corresponds to other controller architectures. But ADF also has Bounded Task Flows, which are complete, reusable mini-applications that can be called from the unbounded task flow or from another bounded task flow. A bounded task flow has a well-defined entry point, accepts input parameters, and can deliver an outcome back to the caller. For example, you might build a customer management task flow to handle customer data. In this way, your application can be built in a modular fashion—the developers in charge of implementing each use case can define their own bounded task flow with a well-defined interface for others to call. The team building the customer management task flow is thus free to add new pages or change the navigation flow without affecting the rest of the application.

ADF pages and fragments

In your task flows, you can define either pages or page fragments. Pages are complete web pages that you can run on their own, while page fragments are reusable components that you place inside regions on pages. An enterprise application will often have a small number of pages (possibly only one) and a larger number of page fragments that dynamically replace each other inside a region. This design means that the user does not see the whole browser window redraw itself—only parts of the page change as one fragment is replaced with another. It is this technique that makes an ADF application feel more like a desktop application than a traditional web application. On your pages or page fragments, you add content using layout components, data components, and control components:

- Layout components are containers for other components and control the screen layout. Often, multiple layout components are nested inside each other to achieve the desired layout.
- Data components are the fields, drop-down lists, radio buttons, and so on that the user interacts with to create and modify data.
- Control components are the buttons and links used to perform actions in an ADF application.

Introduction to the Application.cfc Object and Application Variables in ColdFusion 9

Packt
26 Jul 2010
9 min read
(For more resources on ColdFusion, see here.)

Life span

Each of our shared scopes has a life span. They come into existence at a given point, and cease to exist at a predictable point. Learning to recognize these points is very important; it is also the first aspect of "scope". The request scope is created when a request is made to your ColdFusion server from any source. This could be a web browser, or any type of application that can make an HTTP request to your server. Any variable placed into the request structure persists until the request processing is complete. Variable persistence is the property of data remaining available for a set period of time. Without persistence, we would have to carry information along by passing all of it from one web page to another, in every form and in every link. You may have heard people say that web pages are stateless; passing all the information back and forth through the browser would bring pages closer to stateful applications, but it would be difficult to manage. In this article, we will learn how to create a stateful web application. Here is a chart of the "life spans" of the key scopes:

Request
- Begins: when the server receives a request from any source. It is created before any session or application scope.
- Ends: when the processing for this request is complete. Its end has nothing to do with the end of applications or sessions.

Application
- Begins: before a session but after the request—only when an Application.cfc file is first run with the current unique application name, or when the <cfapplication> tag is called in older CF code.
- Ends: when the time since the last request exceeds the expiration time set for the application.

Session
- Begins: after an application is created; it is created by the same sources as the application.
- Ends: when the time since the last request exceeds the expiration time set for the session.

Client
- Begins: when a unique visitor first visits the server.
- Ends: if you want client variables to expire, you can store them in encrypted cookies; cookies have limited storage space and are not always available.

We will be discussing the scopes in more detail later in this article series. All the scopes, except the client scope, expire if you shut down your server. When we close our browser window or reboot the client machine, a session does not come to an end, but our connection to that particular session scope does. The information and resources for storing that session are held until the session expires; when we connect again, the server starts a new session, and we are unable to reconnect to the former one.

Introducing the Application.cfc object

The first thing we need to do is to understand how this application page is called. When a .cfm or .cfc file is requested, the server looks for an Application.cfc file in the directory the web page is being called from. It also looks for an Application.cfm file. We do not create application or session scopes with the .cfm version, because the .cfc version provides many advantages and is much more powerful: it provides better encapsulation and code reuse. If the application file is found, ColdFusion runs it. If the file is not found, ColdFusion moves up one directory towards the server root directory and searches for an Application.cfc file there. The search stops either when a file is found, or when the root directory is reached without finding one. There are several methods in the Application.cfc file.
It is worth noting that this file does not exist by default; the developer must create it. The following table gives the method names and the details as to when each method is called:

- onApplicationEnd: the application ends; the application times out.
- onApplicationStart: the application first starts; the first request for a page is processed, or the first CFC method is invoked by an event gateway instance, a web service, or a Macromedia Flash Remoting CFC.
- onCFCRequest: HTTP or AMF (remote Flash) calls to a CFC are received.
- onError: an exception occurs that is not caught by a try or catch block.
- onMissingTemplate: ColdFusion receives a request for a non-existent page.
- onRequest: the onRequestStart() method finishes (this method can filter request contents).
- onRequestEnd: all pages in the request have been processed.
- onRequestStart: a request starts.
- onSessionEnd: a session ends.
- onSessionStart: a session starts.
- onServerStart: the ColdFusion server starts.

When the Application.cfc file runs for the first time, these methods are called in the order shown in the following diagram. The request variable scope is available at all times, yet to make the code flow correctly, the designers of this object made sure of the order in which the server runs the code. You will also find that, for technical reasons, some issues arise when we use the onRequest() method; therefore, we will not be using that method. The steps in the diagram are explained as follows:

1. Browser Request: The browser sends a request to the server. The server passes processing to the Application.cfc file if it exists, and skips this step if it does not. The Application.cfc methods then execute if they exist.
2. Application Start: Application.cfc checks whether the request belongs to an application that is already running, based on the unique application name. If the application has not started yet, it calls the onApplicationStart() method, if the method exists.
3. Session Start: If the request begins a new session and the onSessionStart() method exists, it is called at this point in the processing.
4. Request Start: On every request to the server, if the onRequestStart() method exists, it is called at this point in the processing.
5. OnRequest: This step normally occurs after the onRequestStart() method. If the onRequest() method is used, then by default it prevents the calling of CFCs. We do not say that it is always wrong to use this method; however, we will avoid it as much as possible.
6. Requested Page: Now the actual page requested is called and processed.
7. Request End: After the requested page is processed, control is passed back to the onRequestEnd() method, if it exists in Application.cfc.
8. Return response to browser: This is the point at which ColdFusion has completed its work of processing information to respond to the browser request. At this point, you could send HTML to the browser, a redirect, or any other response.
9. Session End: The onSessionEnd() method is called, if it exists, but only when the time since the user last made a request to the server is greater than the session timeout.
10. Application End: The onApplicationEnd() method is called, if it exists, when the time since the last request was received by the server is greater than the timeout for the application.

The application and session scopes have already been created on the server and do not need to be reinitialized. Once the application is created and further requests are made to the server, the following methods are called with each request:

- onRequestStart()
- onRequest()
- onRequestEnd()

In previous versions of ColdFusion, calling the onRequest() method of Application.cfc blocked CFCs from operating correctly. You may see some fancy code in older frameworks that checks whether the current request is calling a CFC and, if so, deletes the onRequest() method for that request. Now there is a new method called onCFCRequest(). If you need backwards compatibility with previous versions of ColdFusion, you would delete the onRequest() method; you can use either of these approaches depending on whether you need the code to run on prior versions of ColdFusion. The onCFCRequest() method executes at the same point as the onRequest() method in the previous examples. You can add this code or not, depending on your own preferences; the previous example still operates as expected if you leave the method out.

The OnRequestEnd.cfm page used with Application.cfm-based applications does not execute if the page runs a <cflocation> tag before OnRequestEnd.cfm is reached. It is not part of Application.cfc-based applications and was intended for use with Application.cfm in older versions of ColdFusion.

Here is a representation of the complete process that is less granular. We can see that the application behaves just as it did in the earlier illustration; we just do not go into explicit detail about every method that is called internally. We also see that the requested page can call additional code segments. These code segments can be a CFC, a custom tag, or any included page, and those pages can in turn include other pages, creating a proper hierarchy of functionality. Always try to make sure that functionality is layered, so the separation of layers provides a structured and simpler approach to creation, maintenance, and long-term support of your web applications.

The variables set in Application.cfc can be modified before the requested page is called, and even later. Let us say, for example, you want to record the time at which the session was last hit. You could choose to set a variable, <cfset session._stat.lastHit = now()>. This could be set either at the beginning of a request or at the end. The question is where you would put this code. At first, you might think of adding it to the onSessionStart() or onSessionEnd() method. That would cause an issue, because those two methods are only called when a session begins or ends. You would actually put the code into the onRequestStart() or onRequestEnd() method of the Application.cfc file for it to work properly, because the request methods are called with every server request. We will have a complete Application.cfc to use as a model for creating our own variations in future projects. Remember to place the code in the right place, and test your code using cfdump or some other means to make sure it works when you make changes.
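To tie the life cycle methods together, the skeleton below shows one possible shape for such an Application.cfc. This is only a sketch: the application name and timeout values are placeholders of our own, and the method bodies are filled in throughout the rest of this series:

<cfcomponent output="false">
  <!--- These settings replace the old <cfapplication> tag --->
  <cfset this.name = "MyUniqueAppName">
  <cfset this.applicationTimeout = createTimeSpan(1, 0, 0, 0)>
  <cfset this.sessionManagement = true>
  <cfset this.sessionTimeout = createTimeSpan(0, 0, 30, 0)>

  <cffunction name="onApplicationStart" returnType="boolean" output="false">
    <cfreturn true>
  </cffunction>

  <cffunction name="onSessionStart" returnType="void" output="false">
  </cffunction>

  <cffunction name="onRequestStart" returnType="boolean" output="false">
    <cfargument name="thePage" type="string" required="true">
    <cfreturn true>
  </cffunction>

  <cffunction name="onRequestEnd" returnType="void" output="false">
    <cfargument name="thePage" type="string" required="true">
  </cffunction>

  <cffunction name="onSessionEnd" returnType="void" output="false">
    <cfargument name="sessionScope" type="struct" required="true">
    <cfargument name="applicationScope" type="struct" required="false">
  </cffunction>

  <cffunction name="onApplicationEnd" returnType="void" output="false">
    <cfargument name="applicationScope" required="true">
  </cffunction>
</cfcomponent>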


Application, Session, and Request Scope in ColdFusion 9

Packt
27 Jul 2010
8 min read
(For more resources on ColdFusion, see here.)

The start methods

We will now have a look at the start methods and make some observations. Each method has its own set of arguments, and the start methods return a Boolean value of true or false to declare whether they completed correctly. Any code you place inside a method executes when the corresponding start event occurs—the events match the names of the methods. We will also include some basic code that will help you build an application core that is good for reuse, and discuss what those features provide.

Application start method—onApplicationStart()

The following is the code structure of the application start method. You could actually place these methods in any order in the CFC, as the order does not matter: code that uses CFCs only requires the methods to exist, and if they exist, it will call them. We place them in this order because it helps us read and understand the structure from a human perspective.

<cffunction name="onApplicationStart" output="false">
  <cfscript>
    // create default stat structure and pre-request values
    application._stat = structNew();
    application._stat.started = now();
    application._stat.thisHit = now();
    application._stat.hits = 0;
    application._stat.sessions = 0;
  </cfscript>
</cffunction>

There are no arguments for the onApplicationStart() method. We have included some extra code to show an example of what can be done in this function. Please note that if we change the code in this method, it will only run the very first time a request hits the application on the ColdFusion server. To make it run again, we need to either change the application name or restart the ColdFusion server. The Application variables section explained previously shows how to change the application's name. From the start methods, we can see that we can access the variable scopes that allow persistence of key information. To understand the power of this object, we will be creating some statistics that can be used in most situations—for debugging, logging, or any other appropriate use case. Again, be aware that this method only runs the first time a request is made to the ColdFusion server for this application. We will be updating many of our statistics in the request methods, and we will also be updating one of our variables in the session end method.

Session start method—onSessionStart()

The session start method only gets called when a request starts a new session—it is good that ColdFusion can keep track of these things. The following example code keeps a record of session-based statistics, similar to the application-based statistics:

<cffunction name="onSessionStart" output="false">
  <cfscript>
    // create default session stat structure and pre-request values
    session._stat.started = now();
    session._stat.thisHit = now();
    session._stat.hits = 0;
    // at the start of each session, update the count in the application stats
    application._stat.sessions += 1;
  </cfscript>
</cffunction>

You might have noticed that in the previous code we used +=. In ColdFusion prior to version 8, you had to write that particular line differently. The following two examples are functionally the same (the first works in all versions, the second only in version 8 and higher):

Example 1: myTotal = myTotal + 3
Example 2: myTotal += 3

This syntax is common in JavaScript, ActionScript, and many other languages, and was added in ColdFusion version 8.
Request start method—onRequestStart()

This is one of the longest methods in the article. The first thing you will notice is that ColdFusion passes the path of the requested script to the onRequestStart() method. In this example, we instruct ColdFusion to block execution of any script whose filename begins with an underscore when it is requested remotely. In other words, a browser cannot directly request a .cfm or .cfc page that starts with an underscore, but such files can still be run when called from pages inside the server. This keeps those files locally accessible while protecting them from outside requests:

<cffunction name="onRequestStart" output="false">
  <cfargument name="thePage" type="string" required="true">
  <cfscript>
    var myReturn = true;
    // block remote requests for pages that start with an underscore
    if(left(listLast(arguments.thePage,"/"),1) EQ "_") {
      myReturn = false;
    }
    // update application stat on each request
    application._stat.lastHit = application._stat.thisHit;
    application._stat.thisHit = now();
    application._stat.hits += 1;
    // update session stat on each request
    session._stat.lastHit = session._stat.thisHit;
    session._stat.thisHit = now();
    session._stat.hits += 1;
  </cfscript>
  <cfreturn myReturn>
</cffunction>

The statements after the underscore check update all the application and session statistics variables that need to change with each request. Notice that we also record the last time the application or session was requested.

The end methods

Some of the methods in this object were impossible to achieve with earlier versions of ColdFusion. It was possible to code an end-of-request function, but few programmers made use of it; with this object, many more people take advantage of these features. The new methods make it possible to run code specifically when a session ends and when an application ends. This lets us do things we could not do previously. For example, we can keep a record of how long a user is online without touching the database on every request: store the start time in the session scope when the session starts, and when the session ends, write all that information to a session log table if your site requires logging.

Request end method—onRequestEnd()

We are not going to use every method available to us; since the concept is covered in the other sections, that would be redundant. This method works much like onRequestStart(), except that it runs after the requested page has executed. If you generate content in this method and set the output attribute to true, that content is appended to the response sent back to the browser. Here you can place the code that logs information about each request; a sketch follows the empty skeleton below:

<cffunction name="onRequestEnd" returnType="void" output="false">
  <cfargument name="thePage" type="string" required="true">
</cffunction>
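To make the logging concrete, here is a minimal sketch of what could go in that skeleton. The log file name "requests" and the message format are assumptions for illustration; <cflog> writes to ColdFusion's standard log directory:

<cffunction name="onRequestEnd" returnType="void" output="false">
  <cfargument name="thePage" type="string" required="true">
  <!--- hypothetical request log; the file name "requests" is an assumption --->
  <cflog file="requests" type="information"
         text="#arguments.thePage# completed, session hit #session._stat.hits#">
</cffunction>

Because this runs after the page has finished, the statistics updated in onRequestStart() already reflect the current request.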
Session end method—onSessionEnd()

In the session end method, we can perform logging for analytical statistics specific to the end of a session, if your site calls for it. You must use the arguments scope to read both the application and session variables. If you are changing application variables, as in our example code, then you must use the arguments scope for that as well.

<cffunction name="onSessionEnd" returnType="void" output="false">
  <cfargument name="SessionScope" type="struct" required="true">
  <cfargument name="ApplicationScope" type="struct" required="false">
  <cfscript>
    // NOTE: you must use the arguments scope below to access the
    // application structure inside this method
    arguments.ApplicationScope._stat.sessions -= 1;
  </cfscript>
</cffunction>

Application end method—onApplicationEnd()

This is our end method for applications, and this is where you can do any application-level logging. As in the session end method, you need to use the arguments scope to read the application's variables. Also note that at this point you can no longer access the session scope.

<cffunction name="onApplicationEnd" returnType="void" output="false">
  <cfargument name="applicationScope" required="true">
</cffunction>

On Error method—onError()

The following code demonstrates how we can be flexible in managing errors sent to this method. If the error comes from an Application.cfc method, then the event (the method that had the issue) is contained in the arguments.eventname variable; otherwise, that variable is an empty string. In our code, we change the label on the dump statement so that it is more obvious where the error was generated.

<cffunction name="onError" returnType="void" output="true">
  <cfargument name="exception" required="true">
  <cfargument name="eventname" type="string" required="true">
  <cfif arguments.eventName NEQ "">
    <cfdump var="#arguments.exception#" label="Application core exception">
  <cfelse>
    <cfdump var="#arguments.exception#" label="Application exception">
  </cfif>
</cffunction>
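Dumping the exception to the browser is handy during development, but in production you would more likely record the error instead. As a sketch of that idea, the variation below logs it with <cflog>; the log file name "errors" and the message format are assumptions, and exception.message is the standard message field on ColdFusion exception objects:

<cffunction name="onError" returnType="void" output="false">
  <cfargument name="exception" required="true">
  <cfargument name="eventname" type="string" required="true">
  <!--- hypothetical production handler; file name "errors" is an assumption --->
  <cflog file="errors" type="error"
         text="event=#arguments.eventname# message=#arguments.exception.message#">
</cffunction>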