
How-To Tutorials - Programming

1083 Articles

New Features in JPA 2.0

Packt
28 Jul 2010
9 min read
(For more resources on Java, see here.)

Version 2.0 of the JPA specification introduces some new features that make working with JPA even easier. In the following sections, we discuss some of these new features.

Criteria API

One of the main additions to JPA in the 2.0 specification is the Criteria API, which is meant as a complement to the Java Persistence Query Language (JPQL). Although JPQL is very flexible, it has some problems that make working with it more difficult than necessary. For starters, JPQL queries are stored as strings, and the compiler has no way of validating JPQL syntax. Additionally, JPQL is not type safe: we could write a JPQL query in which our where clause compares a numeric property against a string value, and our code would compile and deploy just fine.

To get around these limitations, the Criteria API was introduced in version 2.0 of the specification. The Criteria API allows us to write JPA queries programmatically, without having to rely on JPQL. The following code example illustrates how to use the Criteria API in our Java EE 6 applications:

```java
package net.ensode.glassfishbook.criteriaapi;

import java.io.IOException;
import java.io.PrintWriter;
import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;
import javax.persistence.TypedQuery;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Path;
import javax.persistence.criteria.Predicate;
import javax.persistence.criteria.Root;
import javax.persistence.metamodel.EntityType;
import javax.persistence.metamodel.Metamodel;
import javax.persistence.metamodel.SingularAttribute;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = {"/criteriaapi"})
public class CriteriaApiDemoServlet extends HttpServlet
{
    @PersistenceUnit(unitName = "customerPersistenceUnit")
    private EntityManagerFactory entityManagerFactory;

    @Override
    protected void doGet(HttpServletRequest request,
            HttpServletResponse response) throws ServletException, IOException
    {
        PrintWriter printWriter = response.getWriter();
        List<UsState> matchingStatesList;

        EntityManager entityManager =
                entityManagerFactory.createEntityManager();
        CriteriaBuilder criteriaBuilder =
                entityManager.getCriteriaBuilder();
        CriteriaQuery<UsState> criteriaQuery =
                criteriaBuilder.createQuery(UsState.class);
        Root<UsState> root = criteriaQuery.from(UsState.class);

        Metamodel metamodel = entityManagerFactory.getMetamodel();
        EntityType<UsState> usStateEntityType =
                metamodel.entity(UsState.class);
        SingularAttribute<UsState, String> usStateAttribute =
                usStateEntityType.getDeclaredSingularAttribute(
                        "usStateNm", String.class);

        Path<String> path = root.get(usStateAttribute);
        Predicate predicate = criteriaBuilder.like(path, "New%");
        criteriaQuery = criteriaQuery.where(predicate);

        TypedQuery<UsState> typedQuery =
                entityManager.createQuery(criteriaQuery);
        matchingStatesList = typedQuery.getResultList();

        response.setContentType("text/html");
        printWriter.println("The following states match the criteria:<br/>");

        for (UsState state : matchingStatesList)
        {
            printWriter.println(state.getUsStateNm() + "<br/>");
        }
    }
}
```

This example takes advantage of the Criteria API.
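For comparison, here is a minimal sketch (not part of the original example) of the equivalent JPQL query, assuming the same UsState entity. Because the query text is an unchecked string, a typo in the property name or a type mismatch in the where clause would only surface at runtime:

```java
// JPQL equivalent of the Criteria query above. The query text is an
// opaque string to the compiler, so errors in it appear only at runtime.
TypedQuery<UsState> jpqlQuery = entityManager.createQuery(
        "SELECT s FROM UsState s WHERE s.usStateNm LIKE 'New%'",
        UsState.class);
List<UsState> newStates = jpqlQuery.getResultList();
```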
When writing code using the Criteria API, the first thing we need to do is obtain an instance of a class implementing the javax.persistence.criteria.CriteriaBuilder interface. As we can see in the previous example, we obtain this instance by invoking the getCriteriaBuilder() method on our EntityManager.

From our CriteriaBuilder implementation, we need to obtain an instance of a class implementing the javax.persistence.criteria.CriteriaQuery interface. We do this by invoking the createQuery() method on our CriteriaBuilder implementation. Notice that CriteriaQuery is generically typed. The generic type argument dictates the type of result that our CriteriaQuery implementation will return upon execution. By taking advantage of generics in this way, the Criteria API allows us to write type-safe code.

Once we have obtained a CriteriaQuery implementation, we can obtain from it an instance of a class implementing the javax.persistence.criteria.Root interface. The Root implementation dictates which JPA entity we will be querying from. It is analogous to the FROM clause in JPQL (and SQL).

The next two lines in our example take advantage of another new addition to the JPA specification: the Metamodel API. In order to take advantage of the Metamodel API, we need to obtain an implementation of the javax.persistence.metamodel.Metamodel interface by invoking the getMetamodel() method on our EntityManagerFactory. From our Metamodel implementation, we can obtain a generically typed instance of the javax.persistence.metamodel.EntityType interface. The generic type argument indicates the JPA entity our EntityType implementation corresponds to. EntityType allows us to browse the persistent attributes of our JPA entities at runtime. This is exactly what we do in the next line of our example. In our case, we are getting an instance of SingularAttribute, which maps to a simple, singular attribute in our JPA entity. EntityType also has methods to obtain attributes that map to collections, sets, lists, and maps. Obtaining these types of attributes is very similar to obtaining a SingularAttribute, so we won't cover them directly; refer to the Java EE 6 API documentation at http://java.sun.com/javaee/6/docs/api/ for more information.

As we can see in our example, SingularAttribute takes two generic type arguments. The first argument dictates the JPA entity we are working with, and the second indicates the type of the attribute. We obtain our SingularAttribute by invoking the getDeclaredSingularAttribute() method on our EntityType implementation and passing the attribute name (as declared in our JPA entity) as a String.

Once we have obtained our SingularAttribute implementation, we need to obtain a javax.persistence.criteria.Path implementation by invoking the get() method on our Root instance and passing our SingularAttribute as a parameter.

In our example, we will get a list of all the "new" states in the United States (that is, all states whose names start with "New"). Of course, this is the job of a "like" condition. We can express this with the Criteria API by invoking the like() method on our CriteriaBuilder implementation. The like() method takes our Path implementation as its first parameter and the value to search for as its second parameter.
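As a side note not covered in the original text: the JPA 2.0 specification also defines a canonical metamodel, in which an annotation processor generates a companion class per entity (here it would be a generated class named UsState_) holding one static attribute field per persistent property. Where such a class is available, the string-based attribute lookup above can be replaced with a fully compile-time-checked reference, sketched below:

```java
// Assumes a generated canonical metamodel class UsState_ containing:
//   public static volatile SingularAttribute<UsState, String> usStateNm;
// The attribute reference is then checked by the compiler; no string
// name is involved, so a typo becomes a compilation error.
Path<String> statePath = root.get(UsState_.usStateNm);
Predicate newStatesPredicate = criteriaBuilder.like(statePath, "New%");
```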
CriteriaBuilder has a number of methods that are analogous to SQL and JPQL clauses, such as equal(), greaterThan(), lessThan(), and(), or(), and so on (for the complete list, refer to the Java EE 6 documentation at http://java.sun.com/javaee/6/docs/api/). These methods can be combined to create complex queries via the Criteria API.

The like() method in CriteriaBuilder returns an implementation of the javax.persistence.criteria.Predicate interface, which we need to pass to the where() method of our CriteriaQuery implementation. This method returns a new instance of CriteriaQuery, which we assign back to our criteriaQuery variable.

At this point, we are ready to build our query. When working with the Criteria API, we deal with the javax.persistence.TypedQuery interface, which can be thought of as a type-safe version of the Query interface we use with JPQL. We obtain an instance of TypedQuery by invoking the createQuery() method on EntityManager and passing our CriteriaQuery implementation as a parameter. To obtain our query results as a list, we simply invoke getResultList() on our TypedQuery implementation. It is worth reiterating that the Criteria API is type safe: attempting to assign the results of getResultList() to a list of the wrong type would result in a compilation error.

After building, packaging, and deploying our code, then pointing the browser to our servlet's URL, we should see all the "New" states displayed in the browser.

Bean Validation support

Another new feature introduced in JPA 2.0 is support for JSR 303, Bean Validation. Bean Validation support allows us to annotate our JPA entities with Bean Validation annotations, which let us easily validate user input and perform data sanitation. Taking advantage of Bean Validation is very simple: all we need to do is annotate our JPA entity fields or getter methods with any of the validation annotations defined in the javax.validation.constraints package. Once our fields are annotated appropriately, the EntityManager will prevent non-validating data from being persisted.

The following code example is a modified version of the Customer JPA entity. It has been modified to take advantage of Bean Validation in some of its fields.

```java
package net.ensode.glassfishbook.jpa.beanvalidation;

import java.io.Serializable;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

@Entity
@Table(name = "CUSTOMERS")
public class Customer implements Serializable
{
    @Id
    @Column(name = "CUSTOMER_ID")
    private Long customerId;

    @Column(name = "FIRST_NAME")
    @NotNull
    @Size(min = 2, max = 20)
    private String firstName;

    @Column(name = "LAST_NAME")
    @NotNull
    @Size(min = 2, max = 20)
    private String lastName;

    private String email;

    public Long getCustomerId()
    {
        return customerId;
    }

    public void setCustomerId(Long customerId)
    {
        this.customerId = customerId;
    }

    public String getEmail()
    {
        return email;
    }

    public void setEmail(String email)
    {
        this.email = email;
    }

    public String getFirstName()
    {
        return firstName;
    }

    public void setFirstName(String firstName)
    {
        this.firstName = firstName;
    }

    public String getLastName()
    {
        return lastName;
    }

    public void setLastName(String lastName)
    {
        this.lastName = lastName;
    }
}
```

In this example, we used the @NotNull annotation to prevent the firstName and lastName fields of our entity from being persisted with null values. We also used the @Size annotation to restrict the minimum and maximum length of these fields.
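The following minimal sketch (not part of the original article) shows how a violation surfaces when we try to persist an invalid Customer. It assumes standalone, resource-local transaction handling; depending on the persistence provider, the exception may surface on the persist() call or at flush/commit time:

```java
// Assumes an EntityManager obtained as in the earlier servlet example.
Customer customer = new Customer();
customer.setCustomerId(1L);
customer.setFirstName("X"); // violates @Size(min = 2, max = 20);
                            // lastName is left null, violating @NotNull

try
{
    entityManager.getTransaction().begin();
    entityManager.persist(customer); // validation runs before persisting
    entityManager.getTransaction().commit();
}
catch (javax.validation.ConstraintViolationException e)
{
    // The entity is not persisted; each violation describes one failure.
    for (javax.validation.ConstraintViolation<?> violation
            : e.getConstraintViolations())
    {
        System.out.println(violation.getPropertyPath() + ": "
                + violation.getMessage());
    }
}
```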
This is all we need to do to take advantage of Bean Validation in JPA. If our code attempts to persist or update an instance of our entity that does not pass the declared validation, an exception of type javax.validation.ConstraintViolationException is thrown and the entity is not persisted.

As we can see, Bean Validation pretty much automates data validation, freeing us from having to write validation code by hand. In addition to the two annotations discussed in the previous example, the javax.validation.constraints package contains several additional annotations that we can use to automate validation on our JPA entities. Refer to the Java EE 6 API documentation at http://java.sun.com/javaee/6/docs/api/ for the complete list.

Summary

In this article, we discussed some new JPA 2.0 features: the Criteria API, which allows us to build JPA queries programmatically; the Metamodel API, which allows us to take advantage of Java's type safety when working with JPA; and Bean Validation, which allows us to easily validate input by simply annotating our JPA entity fields.

Further resources on this subject: Interacting with Databases through the Java Persistence API [article]; Setting up GlassFish for JMS and Working with Message Queues [article]


Application, Session, and Request Scope in ColdFusion 9

Packt
27 Jul 2010
8 min read
(For more resources on ColdFusion, see here.)

The start methods

We will now take a look at the start methods and make some observations. Each method has its own set of arguments. All Application.cfc methods return a Boolean value of true or false to declare whether they completed correctly. Any code you place inside a method executes when the corresponding start event occurs; the events match the names of the methods. We will also include some basic code that will help you build an application core that is good for reuse, and discuss the features it provides.

Application start method—onApplicationStart()

The following is the code structure of the application start method. You could place these methods in any order in the CFC; the order does not matter. Code that uses CFCs only requires the methods to exist: if they exist, they will be called. We order them in our code to make the structure easier to read and understand from a human perspective.

```cfml
<cffunction name="onApplicationStart" output="false">
    <cfscript>
        // create default stat structure and pre-request values
        application._stat = structNew();
        application._stat.started = now();
        application._stat.thisHit = now();
        application._stat.hits = 0;
        application._stat.sessions = 0;
    </cfscript>
</cffunction>
```

There are no arguments for the onApplicationStart() method. We have included some extra code to show an example of what can be done in this function. Please note that if we change the code in this method, it will only run the very first time an application running in ColdFusion is hit. To hit it again, we need to either change the application name or restart the ColdFusion server. The Application variables section explained previously shows how to change the application's name.

From the start methods, we can see that we can access the variable scopes that allow persistence of key information. To understand the power of this object, we will create some statistics that can be used in most situations: for debugging, logging, or any other appropriate use case. Again, be aware that this method is only hit the first time a request is made to a ColdFusion server for that application. We will be updating many of our statistics in the request methods, and one of our variables in the session end method.

Session start method—onSessionStart()

The session start method only gets called when a request is made for a new session. It is good that ColdFusion can keep track of these things. The following example code keeps a record of session-based statistics similar to the application-based statistics:

```cfml
<cffunction name="onSessionStart" output="false">
    <cfscript>
        // create default session stat structure and pre-request values
        session._stat.started = now();
        session._stat.thisHit = now();
        session._stat.hits = 0;
        // at the start of each session, update the application stat count
        application._stat.sessions += 1;
    </cfscript>
</cffunction>
```

You might have noticed that the previous code uses +=. In ColdFusion prior to version 8, that particular line had to be typed in a different way. The following two examples are functionally the same (example one works in all versions; example two works only in version 8 and higher):

Example 1: myTotal = myTotal + 3
Example 2: myTotal += 3

This syntax is common in JavaScript, ActionScript, and many other languages; it was added in ColdFusion version 8.
We update the application-based count because sessions are hidden from one another and cannot see each other. Therefore, we use the application scope to add to the count every time a new session starts.

Request start method—onRequestStart()

This is one of the longest methods in the article. The first thing you will notice is that the path of the requested script is passed to the onRequestStart() method by ColdFusion. In this example, we instruct ColdFusion to block remote execution of any script whose filename begins with an underscore: a .cfm or .cfc file starting with an underscore cannot be requested from outside the local server, but such files can still be run when called from pages inside the server. This makes all these files locally accessible:

```cfml
<cffunction name="onRequestStart" output="false">
    <cfargument name="thePage" type="string" required="true">
    <cfscript>
        var myReturn = true;
        // block pages whose filenames start with an underscore
        if(left(listLast(arguments.thePage,"/"),1) EQ "_")
        {
            myReturn = false;
        }
        // update application stats on each request
        application._stat.lastHit = application._stat.thisHit;
        application._stat.thisHit = now();
        application._stat.hits += 1;
        // update session stats on each request
        session._stat.lastHit = session._stat.thisHit;
        session._stat.thisHit = now();
        session._stat.hits += 1;
    </cfscript>
    <cfreturn myReturn>
</cffunction>
```

This method updates all the application and session statistics variables that need to be refreshed with each request. Notice that we also record the last time the application or session was requested.

The end methods

Some of the methods in this object were impossible to achieve with earlier versions of ColdFusion. It was possible to code an end-request function, but only a few programmers made use of it; with this object, many more people are taking advantage of these features. The new methods have the ability to run code specifically when a session ends and when an application ends. This allows us to do things we could not do previously. For example, we can keep a record of how long a user is online without having to access the database on each request: when the session starts, store the start time in the session scope; when the session ends, take that information and store it in a session log table, if logging is desired on your site.

Request end method—onRequestEnd()

We are not going to use every method that is available to us; as we have already covered the concepts in the other sections, that would be redundant. This method is very similar to the onRequestStart() method, except that it runs after the requested page has been processed. If you create content in this method and set the output attribute to true, it will be sent back to the browser. Here you can place code that logs information about our requests:

```cfml
<cffunction name="onRequestEnd" returnType="void" output="false">
    <cfargument name="thePage" type="string" required="true">
</cffunction>
```

Session end method—onSessionEnd()

In the session end method, we can perform logging functions for analytical statistics that are specific to the end of a session, if desired for your site. You need to use the arguments scope to read both the application and session variables.
If you are changing application variables, as in our example code, then you must use the arguments scope for that:

```cfml
<cffunction name="onSessionEnd" returnType="void" output="false">
    <cfargument name="SessionScope" type="struct" required="true">
    <cfargument name="ApplicationScope" type="struct" required="false">
    <cfscript>
        // NOTE: You must use the arguments scope below to access the
        // application structure inside this method.
        arguments.ApplicationScope._stat.sessions -= 1;
    </cfscript>
</cffunction>
```

Application end method—onApplicationEnd()

This is our end method for applications, and here is where you can do any logging activity. As in the session method, you need to use the arguments scope in order to read the application variables. It is also good to note that at this point, you can no longer access the session scope.

```cfml
<cffunction name="onApplicationEnd" returnType="void" output="false">
    <cfargument name="applicationScope" required="true">
</cffunction>
```

On Error method—onError()

The following code demonstrates how we can be flexible in managing errors sent to this method. If the error comes from Application.cfc itself, then the event (the method that had an issue) will be contained in the arguments.eventname variable; otherwise, that variable will be an empty string. In our code, we change the label on our dump statement so that it is more obvious where the error was generated.

```cfml
<cffunction name="onError" returnType="void" output="true">
    <cfargument name="exception" required="true">
    <cfargument name="eventname" type="string" required="true">
    <cfif arguments.eventName NEQ "">
        <cfdump var="#arguments.exception#" label="Application core exception">
    <cfelse>
        <cfdump var="#arguments.exception#" label="Application exception">
    </cfif>
</cffunction>
```
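To inspect the statistics structures these methods maintain, a minimal test page such as the following sketch (not part of the original article; the filename is hypothetical) can be dropped anywhere under the application root:

```cfml
<!--- stats.cfm: a test page that dumps the statistics structures
      maintained by the Application.cfc methods shown above. --->
<cfdump var="#application._stat#" label="Application statistics">
<cfdump var="#session._stat#" label="Session statistics">
```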


Objects in Python

Packt
27 Jul 2010
15 min read
(For more resources on Python 3, see here.)

Creating Python classes

We don't have to write much Python code to realize that Python is a very "clean" language. When we want to do something, we just do it, without having to go through a lot of setup. The ubiquitous "hello world" in Python, as you've likely seen, is only one line. Similarly, the simplest class in Python 3 looks like this:

```python
class MyFirstClass:
    pass
```

There's our first object-oriented program! The class definition starts with the class keyword, followed by a name (of our choice) identifying the class, and is terminated with a colon. The class name must follow standard Python variable naming rules (it must start with a letter or underscore, and can only be comprised of letters, underscores, or numbers). In addition, the Python style guide (search the web for "PEP 8") recommends that classes should be named using CamelCase notation (start with a capital letter; any subsequent words should also start with a capital).

The class definition line is followed by the class contents, indented. As with other Python constructs, indentation is used to delimit the classes, rather than the braces or brackets many other languages use. Use four spaces for indentation unless you have a compelling reason not to (such as fitting in with somebody else's code that uses tabs for indents). Any decent programming editor can be configured to insert four spaces whenever the Tab key is pressed.

Since our first class doesn't actually do anything, we simply use the pass keyword on the second line to indicate that no further action needs to be taken. We might think there isn't much we can do with this most basic class, but it does allow us to instantiate objects of that class. We can load the class into the Python 3 interpreter so we can play with it interactively. To do this, save the class definition mentioned earlier into a file named first_class.py and then run the command python -i first_class.py. The -i argument tells Python to "run the code and then drop to the interactive interpreter". The following interpreter session demonstrates basic interaction with this class:

```python
>>> a = MyFirstClass()
>>> b = MyFirstClass()
>>> print(a)
<__main__.MyFirstClass object at 0xb7b7faec>
>>> print(b)
<__main__.MyFirstClass object at 0xb7b7fbac>
>>>
```

This code instantiates two objects from the new class, named a and b. Creating an instance of a class is a simple matter of typing the class name followed by a pair of parentheses. It looks much like a normal function call, but Python knows we're "calling" a class and not a function, so it understands that its job is to create a new object. When printed, the two objects tell us what class they are and what memory address they live at. Memory addresses aren't used much in Python code, but here they demonstrate that two distinctly different objects are involved.

Adding attributes

Now we have a basic class, but it's fairly useless. It doesn't contain any data, and it doesn't do anything. What do we have to do to assign an attribute to a given object? It turns out that we don't have to do anything special in the class definition. We can set arbitrary attributes on an instantiated object using dot notation:

```python
class Point:
    pass

p1 = Point()
p2 = Point()

p1.x = 5
p1.y = 4

p2.x = 3
p2.y = 6

print(p1.x, p1.y)
print(p2.x, p2.y)
```

If we run this code, the two print statements at the end tell us the new attribute values on the two objects:

5 4
3 6

This code creates an empty Point class with no data or behaviors.
Then it creates two instances of that class and assigns each of those instances x and y coordinates to identify a point in two dimensions. All we need to do to assign a value to an attribute on an object is use the syntax <object>.<attribute> = <value>. This is sometimes referred to as dot notation. The value can be anything: a Python primitive, a built-in data type, or another object. It can even be a function or another class!

Making it do something

Now, having objects with attributes is great, but object-oriented programming is really about the interaction between objects. We're interested in invoking actions that cause things to happen to those attributes. It is time to add behaviors to our classes. Let's model a couple of actions on our Point class. We can start with a method called reset that moves the point to the origin (the origin is the point where x and y are both zero). This is a good introductory action because it doesn't require any parameters:

```python
class Point:
    def reset(self):
        self.x = 0
        self.y = 0

p = Point()
p.reset()
print(p.x, p.y)
```

That print statement shows us the two zeros on the attributes:

0 0

A method in Python is defined identically to a function. It starts with the keyword def followed by a space and the name of the method. This is followed by a set of parentheses containing the parameter list (we'll discuss the self parameter in just a moment), and terminated with a colon. The next line is indented to contain the statements inside the method. These statements can be arbitrary Python code operating on the object itself and any parameters passed in, as the method sees fit.

The one difference between methods and normal functions is that all methods have one required argument. This argument is conventionally named self; I've never seen a programmer use any other name for this variable (convention is a very powerful thing). There's nothing stopping you, however, from calling it this or even Martha.

The self argument to a method is simply a reference to the object that the method is being invoked on. We can access attributes and methods of that object as if it were any other object. This is exactly what we do inside the reset method when we set the x and y attributes of the self object.

Notice that when we call the p.reset() method, we do not have to pass the self argument into it. Python automatically takes care of this for us. It knows we're calling a method on the p object, so it automatically passes that object to the method. However, the method really is just a function that happens to be on a class. Instead of calling the method on the object, we could invoke the function on the class, explicitly passing our object as the self argument:

```python
p = Point()
Point.reset(p)
print(p.x, p.y)
```

The output is the same as the previous example because, internally, the exact same process has occurred.

What happens if we forget to include the self argument in our class definition? Python will bail with an error message:

```python
>>> class Point:
...     def reset():
...         pass
...
>>> p = Point()
>>> p.reset()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: reset() takes no arguments (1 given)
```

The error message is not as clear as it could be ("You silly fool, you forgot the self argument" would be more informative). Just remember that when you see an error message indicating missing arguments, the first thing to check is whether you forgot self in the method definition.

So how do we pass multiple arguments to a method?
Let's add a new method that allows us to move a point to an arbitrary position, not just the origin. We can also include one that accepts another Point object as input and returns the distance between them:

```python
import math

class Point:
    def move(self, x, y):
        self.x = x
        self.y = y

    def reset(self):
        self.move(0, 0)

    def calculate_distance(self, other_point):
        return math.sqrt((self.x - other_point.x)**2 +
                         (self.y - other_point.y)**2)

# how to use it:
point1 = Point()
point2 = Point()

point1.reset()
point2.move(5, 0)
print(point2.calculate_distance(point1))
assert (point2.calculate_distance(point1) ==
        point1.calculate_distance(point2))
point1.move(3, 4)
print(point1.calculate_distance(point2))
print(point1.calculate_distance(point1))
```

The print statements at the end give us the following output:

5.0
4.472135955
0.0

A lot has happened here. The class now has three methods. The move method accepts two arguments, x and y, and sets the values on the self object, much like the old reset method from the previous example. The old reset method now calls move, since a reset is just a move to a specific known location. The calculate_distance method uses the not-too-complex Pythagorean Theorem to calculate the distance between two points. I hope you understand the math (** means squared, and math.sqrt calculates a square root), but it's not a requirement for our current focus: learning how to write methods.

The example code at the end shows how to call a method with arguments: simply include the arguments inside the parentheses, and use the same dot notation to access the method. I just picked some random positions to test the methods. The test code calls each method and prints the results on the console. The assert statement is a simple test tool; the program will bail if the expression after assert is False (or zero, empty, or None). In this case, we use it to ensure that the distance is the same regardless of which point called the other point's calculate_distance method.

Initializing the object

If we don't explicitly set the x and y positions on our Point object, either using move or by accessing them directly, we have a broken point with no real position. What will happen when we try to access it? Well, let's just try it and see. "Try it and see" is an extremely useful tool for Python study. Open up your interactive interpreter and type away. The following interactive session shows what happens if we try to access a missing attribute. If you saved the previous example as a file or are using the examples distributed with this article, you can load it into the Python interpreter with the command python -i filename.py.

```python
>>> point = Point()
>>> point.x = 5
>>> print(point.x)
5
>>> print(point.y)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'Point' object has no attribute 'y'
```

Well, at least it threw a useful exception. You've probably seen exceptions before (especially the ubiquitous SyntaxError, which means you typed something incorrectly!). At this point, simply be aware that it means something went wrong. The output is useful for debugging. In the interactive interpreter, it tells us the error occurred at line 1, which is only partially true (in an interactive session, only one line is executed at a time). If we were running a script in a file, it would tell us the exact line number, making it easy to find the offending code. In addition, it tells us the error is an AttributeError, and gives a helpful message telling us what that error means.
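As a minimal sketch (not from the original text), here is one way such an error could be caught at runtime, using the same Point class:

```python
# Catch the missing-attribute error described above and fall back
# to a default value instead of crashing.
point = Point()
point.x = 5

try:
    print(point.y)
except AttributeError as error:
    print("Recovered from:", error)
    point.y = 0  # supply a default position
```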
We can catch and recover from this error, as in the sketch above, but in this case, it feels like we should have specified some sort of default value. Perhaps every new object should be reset() by default, or maybe it would be nice if we could force the user to tell us what those positions should be when they create the object.

Most object-oriented programming languages have the concept of a constructor, a special method that creates and initializes the object when it is created. Python is a little different; it has a constructor and an initializer. The constructor function is rarely used unless you're doing something exotic, so we'll start our discussion with the initialization method.

The Python initialization method is the same as any other method, except it has a special name: __init__. The leading and trailing double underscores mean, "this is a special method that the Python interpreter will treat as a special case". Never name a function of your own with leading and trailing double underscores. It may mean nothing to Python today, but there's always the possibility that the designers of Python will add a function with that name and a special purpose in the future, and when they do, your code will break.

Let's start with an initialization function on our Point class that requires the user to supply x and y coordinates when the Point object is instantiated:

```python
class Point:
    def __init__(self, x, y):
        self.move(x, y)

    def move(self, x, y):
        self.x = x
        self.y = y

    def reset(self):
        self.move(0, 0)

# Constructing a Point
point = Point(3, 5)
print(point.x, point.y)
```

Now, our point can never go without a y coordinate! If we try to construct a point without including the proper initialization parameters, it will fail with a "not enough arguments" error similar to the one we received earlier when we forgot the self argument.

What if we don't want to make those two arguments required? Well then, we can use the same syntax Python functions use to provide default arguments. The keyword argument syntax appends an equals sign after each variable name. If the calling object does not provide that argument, then the default argument is used instead; the variables will still be available to the function, but they will have the values specified in the argument list. Here's an example:

```python
class Point:
    def __init__(self, x=0, y=0):
        self.move(x, y)
```

Most of the time, we put our initialization statements in an __init__ function. But as mentioned earlier, Python has a constructor in addition to its initialization function. You may never need to use the other Python constructor, but it helps to know it exists, so we'll cover it briefly. The constructor function is called __new__ as opposed to __init__, and accepts exactly one required argument: the class that is being constructed (it is called before the object is constructed, so there is no self argument). It also has to return the newly created object. This has interesting possibilities when it comes to the complicated art of meta-programming, but is not very useful in day-to-day programming. In practice, you will rarely, if ever, need to use __new__; __init__ will be sufficient.

Explaining yourself

Python is an extremely easy-to-read programming language; some might say it is self-documenting. However, when doing object-oriented programming, it is important to write API documentation that clearly summarizes what each object and method does. Keeping documentation up to date is difficult; the best way to do it is to write it right into our code.
Python supports this through the use of docstrings. Each class, function, or method header can have a standard Python string as the first line following the definition (the line that ends in a colon). This line should be indented the same as the code that follows it.

Docstrings are simply Python strings enclosed in apostrophe (') or quote (") characters. Often, docstrings are quite long and span multiple lines (the style guide suggests that line length should not exceed 80 characters), and these can be formatted as multi-line strings, enclosed in matching triple apostrophe (''') or triple quote (""") characters.

A docstring should clearly and concisely summarize the purpose of the class or method it is describing. It should explain any parameters whose usage is not immediately obvious, and it is also a good place to include short examples of how to use the API. Any caveats or problems an unsuspecting user of the API should be aware of should also be noted.

To illustrate the use of docstrings, we will end this part with our completely documented Point class:

```python
import math

class Point:
    'Represents a point in two-dimensional geometric coordinates'

    def __init__(self, x=0, y=0):
        '''Initialize the position of a new point. The x and y
        coordinates can be specified. If they are not, the point
        defaults to the origin.'''
        self.move(x, y)

    def move(self, x, y):
        "Move the point to a new location in two-dimensional space."
        self.x = x
        self.y = y

    def reset(self):
        'Reset the point back to the geometric origin: 0, 0'
        self.move(0, 0)

    def calculate_distance(self, other_point):
        """Calculate the distance from this point to a second point
        passed as a parameter.

        This function uses the Pythagorean Theorem to calculate the
        distance between the two points. The distance is returned as
        a float."""
        return math.sqrt(
            (self.x - other_point.x)**2 +
            (self.y - other_point.y)**2)
```

Try typing or loading (remember, it's python -i filename.py) this file into the interactive interpreter. Then enter help(Point)<enter> at the Python prompt. You should see nicely formatted documentation for the class.


Introduction to the Application.cfc Object and Application Variables in ColdFusion 9

Packt
26 Jul 2010
9 min read
(For more resources on ColdFusion, see here.)

Life span

Each of our shared scopes has a life span. They come into existence at a given point and cease to exist at a predictable point. Learning to recognize these points is very important; it is also the first aspect of "scope".

The request scope is created when a request is made to your ColdFusion server from any source. This could be a web browser, or any type of web application that can make an HTTP request to your server. Any variable placed into the request structure persists until the request processing is complete. Variable persistence is the property of data remaining available for a set period of time. Without persistence, we would have to make information last by passing all of it from one web page to another, in all forms and in all links. You may have heard people say that web pages are stateless. If you passed all the information through the browser this way, pages would be closer to stateful applications, but they would be difficult to manage. In this article, we will learn how to create a stateful web application.

Here is a chart of the "life spans" of the key scopes:

| Scope | Begins | Ends |
| --- | --- | --- |
| Request | When a server receives a request from any source. Created before any session or application is created. | When the processing for this request is complete. Ending has nothing to do with the end of applications or sessions. |
| Application | Before a session but after the request. Begins only when an Application.cfc file is first run with the current unique application name, or when the <cfapplication> tag is called in older CF applications. | When the amount of time since the last request is greater than the expiration time set for the application. |
| Session | After an application is created. Created inside the same sources as the application. | When the amount of time since the last request is greater than the expiration time set for the session. |
| Client | When a unique visitor first visits the server. | If you want them to expire, you can store client variables in encrypted cookies. Cookies have limited storage space and are not always available. |

We will be discussing the scopes in more detail later in this article series. All the scopes, except the client scope, expire if you shut down your server. When we close our browser window or reboot the client machine, a session does not come to an end, but our connectivity to that particular session scope ends. The information and resources storing that session are held until the session expires; after that, the server starts a new session when we connect, and we are unable to reconnect to the former session.

Introducing the Application.cfc object

The first thing we need to do is understand how this application page is called. When a .cfm or .cfc file is requested, the server looks for an Application.cfc file in the directory the web page is being called from. It also looks for a legacy Application.cfm file; we do not create application or session scopes with the .cfm version, because the .cfc version provides many advantages and is much more powerful, with better encapsulation and code reuse. If the application file is found, ColdFusion runs it. If the file is not found, the server moves up one directory toward the server root to search for an Application.cfc file. The search stops either when a file is found, or when the root directory is reached without finding one.

There are several methods in the Application.cfc file.
It is worth noting that this file does not exist by default; the developer must create it. The following table gives the method names and details of when each method is called:

| Method name | When the method is called |
| --- | --- |
| onApplicationEnd | The application ends; the application times out. |
| onApplicationStart | The application first starts: the first request for a page is processed, or the first CFC method is invoked by an event gateway instance, a web service, or Macromedia Flash Remoting CFC. |
| onCFCRequest | HTTP or AMF (remote Flash) calls are made. |
| onError | An exception occurs that is not caught by a try/catch block. |
| onMissingTemplate | ColdFusion receives a request for a non-existent page. |
| onRequest | The onRequestStart() method finishes (this method can filter request contents). |
| onRequestEnd | All pages in the request have been processed. |
| onRequestStart | A request starts. |
| onSessionEnd | A session ends. |
| onSessionStart | A session starts. |
| onServerStart | A ColdFusion server starts. |

When the Application.cfc file runs for the first time, these methods are called in the order described below. The request variable scope is available at all times; yet, to make the code flow right, the designers of this object made sure of the order in which the server runs the code. You will also find that, for technical reasons, some issues arise when we use the onRequest() method; therefore, we will not be using that method. The steps are as follows:

1. Browser Request: The browser sends a request to the server. The server passes processing to the Application.cfc file if it exists, and skips this step if it does not. The methods of Application.cfc execute if they exist. The first method is onApplicationStart(), which executes on the basis of the application name: if the uniquely named application is not currently running on the server, this method is called.
2. Application Start: Application.cfc checks whether the request is for a pre-existing application. If the request is to an application that has not started, it calls the onApplicationStart() method, if the method exists.
3. Session Start: On a request that begins a new session, if the onSessionStart() method exists, it is called at this particular point in the processing.
4. Request Start: On every request to the server, if the onRequestStart() method exists, it is called at this particular point in the processing.
5. OnRequest: This step normally occurs after the onRequestStart() method. If the onRequest() method is used, by default it prevents the calling of CFCs. We do not say that it is always wrong to use this method; however, we will avoid it as much as possible.
6. Requested Page: The actual page requested is called and processed.
7. Request End: After the requested page is processed, control is passed back to the onRequestEnd() method, if it exists in Application.cfc.
8. Return Response to Browser: At this point, ColdFusion has completed its work of processing information to respond to the browser request. You could send HTML to the browser, a redirect, or any other response.
9. Session End: The onSessionEnd() method is called, if it exists, when the time since the user last made a request to the server exceeds the session timeout.
10. Application End: The onApplicationEnd() method is called, if it exists, when the time since the last request received by the server exceeds the application timeout.

Once the application and session scopes have been created on the server, they do not need to be reinitialized. After the application is created, the following methods are called with each request:

- onRequestStart()
- onRequest()
- onRequestEnd()

In previous versions of ColdFusion, when the onRequest() method of Application.cfc was called, it blocked CFCs from operating correctly. You may see some fancy code in older frameworks that checks whether the current request is calling a CFC and then deletes the onRequest() method for that request. Now there is a new method called onCFCRequest(), which executes at the same point as the onRequest() method in the previous examples. If you need backward compatibility with previous versions of ColdFusion, you would delete the onRequest() method. You can use either approach, depending on whether the code must run on prior versions of ColdFusion, and you can add this code or not depending on your own preferences; the previous example still operates as expected if you leave the method out.

The OnRequestEnd.cfm page used with Application.cfm-based page calls does not execute if the page runs a <cflocation> tag before OnRequestEnd.cfm is reached. It is not part of Application.cfc-based applications and was intended for use with Application.cfm in older versions of ColdFusion.

Here is a representation of the complete process that is less granular. The application behaves just as it did in the earlier illustration; we simply do not go into explicit detail about every method that is called internally. We also see that the requested page can call additional code segments: a CFC, a custom tag, or any included page. Those pages can also include other pages, creating a proper hierarchy of functionality. Always try to make sure that functionality is layered, so the separation of layers provides a structured and simpler approach to creation, maintenance, and long-term support for your web applications.

The variables set in Application.cfc can be modified before the requested page is called, and even later. Let us say, for example, that you want to record the time at which the session was last hit. You could choose to set a variable such as <cfset session._stat.lasthit = now()>. This could be set either at the beginning of a request, or at the end. The question is where you would put this code. At first, you might think of adding it to the onSessionStart() or onSessionEnd() method. This would cause an issue, because those two methods are only called when a session begins or ends. Instead, you would put the code into the onRequestStart() or onRequestEnd() section of the Application.cfc file, because the request methods are called with every server request. We will have a complete Application.cfc to use as a model for creating our own variations in future projects. Remember to place the code in the right place, and test your code using CFDump or some other means to make sure it works when you make changes.
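As a reference, here is a minimal sketch of an Application.cfc shell along the lines discussed above; the application name, timeout values, and statistics code are illustrative assumptions, not values from the original article:

```cfml
<!--- Application.cfc: a minimal, hypothetical shell for the
      lifecycle methods described in this article. --->
<cfcomponent output="false">
    <cfset this.name = "myUniqueAppName">
    <cfset this.applicationTimeout = createTimeSpan(1, 0, 0, 0)>
    <cfset this.sessionManagement = true>
    <cfset this.sessionTimeout = createTimeSpan(0, 0, 30, 0)>

    <cffunction name="onApplicationStart" output="false">
        <cfset application._stat = structNew()>
        <cfreturn true>
    </cffunction>

    <cffunction name="onSessionStart" output="false">
        <cfset session._stat = structNew()>
    </cffunction>

    <cffunction name="onRequestStart" output="false">
        <cfargument name="thePage" type="string" required="true">
        <!--- request methods run on every request, so the
              last-hit time is recorded here --->
        <cfset session._stat.lasthit = now()>
        <cfreturn true>
    </cffunction>
</cfcomponent>
```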


Building Your Hello Mediation Project using IBM WebSphere Process Server 7 and Enterprise Service Bus 7

Packt
13 Jul 2010
7 min read
This article will help you build your first HelloWorld WebSphere Enterprise Service Bus (WESB) mediation application and WESB mediation flow-based application. It will then give an overview of Service Message Object (SMO) standards and how they relate to building applications using an SOA approach. At the completion of this article, you will have an understanding of how to begin building, testing, and deploying WESB mediations, including:

- An overview of WESB-based programming fundamentals, including WS-* standards and Service Message Objects (SMOs)
- Building the first mediation module project in WID
- Using mediation flows
- Deploying the module on a server and testing the project
- Logging, debugging, and troubleshooting basics
- Exporting projects from WID

WS standards

Before we get down to discussing mediation flows, it is essential to take a moment and acquaint ourselves with some of the Web Service (WS) standards that WebSphere Integration Developer (WID) complies with. By using WID's user-friendly visual interfaces and drag-and-drop paradigms, developers automatically build process flows that are globally standardized and compliant. This becomes critical in an ever-changing business environment that demands flexible integration with business partners. Here are some of the key specifications you should be aware of, as defined by the World Wide Web Consortium (W3C):

- WS-Security: One of the standards used to secure Web Services at the message level, independent of the transport protocol. To learn more about WID and WS-Security, refer to http://publib.boulder.ibm.com/infocenter/dmndhelp/v7r0mx/topic/com.ibm.wbit.help.runtime.doc/deploy/topics/cwssecurity.html.
- WS-Policy: A framework that helps define characteristics of Web Services through policies. To learn more about WS-Policy, refer to http://www.w3.org/TR/ws-policy/.
- WS-Addressing: This specification aids interoperability between Web Services by standardizing ways to address Web Services and by providing addressing information in messages. In version 7.0, enhancements were made to WID to provide Web Services support for WS-Addressing and attachments. For more details about WS-Addressing, refer to http://www.w3.org/Submission/ws-addressing/.

What are mediation flows?

In an SOA, service consumers and service providers use an ESB as a communication vehicle. When services are loosely coupled through an ESB, the overall system has more flexibility and can be easily modified and enhanced to meet changing business requirements. An ESB is also an enabler of many patterns: it enables protocol transformations and provides mediation services, which can inspect, modify, augment, and transform a message as it flows from requestor to provider.

In WebSphere Enterprise Service Bus, mediation modules provide the ESB functionality. The heart of a mediation module is the mediation flow component, which provides the mediation logic applied to a message as it flows from a service consumer to a provider. The mediation flow component is a type of SCA component that is typically used in, but not limited to, a mediation module. A mediation flow component contains a source interface and target references, similar to other SCA components. The source interface is described using WSDL and must match the WSDL definition of the export to which it is wired. The target references are described using WSDL and must match the WSDL definitions of the imports or Java components to which they are wired.
The mediation flow component handles most of the ESB functions, including:

- Message filtering: the capability to filter messages based on the content of the incoming message.
- Dynamic routing and selection of service provider: the capability to route incoming messages to the appropriate target at runtime, based on predefined policies and rules.
- Message transformation: the capability to transform messages between source and target interfaces. This transformation can be defined using XSL stylesheets or business object maps.
- Message manipulation/enrichment: the capability to manipulate or enrich incoming message fields before they are sent to the target. This capability also allows you to do database lookups as needed.
- Custom mediation behavior, defined in Java, if the previous functionalities do not fit your requirements.

The architectural layout of a mediation flow is as follows: on the left-hand side is the single service requester, or source interface, and on the right-hand side are the multiple service providers, or target references. The mediation flow is the set of logical processing steps that efficiently routes messages from the service requestor to the most appropriate service provider, and back to the service requestor, for that particular transaction and business environment.

A mediation flow can be a request flow, or a request and response flow. In a request flow, the sequence of processing steps is defined only from the source to the target; no message is returned to the source. In a request and response flow, the sequence of processing steps is defined from the single source to the multiple targets, and back from the multiple targets to the single source. In the next section, we take a deeper look into message objects and how they are handled in mediation flows.

Mediation primitives

What are the various mediation primitives available in WESB? Mediation primitives are the core building blocks used to process the request and response messages in a mediation flow. There are two kinds:

- Built-in primitives, which perform a predefined function that is configurable through the use of properties.
- Custom mediation primitives, which allow you to implement the function in Java.

Mediation primitives have input and output terminals, through which the message flows. Almost all primitives have only one input terminal, but multiple input terminals are possible for custom mediation primitives. Primitives can have zero, one, or more output terminals. There is also a special terminal called the fail terminal, through which the message is propagated when the processing of a primitive results in an exception.
The different types of mediation primitives available in a mediation flow, along with their purpose, are summarized in the following table:

| Category | Primitive | Purpose |
| --- | --- | --- |
| Service invocation | Service Invoke | Invoke an external service; the message is modified with the result |
| Routing | Message Filter | Selectively forward messages based on element values |
| Routing | Type Filter | Selectively forward messages based on element types |
| Routing | Endpoint Lookup | Find potential endpoints from a registry query |
| Routing | Fan Out | Start an iterative or split flow for aggregation |
| Routing | Fan In | Check completion of a split or aggregated flow |
| Routing | Policy Resolution | Set policy constraints from a registry query |
| Routing | Flow Order | Execute or fire the output terminals in a defined order |
| Routing | Gateway Endpoint Lookup | Find potential endpoints in special cases from a registry |
| Routing | SLA Check | Verify that the message complies with the SLA |
| Routing | UDDI Endpoint Lookup | Find potential endpoints from a UDDI registry query |
| Transformation | XSL Transformation | Update and modify messages using XSLT |
| Transformation | Business Object Map | Update and modify messages using business object maps |
| Transformation | Message Element Setter | Set, update, copy, and delete message elements |
| Transformation | Set Message Type | Set elements to a more specific type |
| Transformation | Database Lookup | Set elements from contents within a database |
| Transformation | Data Handler | Update and modify messages using a data handler |
| Transformation | Custom Mediation | Read, update, and modify messages using Java code |
| Transformation | SOAP Header Setter | Read, update, copy, and delete SOAP header elements |
| Transformation | HTTP Header Setter | Read, update, copy, and delete HTTP header elements |
| Transformation | JMS Header Setter | Read, update, copy, and delete JMS header elements |
| Transformation | MQ Header Setter | Read, update, copy, and delete MQ header elements |
| Tracing | Message Logger | Write a log message to a database or a custom destination |
| Tracing | Event Emitter | Raise a common base event to CEI |
| Tracing | Trace | Send a trace message to a file or system out for debugging |
| Error handling | Stop | Stop a single path in the flow without an exception |
| Error handling | Fail | Stop the entire flow and raise an exception |
| Error handling | Message Validator | Validate a message against a schema and assertions |
| Mediation subflow | Subflow | Represents a user-defined subflow |

Connecting to Microsoft SQL Server Compact 3.5 with Visual Studio

Packt
13 Jul 2010
3 min read
SQL Server Compact Edition 3.5 can be used to create applications for a number of business uses, such as portable applications, occasionally connected clients, and embedded applications and devices. SQL Server Compact differs from other editions of SQL Server in that the entire database lives in a single file, which can be password protected and supports 128-bit file-level encryption. It enforces referential integrity, supports multiple connections, and provides transaction support with rich data types. In this tutorial by Jayaram Krishnaswamy, various scenarios where you may need to connect to SQL Server Compact using the Visual Studio IDE (both 2008 and 2010) are described in detail. Connecting to SQL Server Compact 3.5 using Visual Studio 2010 Express (the free version of Visual Studio) is also described. The connection is the starting point for any database-related program, and mastering the connection task is therefore crucial to working with SQL Server Compact. (For more resources on Microsoft, see here.)

If you are familiar with SQL Server, you already know much of SQL Server Compact. It can be administered from SSMS, and using SQL syntax and ADO.NET technology you can be immediately productive with SQL Server Compact. It is free to download (also free to deploy and redistribute) and comes in the form of just one code-free file. Its small footprint makes it easily deployable to a variety of device sizes, and it requires no administration. It also supports a subset of T-SQL and a rich set of data types. It can be used in creating desktop/web applications using Visual Studio 2008 and Visual Studio 2010. It also comes with a sample Northwind database.

Download details

Microsoft SQL Server Compact 3.5 may be downloaded from the Microsoft download site, which also documents the detailed features of the program. Several bugs have been fixed in the program, as detailed in the two service packs; by applying the latest service pack, SP2, the installed version on the machine is upgraded to the latest version.

Connecting to SQL Server Compact from Windows and Web projects

You can use the Server Explorer in Visual Studio to drag and drop objects from SQL Server Compact, provided you add a connection to SQL Server Compact. In fact, in the Visual Studio 2008 IDE you can configure a data connection, without even starting a project, from the View menu. Clicking Add Connection... brings up the Add Connection dialog. Click Change... to choose the correct data source for SQL Server Compact; the default is SQL Server client. The Change Data Source window is displayed. Highlight Microsoft SQL Server Compact 3.5 and click OK. You are returned to Add Connection, where you can browse for or create a database, or choose one from an ActiveSync-connected device, such as a smartphone that has SQL Server Compact for devices installed. For now, connect to one on the computer (the default My Computer option): the sample database Northwind. Click Browse.... The Select SQL Server Compact 3.5 Database File dialog opens, where the sample database Northwind is displayed. Click Open. The database file is entered in the Add Connection dialog. You may test the connection; you should get a "Test connection succeeded" message from Microsoft Visual Studio. Click OK. The Northwind.sdf file is displayed as a tree with Tables and Views. Right-click Northwind.sdf in the Server Explorer and click the Properties drop-down menu item. You will see the connection string for this connection.
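A SQL Server Compact connection string is essentially just the path to the .sdf file. For the sample database it would look something like the following; the install path shown is a typical default and is an assumption that may differ on your machine:

Data Source="C:\Program Files\Microsoft SQL Server Compact Edition\v3.5\Samples\Northwind.sdf"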

Fine Tune the View layer of your Fusion Web Application

Packt
08 Jul 2010
5 min read
(For more resources on Oracle, see here.) The following diagram illustrates the roles of each layer.

Use AJAX to boost the performance of your web pages

Asynchronous JavaScript and XML (AJAX) avoids full page refreshes and minimizes the data transferred between client and server during each round trip. ADF Faces is packaged with more than 150 AJAX-enabled components, which add AJAX capability to your applications with zero effort. Certain events on an ADF Faces component trigger Partial Page Rendering (PPR) by default. However, action components by default trigger a full page refresh, which is quite expensive and may not be required in most cases. Make sure that you set the partialSubmit attribute to true whenever possible, to optimize the page lifecycle. When partialSubmit is set to true, only the components that have values for their partialTriggers attribute will be processed through the lifecycle.

Avoid mixing HTML tags with ADF Faces components

Don't mix HTML tags and ADF Faces components, even though JSF design lets you do so using the <f:verbatim> tag. Mixing raw HTML content with ADF Faces components may produce undesired output, especially when you have a complex layout design for your page. It is highly discouraged to use <f:verbatim> to embed JavaScript or CSS; instead, use <af:resource>, which adds the resource to the document element and optimizes further processing during tree rendering.

Avoid long Ids for User Interface components

It's always recommended to use short Ids for your User Interface (UI) components. Your JSF page finally boils down to HTML content, whose size decides the network bandwidth usage for your web application. If you use long Ids for UI components, the size of the generated HTML content increases. This becomes even worse if you have many UI elements with long Ids. If no Id is set for a UI component explicitly, the ADF Faces runtime auto-generates Ids for you. ADF Faces lets you control the auto-generation of component Ids by setting the context parameter oracle.adf.view.rich.SUPPRESS_IDS in the web.xml file. The <context-param> entry in the web.xml file may look as shown below.

<context-param>
    <param-name>oracle.adf.view.rich.SUPPRESS_IDS</param-name>
    <param-value>auto or explicit</param-value>
</context-param>

Possible values for oracle.adf.view.rich.SUPPRESS_IDS are listed below.

- auto: Components can suppress auto-generated Ids, but explicitly set Ids will be honored.
- explicit: This is the default value for the oracle.adf.view.rich.SUPPRESS_IDS parameter. In this case, both auto-generated Ids and explicitly set Ids get suppressed.

Avoid inline usage of JavaScript/Cascading Style Sheets (CSS) whenever possible

If you need to use custom JavaScript functions or CSS in your application, try using external files to hold them. Avoid inline usage of JavaScript/CSS as much as possible. A better idea is to logically group them in external files and embed the required one in the candidate page using the <af:resource> tag. If you keep JavaScript and CSS in external files, they are cached by the browser; subsequent requests for these resources are then served from the cache. This in turn reduces network usage and improves performance.

Avoid mixing JSF/ADF Faces and JavaServer Pages Standard Tag Library (JSTL) tags

Stick to JSF/ADF Faces components for building your UI as much as you can. JSF components may not work properly with some JSTL tags, as they are not designed to co-exist.
Relying on JSF/ADF Faces components may give you better extensibility and portability for your application as a bonus.

Don't generate client components unless they're really needed

The ADF Faces runtime generates client components only when they are really required on the client. However, you can override this behavior by setting the clientComponent attribute to true, as shown in the following code snippet.

<af:commandButton text="DoSomething" clientComponent="true">
    <af:clientListener method="doSomething" type="action"/>
</af:commandButton>

Set clientComponent to true only if you need to access the component on the client side using JavaScript. Otherwise, this may result in an increased Document Object Model (DOM) size on the client side and may affect the performance of your web page. The following diagram shows the runtime coordination between the client-side and server-side component trees. In the diagram, you can see that no client-side component is generated for the server component whose clientComponent attribute is set to false.

Prefer not rendering components over hiding them from the DOM tree

If you need to hide UI components conditionally on a page, try achieving this with the rendered property of the component instead of the visible property. The latter creates the component instance and then hides it from the client-side DOM tree, whereas the former skips component creation on the server side altogether, so the client-side DOM never contains the element. Consequently, setting rendered to false reduces the client content size and gives better performance as a bonus.
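Two of the tips above, condensed into markup form. This is a hedged sketch: the component Ids and the bean property are invented names, not from the original article.

<!-- Avoid a full page refresh on action components; only components that
     list the button in their partialTriggers attribute are re-rendered -->
<af:commandButton id="searchButton" text="Search" partialSubmit="true"/>
<af:table id="resultsTable" partialTriggers="searchButton">
    <!-- columns omitted -->
</af:table>

<!-- Skip server-side creation entirely instead of hiding via 'visible' -->
<af:outputText value="Order details" rendered="#{orderBean.detailsAvailable}"/>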

Java in Oracle Database

Packt
07 Jul 2010
5 min read
"The views expressed in this article are the author's own and do not necessarily reflect the views of Oracle." (For more resources on Oracle, see here.) Introduction This article is better understood by people who have some familiarity with Oracle database, SQL, PL/SQL, and of course Java (including JDBC). Beginners can also understand the article to some extent, because it does not contain many specifics/details. The article can be useful to software developers, designers and architects working with Java. Oracle database provides a Java runtime in its database server process. Because of this, it is possible not only to store Java sources and Java classes in an Oracle database, but also to run the Java classes within the database server. Such Java classes will be 'executed' by the Java Virtual Machine embedded in the database server. The Java platform provided is J2SE-compliant, and in addition to the JVM, it includes all the Java system classes. So, conceptually, whatever Java code that can be run using the JREs (like Sun's JRE) on the operating system, can be run within the Oracle database too. Java stored procedure The key unit of the Java support inside the Oracle database is the 'Java Stored Procedure' (that may be referred to as JSP, as long as it is not confused with JavaServer Pages). A Java stored procedure is an executable unit stored inside the Oracle database, and whose implementation is in Java. It is similar to PL/SQL stored procedures and functions. Creation Let us see an example of how to create a simple Java stored procedure. We will create a Java stored procedure that adds two given numbers and returns the sum. The first step is to create a Java class that looks like the following: public class Math{ public static int add(int x, int y) { return x + y; }} This is a very simple Java class that just contains one static method that returns the sum of two given numbers. Let us put this code in a file called Math.java, and compile it (say, by doing 'javac Math.java') to get Math.class file. The next step is to 'load' Math.class into the Oracle database. That is, we have to put the class file located in some directory into the database, so that the class file gets stored in the database. There are a few ways to do this, and one of them is to use the command-line tool called loadjava provided by Oracle, as follows: loadjava -v -u scott/tiger Math.class Generally, in Oracle database, things are always stored in some 'schema' (also known as 'user'). Java classes are no exception. So, while loading a Java class file into the database, we need to specify the schema where the Java class should be stored. Here, we have given 'scott' (along with the password). There are a lot of other things that can be done using loadjava, but we will not go into them here. Next, we have to create a 'PL/SQL wrapper' as follows: SQL> connect scott/tigerConnected.SQL>SQL> create or replace function addition(a IN number, b IN number) return number 2 as language java name 'Math.add(int, int) return int'; 3 /Function created.SQL> We have created the PL/SQL wrapper called 'addition', for the Java method Math.add(). The syntax is same as the one used to create a PL/SQL function/procedure, but here we have specified that the implementation of the function is in the Java method Math.add(). And that's it. We've created a Java stored procedure! Basically, what we have done is, implemented our requirement in Java, and then exposed the Java implementation via PL/SQL. 
Using JDeveloper, an IDE from Oracle, all these steps (creating the Java source, compiling it, loading it into the database, and creating the PL/SQL wrapper) can be done easily from within the IDE. One thing to remember is that we can create Java stored procedures for Java static methods only, not for instance methods. This is not a big disadvantage, and in fact makes sense, because even the main() method, which is the entry point for a Java program, is also static. Here, since Math.add() is the entry point, it has to be static. So, we can write as many static methods in our Java code as needed and make them entry points by creating PL/SQL wrappers for them.

Invocation

We can call the Java stored procedure we have just created just like any PL/SQL procedure/function is called, either from SQL or PL/SQL:

SQL> select addition(10, 20) from dual;

ADDITION(10,20)
---------------
             30

SQL>
SQL> declare
  2    s number;
  3  begin
  4    s := addition(10, 20);
  5    dbms_output.put_line('SUM = ' || s);
  6  end;
  7  /
SUM = 30

PL/SQL procedure successfully completed.

SQL>

Here, the 'select' query, as well as the PL/SQL block, invoked the PL/SQL function addition(), which in turn invoked the underlying Java method Math.add(). A main feature of the Java stored procedure is that the caller (like the 'select' query above) has no idea that the procedure is actually implemented in Java. Thus, stored procedures implemented in PL/SQL and Java can be called alike, without needing to know the language of the underlying implementation. So, in general, whatever Java code we have can be seamlessly integrated into PL/SQL code via the PL/SQL wrappers. Put another way, we now have more than one language option for implementing a stored procedure: PL/SQL and Java. If we have a project where stored procedures are to be implemented, then Java is a good option, because today it is relatively easy to find Java programmers.
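Since the wrapper behaves like any other stored function, it can also be invoked from a Java client over JDBC. The following is a minimal sketch; the connection URL, host, SID, and credentials are placeholders you would replace with your own:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class CallAddition
{
    public static void main(String[] args) throws Exception
    {
        // Placeholder connection details
        Connection conn = DriverManager.getConnection(
            "jdbc:oracle:thin:@localhost:1521:ORCL", "scott", "tiger");

        // Call the PL/SQL wrapper; the Java implementation behind it
        // is completely invisible to the caller
        CallableStatement cs = conn.prepareCall("{? = call addition(?, ?)}");
        cs.registerOutParameter(1, Types.INTEGER);
        cs.setInt(2, 10);
        cs.setInt(3, 20);
        cs.execute();
        System.out.println("SUM = " + cs.getInt(1)); // prints SUM = 30

        cs.close();
        conn.close();
    }
}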

Creating a Catalyst Application in Catalyst 5.8

Packt
30 Jun 2010
7 min read
Creating the application skeleton

Catalyst comes with a script called catalyst.pl to make this task as simple as possible. catalyst.pl takes a single argument, the application's name, and creates an application with that specified name. The name can be any valid Perl module name such as MyApp or MyCompany::HR::Timesheets. Let's get started by creating MyApp, which is the example application for this article:

$ catalyst.pl MyApp
created "MyApp"
created "MyApp/script"
created "MyApp/lib"
created "MyApp/root"
created "MyApp/root/static"
created "MyApp/root/static/images"
created "MyApp/t"
created "MyApp/lib/MyApp"
created "MyApp/lib/MyApp/Model"
created "MyApp/lib/MyApp/View"
created "MyApp/lib/MyApp/Controller"
created "MyApp/myapp.conf"
created "MyApp/lib/MyApp.pm"
created "MyApp/lib/MyApp/Controller/Root.pm"
created "MyApp/README"
created "MyApp/Changes"
created "MyApp/t/01app.t"
created "MyApp/t/02pod.t"
created "MyApp/t/03podcoverage.t"
created "MyApp/root/static/images/catalyst_logo.png"
created "MyApp/root/static/images/btn_120x50_built.png"
created "MyApp/root/static/images/btn_120x50_built_shadow.png"
created "MyApp/root/static/images/btn_120x50_powered.png"
created "MyApp/root/static/images/btn_120x50_powered_shadow.png"
created "MyApp/root/static/images/btn_88x31_built.png"
created "MyApp/root/static/images/btn_88x31_built_shadow.png"
created "MyApp/root/static/images/btn_88x31_powered.png"
created "MyApp/root/static/images/btn_88x31_powered_shadow.png"
created "MyApp/root/favicon.ico"
created "MyApp/Makefile.PL"
created "MyApp/script/myapp_cgi.pl"
created "MyApp/script/myapp_fastcgi.pl"
created "MyApp/script/myapp_server.pl"
created "MyApp/script/myapp_test.pl"
created "MyApp/script/myapp_create.pl"
Change to application directory, and run "perl Makefile.PL" to make sure
your installation is complete.

At this point it is a good idea to check if the installation is complete by switching to the newly-created directory (cd MyApp) and running perl Makefile.PL. You should see something like the following:

$ perl Makefile.PL
include /Volumes/Home/Users/solar/Projects/CatalystBook/MyApp/inc/Module/Install.pm
include inc/Module/Install/Metadata.pm
include inc/Module/Install/Base.pm
Cannot determine perl version info from lib/MyApp.pm
include inc/Module/Install/Catalyst.pm
*** Module::Install::Catalyst
include inc/Module/Install/Makefile.pm
Please run "make catalyst_par" to create the PAR package!
*** Module::Install::Catalyst finished.
include inc/Module/Install/Scripts.pm
include inc/Module/Install/AutoInstall.pm
include inc/Module/Install/Include.pm
include inc/Module/AutoInstall.pm
*** Module::AutoInstall version 1.03
*** Checking for Perl dependencies...
[Core Features]
- Test::More ...loaded. (0.94 >= 0.88)
- Catalyst::Runtime ...loaded. (5.80021 >= 5.80021)
- Catalyst::Plugin::ConfigLoader ...loaded. (0.23)
- Catalyst::Plugin::Static::Simple ...loaded. (0.29)
- Catalyst::Action::RenderView ...loaded. (0.14)
- Moose ...loaded. (0.99)
- namespace::autoclean ...loaded. (0.09)
- Config::General ...loaded. (2.42)
*** Module::AutoInstall configuration finished.
include inc/Module/Install/WriteAll.pm
include inc/Module/Install/Win32.pm
include inc/Module/Install/Can.pm
include inc/Module/Install/Fetch.pm
Writing Makefile for MyApp
Writing META.yml

Note that it reports that all the required modules are available. If any modules are missing, you may have to install them using cpan. Alternatively, you can install the missing modules by running make followed by make install.
We will discuss what each of these files does, but for now let's just change to the newly-created MyApp directory (cd MyApp) and run the following command:

$ perl script/myapp_server.pl

This will start up the development web server. You should see some debugging information appear on the console, such as the following:

[debug] Debug messages enabled
[debug] Loaded plugins:
    Catalyst::Plugin::ConfigLoader  0.23
    Catalyst::Plugin::Static::Simple  0.21
[debug] Loaded dispatcher "Catalyst::Dispatcher"
[debug] Loaded engine "Catalyst::Engine::HTTP"
[debug] Found home "/home/jon/projects/book/chapter2/MyApp"
[debug] Loaded Config "/home/jon/projects/book/chapter2/MyApp/myapp.conf"
[debug] Loaded components:
    MyApp::Controller::Root  (instance)
[debug] Loaded Private actions:
    /default  MyApp::Controller::Root  default
    /end      MyApp::Controller::Root  end
    /index    MyApp::Controller::Root  index
[debug] Loaded Path actions:
    /  ->  /default
    /  ->  /index
[info] MyApp powered by Catalyst 5.80004
You can connect to your server at http://localhost:3000

This debugging information contains a summary of plugins, Models, Views, and Controllers that your application uses, in addition to showing a map of URLs to actions. As we haven't added anything to the application yet, this isn't particularly helpful, but it will become helpful as we add features. To see what your application looks like in a browser, simply browse to http://localhost:3000. You should see the standard Catalyst welcome page.

Let's put the application aside for a moment and see the purpose of each of the files that were created. Before we modify MyApp, let's take a look at how a Catalyst application is structured on disk. In the root directory of your application, there are some support files. If you're familiar with CPAN modules, you'll be at home with Catalyst: a Catalyst application is structured in exactly the same way (and can be uploaded to the CPAN unmodified, if desired). This article will refer to MyApp as your application's name, so if you use something else, be sure to substitute properly.

Latest helper scripts

Catalyst 5.8 is ported to Moose, and the helper scripts for Catalyst were upgraded much later. Therefore, it is necessary to check that you have the latest helper scripts. We will discuss helper scripts later. For now, catalyst.pl is a helper script, and if you're using an updated helper script, the lib/MyApp.pm file (or lib/whateverappname.pm) will contain the following line:

use Moose;

If you don't see this line in your application package in the lib directory, you will have to update the helper scripts. You can do that by executing the following command:

cpan Catalyst::Helper

Files in the MyApp directory

The MyApp directory contains the following files:
Makefile.PL: This script generates a Makefile to build, test, and install your application. It can also contain a list of your application's CPAN dependencies and automatically install them. To run Makefile.PL and generate a Makefile, simply type perl Makefile.PL. After that, you can run make to build the application, make test to test the application (you can try this right now, as some sample tests have already been created), make install to install the application, and so on. For more details, see the Module::Install documentation. It's important that you don't delete this file; Catalyst looks for it to determine where the root of your application is.

Changes: This is simply a free-form text file where you can document changes to your application. It's not required, but it can be helpful to end users or other developers working on your application, especially if you're writing an open source application.

README: This is just a text file with information on your application. If you're not going to distribute your application, you don't need to keep it around.

myapp.conf: This is your application's main configuration file, which is loaded when you start your application. You can specify configuration directly inside your application, but this file makes it easier to tweak settings without worrying about breaking your code. myapp.conf is in Apache-style syntax, but if you rename the file to myapp.pl, you can write it in Perl (or myapp.yml for YAML format; see the Config::Any manual for a complete list). The name of this file is based on your application's name: everything is converted to lowercase, double colons are replaced with underscores, and the .conf extension is appended.

Files in the lib directory

The heart of your application lives in the lib directory. This directory contains a file called MyApp.pm. This file defines the namespace and inheritance that are necessary to make this a Catalyst application. It also contains the list of plugins to load, and application-specific configuration. Such configuration can also be defined in the myapp.conf file mentioned previously; however, if the same setting is mentioned in both files, the one given here takes precedence. Inside the lib directory, there are three key directories, namely MyApp/Controller, MyApp/Model, and MyApp/View. Catalyst loads the Controllers, Models, and Views from these directories respectively.
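To make the Controller directory concrete, here is a hedged sketch of a minimal action added to the generated lib/MyApp/Controller/Root.pm, in Catalyst 5.8's Moose style; the hello action and its response body are invented for illustration and are not part of the generated skeleton:

package MyApp::Controller::Root;
use Moose;
use namespace::autoclean;

BEGIN { extends 'Catalyst::Controller' }

# The Root controller handles requests at the application root
__PACKAGE__->config(namespace => '');

# Illustrative action: responds to http://localhost:3000/hello
sub hello :Local {
    my ($self, $c) = @_;
    $c->response->body('Hello from MyApp!');
}

__PACKAGE__->meta->make_immutable;

1;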

Modeling Relationships with GORM

Packt
09 Jun 2010
6 min read
(For more resources on Groovy DSL, see here.) Storing and retrieving simple objects is all very well, but the real power of GORM is that it allows us to model the relationships between objects, as we will now see. The main types of relationships that we want to model are associations, where one object has an associated relationship with another (for example, Customer and Account); composition relationships, where we want to build an object from sub-components; and inheritance, where we want to model similar objects by describing their common properties in a base class.

Associations

Every business system involves some sort of association between the main business objects. Relationships between objects can be one-to-one, one-to-many, or many-to-many. Relationships may also imply ownership, where one object only has relevance in relation to another parent object. If we model our domain directly in the database, we need to build and manage tables, and make associations between the tables by using foreign keys. For complex relationships, including many-to-many relationships, we may need to build special tables whose sole function is to contain the foreign keys needed to track the relationships between objects. Using GORM, we can model all of the various associations that we need to establish between objects directly within the GORM class definitions. GORM takes care of all of the complex mappings to tables and foreign keys through a Hibernate persistence layer.

One-to-one

The simplest association that we need to model in GORM is a one-to-one association. Suppose our customer can have a single address; we would create a new Address domain class using the grails create-domain-class command, as before.

class Address {
    String street
    String city
    static constraints = {
    }
}

To create the simplest one-to-one relationship with Customer, we just add an Address field to the Customer class.

class Customer {
    String firstName
    String lastName
    Address address
    static constraints = {
    }
}

When we rerun the Grails application, GORM will create a new address table. It will also recognize the address field of Customer as an association with the Address class, and create a foreign key relationship between the customer and address tables accordingly. This is a one-directional relationship. We are saying that a Customer "has an" Address but an Address does not necessarily "have a" Customer. We can model bi-directional associations by simply adding a Customer field to the Address. This will then be reflected in the relational model by GORM adding a customer_id field to the address table.

class Address {
    String street
    String city
    Customer customer
    static constraints = {
    }
}

mysql> describe address;
+-------------+--------------+------+-----+---------+----------------+
| Field       | Type         | Null | Key | Default | Extra          |
+-------------+--------------+------+-----+---------+----------------+
| id          | bigint(20)   | NO   | PRI | NULL    | auto_increment |
| version     | bigint(20)   | NO   |     |         |                |
| city        | varchar(255) | NO   |     |         |                |
| customer_id | bigint(20)   | YES  | MUL | NULL    |                |
| street      | varchar(255) | NO   |     |         |                |
+-------------+--------------+------+-----+---------+----------------+
5 rows in set (0.01 sec)

mysql>

These basic one-to-one associations can be inferred by GORM just by interrogating the fields in each domain class via reflection and the Groovy metaclasses. To denote ownership in a relationship, GORM uses an optional static field applied to a domain class, called belongsTo.
Suppose we add an Identity class to retain the login identity of a customer in the application. We would then use:

class Customer {
    String firstName
    String lastName
    Identity ident
}

class Address {
    String street
    String city
}

class Identity {
    String email
    String password
    static belongsTo = Customer
}

Classes are first-class citizens in the Groovy language. When we declare static belongsTo = Customer, what we are actually doing is storing a static instance of a java.lang.Class object for the Customer class in the belongsTo field. Grails can interrogate this static field at load time to infer the ownership relation between Identity and Customer. Here we have three classes: Customer, Address, and Identity. Customer has a one-to-one association with both Address and Identity through the address and ident fields. However, the ident field is "owned" by Customer, as indicated by the belongsTo setting. What this means is that saves, updates, and deletes will be cascaded to identity but not to address, as we can see below. The addr object needs to be saved and deleted independently of Customer, but id is automatically saved and deleted in sync with Customer.

def addr = new Address(street:"1 Rock Road", city:"Bedrock")
def id = new Identity(email:"email", password:"password")
def fred = new Customer(firstName:"Fred", lastName:"Flintstone",
                        address:addr, ident:id)

addr.save(flush:true)
assert Customer.list().size == 0
assert Address.list().size == 1
assert Identity.list().size == 0

fred.save(flush:true)
assert Customer.list().size == 1
assert Address.list().size == 1
assert Identity.list().size == 1

fred.delete(flush:true)
assert Customer.list().size == 0
assert Address.list().size == 1
assert Identity.list().size == 0

addr.delete(flush:true)
assert Customer.list().size == 0
assert Address.list().size == 0
assert Identity.list().size == 0

Constraints

You will have noticed that every domain class produced by the grails create-domain-class command contains an empty static closure, constraints. We can use this closure to set the constraints on any field in our model. Here we apply constraints to the email and password fields of Identity. We want the email field to be unique, not blank, and not nullable. The password field should be 6 to 200 characters long, not blank, and not nullable.

class Identity {
    String email
    String password

    static constraints = {
        email(unique: true, blank: false, nullable: false)
        password(blank: false, nullable: false, size: 6..200)
    }
}

From our knowledge of builders and the markup pattern, we can see that GORM could be using a similar strategy here to apply constraints to the domain class. It looks like a pretended method is provided for each field in the class, accepting a map as an argument. The map entries are interpreted as constraints to apply to the model field. The Builder pattern turns out to be a good guess as to how GORM implements this. GORM actually implements constraints through a builder class called ConstrainedPropertyBuilder. The closure that gets assigned to constraints is in fact some markup-style closure code for this builder. Before executing the constraints closure, GORM sets an instance of ConstrainedPropertyBuilder to be the delegate for the closure. We are more accustomed to seeing builder code where the builder instance is visible:

def builder = new ConstrainedPropertyBuilder()
builder.constraints {
}

Setting the builder as a delegate of any closure allows us to execute the closure as if it was coded in the above style.
The constraints closure can be run at any time by Grails, and as it executes, the ConstrainedPropertyBuilder builds a HashMap of the constraints it encounters for each field. We can illustrate the same technique by using MarkupBuilder and NodeBuilder. The Markup class in the following code snippet just declares a static closure named markup. Later on, we can use this closure with whatever builder we want, by setting the delegate of the markup to the builder that we would like to use.

class Markup {
    static markup = {
        customers {
            customer(id:1001) {
                name(firstName:"Fred", surname:"Flintstone")
                address(street:"1 Rock Road", city:"Bedrock")
            }
            customer(id:1002) {
                name(firstName:"Barney", surname:"Rubble")
                address(street:"2 Rock Road", city:"Bedrock")
            }
        }
    }
}

Markup.markup.setDelegate(new groovy.xml.MarkupBuilder())
Markup.markup() // Outputs xml

Markup.markup.setDelegate(new groovy.util.NodeBuilder())
def nodes = Markup.markup() // builds a node tree
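Returning to the Identity constraints defined earlier: at runtime they surface through the validation support that Grails adds to every domain class. The following is a small sketch of standard Grails behavior; the concrete email and password values are made up:

def id = new Identity(email: "fred@bedrock.example", password: "short")

// "short" violates the size: 6..200 constraint on password
assert !id.validate()
assert id.errors.hasFieldErrors("password")

// Fix the offending field and validation passes
id.password = "longenough"
assert id.validate()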

The Grails Object Relational Mapping (GORM)

Packt
08 Jun 2010
5 min read
(For more resources on Groovy DSL, see here.) The Grails framework is an open source web application framework built for the Groovy language. Grails not only leverages Hibernate under the covers as its persistence layer, but also implements its own Object Relational Mapping layer for Groovy, known as GORM. With GORM, we can take a POGO class and decorate it with DSL-like settings in order to control how it is persisted. Grails programmers use GORM classes as a mini language for describing the persistent objects in their application. In this section, we will do a whistle-stop tour of the features of Grails. This won't be a tutorial on building Grails applications, as the subject is too big to be covered here. Our main focus will be on how GORM implements its object model in the domain classes.

Grails quick start

Before we proceed, we need to install Grails and get a basic app installation up and running. The Grails download and installation instructions can be found at http://www.grails.org/Installation. Once it has been installed, and with the Grails binaries in your path, navigate to a workspace directory and issue the following command:

grails create-app GroovyDSL

This builds a Grails application tree called GroovyDSL under your current workspace directory. If we now navigate to this directory, we can launch the Grails app:

cd GroovyDSL
grails run-app

By default, the app will display a welcome page at http://localhost:8080/GroovyDSL/.

The grails-app directory

The GroovyDSL application that we built earlier has a grails-app subdirectory, which is where the application source files for our application will reside. We only need to concern ourselves with the grails-app/domain directory for this discussion, but it's worth understanding a little about some of the other important directories.

- grails-app/conf: This is where the Grails configuration files reside.
- grails-app/controllers: Grails uses a Model View Controller (MVC) architecture. The controllers directory will contain the Groovy controller code for our UIs.
- grails-app/domain: This is where Grails stores the GORM model classes of the application.
- grails-app/views: This is where the Groovy Server Pages (GSPs), the Grails equivalent to JSPs, are stored.

Grails has a number of shortcut commands that allow us to quickly build out the objects for our model. As we progress through this section, we will take a look back at these directories to see what files have been generated in them for us. In this section, we will be taking a whistle-stop tour through GORM. You might like to dig deeper into both GORM and Grails yourself. You can find further online documentation for GORM at http://www.grails.org/GORM.

DataSource configuration

Out of the box, Grails is configured to use an embedded HSQL in-memory database. This is useful as a means of getting up and running quickly, and all of the example code will work perfectly well with the default configuration. Having an in-memory database is helpful for testing because we always start with a clean slate. However, for the purpose of this section, it's also useful for us to have a proper database instance to peek into, in order to see how GORM maps Groovy objects into tables. We will configure our Grails application to persist to a MySQL database instance. Grails allows us to have separate configuration environments for development, testing, and production.
We will configure our development environment to point to a MySQL instance, but we can leave the production and testing environments as they are. First of all we need to create a database, by using the mysqladmin command. The following command will create a database called groovydsl, which is owned by the MySQL root user:

mysqladmin -u root create groovydsl

Database configuration in Grails is done by editing the DataSource.groovy source file in grails-app/conf. We are interested in the environments section of this file:

environments {
    development {
        dataSource {
            dbCreate = "create-drop"
            url = "jdbc:mysql://localhost/groovydsl"
            driverClassName = "com.mysql.jdbc.Driver"
            username = "root"
            password = ""
        }
    }
    test {
        dataSource {
            dbCreate = "create-drop"
            url = "jdbc:hsqldb:mem:testDb"
        }
    }
    production {
        dataSource {
            dbCreate = "update"
            url = "jdbc:hsqldb:mem:testDb"
        }
    }
}

The first interesting thing to note is that this is a mini Groovy DSL for describing data sources. In the listing above, we have edited the development dataSource entry to point to the MySQL groovydsl database that we created. In early versions of Grails, there were three separate DataSource files that needed to be configured for each environment, for example, DevelopmentDataSource.groovy. The equivalent DevelopmentDataSource.groovy file would be as follows:

class DevelopmentDataSource {
    boolean pooling = true
    String dbCreate = "create-drop"
    String url = "jdbc:mysql://localhost/groovydsl"
    String driverClassName = "com.mysql.jdbc.Driver"
    String username = "root"
    String password = ""
}

The dbCreate field tells GORM what it should do with tables in the database on startup. Setting this to create-drop will tell GORM to drop a table if it exists already, and create a new table, each time it runs. This keeps the database tables in sync with our GORM objects. You can also set dbCreate to update or create.

DataSource.groovy is a handy little DSL for configuring the GORM database connections. Grails uses a utility class, groovy.util.ConfigSlurper, for this DSL. The ConfigSlurper class allows us to easily parse a structured configuration file and convert it into a java.util.Properties object if we wish. Alternatively, we can navigate the ConfigObject returned by using dot notation. We can use the ConfigSlurper to open and navigate DataSource.groovy as shown in the next code snippet. ConfigSlurper has a built-in ability to partition the configuration by environment. If we construct the ConfigSlurper for a particular environment, it will only load the settings appropriate to that environment.

def development = new ConfigSlurper("development").parse(
        new File('DataSource.groovy').toURL())
def production = new ConfigSlurper("production").parse(
        new File('DataSource.groovy').toURL())

assert development.dataSource.dbCreate == "create-drop"
assert production.dataSource.dbCreate == "update"

def props = development.toProperties()
assert props["dataSource.dbCreate"] == "create-drop"

Working with JRockit Runtime Analyzer

Packt
03 Jun 2010
9 min read
The JRockit Runtime Analyzer, or JRA for short, is a JRockit-specific profiling tool that provides information about both the JRockit runtime and the application running in JRockit. JRA was the main profiling tool for JRockit R27 and earlier, but has been superseded in later versions by the JRockit Flight Recorder. Because of its extremely low overhead, JRA is suitable for use in production. This article is mainly targeted at R27.x/3.x versions of JRockit and Mission Control.

The need for feedback

In order to make JRockit an industry-leading JVM, there has been a great need for customer collaboration. As the focus for JRockit has consistently been on performance and scalability in server-side applications, the closest collaboration has been with customers with large server installations. An example is the financial industry. The JRockit Runtime Analyzer, or JRA, was originally born from the need to gather profiling information on how well JRockit performed at customer sites. One can easily understand that customers were rather reluctant to send us, for example, their latest proprietary trading applications to play with in our labs. And, of course, allowing us to poke around in a customer's mission-critical application in production was completely out of the question. Some of these applications shuffle around billions of dollars per week. We found ourselves in a situation where we needed a tool to gather as much information as possible on how JRockit, and the application running on JRockit, behaved together; both to find opportunities to improve JRockit and to find erratic behavior in the customer application. This was a bit of a challenge, as we needed high quality data. If the information was not accurate, we would not know how to improve JRockit in the areas most needed by customers, or perhaps at all. At the same time, we needed to keep the overhead down to a minimum. If the profiling itself incurred significant overhead, we would no longer get a true representation of the system. Also, with anything but near-zero overhead, the customer would not let us perform recordings on their mission-critical systems in production. JRA was invented as a method of recording information in a way that the customer could feel confident with, while still providing us with the data needed to improve JRockit. The tool was eventually widely used within our support organization, both to diagnose problems and as a tuning companion for JRockit.

In the beginning, a simple XML format was used for our runtime recordings. A human-readable format made it simple to debug, and the customer could easily see what data was being recorded. Later, the format was upgraded to include data from a new recording engine for latency-related data. When the latency data came along, the data format for JRA was split into two parts: the human-readable XML and a binary file containing the latency events. The latency data was put into JRockit internal memory buffers during the recording, and to avoid introducing unnecessary latencies and performance penalties that would surely be incurred by translating the buffers to XML, it was decided that the least intrusive way was to simply dump the buffers to disk. To summarize, recordings come in two different flavors, having either the .jra extension (recordings prior to JRockit R28/JRockit Mission Control 4.0) or the .jfr (JRockit Flight Recorder) extension (R28 or later).
Prior to the R28 version of JRockit, the recording files mainly consisted of XML without a coherent data model. As of R28, the recording files are binaries where all data adheres to an event model, making it much easier to analyze the data. To open a JRA recording, JRockit Mission Control version 3.x must be used. To open a Flight Recorder recording, JRockit Mission Control version 4.0 or later must be used.

Recording

The recording engine that starts and stops recordings can be controlled in several different ways:

- By using the JRCMD command-line tool.
- By using the JVM command-line parameters. For more information on this, see the -XXjra parameter in the JRockit documentation.
- From within the JRA GUI in JRockit Mission Control.

The easiest way to control recordings is to use the JRA/JFR wizard from within the JRockit Mission Control GUI. Simply select the JVM on which to perform a JRA recording in the JVM Browser and click on the JRA button in the JVM Browser toolbar. You can also click on Start JRA Recording from the context menu. Usually, one of the pre-defined templates will do just fine, but under special circumstances it may be necessary to adjust them. The pre-defined templates in JRockit Mission Control 3.x are:

- Full Recording: This is the standard use case. By default, it is configured to do a five minute recording that contains most data of interest.
- Minimal Overhead Recording: This template can be used for very latency-sensitive applications. It will, for example, not record heap statistics, as the gathering of heap statistics will, in effect, cause an extra garbage collection at the beginning and at the end of the recording.
- Real Time Recording: This template is useful when hunting latency-related problems, for instance when tuning a system that is running on JRockit Real Time. This template provides an additional text field for setting the latency threshold. The latency threshold is explained later in the article, in the section on the latency analyzer. The threshold is by default lowered to 5 milliseconds for this type of recording, from the default 20 milliseconds, and the default recording time is longer.
- Classic Recording: This resembles a classic JRA recording from earlier versions of Mission Control. Most notably, it will not contain any latency data. Use this template with JRockit versions prior to R27.3, or if there is no interest in recording latency data.

All recording templates can be customized by checking the Show advanced options checkbox. This is usually not needed, but let's go through the options and why you may want to change them:

- Enable GC sampling: This option selects whether or not GC-related information should be recorded. It can be turned off if you know that you will not be interested in GC-related information. It is on by default, and it is a good idea to keep it enabled.
- Enable method sampling: This option enables or disables method sampling. Method sampling is implemented by using sample data from the JRockit code optimizer. If profiling overhead is a concern (it is usually very low, but still), it is usually a good idea to use the Method sample interval option to control how much method sampling information to record.
- Enable native sampling: This option determines whether or not to attempt to sample time spent executing native code as a part of the method sampling. This feature is disabled by default, as it is mostly used by JRockit developers and support. Most Java developers probably do fine without it.
- Hardware method sampling: On some hardware architectures, JRockit can make use of special hardware counters in the CPU to provide a higher resolution for the method sampling. This option only makes sense on such architectures.
- Stack traces: Use this option to get not only sample counts but also stack traces from method samples. If this is disabled, no call traces are available for sample points in the methods that show up in the Hot Methods list.
- Trace depth: This setting determines how many stack frames to retrieve for each stack trace. For JRockit Mission Control versions prior to 4.0, this defaulted to the rather limited depth of 16. For applications running in application containers or using large frameworks, this is usually way too low to generate data from which any useful conclusions can be drawn. A tip when profiling such an application is to bump this to 30 or more.
- Method sampling interval: This setting controls how often thread samples should be taken. JRockit will stop a subset of the threads every Method sample interval milliseconds in a round-robin fashion. Only threads executing when the sample is taken will be counted, not blocking threads. Use this to find out where the computational load in an application takes place. See the section Hot Methods for more information.
- Thread dumps: When enabled, JRockit will record a thread stack dump at the beginning and the end of the recording. If the Thread dump interval setting is also specified, thread dumps will be recorded at regular intervals for the duration of the recording.
- Thread dump interval: This setting controls how often, in seconds, to record the thread stack dumps mentioned earlier.
- Latencies: If this setting is enabled, the JRA recording will contain latency data. For more information on latencies, please refer to the section Latency later in this article.
- Latency threshold: To limit the amount of data in the recording, it is possible to set a threshold for the minimum latency (duration) required for an event to actually be recorded. This is normally set to 20 milliseconds. It is usually safe to lower this to around 1 millisecond without incurring too much profiling overhead. Less than that, and there is a risk that the profiling overhead will become unacceptably high and/or that the file size of the recording becomes unmanageably large. Latency thresholds can be set as low as nanosecond values by changing the unit in the unit combo box.
- Enable CPU sampling: When this setting is enabled, JRockit will record the CPU load at regular intervals.
- Heap statistics: This setting causes JRockit to do a heap analysis pass at the beginning and at the end of the recording. As heap analysis involves forcing extra garbage collections at these points in order to collect information, it is disabled in the low overhead template.
- Delay before starting a recording: This option can be used to schedule the recording to start at a later time. The delay is normally defined in minutes, but the unit combo box can be used to specify the time in a more appropriate unit; everything from seconds to days is supported.

Before starting the recording, a location to which the finished recording is to be downloaded must be specified. Once the JRA recording is started, an editor will open up showing the options with which the recording was started and a progress bar. When the recording is completed, it is downloaded and the editor input is changed to show the contents of the recording.
Analyzing JRA recordings

Analyzing JRA recordings may easily seem like black magic to the uninitiated, so just like we did with the Management Console, we will go through each tab of the JRA editor to explain the information in that particular tab, with examples of when it is useful. Just like in the console, there are several tabs in different tab groups.

Working with JRockit Runtime Analyzer - A Sequel

Packt
03 Jun 2010
7 min read
Code

The Code tab group contains information from the code generator and the method sampler. It consists of three tabs: the Overview, Hot Methods, and Optimizations tabs.

Overview

This tab aggregates information from the code generator with sample information from the code optimizer. This allows us to see which methods the Java program spends the most time executing. Again, this information is available virtually "for free", as the code generation system needs it anyway. For CPU-bound applications, this tab is a good place to start looking for opportunities to optimize your application code. By CPU-bound, we mean an application for which the CPU is the limiting factor; with a faster CPU, the application would have a higher throughput.

In the first section, the number of exceptions thrown per second is shown. This number depends both on the hardware and on the application; faster hardware may execute an application more quickly, and consequently throw more exceptions. However, a higher value is always worse than a lower one on identical setups. Recall that exceptions are just that, rare corner cases. As we have explained, the JVM typically gambles that they aren't occurring too frequently. If an application throws hundreds of thousands of exceptions per second, you should investigate why. Someone may be using exceptions for control flow, or there may be a configuration error. Either way, performance will suffer. In JRockit Mission Control 3.1, the recording will only provide information about how many exceptions were thrown. The only way to find out where the exceptions originated is unfortunately by changing the verbosity of the log.

An overview of where the JVM spends most of the time executing Java code can be found in the Hot Packages and Hot Classes sections. The only difference between them is the way the sample data from the JVM code optimizer is aggregated. In Hot Packages, hot executing code is sorted on a per-package basis, and in Hot Classes on a per-class basis. For more fine-grained information, use the Hot Methods tab. As shown in the example screenshot, most of the time is spent executing code in the weblogic.servlet.internal package. There is also a fair amount of exceptions being thrown.

Hot Methods

This tab provides a detailed view of the information provided by the JVM code optimizer. If the objective is to find a good candidate method for optimizing the application, this is the place to look. If a lot of the method samples are from one particular method, and a lot of the method traces through that method share the same origin, much can potentially be gained by either manually optimizing that method or by reducing the number of calls along that call chain. In the following example, much of the time is spent in the method com.bea.wlrt.adapter.defaultprovider.internal.CSVPacketReceiver.parseL2Packet(). It seems likely that the best way to improve the performance of this particular application would be to optimize a method internal to the application container (WebLogic Event Server), and not the code in the application itself, running inside the container. This illustrates both the power of the JRockit Mission Control tools and a dilemma that the resulting analysis may reveal: the answers provided sometimes require solutions beyond your immediate control.

Sometimes, the information provided may cause us to reconsider the way we use data structures. In the next example, the program frequently checks if an object is in a java.util.LinkedList. This is a rather slow operation, proportional to the size of the list (time complexity O(n)), as it potentially involves traversing the entire list, looking for the element. Changing to another data structure, such as a HashSet, would most certainly speed up the check, making the time complexity constant (O(1)) on average, given that the hash function is good enough and the set large enough. The sketch below illustrates the difference.
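The following standalone example is not from the recording discussed above; the list size and values are made up purely to contrast the two membership tests:

import java.util.HashSet;
import java.util.LinkedList;
import java.util.List;
import java.util.Set;

public class ContainsComparison {
    public static void main(String[] args) {
        List<Integer> list = new LinkedList<Integer>();
        Set<Integer> set = new HashSet<Integer>();
        for (int i = 0; i < 100000; i++) {
            list.add(i);
            set.add(i);
        }

        long t0 = System.nanoTime();
        list.contains(99999);   // O(n): walks the list node by node
        long t1 = System.nanoTime();
        set.contains(99999);    // O(1) on average: a single hash lookup
        long t2 = System.nanoTime();

        System.out.println("LinkedList.contains: " + (t1 - t0) / 1000 + " us");
        System.out.println("HashSet.contains:    " + (t2 - t1) / 1000 + " us");
    }
}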
Optimizations

This tab shows various statistics from the JIT compiler. The information in this tab is mostly of interest when hunting down optimization-related bugs in JRockit. It shows how much time was spent doing optimizations, as well as how much time was spent JIT-compiling code at the beginning and at the end of the recording. For each method optimized during the recording, the native code size before and after optimization is shown, as well as how long it took to optimize the particular method.

Thread/Locks

The Thread/Locks tab group contains tabs that visualize thread- and lock-related data. There are five such tabs in JRA: the Overview, Threads, Java Locks, JVM Locks, and Thread Dumps tabs.

Overview

The Overview tab shows fundamental thread and hardware-related information, such as the number of hardware threads available on the system and the number of context switches per second. A dual-core CPU has two hardware threads, and a hyperthreaded core also counts as two hardware threads. That is, a dual-core CPU with hyperthreading will be displayed as having four hardware threads. A high number of context switches per second may not be a real problem, but better synchronization behavior may lead to better total throughput in the system. There is a CPU graph showing both the total CPU load on the system and the CPU load generated by the JVM. A saturated CPU is usually a good thing: you are fully utilizing the hardware on which you spent a lot of money! As previously mentioned, in some CPU-bound applications, for example batch jobs, it is normally a good thing for the system to be completely saturated during the run. However, for a standard server-side application, it is probably more beneficial if the system is able to handle some extra load in addition to the expected one. The hardware provisioning problem is not simple, but normally server-side systems should have some spare computational power for when things get hairy. This is usually referred to as overprovisioning, and has traditionally just involved buying faster hardware. Virtualization has given us exciting new ways to handle the provisioning problem.

Threads

This tab shows a table where each row corresponds to a thread. The tab has more to offer than first meets the eye. By default, only the start time, the thread duration, and the Java thread ID are shown for each thread. More columns can be made visible by changing the table properties, either by clicking on the Table Settings icon or by using the context menu in the table. As can be seen in the example screenshot, information such as the thread group that the thread belongs to, allocation-related information, and the platform thread ID can also be displayed. The platform thread ID is the ID assigned to the thread by the operating system, in case we are working with native threads. This information can be useful if you are using operating system-specific tools together with JRA.

Java Locks

This tab displays information on how Java locks have been used during the recording. The information is aggregated per type (class) of monitor object. This tab is normally empty; you need to start JRockit with the system property jrockit.lockprofiling set to true for the lock profiling information to be recorded, for example as shown below.
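A launch line enabling the property might look like the following; the application jar name is a placeholder:

java -Djrockit.lockprofiling=true -jar myapp.jar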
This tab is normally empty. You need to start JRockit with the system property jrockit.lockprofiling set to true for the lock profiling information to be recorded. This is because lock profiling may cause anything from a small to a considerable overhead, especially if there is a lot of synchronization. With recent changes to the JRockit thread and locking model, it would be possible to enable lock profiling dynamically. This is unfortunately not the case yet, not even in JRockit Flight Recorder. For R28, the system property jrockit.lockprofiling has been deprecated and replaced with the flag -XX:UseLockProfiling, as sketched below.
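For example, enabling lock profiling from the command line might look like the following (the application JAR name is a placeholder, and the exact R28 spelling may follow the -XX:+&lt;Flag&gt; boolean convention depending on the release):

# JRockit R27.x and earlier (JRA): enable via the system property
java -Djrockit.lockprofiling=true -jar MyApp.jar

# JRockit R28 (Flight Recorder): the property is replaced by a VM flag
java -XX:+UseLockProfiling -jar MyApp.jar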
JVM Locks

This tab contains information on JVM-internal native locks. This is normally useful for the JRockit JVM developers and for JRockit support. An example of a native lock would be the code buffer lock that the JVM acquires in order to emit compiled methods into a native code buffer. This is done to ensure that no other code generation threads interfere with that particular code emission.

Thread Dumps

The JRA recordings normally contain thread dumps from the beginning and the end of the recording. By changing the Thread dump interval parameter in the JRA recording wizard, more thread dumps can be made available at regular intervals throughout the recording.
Metaprogramming and the Groovy MOP

Packt
31 May 2010
6 min read
(For more resources on Groovy DSL, see here.)

In a nutshell, the term metaprogramming refers to writing code that can dynamically change its behavior at runtime. A Meta-Object Protocol (MOP) refers to the capabilities in a dynamic language that enable metaprogramming. In Groovy, the MOP consists of four distinct capabilities within the language: reflection, metaclasses, categories, and expandos.

The MOP is at the core of what makes Groovy so useful for defining DSLs. The MOP is what allows us to bend the language in different ways in order to meet our needs, by changing the behavior of classes on the fly. This section will guide you through the capabilities of the MOP.

Reflection

To use Java reflection, we first need to access the Class object for any Java object in which we are interested, through its getClass() method. Using the returned Class object, we can query everything from the list of methods or fields of the class to the modifiers that the class was declared with. Below, we see some of the ways that we can access a Class object in Java and the methods we can use to inspect the class at runtime.

import java.lang.reflect.Field;
import java.lang.reflect.Method;

public class Reflection {
    public static void main(String[] args) {
        String s = new String();
        Class sClazz = s.getClass();
        Package _package = sClazz.getPackage();
        System.out.println("Package for String class: ");
        System.out.println(" " + _package.getName());

        Class oClazz = Object.class;
        System.out.println("All methods of Object class:");
        Method[] methods = oClazz.getMethods();
        for (int i = 0; i < methods.length; i++)
            System.out.println(" " + methods[i].getName());

        try {
            Class iClazz = Class.forName("java.lang.Integer");
            Field[] fields = iClazz.getDeclaredFields();
            System.out.println("All fields of Integer class:");
            for (int i = 0; i < fields.length; i++)
                System.out.println(" " + fields[i].getName());
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        }
    }
}

We can access the Class object from an instance by calling its Object.getClass() method. If we don't have an instance of the class to hand, we can get the Class object by using .class after the class name, for example, String.class. Alternatively, we can call the static Class.forName, passing to it a fully-qualified class name. Class has numerous methods, such as getPackage(), getMethods(), and getDeclaredFields(), that allow us to interrogate the Class object for details about the Java class under inspection. The preceding example will output various details about the String, Object, and Integer classes.

Groovy Reflection shortcuts

Groovy, as we would expect by now, provides shortcuts that let us reflect classes easily. In Groovy, we can shortcut the getClass() method as a property access .class, so we can access the class object in the same way whether we are using the class name or an instance. We can treat the .class as a String and print it directly, without calling Class.getName().
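A minimal snippet illustrating this (the variable names are only illustrative) is:

def greeting = "Hello"
println greeting.class    // prints: class java.lang.String

def clazz = String        // classes are objects, so we can assign one to a variable
println clazz             // prints: class java.lang.String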
The variable greeting is declared with a dynamic type, but has the type java.lang.String after the "Hello" String is assigned to it. Classes are first-class objects in Groovy, so we can assign String itself to a variable. When we do this, the object that is assigned is of type java.lang.Class. However, it describes the String class itself, so printing it will report java.lang.String.

Groovy also provides shortcuts for accessing packages, methods, fields, and just about all other reflection details that we need from a class. We can access these straight off the class identifier, as follows:

println "Package for String class"
println " " + String.package
println "All methods of Object class:"
Object.methods.each { println " " + it }
println "All fields of Integer class:"
Integer.fields.each { println " " + it }

Incredibly, these six lines of code do all of the same work as the 30 lines in our Java example. If we look at the preceding code, it contains nothing that is more complicated than it needs to be. Referencing String.package to get the Java package of a class is as succinct as you can make it. As usual, String.methods and String.fields return Groovy collections, so we can apply a closure to each element with the each method. What's more, the Groovy version outputs a lot more useful detail about the package, methods, and fields.

When using an instance of an object, we can use the same shortcuts through the class field of the instance:

def greeting = "Hello"
assert greeting.class.package == String.package

Expandos

An Expando is a dynamic representation of a typical Groovy bean. Expandos support typical get and set style bean access, but in addition they will accept gets and sets to arbitrary properties. If we try to access a non-existent property, the Expando does not mind; instead of causing an exception, it will return null. If we set a non-existent property, the Expando will add that property and set the value. In order to create an Expando, we instantiate an object of class groovy.util.Expando.

def customer = new Expando()
assert customer.properties == [:]
assert customer.id == null
assert customer.properties == [:]

customer.id = 1001
customer.firstName = "Fred"
customer.surname = "Flintstone"
customer.street = "1 Rock Road"

assert customer.id == 1001
assert customer.properties == [
    id:1001, firstName:'Fred',
    surname:'Flintstone', street:'1 Rock Road']

customer.properties.each { println it }

The id field of customer is accessible on the Expando, as shown in the preceding example, even before it exists as a property of the bean. Once a property has been set, it can be accessed by using the normal field getter: for example, customer.id. Expandos are a useful extension to normal beans when we need to be able to dump arbitrary properties into a bag and we don't want to write a custom class to do so.

A neat trick with Expandos is what happens when we store a closure in a property. As we would expect, an Expando closure property is accessible in the same way as a normal property. However, because it is a closure, we can apply function-call syntax to it to invoke the closure. This has the effect of seeming to add a new method on the fly to the Expando.

customer.prettyPrint = {
    println "Customer has following properties"
    customer.properties.each {
        if (it.key != 'prettyPrint')
            println " " + it.key + ": " + it.value
    }
}

customer.prettyPrint()

Here we appear to be able to add a prettyPrint() method to the customer object, which outputs to the console:

Customer has following properties
 surname: Flintstone
 street: 1 Rock Road
 firstName: Fred
 id: 1001
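Since closure properties behave like methods, they can also declare parameters. As a brief sketch (the greet closure and the argument value are made up for illustration):

customer.greet = { name -> "Hello, " + name + "!" }
assert customer.greet("Barney") == "Hello, Barney!"

The Expando dispatches the method call to the stored closure, so callers cannot tell whether greet was compiled into a class or added at runtime.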

3rd International SOA Symposium

Packt
28 May 2010
2 min read
With 80 speaking sessions across 16 tracks, the International SOA and Cloud Symposium will feature the top experts from around the world. The conference will take place on October 5-6, 2010. There will also be a series of SOACP post-conference certification workshops running from October 7-13, 2010, including (for the first time in Europe) the Certified Cloud Computing Specialist Workshop.

The Agenda for 2010 is now online and contains internationally recognized speakers from organizations such as Microsoft, HP, IBM, Oracle, SAP, Amazon, Red Hat, Vordel, Layer7, TIBCO, Logica, SOA Systems, US Department of Defense, and CGI. International experts including Thomas Erl, Dirk Krafzig, Stefan Tilkov, Mark Little, Brian Loesgen, John deVadoss, Nicolai Josuttis, Tony Shan, Toufic Boubez, Paul C. Brown, Clemens Utschig, Satadru Roy, David Chou, and many more will provide new and exclusive coverage of the latest SOA and Cloud Computing topics and innovations.

Themes and Topics

Exploring Modern Service Technologies and Practices:
•    Service Governance & Scalability
•    Service Architecture & Service Engineering Innovation
•    SOA Case Studies & Strategic Planning
•    REST Service Design & RESTful SOA
•    Service Security & Policies
•    Semantic Services & Patterns
•    Service Modelling & BPM

Scaling Your Business Into the Cloud:
•    The Latest Cloud Computing Technology
•    Building and Working with Cloud-Based Services
•    Cloud Computing Business Strategies
•    Case Studies & Business Models
•    Understanding SOA & Cloud Computing
•    Cloud-based Infrastructure & Products
•    Semantic Web and the Cloud

For further information, see http://soasymposium.com/agenda2010.php.

Aside from the opportunity to network with 500 practitioners and 80 experts, the Symposium will feature the exclusive Pattern Review Committee, multiple book launches, and galleys. Following the success of previous years' International SOA Symposia in Amsterdam and Rotterdam, this edition will be held in Berlin. Every SOA Symposium and Cloud Symposium event is dedicated to providing valuable content specifically for IT practitioners.

Register by August 31st to receive the early bird discount of 10%, paying only €981 excl. VAT for the two conference days. Register now at www.soasymposium.com or www.cloudsymposium.com.