
How-To Tutorials - Application Development


Advanced Data Access Patterns

Packt
11 Aug 2015
25 min read
In this article by Suhas Chatekar, author of the book Learning NHibernate 4, we dig deeper into the downsides of the repository pattern and look at what can be done about them. In our attempt to address these downsides, we present two data access patterns, namely the specification pattern and the query object pattern. The specification pattern is a general-purpose pattern for effectively filtering in-memory data that has been adopted into the data access layer.

Before we begin, let me reiterate: the repository pattern is not a bad or wrong choice in every situation. If you are building a small and simple application involving a handful of entities, then the repository pattern can serve you well. But if you are building complex domain logic with intricate database interaction, then a repository may not do justice to your code. The patterns presented here can be used in both simple and complex applications, and if you feel that a repository is doing the job perfectly, then there is no need to move away from it.

Problems with repository pattern

A lot has been written all over the Internet about what is wrong with the repository pattern. A simple Google search will give you a lot of interesting articles to read and ponder. Here, we will spend some time trying to understand the problems introduced by the repository pattern.

Generalization

Consider a FindAll method that takes the name of an employee as input, along with some other parameters required for performing the search. When we started putting together a repository, we said that Repository<T> is a common repository class that can be used for any entity. But now FindAll takes a parameter that is only available on Employee, thus locking the implementation of FindAll to the Employee entity only. In order to keep the repository reusable by other entities, we would need to part ways with the common Repository<T> class and implement a more specific EmployeeRepository class with Employee-specific querying methods. This fixes the immediate problem but introduces another one. The new EmployeeRepository breaks the contract offered by IRepository<T>, as the FindAll method cannot be pushed onto the IRepository<T> interface. We would need to add a new interface, IEmployeeRepository. Do you notice where this is going? You would end up implementing a lot of repository classes with complex inheritance relationships between them. While this may seem to work, I have found that there are better ways of solving this problem.

Unclear and confusing contract

What happens if there is a need to query employees by a different criterion for a different business requirement? Say we now need to fetch a single Employee instance by its employee number. Even if we ignore the above issue and are ready to add a repository class per entity, we would need to add a method specific to fetching the Employee instance matching the employee number. This adds another dimension to the code maintenance problem. Imagine how many such methods we would end up adding for a complex domain, every time someone needs to query an entity using a new criterion. Several methods on the repository contract that query the same entity using different criteria make the contract less clear and confusing for new developers. Such a pattern also makes it difficult to reuse code, even if two methods are only slightly different from each other.
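To make both of these problems concrete, here is a minimal sketch of how the generic contract breaks down and then accumulates criterion-specific methods. This is not code from the book; the method names, paging parameters, and extra criterion methods are illustrative assumptions:

```csharp
using System.Collections.Generic;

// The shared contract we started with: usable with any entity.
public interface IRepository<T>
{
    void Save(T entity);
    T GetById(int id);
}

// FindAll needs Employee-specific inputs, so it cannot be declared on
// IRepository<T>. An entity-specific interface appears, and every new
// business criterion then adds yet another method to it.
public interface IEmployeeRepository : IRepository<Employee>
{
    IEnumerable<Employee> FindAll(string name, int pageSize, int pageNumber);
    Employee FindByEmployeeNumber(string employeeNumber);
    IEnumerable<Employee> FindAllLivingIn(string city);
}
```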
Leaky abstraction

In order to make repository methods reusable in different situations, a lot of developers tend to add a single method to the repository that does not take any input and returns an IQueryable<T> by calling ISession.Query<T> inside it, as shown next:

```csharp
public IQueryable<T> FindAll()
{
    return session.Query<T>();
}
```

The IQueryable<T> returned by this method can then be used to construct any query you want outside of the repository. This is a classic case of leaky abstraction. The repository is supposed to abstract away any concerns around querying the database, but what we are doing here is returning an IQueryable<T> to the consuming code and asking it to build the queries, thus leaking the abstraction that is supposed to be hidden inside the repository. The IQueryable<T> returned by the preceding method holds the instance of ISession that is ultimately used to interact with the database. Since the repository has no control over how and when this IQueryable triggers the database interaction, you might get into trouble. If you are using a "session per request" kind of pattern then you are safeguarded against this, but if you are not using that pattern for any reason, then you need to watch out for errors due to closed or disposed session objects.

God object anti-pattern

A god object is an object that does too many things. Sometimes there is a single class in an application that does everything. Such an implementation is almost always bad, as it majorly breaks the famous single responsibility principle (SRP) and reduces the testability and maintainability of the code. A lot can be written about SRP and the god object anti-pattern, but since it is not our primary topic, I will leave it after underscoring the importance of staying away from this anti-pattern. Avid readers can Google the topic if they are interested. Repositories by nature tend to become the single point of database interaction. Any new database interaction goes through the repository. Over time, repositories grow organically, with a large number of methods doing too many things. You may spot the anti-pattern and decide to break the repository into multiple small repositories, but the original single repository would be tightly integrated with your code in so many places that splitting it would be a difficult job.

For a contained and trivial domain model, the repository pattern can be a good choice, so do not abandon repositories entirely. It is around a complex and changing domain that repositories start exhibiting the problems just discussed. You might still argue that a repository is an unneeded abstraction and that we could very well use NHibernate directly for a trivial domain model. But I would caution against any design that uses NHibernate directly from the domain or domain services layer. No matter what design I use for data access, I always adhere to the "explicitly declare capabilities required" principle. The abstraction that offers the required capability can be a repository interface or one of the other abstractions we are about to learn.

Specification pattern

The specification pattern is a reusable and object-oriented way of applying business rules to domain entities. The primary use of the specification pattern is to select a subset of entities from a larger collection of entities based on some rules. An important characteristic of the specification pattern is the ability to combine multiple rules by chaining them together. The specification pattern was in existence before ORMs and other data access patterns had set foot in the development community.
The original form of the specification pattern dealt with in-memory collections of entities. The pattern was then adapted to work with ORMs such as NHibernate, as people started seeing the benefits it could bring. We will first discuss the specification pattern in its original form, which will give us a good understanding of the pattern. We will then modify the implementation to make it fit with NHibernate.

Specification pattern in its original form

Let's look into an example of the specification pattern in its original form. A specification defines a rule that must be satisfied by domain objects. This can be generalized using an interface definition, as follows:

```csharp
public interface ISpecification<T>
{
    bool IsSatisfiedBy(T entity);
}
```

ISpecification<T> defines a single method, IsSatisfiedBy. This method takes an entity instance of type T as input and returns a Boolean value depending on whether the entity satisfies the rule or not. If we were to write a rule for employees living in London, then we could implement a specification as follows:

```csharp
public class EmployeesLivingIn : ISpecification<Employee>
{
    public bool IsSatisfiedBy(Employee entity)
    {
        return entity.ResidentialAddress.City == "London";
    }
}
```

The EmployeesLivingIn class implements ISpecification<Employee>, telling us that this is a specification for the Employee entity. This specification compares the city from the employee's ResidentialAddress property with the literal string "London" and returns true if it matches. You may be wondering why I have named this class EmployeesLivingIn. Well, I had some refactoring in mind and I wanted to make my final code read nicely. Let's see what I mean.

We have hardcoded the literal string "London" in the preceding specification. This effectively stops this class from being reusable. What if we need a specification for all employees living in Paris? The ideal thing to do would be to accept "London" as a parameter during instantiation of this class and then use that parameter value in the implementation of the IsSatisfiedBy method. The following code listing shows the modified code:

```csharp
public class EmployeesLivingIn : ISpecification<Employee>
{
    private readonly string city;

    public EmployeesLivingIn(string city)
    {
        this.city = city;
    }

    public bool IsSatisfiedBy(Employee entity)
    {
        return entity.ResidentialAddress.City == city;
    }
}
```

This looks good, without any hardcoded string literals. Now if I wanted my original specification for employees living in London, then the following is how I could build it:

```csharp
var specification = new EmployeesLivingIn("London");
```

Did you notice how the preceding code reads in plain English because of the way the class is named? Now, let's see how to use this specification class. The usual scenario where specifications are used is when you have a list of entities you are working with and you want to run a rule and find out which of the entities in the list satisfy that rule. The following code listing shows a very simple use of the specification we just implemented:

```csharp
List<Employee> employees = null; // Loaded from somewhere
List<Employee> employeesLivingInLondon = new List<Employee>();
var specification = new EmployeesLivingIn("London");

foreach (var employee in employees)
{
    if (specification.IsSatisfiedBy(employee))
    {
        employeesLivingInLondon.Add(employee);
    }
}
```

We have a list of employees loaded from somewhere, and we want to filter this list to get another list comprising the employees living in London.
Up to this point, the only benefit we have had from the specification pattern is that we have managed to encapsulate the rule in a specification class that can be reused anywhere. For complex rules, this can be very useful. But for simple rules, the specification pattern may look like a lot of plumbing code, until we consider the composability of specifications. Most of the power of the specification pattern comes from the ability to chain multiple rules together to form a complex rule. Let's write another specification, for employees who have opted for any benefit:

```csharp
public class EmployeesHavingOptedForBenefits : ISpecification<Employee>
{
    public bool IsSatisfiedBy(Employee entity)
    {
        return entity.Benefits.Count > 0;
    }
}
```

In this rule, there is no need to supply any literal value from outside, so the implementation is quite simple. We just check whether the Benefits collection on the passed employee instance has a count greater than zero. You can use this specification in exactly the same way as the earlier one. Now, if there is a need to apply both of these specifications to an employee collection, very little modification to our code is needed. Let's start by adding an And method to the ISpecification<T> interface, as shown next:

```csharp
public interface ISpecification<T>
{
    bool IsSatisfiedBy(T entity);
    ISpecification<T> And(ISpecification<T> specification);
}
```

The And method accepts an instance of ISpecification<T> and returns another instance of the same type. As you may have guessed, the specification returned from the And method effectively performs a logical AND between the specification on which And is invoked and the specification passed into it. The actual implementation of the And method comes down to calling the IsSatisfiedBy method on both specification objects and logically ANDing their results. Since this logic does not change from specification to specification, we can introduce a base class that implements it. All specification implementations can then derive from this new base class. Following is the code for the base class:

```csharp
public abstract class Specification<T> : ISpecification<T>
{
    public abstract bool IsSatisfiedBy(T entity);

    public ISpecification<T> And(ISpecification<T> specification)
    {
        return new AndSpecification<T>(this, specification);
    }
}
```

We have marked Specification<T> as abstract because this class does not represent any meaningful business specification, and hence we do not want anyone to inadvertently use it directly. Accordingly, the IsSatisfiedBy method is marked abstract as well. In the implementation of the And method, we instantiate a new class, AndSpecification. This class takes two specification objects as inputs: we pass the current instance and the one passed to the And method. The definition of AndSpecification is very simple:

```csharp
public class AndSpecification<T> : Specification<T>
{
    private readonly ISpecification<T> specification1;
    private readonly ISpecification<T> specification2;

    public AndSpecification(ISpecification<T> specification1,
        ISpecification<T> specification2)
    {
        this.specification1 = specification1;
        this.specification2 = specification2;
    }

    public override bool IsSatisfiedBy(T entity)
    {
        return specification1.IsSatisfiedBy(entity) &&
               specification2.IsSatisfiedBy(entity);
    }
}
```

AndSpecification<T> inherits from the abstract class Specification<T>, which is obvious.
Its IsSatisfiedBy simply performs a logical AND on the outputs of the IsSatisfiedBy method of each of the specification objects passed into AndSpecification<T>. After we change our previous two business specifications to inherit from the abstract class Specification<T> instead of implementing the interface ISpecification<T> directly, the following is how we can chain two specifications using the And method we just introduced:

```csharp
List<Employee> employees = null; // Loaded from somewhere
List<Employee> employeesLivingInLondon = new List<Employee>();
var specification = new EmployeesLivingIn("London")
                        .And(new EmployeesHavingOptedForBenefits());

foreach (var employee in employees)
{
    if (specification.IsSatisfiedBy(employee))
    {
        employeesLivingInLondon.Add(employee);
    }
}
```

Literally nothing has changed in how the specification is used in the business logic. The only thing that has changed is the construction and chaining together of the two specifications at the top of the listing. We could go on and implement other chaining methods, but the point to take home here is the composability that the specification pattern offers. Now let's look into how the specification pattern sits beside NHibernate and helps in fixing some of the pain points of the repository pattern.

Specification pattern for NHibernate

The fundamental difference between the original specification pattern and the pattern applied to NHibernate is that in the former case we had an in-memory list of objects to work with. With NHibernate, we do not have the list of objects in memory. The list is in the database, and we want to be able to specify rules that can be used to generate the appropriate SQL to fetch the records that satisfy the rule. Owing to this difference, we cannot use the original specification pattern as-is when working with NHibernate. Let me show you what this means when it comes to writing code that makes use of the specification pattern. A query, in its most basic form, to retrieve all employees living in London would look something like the following:

```csharp
var employees = session.Query<Employee>()
                       .Where(e => e.ResidentialAddress.City == "London");
```

The lambda expression passed to the Where method is our rule. We want all the Employee instances from the database that satisfy this rule, and we want to be able to push this rule behind some kind of abstraction such as ISpecification<T> so that it can be reused. We need a method on ISpecification<T> that does not take any input (there are no in-memory entities to pass) and returns a lambda expression that can be passed into the Where method. Following is how that method could look:

```csharp
public interface ISpecification<T> where T : EntityBase<T>
{
    Expression<Func<T, bool>> IsSatisfied();
}
```

Note the differences from the previous version. We have changed the method name from IsSatisfiedBy to IsSatisfied, as no entity is passed into this method that would warrant the word "By" at the end. This method returns an Expression<Func<T, bool>>. If you have dealt with situations where you pass lambda expressions around, then you know what this type means. If you are new to expression trees, let me give you a brief explanation. Func<T, bool> is a delegate type: it points to a function that takes an instance of type T as input and returns a Boolean. Expression<Func<T, bool>> represents such a lambda as an expression tree, that is, as data describing the lambda rather than compiled code.
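As a quick illustration of that difference (a sketch, not code from the book; Employee and ResidentialAddress are the domain types used throughout this article):

```csharp
using System;
using System.Linq.Expressions;

public static class FuncVersusExpressionDemo
{
    public static void Demo()
    {
        // A compiled delegate: it can be executed, but its body is opaque.
        Func<Employee, bool> compiled =
            e => e.ResidentialAddress.City == "London";

        // The same lambda captured as an expression tree: data that a LINQ
        // provider such as NHibernate can inspect and translate into SQL.
        Expression<Func<Employee, bool>> tree =
            e => e.ResidentialAddress.City == "London";

        // An expression tree can be compiled back into a delegate when an
        // in-memory check is needed:
        Func<Employee, bool> fromTree = tree.Compile();
    }
}
```

This data-versus-code distinction is exactly why the NHibernate version of the interface returns an expression instead of taking an entity.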
An implementation of this new interface will make things clearer. The next code listing shows the specification for employees living in London, written against the new contract:

```csharp
public class EmployeesLivingIn : ISpecification<Employee>
{
    private readonly string city;

    public EmployeesLivingIn(string city)
    {
        this.city = city;
    }

    public Expression<Func<Employee, bool>> IsSatisfied()
    {
        return e => e.ResidentialAddress.City == city;
    }
}
```

Not much has changed here compared to the previous implementation. The definition of IsSatisfied now returns a lambda expression instead of a bool. This lambda is exactly the same as the one we used in the ISession example. If I had to rewrite that example using the preceding specification, the following is how it would look:

```csharp
var specification = new EmployeesLivingIn("London");
var employees = session.Query<Employee>()
                       .Where(specification.IsSatisfied());
```

We now have the specification wrapped in a reusable object that we can send straight to NHibernate's ISession interface. Now let's think about how we can use this from within the domain services where we used repositories before. We do not want to reference ISession or any other NHibernate type from domain services, as that would break the onion architecture. We have two options. We can declare a new capability that takes a specification and executes it against the ISession interface, and make the domain service classes take a dependency on this new capability. Or we can use the existing IRepository capability and add a method to it that takes a specification and executes it.

We started this article with a statement that repositories have a downside, specifically when it comes to querying entities using different criteria. But now we are considering an option to enrich the repositories with specifications. Is that contradictory? Remember that one of the problems with the repository was that every time there was a new criterion to query an entity by, we needed a new method on the repository. The specification pattern fixes that problem: it takes the criterion out of the repository and moves it into its own class, so we only ever need a single method on the repository that takes an ISpecification<T> and executes it. So using a repository here is not as bad as it sounds. Following is how the new method on the repository interface would look:

```csharp
public interface IRepository<T> where T : EntityBase<T>
{
    void Save(T entity);
    void Update(int id, T entity);
    T GetById(int id);
    IEnumerable<T> Apply(ISpecification<T> specification);
}
```

The new Apply method is the one that works with specifications. Note that we have removed all the other methods that ran various queries and replaced them with this single method. The methods to save and update entities are still there. Even the GetById method remains, as the mechanism used to get an entity by ID is not the same as the one used by specifications, so we retain it.

One thing I have experimented with on some projects is splitting read operations from write operations. The IRepository interface represents something that is capable of both reading from and writing to the database. Sometimes we only need the capability to read from the database, in which case IRepository looks like an unnecessarily heavy object with capabilities we do not need. In such a situation, declaring a new capability to execute specifications makes more sense. I will leave the actual code for this as a self-exercise for our readers.
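As a starting point for that exercise, the split might look like the following sketch. The interface names are my own; only the members already shown in this article are assumed:

```csharp
using System.Collections.Generic;

// Read capability: all that query-only domain services need to declare.
public interface IReadOnlyRepository<T> where T : EntityBase<T>
{
    T GetById(int id);
    IEnumerable<T> Apply(ISpecification<T> specification);
}

// Write capability, declared separately so that services which only
// read never depend on the ability to save or update.
public interface IWriteRepository<T> where T : EntityBase<T>
{
    void Save(T entity);
    void Update(int id, T entity);
}
```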
Specification chaining

In the original implementation of the specification pattern, chaining was simply a matter of carrying out a logical AND between the outputs of the IsSatisfiedBy methods of the specification objects involved. In the NHibernate-adapted version of the pattern, the end result boils down to the same thing, but the actual implementation is slightly more complex than just ANDing the results. As with the original pattern, we need an abstract base class Specification<T> and a specialized AndSpecification<T> class; I will skip those details. Let's go straight to the implementation of the IsSatisfied method on AndSpecification, where the actual logical ANDing happens:

```csharp
public override Expression<Func<T, bool>> IsSatisfied()
{
    var parameter = Expression.Parameter(typeof(T), "arg1");
    return Expression.Lambda<Func<T, bool>>(Expression.AndAlso(
        Expression.Invoke(specification1.IsSatisfied(), parameter),
        Expression.Invoke(specification2.IsSatisfied(), parameter)),
        parameter);
}
```

Logically ANDing two lambda expressions is not a straightforward operation. We need to make use of the static methods available on the helper class System.Linq.Expressions.Expression. Let's go from the inside out; that way it is easier to understand what is happening here. Following is the innermost call to the Expression class:

```csharp
Expression.Invoke(specification1.IsSatisfied(), parameter)
```

In the preceding code, we call the Invoke method on the Expression class, passing the output of the IsSatisfied method of the first specification. The second argument is a temporary parameter of type T that we created to satisfy the method signature of Invoke. The Invoke method returns an InvocationExpression, which represents the invocation of the lambda expression that was used to construct it. Note that the actual lambda expression is not invoked yet. We do the same with the second specification. The outputs of both these operations are then passed into another method on the Expression class, as follows:

```csharp
Expression.AndAlso(
    Expression.Invoke(specification1.IsSatisfied(), parameter),
    Expression.Invoke(specification2.IsSatisfied(), parameter))
```

Expression.AndAlso takes the output from both specification objects, in the form of the InvocationExpression type, and builds a special type called BinaryExpression, which represents a logical AND between the two expressions passed to it. Next, we convert this BinaryExpression into an Expression<Func<T, bool>> by passing it to the Expression.Lambda<Func<T, bool>> method.

This explanation is not very easy to follow, and if you have never built or modified lambda expressions programmatically like this before, you may find it hard going. In that case, I recommend not bothering yourself too much with it. The following code snippet shows how logical ORing of two specifications can be implemented; note that it only shows the implementation of the IsSatisfied method:

```csharp
public override Expression<Func<T, bool>> IsSatisfied()
{
    var parameter = Expression.Parameter(typeof(T), "arg1");
    return Expression.Lambda<Func<T, bool>>(Expression.OrElse(
        Expression.Invoke(specification1.IsSatisfied(), parameter),
        Expression.Invoke(specification2.IsSatisfied(), parameter)),
        parameter);
}
```

The rest of the infrastructure around chaining is exactly the same as that presented during the discussion of the original specification pattern.
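Putting the pieces together, usage stays as readable as the in-memory version. The following is a sketch that assumes the two earlier specifications have been reworked to derive from the NHibernate-adapted Specification<T> base class, and that a repository exposing the Apply method from the previous section is available:

```csharp
// Chain two NHibernate-ready specifications into one rule.
var specification = new EmployeesLivingIn("London")
    .And(new EmployeesHavingOptedForBenefits());

// The repository hands the combined expression to ISession, which
// translates it into a single SQL query; the domain code never
// touches NHibernate types directly.
IEnumerable<Employee> employees = repository.Apply(specification);
```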
I have avoided giving full class definitions here to save space, but you can download the code to look at the complete implementation. That brings us to the end of the specification pattern. Though the specification pattern is a great leap forward from where the repository left us, it does have some limitations of its own. Next, we will look at what these limitations are.

Limitations

The specification pattern is great and, unlike the repository pattern, I am not going to tell you that it has downsides you should try to avoid. You should absolutely use it wherever it fits. I would only like to highlight two limitations.

The specification pattern only works with lambda expressions; you cannot use LINQ query syntax. There may be times when you would prefer LINQ syntax over lambda expressions. One such situation is when you want theta joins, which are not possible with lambda expressions. Another is when lambda expressions do not generate optimal SQL. Let me show you a quick example to make this clearer. Suppose we want to write a specification for employees who have opted for the season ticket loan benefit. The following code listing shows how that specification could be written:

```csharp
public class EmployeeHavingTakenSeasonTicketLoanSpecification
    : Specification<Employee>
{
    public override Expression<Func<Employee, bool>> IsSatisfied()
    {
        return e => e.Benefits.Any(b => b is SeasonTicketLoan);
    }
}
```

It is a very simple specification. Note the use of Any to iterate over the Benefits collection and check whether any Benefit in the collection is of type SeasonTicketLoan. The following SQL is generated when the preceding specification is run:

```sql
SELECT employee0_.Id            AS Id0_,
       employee0_.Firstname     AS Firstname0_,
       employee0_.Lastname      AS Lastname0_,
       employee0_.EmailAddress  AS EmailAdd5_0_,
       employee0_.DateOfBirth   AS DateOfBi6_0_,
       employee0_.DateOfJoining AS DateOfJo7_0_,
       employee0_.IsAdmin       AS IsAdmin0_,
       employee0_.Password      AS Password0_
FROM   Employee employee0_
WHERE  EXISTS (SELECT benefits1_.Id
               FROM   Benefit benefits1_
                      LEFT OUTER JOIN Leave benefits1_1_
                                   ON benefits1_.Id = benefits1_1_.Id
                      LEFT OUTER JOIN SkillsEnhancementAllowance benefits1_2_
                                   ON benefits1_.Id = benefits1_2_.Id
                      LEFT OUTER JOIN SeasonTicketLoan benefits1_3_
                                   ON benefits1_.Id = benefits1_3_.Id
               WHERE  employee0_.Id = benefits1_.Employee_Id
                      AND CASE WHEN benefits1_1_.Id IS NOT NULL THEN 1
                               WHEN benefits1_2_.Id IS NOT NULL THEN 2
                               WHEN benefits1_3_.Id IS NOT NULL THEN 3
                               WHEN benefits1_.Id IS NOT NULL THEN 0
                          END = 3)
```

Isn't that SQL complex? It is not only hard on the eyes; it is also not how I would have written the needed SQL in the absence of NHibernate. I would have just inner-joined the Employee, Benefit, and SeasonTicketLoan tables to get the records I need. On large databases, the preceding query may be too slow. There are other situations where queries written using lambda expressions tend to generate complex or less-than-optimal SQL. If we use LINQ query syntax instead of lambda expressions, we can get NHibernate to generate exactly the SQL we need. Unfortunately, there is no way of fixing this within the specification pattern.
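For illustration, here is roughly how the same query could be written with LINQ query syntax directly against ISession, using an explicit join instead of the EXISTS subquery. This is a sketch, not code from the book: the Employee association on SeasonTicketLoan and its Id property are assumptions about the mappings:

```csharp
// An explicit join against the subclass table; Distinct() guards
// against duplicate employees if one holds several such benefits.
var employees = (from e in session.Query<Employee>()
                 join b in session.Query<SeasonTicketLoan>()
                     on e.Id equals b.Employee.Id
                 select e)
                .Distinct()
                .ToList();
```

A query like this cannot be wrapped in an ISpecification<T>, because the contract only lets us return a single Expression<Func<T, bool>> for a Where clause.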
Summary

The repository pattern has been around for a long time, but it suffers from some issues. The general nature of its implementation gets in the way of extending the pattern to complex domain models involving a large number of entities. The repository contract can be limiting and confusing when there is a need to write complex and very specific queries. Trying to fix these issues with repositories may result in a leaky abstraction that can bite us later. Moreover, repositories maintained with less care have a tendency to grow into god objects, and maintaining them beyond that point becomes a challenge. The specification pattern and the query object pattern solve these issues on the read side of things.

Different applications have different data access requirements. Some applications are write-heavy, while others are read-heavy, and only a small number of applications fall into the former category. A large number of applications developed these days are read-heavy. I have worked on applications in which more than 90 percent of the database operations queried data and less than 10 percent actually inserted or updated data. Having this knowledge about the application you are developing can be very useful in determining how you are going to design your data access layer. That brings us to the end of our NHibernate journey. Not quite, but yes, in a way.

Further resources on this subject:
- NHibernate 3: Creating a Sample Application [article]
- NHibernate 3.0: Using LINQ Specifications in the data access layer [article]
- NHibernate 2: Mapping relationships and Fluent Mapping [article]


New Features in JPA 2.0

Packt
28 Jul 2010
9 min read
Version 2.0 of the JPA specification introduces some new features that make working with JPA even easier. In the following sections, we discuss some of these new features.

Criteria API

One of the main additions to JPA in the 2.0 specification is the introduction of the Criteria API. The Criteria API is meant as a complement to the Java Persistence Query Language (JPQL). Although JPQL is very flexible, it has some problems that make working with it more difficult than necessary. For starters, JPQL queries are stored as strings, and the compiler has no way of validating JPQL syntax. Additionally, JPQL is not type safe: we could write a JPQL query in which our where clause has a string value for a numeric property, and our code would compile and deploy just fine.

To get around these limitations, the Criteria API was introduced in version 2.0 of the specification. The Criteria API allows us to write JPA queries programmatically, without having to rely on JPQL. The following code example illustrates how to use the Criteria API in our Java EE 6 applications:

```java
package net.ensode.glassfishbook.criteriaapi;

import java.io.IOException;
import java.io.PrintWriter;
import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;
import javax.persistence.TypedQuery;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Path;
import javax.persistence.criteria.Predicate;
import javax.persistence.criteria.Root;
import javax.persistence.metamodel.EntityType;
import javax.persistence.metamodel.Metamodel;
import javax.persistence.metamodel.SingularAttribute;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = {"/criteriaapi"})
public class CriteriaApiDemoServlet extends HttpServlet
{
    @PersistenceUnit(unitName = "customerPersistenceUnit")
    private EntityManagerFactory entityManagerFactory;

    @Override
    protected void doGet(HttpServletRequest request,
            HttpServletResponse response) throws ServletException, IOException
    {
        PrintWriter printWriter = response.getWriter();
        List<UsState> matchingStatesList;

        EntityManager entityManager =
                entityManagerFactory.createEntityManager();
        CriteriaBuilder criteriaBuilder =
                entityManager.getCriteriaBuilder();
        CriteriaQuery<UsState> criteriaQuery =
                criteriaBuilder.createQuery(UsState.class);
        Root<UsState> root = criteriaQuery.from(UsState.class);

        Metamodel metamodel = entityManagerFactory.getMetamodel();
        EntityType<UsState> usStateEntityType =
                metamodel.entity(UsState.class);
        SingularAttribute<UsState, String> usStateAttribute =
                usStateEntityType.getDeclaredSingularAttribute(
                        "usStateNm", String.class);
        Path<String> path = root.get(usStateAttribute);
        Predicate predicate = criteriaBuilder.like(path, "New%");

        criteriaQuery = criteriaQuery.where(predicate);
        TypedQuery<UsState> typedQuery =
                entityManager.createQuery(criteriaQuery);
        matchingStatesList = typedQuery.getResultList();

        response.setContentType("text/html");
        printWriter.println("The following states match the criteria:<br/>");

        for (UsState state : matchingStatesList)
        {
            printWriter.println(state.getUsStateNm() + "<br/>");
        }
    }
}
```

This example takes advantage of the Criteria API.
When writing code using the Criteria API, the first thing we need to do is obtain an instance of a class implementing the javax.persistence.criteria.CriteriaBuilder interface. As we can see in the previous example, we obtain this instance by invoking the getCriteriaBuilder() method on our EntityManager.

From our CriteriaBuilder implementation, we need to obtain an instance of a class implementing the javax.persistence.criteria.CriteriaQuery interface. We do this by invoking the createQuery() method on our CriteriaBuilder implementation. Notice that CriteriaQuery is generically typed. The generic type argument dictates the type of result that our CriteriaQuery implementation will return upon execution. By taking advantage of generics in this way, the Criteria API allows us to write type safe code.

Once we have obtained a CriteriaQuery implementation, we can obtain from it an instance of a class implementing the javax.persistence.criteria.Root interface. The Root implementation dictates which JPA entity we will be querying. It is analogous to the FROM clause in JPQL (and SQL).

The next two lines in our example take advantage of another new addition to the JPA specification: the Metamodel API. In order to take advantage of the Metamodel API, we need to obtain an implementation of the javax.persistence.metamodel.Metamodel interface by invoking the getMetamodel() method on our EntityManagerFactory.

From our Metamodel implementation, we can obtain a generically typed instance of the javax.persistence.metamodel.EntityType interface. The generic type argument indicates the JPA entity our EntityType implementation corresponds to. EntityType allows us to browse the persistent attributes of our JPA entities at runtime. This is exactly what we do in the next line of the example. In our case, we get an instance of SingularAttribute, which maps to a simple, singular attribute in our JPA entity. EntityType has methods to obtain attributes that map to collections, sets, lists, and maps. Obtaining these types of attributes is very similar to obtaining a SingularAttribute, so we won't cover them directly; refer to the Java EE 6 API documentation at http://java.sun.com/javaee/6/docs/api/ for more information.

As we can see in our example, SingularAttribute takes two generic type arguments. The first dictates the JPA entity we are working with, and the second indicates the type of the attribute. We obtain our SingularAttribute by invoking the getDeclaredSingularAttribute() method on our EntityType implementation, passing the attribute name (as declared in our JPA entity) as a String.

Once we have obtained our SingularAttribute implementation, we need to obtain an implementation of javax.persistence.criteria.Path by invoking the get() method on our Root instance and passing our SingularAttribute as a parameter.

In our example, we get a list of all the "new" states in the United States (that is, all states whose names start with "New"). Of course, this is the job of a "like" condition. We can do this with the Criteria API by invoking the like() method on our CriteriaBuilder implementation. The like() method takes our Path implementation as its first parameter and the value to search for as its second parameter.
CriteriaBuilder has a number of methods analogous to SQL and JPQL clauses, such as equal(), greaterThan(), lessThan(), and(), or(), and so on (for the complete list, refer to the Java EE 6 documentation at http://java.sun.com/javaee/6/docs/api/). These methods can be combined to create complex queries via the Criteria API.

The like() method in CriteriaBuilder returns an implementation of the javax.persistence.criteria.Predicate interface, which we need to pass to the where() method of our CriteriaQuery implementation. This method returns a new instance of CriteriaQuery, which we assign back to our criteriaQuery variable.

At this point, we are ready to build our query. When working with the Criteria API, we deal with the javax.persistence.TypedQuery interface, which can be thought of as a type-safe version of the Query interface we use with JPQL. We obtain an instance of TypedQuery by invoking the createQuery() method on the EntityManager and passing our CriteriaQuery implementation as a parameter. To obtain our query results as a list, we simply invoke getResultList() on our TypedQuery implementation. It is worth reiterating that the Criteria API is type safe; attempting to assign the results of getResultList() to a list of the wrong type would result in a compilation error. After building, packaging, and deploying our code, then pointing the browser to our servlet's URL, we should see all the "New" states displayed in the browser.

Bean Validation support

Another new feature introduced in JPA 2.0 is support for JSR 303, Bean Validation. Bean Validation support allows us to annotate our JPA entities with Bean Validation annotations. These annotations allow us to easily validate user input and perform data sanitation. Taking advantage of Bean Validation is very simple: all we need to do is annotate our JPA entity fields or getter methods with any of the validation annotations defined in the javax.validation.constraints package. Once our fields are annotated appropriately, the EntityManager will prevent non-validating data from being persisted. The following code example is a modified version of the Customer JPA entity, changed to take advantage of Bean Validation in some of its fields:

```java
package net.ensode.glassfishbook.jpa.beanvalidation;

import java.io.Serializable;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

@Entity
@Table(name = "CUSTOMERS")
public class Customer implements Serializable
{
    @Id
    @Column(name = "CUSTOMER_ID")
    private Long customerId;

    @Column(name = "FIRST_NAME")
    @NotNull
    @Size(min = 2, max = 20)
    private String firstName;

    @Column(name = "LAST_NAME")
    @NotNull
    @Size(min = 2, max = 20)
    private String lastName;

    private String email;

    public Long getCustomerId()
    {
        return customerId;
    }

    public void setCustomerId(Long customerId)
    {
        this.customerId = customerId;
    }

    public String getEmail()
    {
        return email;
    }

    public void setEmail(String email)
    {
        this.email = email;
    }

    public String getFirstName()
    {
        return firstName;
    }

    public void setFirstName(String firstName)
    {
        this.firstName = firstName;
    }

    public String getLastName()
    {
        return lastName;
    }

    public void setLastName(String lastName)
    {
        this.lastName = lastName;
    }
}
```

In this example, we used the @NotNull annotation to prevent the firstName and lastName fields of our entity from being persisted with null values.
We also used the @Size annotation to restrict the minimum and maximum length of these fields. This is all we need to do to take advantage of Bean Validation in JPA. If our code attempts to persist or update an instance of our entity that does not pass the declared validation, an exception of type javax.validation.ConstraintViolationException is thrown and the entity is not persisted. As we can see, Bean Validation pretty much automates data validation, freeing us from having to write validation code by hand. In addition to the two annotations discussed in the previous example, the javax.validation.constraints package contains several additional annotations we can use to automate validation on our JPA entities. Refer to the Java EE 6 API documentation at http://java.sun.com/javaee/6/docs/api/ for the complete list.

Summary

In this article, we discussed some new JPA 2.0 features, such as the Criteria API, which allows us to build JPA queries programmatically; the Metamodel API, which allows us to take advantage of Java's type safety when working with JPA; and Bean Validation, which allows us to easily validate input by simply annotating our JPA entity fields.

Further resources on this subject:
- Interacting with Databases through the Java Persistence API [article]
- Setting up GlassFish for JMS and Working with Message Queues [article]


The Grails Object Relational Mapping (GORM)

Packt
08 Jun 2010
5 min read
The Grails framework is an open source web application framework built for the Groovy language. Grails not only leverages Hibernate under the covers as its persistence layer, but also implements its own Object Relational Mapping layer for Groovy, known as GORM. With GORM, we can take a POGO class and decorate it with DSL-like settings in order to control how it is persisted. Grails programmers use GORM classes as a mini language for describing the persistent objects in their application. In this section, we will do a whistle-stop tour of the features of Grails. This won't be a tutorial on building Grails applications, as the subject is too big to be covered here. Our main focus will be on how GORM implements its object model in the domain classes.

Grails quick start

Before we proceed, we need to install Grails and get a basic app up and running. The Grails download and installation instructions can be found at http://www.grails.org/Installation. Once it has been installed, and with the Grails binaries in your path, navigate to a workspace directory and issue the following command:

```
grails create-app GroovyDSL
```

This builds a Grails application tree called GroovyDSL under your current workspace directory. If we now navigate to this directory, we can launch the Grails app. By default, the app will display a welcome page at http://localhost:8080/GroovyDSL/.

```
cd GroovyDSL
grails run-app
```

The grails-app directory

The GroovyDSL application that we built earlier has a grails-app subdirectory, which is where the application source files for our application will reside. We only need to concern ourselves with the grails-app/domain directory for this discussion, but it's worth understanding a little about some of the other important directories:

- grails-app/conf: This is where the Grails configuration files reside.
- grails-app/controllers: Grails uses a Model View Controller (MVC) architecture. The controllers directory contains the Groovy controller code for our UIs.
- grails-app/domain: This is where Grails stores the GORM model classes of the application.
- grails-app/views: This is where the Groovy Server Pages (GSPs), the Grails equivalent of JSPs, are stored.

Grails has a number of shortcut commands that allow us to quickly build out the objects for our model. As we progress through this section, we will take a look back at these directories to see what files have been generated in them for us. You might like to dig deeper into both GORM and Grails yourself; you can find further online documentation for GORM at http://www.grails.org/GORM.

DataSource configuration

Out of the box, Grails is configured to use an embedded HSQL in-memory database. This is useful as a means of getting up and running quickly, and all of the example code will work perfectly well with the default configuration. Having an in-memory database is helpful for testing, because we always start with a clean slate. However, for the purposes of this section, it's also useful for us to have a proper database instance to peek into, in order to see how GORM maps Groovy objects into tables. We will configure our Grails application to persist to a MySQL database instance. Grails allows us to have separate configuration environments for development, testing, and production.
We will configure our development environment to point to a MySQL instance, but we can leave the production and testing environments as they are. First of all, we need to create a database by using the mysqladmin command. This command creates a database called groovydsl, owned by the MySQL root user:

```
mysqladmin -u root create groovydsl
```

Database configuration in Grails is done by editing the DataSource.groovy source file in grails-app/conf. We are interested in the environments section of this file:

```groovy
environments {
    development {
        dataSource {
            dbCreate = "create-drop"
            url = "jdbc:mysql://localhost/groovydsl"
            driverClassName = "com.mysql.jdbc.Driver"
            username = "root"
            password = ""
        }
    }
    test {
        dataSource {
            dbCreate = "create-drop"
            url = "jdbc:hsqldb:mem:testDb"
        }
    }
    production {
        dataSource {
            dbCreate = "update"
            url = "jdbc:hsqldb:mem:testDb"
        }
    }
}
```

The first interesting thing to note is that this is a mini Groovy DSL for describing data sources. In the preceding listing, we have edited the development dataSource entry to point to the MySQL groovydsl database that we created. In early versions of Grails, there were three separate DataSource files that needed to be configured for each environment, for example DevelopmentDataSource.groovy. The equivalent DevelopmentDataSource.groovy file would be as follows:

```groovy
class DevelopmentDataSource {
    boolean pooling = true
    String dbCreate = "create-drop"
    String url = "jdbc:mysql://localhost/groovydsl"
    String driverClassName = "com.mysql.jdbc.Driver"
    String username = "root"
    String password = ""
}
```

The dbCreate field tells GORM what it should do with tables in the database on startup. Setting this to create-drop tells GORM to drop a table if it already exists, and create a new table, each time it runs. This keeps the database tables in sync with our GORM objects. You can also set dbCreate to update or create.

DataSource.groovy is a handy little DSL for configuring the GORM database connections. Grails uses a utility class, groovy.util.ConfigSlurper, for this DSL. The ConfigSlurper class allows us to easily parse a structured configuration file and convert it into a java.util.Properties object if we wish. Alternatively, we can navigate the ConfigObject it returns by using dot notation. We can use the ConfigSlurper to open and navigate DataSource.groovy, as shown in the next code snippet. ConfigSlurper has a built-in ability to partition the configuration by environment: if we construct the ConfigSlurper for a particular environment, it will only load the settings appropriate to that environment.

```groovy
def development = new ConfigSlurper("development")
        .parse(new File('DataSource.groovy').toURL())
def production = new ConfigSlurper("production")
        .parse(new File('DataSource.groovy').toURL())

assert development.dataSource.dbCreate == "create-drop"
assert production.dataSource.dbCreate == "update"

def props = development.toProperties()
assert props["dataSource.dbCreate"] == "create-drop"
```


Hibernate Types

Packt
27 Nov 2009
3 min read
Hibernate allows transparent persistence, which means the application is completely isolated from the underlying database storage format. Three players in the Hibernate scene implement this feature: the Hibernate dialect, Hibernate types, and HQL. The Hibernate dialect allows us to use a range of different databases, supporting different proprietary variants of SQL and different column types. In addition, HQL allows us to query persisted objects regardless of their relational persisted form in the database.

Hibernate types are a representation of database SQL types. They provide an abstraction of the underlying database types and prevent the application from getting involved with the actual database column types. They allow us to develop an application without worrying about the target database and the column types that the database supports. Instead, we get involved with mapping Java types to Hibernate types. The database dialect, as part of Hibernate, is responsible for transforming Java types to SQL types, based on the target database. This gives us the flexibility to change the database to one that may support different column types or SQL, without changing the application code.

Built-in types

Hibernate includes a rich and powerful range of built-in types. These types satisfy most needs of a typical application, providing a bridge between basic Java types and common SQL types. The Java types mapped by these types range from basic, simple types, such as long and int, to large and complex types, such as Blob and Clob. The following table categorizes the Hibernate built-in types with their corresponding Java and SQL types:

| Category | Java Type | Hibernate Type Name | SQL Type |
|---|---|---|---|
| Primitives | Boolean or boolean | boolean | BIT |
| | | true_false | CHAR(1) ('T' or 'F') |
| | | yes_no | CHAR(1) ('Y' or 'N') |
| | Byte or byte | byte | TINYINT |
| | char or Character | character | CHAR |
| | double or Double | double | DOUBLE |
| | float or Float | float | FLOAT |
| | int or Integer | integer | INTEGER |
| | long or Long | long | BIGINT |
| | short or Short | short | SMALLINT |
| String | java.lang.String | string | VARCHAR |
| | | character | CHAR(1) |
| | | text | CLOB |
| Arbitrary precision numeric | java.math.BigDecimal | big_decimal | NUMERIC |
| Byte array | byte[] or Byte[] | binary | VARBINARY |
| Time and date | java.util.Date | date | DATE |
| | | time | TIME |
| | | timestamp | TIMESTAMP |
| | java.util.Calendar | calendar | TIMESTAMP |
| | | calendar_date | DATE |
| | java.sql.Date | date | DATE |
| | java.sql.Time | time | TIME |
| | java.sql.Timestamp | timestamp | TIMESTAMP |
| Localization | java.util.Locale | locale | VARCHAR |
| | java.util.TimeZone | timezone | VARCHAR |
| | java.util.Currency | currency | VARCHAR |
| Class names | java.lang.Class | class | VARCHAR |
| Any serializable object | java.io.Serializable | serializable | VARBINARY |
| JDBC large objects | java.sql.Blob | blob | BLOB |
| | java.sql.Clob | clob | CLOB |


ColdFusion 9: Power CFCs and Web Forms

Packt
05 Aug 2010
13 min read
(For more resources on ColdFusion, see here.) There used to be long pages of what we called "spaghetti code" because the page would go on and on. You had to follow the conditional logic by going through the page up and down, and then had to understand how things worked. This made writing, updating, and debugging a diffcult task even for highly-skilled developers CFCs allow you to encapsulate some part of the logic of a page inside an object. Encapsulation simply means packaged for reuse inside something. CFCs are the object-packaging method used in ColdFusion. The practice of protecting access In CFC methods, there is an attribute called "access".Some methods within a CFC are more examples of reuse. The sample code for _product.cfc is shown here. It is an example of a power CFC. There is a method inside the CFC called setDefaults(). The variable variables.field.names comes from another location in our CFC: <cffunction name="setDefaults" access="private" output="false"> <cfset var iAttr = 0> <cfloop list="#listLen(variables.field.names)#" index="iAttr"> <cfscript> variables.attribute[#listGetAt(variables.field.names,iAttr)#] = setDefault(variables.field.names,iAttr); </cfscript> </cfloop></cffunction> The logic for this would actually be used in more than one place inside the object. When the object is created during the first run, it would call the setDefaults() method and set the defaults. When you use the load method to insert another record inside the CFC, it will run this method. This will become simpler as you use CFCs and methods more often. This is a concept called refactoring, where we take common features and wrap them for reuse. This takes place even inside a CFC. Again, the setDefaults() function is just another method inside the same CFC. Now, we look at the access attribute in the code example and note that it is set to private. This means that only this object can call the method. One of the benefits to CFCs is making code simpler. The interface to the outside world of the CFC is its methods. We can hide a method from the outside world, and also protect access to the method by setting the access attribute to private. If you want to make sure that only CFCs in the same directory can access these CFC's methods, then you will have to set the attribute to package. This is a value that is rarely used. The default value for the access attribute is public. This means that any code running on the web server can access the CFC. (Shared hosting companies block one account from being able to see the other accounts on the same server. If you are concerned about your hosting company, then you should either ask them about this issue or move to a dedicated or virtual hosting server.) The last value for the access attribute is remote. This is actually how you create a number of "cool power" uses of the CFC. There is a technology on the Web called web services. Setting the CFC to remote allows access to the CFC as a web service. You can also connect to this CFC through Flash applications, Flex, or AIR, using the remote access value. This method also allows the CFC to respond to AJAX calls. Now, we will learn to use more of the local power features. Web forms introduction Here, we will discuss web forms along with CFCs. Let us view our web form page. Web forms are the same in ColdFusion as they are in any other HTML scenario. You might even note that there is very little use for web forms until you have a server-side technology such as ColdFusion. 
This is because when the form is posted, you need some sort of program to handle the data posted back to the server.

```cfml
<!--- Example: 3_1.cfm --->
<!--- Processing --->
<!--- Content --->
<form action="3_1.cfm" method="post">
  <table>
    <tr>
      <td>Name:</td>
      <td><input type="text" name="name" id="idName" value="" /></td>
    </tr>
    <tr>
      <td>Description:</td>
      <td><input type="text" name="description" id="idDescription" value="" /></td>
    </tr>
    <tr>
      <td>Price:</td>
      <td><input type="text" name="price" id="idPrice" value="" /></td>
    </tr>
    <tr>
      <td>&nbsp;</td>
      <td><input type="submit" name="submit" value="submit" /></td>
    </tr>
  </table>
</form>
```

First, notice that all of the information on the page is in the content section. Anything that goes from the server to the browser is considered content. You can fill in and submit the form, and you will observe that all of the form fields get cleared out. This is because the form posts back to the same page. Self-posting forms are a valid method of handling page flow on websites. The reason why nothing seems to happen is that the server is not doing anything with the data being sent back from the browser. Let us now add <cfdump var="#form#"/> to the bottom of the content, below the form tag, and observe what we get when we post the form.

Now we see another common structure in ColdFusion, known as the form structure. There are two common ways of sending data to the server: the first is called get and the second is called post. If you look at the code, you will notice that the method of the form is post. A form post corresponds to the form variable scope in ColdFusion. You should also observe that there is one extra field in the form structure that is not present in the URL structure: the FIELDNAMES variable. It returns a simple list of the field names that were returned with the form.

Let us edit the code, change the form tag's method attribute to get, refresh the page, and click on the submit button. From the resulting dump, it is evident that the browser looks at the get or post value of the form and sends a get or post request back to the server accordingly. Posted data arrives in the form scope, and this is why ColdFusion translates posted variables into the form structure. Now change the dump tag's variable to "URL", fill out the form, and submit it again with the new change. This displays the values in our URL structure as we would expect. This means you can send either URL-type data or form-type data back to the server with forms. The advantage of sending form data is that it can handle a larger volume of data than a get or URL request. It is also worth noting that this style of return prevents the form field values from being exposed in the URL. They can still be accessed; they are just not visible in the URL any more. So the method of choice for forms is post. Change both the method attribute of the form and the value of the cfdump var back to form again.

The Description box is not ideal for entering product descriptions, so we are going to use a text area in its place. Use the following code to accommodate a text area box. You can change the size of the form's objects using attributes and styles:

```cfml
<tr>
  <td>Description:</td>
  <td>
    <textarea name="description" id="idDescription"></textarea>
  </td>
</tr>
```

Here we see our form looking different. If you fill up the description with more content than the box can hold, it shows scroll bars appropriately.
Managing our product data

Currently, we have a form that can be used for two purposes: to enter a new product and to edit existing ones. We are going to reuse this form. Reuse is the fastest path to making things easier. However, we must not think that it is the only way to do things. Rather, not reusing something requires a reason for doing it differently. In order to edit an existing product, we will have to create a page that shows the existing product records. Let us create the page:

<!--- Example: product_list.cfm --->
<!--- Processing --->
<cfscript>
  objProduct = createObject("component","product").init(dsn="cfb");
  rsProducts = objProduct.getRecordset();
</cfscript>
<!--- Content --->
<h3>Select a product to edit.</h3>
<ul>
  <cfoutput query="rsProducts">
    <li>
      <a href="product_edit.cfm?id=#rsProducts.id#">#rsProducts.name#</a>
    </li>
  </cfoutput>
</ul>

There is no new code here; running the page gives us the browser view with the product list. Next, we need an edit page. Before you click through, take the code from 3_1.cfm that we wrote at the beginning of the article and save a copy as product_edit.cfm, so that the page works correctly when someone clicks on any of the products. Now, we will click on a product. Let us manage the Watermelon Plant for now and observe what happens on the next page. This is our edit page, and we will modify it so that it can get the data when we click through from our list page.

Getting data to our edit page

The current page looks similar to the page where we put the form. To get the data from our database onto the page, we need to do a few things. First, let us change the action of the form tag to product_edit.cfm. We can modify the processing section of the page first, which will make things simpler. Add the following code to your product_edit.cfm page:

<!--- Processing --->
<cfparam name="url.id" default="0">
<cfscript>
  objProduct = createObject("component","product").init(dsn="cfb");
  objProduct.load(url.id);
</cfscript>

We need the default value set so that we do not receive an error message if the page is called without an id. After we set our default, you can see that we create an object from our CFC object class. This time, we are passing the Data Source Name (dsn) into the object through the constructor method. This makes our code more portable and ready for reuse. Once we have an instance, we set the current record using the load method, passing the id of the data record to the method. Let us look at the minor changes that we will make to the content section. We will add the values of the object's protected attributes:

<!--- Content --->
<cfoutput>
  <form action="product_edit.cfm" method="post">
    <table>
      <tr>
        <td>Name:</td>
        <td>
          <input type="text" name="name" id="idName" value="#objProduct.get('name')#" />
        </td>
      </tr>
      <tr>
        <td>Description:</td>
        <td>
          <textarea name="description" id="idDescription">#objProduct.get('description')#</textarea>
        </td>
      </tr>
      <tr>
        <td>Price:</td>
        <td>
          <input type="text" name="price" id="idPrice" value="#objProduct.get('price')#" />
        </td>
      </tr>
      <tr>
        <td>&nbsp;</td>
        <td>
          <input type="submit" name="submit" value="submit" />
        </td>
      </tr>
    </table>
  </form>
</cfoutput>

Now, refresh the form and see how the results differ. Doesn't this look better? We can go back to the list page and retrieve an existing product from the edit form. If we submit the same form back, browsers tend to empty out the form. It should not do that, but the form is not posting the ID of the record back to the server.
This can lead to a problem because, if we do not send the ID of the record back, the database will have no idea which record's details should be changed. Let us solve these issues first, and along the way we will learn to use a new tag called <cfinclude>. The first problem we are going to solve is that we call the page with the ID value in the URL structure, but when we post the page we call it with the ID in the form structure. We are going to use a technique that has been widely used for years in the ColdFusion community: combining the two scopes into a new common structure called attributes. First we check whether the structure exists. If it does not, we create it. After that, we merge the URL structure, and then the FORM structure, into the attributes structure. We will put that code in a common page called request_attributes.cfm, so we can include it on any page we want, reusing the code. Do remember that the form and URL scopes always exist.

<!--- request_attributes.cfm --->
<cfscript>
  if(NOT isDefined("attributes")) {
    attributes = structNew();
  }
  structAppend(attributes,url);
  structAppend(attributes,form);
</cfscript>

Let us modify our edit page to take care of a couple of issues. We need to include the script that we have just created. We will modify the processing section of our edit page as highlighted here:

<!--- Processing --->
<cfinclude template="request_attributes.cfm">
<cfparam name="attributes.id" default="0">
<cfscript>
  objProduct = createObject("component","product").init(dsn="cfb");
  objProduct.load(attributes.id);
</cfscript>

There is only one more thing we need now: our form must store the id value of the record that is being managed. We could just put it in a textbox like the other fields, but the user does not need to know that information. Let us use a hidden input field and add it after our form tag:

<!--- Content --->
<cfoutput>
  <form action="product_edit.cfm" method="post">
    <input type="hidden" name="id" value="#objProduct.get('id')#">

Refresh the screen, and it will work both when we use the form and when we choose an item from the product list page. We have now created our edit/add page.
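The edit page relies on the product CFC's load() and get() methods, which this excerpt does not show. As a rough sketch only, assuming the power CFC pattern described earlier (a protected variables.attribute structure populated from a query, and a dsn stored by init()), a load() method might look something like this; the table and column names are hypothetical:

<cffunction name="load" access="public" output="false">
  <cfargument name="id" type="numeric" required="true">
  <cfset var rsProduct = "">
  <!--- Pull the requested record; variables.dsn was stored by init() --->
  <cfquery name="rsProduct" datasource="#variables.dsn#">
    SELECT id, name, description, price
    FROM product
    WHERE id = <cfqueryparam value="#arguments.id#" cfsqltype="cf_sql_integer">
  </cfquery>
  <cfif rsProduct.recordCount>
    <!--- Copy the row into the protected attribute structure read by get() --->
    <cfset variables.attribute.id = rsProduct.id>
    <cfset variables.attribute.name = rsProduct.name>
    <cfset variables.attribute.description = rsProduct.description>
    <cfset variables.attribute.price = rsProduct.price>
  <cfelse>
    <!--- No match: fall back to the defaults set by setDefaults() --->
    <cfset setDefaults()>
  </cfif>
</cffunction>

A matching get() method would then simply return variables.attribute[arguments.name].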
Games of Fortune with Scratch 1.4

Packt
15 Oct 2009
4 min read
Fortune-teller

Most of us enjoy a good circus, carnival, or county fair. There's fun, food, and fortunes. Aah, yes, what would a fair be without the fortune-teller's tent? By the end of this article, you'll know everything you need to spin your fortunes and amaze your friends with your wisdom. Before we start the first exercise, create a new project and add two sprites. The first sprite will be the seeker. The second sprite will be the teller. Choose any sprites you want. My seeker will be a clam and my teller will be a snowman. If you want to add a background, go ahead.

Time for action – create a list of questions

In order to have a successful fortune-telling, we need two things: a question and an answer. Let's start by defining some questions and answers:

1. Select the seeker from the list of sprites.
2. From the Variables palette, click the Make a list button.
3. In the list name dialog box, type questions and select For this sprite only. Click OK to create the list. Several new blocks display in the Variables palette, and an empty block titled seeker questions displays on the stage.
4. Let's think about a couple of questions we may be tempted to ask, such as the following: Will my hair fall out? How many children will I have?
5. Let's add our proposed questions to the questions list. Click the plus sign located in the bottom-left corner of the seeker questions box (on the stage) to display a text input field. Type Will my hair fall out?
6. Press the plus sign again and enter the second question: How many children will I have? We now have two questions in our list. To automatically add the next item in the list, press Enter.
7. Let's add a say for 2 secs block to the scripts area of the seeker sprite so that we can start the dialog.
8. From the Variables palette, drag the item of questions block to the input value of the say for 2 secs block. Double-click on the block and the seeker asks, "Will my hair fall out?"
9. Change the value on the item block to last and double-click the block again. This time the seeker asks, "How many children will I have?"

What just happened?

I'm certain you could come up with a hundred different questions to ask a fortune-teller. Don't worry, you'll get your chance to ask more questions later. Did you notice that the new list we created behaved a lot like a variable? We were able to make the questions list private; we don't want our teller to peek at our questions, after all. Also, the list became visible on the screen, allowing us to edit the contents. The most notable difference is that we added more than one item, and each item corresponds to a number. We essentially created a numbered list. If you work with other programming languages, then you might refer to lists as arrays. Because the seeker's questions were contained in a list, we used the item block to provide special instructions to the say block in order to ask the question. The first value of the item block was the position, which defaulted to one. The second value was the name of the list, which defaulted to questions. In contrast, if we had used a variable to store a question, we would only need to supply the name of the variable to the say block.

Have a go hero

Create an answers list for the teller sprite, and add several items to the list. Remember, there are no wrong answers in this exercise.

Work with an item in a list

We can use lists to group related items, but accessing the items in the list requires an extra level of specificity.
We need to know the name of the list and the position of the item within the list before we can do anything with the values. In Scratch 1.4, the item block can select an item in three ways: by a position number (such as 1), by last (the final item in the list), or by any (a randomly chosen item).
Customization using ADF Meta Data Services

Packt
15 Jun 2011
8 min read
Oracle ADF Enterprise Application Development—Made Simple
Successfully plan, develop, test and deploy enterprise applications with Oracle ADF

Why customization?

The reason ADF has customization features built in is that Oracle Fusion Applications needs them. Oracle Fusion Applications is a suite of programs capable of handling every aspect of a large organization—personnel, finance, project management, manufacturing, logistics, and much more. Because organizations are different, Oracle has to offer a way for each customer organization to fit Oracle Fusion Applications to their requirements. This customization functionality can also be very useful for organizations that do not use Oracle Fusion Applications. If you have two screens that work with the same data, but one of the screens must show more fields than the other, you can create one screen with all the fields and use customization to create another version of the same screen with fewer fields for other users. For example, the destination management application might have a data entry screen showing all details of a task to a dispatcher, but only the relevant details to an airport transfer guide. Companies such as DMC Solutions that produce software for sale realize additional benefit from the customization features in ADF: DMC Solutions can develop a base application, sell it to different customers, and customize each installation of the application for that customer without changing the base application.

How does an ADF customization work?

More and more Oracle products are using something called Meta Data Services to store metadata. Metadata is data that describes other pieces of information—where it came from, what it means, or how it is intended to be used. An image captured by a digital camera might include metadata about where and when the picture was taken, which camera settings were used, and so on. In the case of an ADF application, the metadata describes how the application is intended to be used. There are three kinds of customizations in ADF:

- Seeded customizations: customizations defined in advance (before the user runs the application) by customization developers.
- User customizations (sometimes called personalizations): changes to aspects of the user interface by application end users. The ADF framework offers a few user customization features, but you need additional software such as Oracle WebCenter for most user customizations. User customizations are outside the scope of this article.
- Design time at runtime: advanced customization of the application by application administrators and/or properly authorized end users. This requires that application developers have prepared the possible customizations as part of application development—it is complicated to program using only ADF, but Oracle WebCenter provides advanced components that make this easier. This is also outside the scope of this article.

Your customization metadata is stored in either files or a database repository. If you are only planning to use seeded customizations, a file-based repository is fine. However, if you plan to allow user customizations or design time at runtime, you should set up your production server to store customizations in a metadata database. Refer to the Fusion Middleware Administrator's Guide for information about setting up a metadata database.

Applying the customization layers

When an ADF application is customized, the ADF framework applies one or more customization layers on top of the base application.
Each layer has a value, and customizations are assigned to a specific customization layer and value. The concept of multiple layers makes it possible to apply, for example:

- Industry customization (customizing the application for, say, the travel industry: industry=travel)
- Organization customization (customizing the application for a specific travel company: org=xyztravel)
- Site customization (customizing the application for the Berlin office)
- Role-based customization (customizing the application for casual, normal, and advanced users)

The XDM application that DMC Solutions is building could be customized in one way for ABC Travel and in another way for XYZ Travel, and XYZ Travel might decide to further customize the application for different types of users. You can have as many layers as you need—Oracle Fusion Applications is reported to use 12 layers, but your applications are not likely to be that complex. For each customization layer, the developer of the base application must provide a customization class that will be executed at runtime, returning a value for each customization layer. The ADF framework will then apply the customizations that the customization developer has specified for that layer/value combination. This means that the same application can look many different ways, depending on the values returned by the customization classes and the customizations registered:

Org layer value | Role layer value | Result
qrstravel | any | Base application, because there are no customizations defined for QRS Travel
abctravel | any | The application customized for ABC Travel; because there are no role layer customizations for ABC Travel, the value of the role layer does not change the application
xyztravel | normal | The application customized for XYZ Travel and further customized for normal users in XYZ Travel
xyztravel | superuser | The application customized for XYZ Travel and further customized for super users in XYZ Travel

Making an application customizable

To make an application customizable, you need to do three things:

1. Develop a customization class for each layer of customization.
2. Enable seeded customization in the application.
3. Link the customization class to the application.

The customization developer, who will be developing the customizations, will additionally have to set up JDeveloper correctly so that all customization levels can be accessed. This setup is described later in the article.

Developing the customization classes

For each layer of customization, you need to develop a customization class with a specific format—technically, it has to extend the Oracle-supplied abstract class oracle.mds.cust.CustomizationClass. A customization class has a name (returned by the getName() method) and a value (returned by the getValue() method). At runtime, the ADF framework will execute the customization classes for all layers to determine the customization value at each level. Additionally, the customization class has to return a short unique prefix to use for all customized items, and a cache hint telling ADF whether this is a static or dynamic customization.

Building the classes

Your customization classes should go in your Common Code workspace. A customization class is a normal Java class, that is, it is created with File | New | General | Java Class. In the Create Java Class dialog, give your class a name (OrgLayerCC) and place it into a customization package (for example, com.dmcsol.xdm.customization).
Choose to extend oracle.mds.cust.CustomizationClass and check the Implement Abstract Methods checkbox. Create a similar class called RoleLayerCC.

Implementing the methods

Because you asked JDeveloper to implement the abstract methods, your classes already contain three methods:

- getCacheHint()
- getName()
- getValue(RestrictedSession, MetadataObject)

The getCacheHint() method must return an oracle.mds.cust.CacheHint constant that tells ADF whether the value of this layer is static (common for all users) or dynamic (depending on the user). The normal values here are ALL_USERS for static customizations or MULTI_USER for customizations that apply to multiple users. In the XDM application, you will use:

- ALL_USERS for OrgLayerCC, because this customization layer will apply to all users in the organization
- MULTI_USER for RoleLayerCC, because the role-based customization will apply to multiple users, but not necessarily to all

Refer to the chapter on customization with MDS in the Fusion Developer's Guide for Oracle Application Development Framework for information on other possible values. The getName() method simply returns the name of the customization layer. The getValue() method must return an array of String objects. It will normally make most sense to return just one value—the application is running for exactly one organization, and you are either a normal user or a super user. For advanced scenarios, it is possible to return multiple values; in such a case, multiple customizations will be applied at the same layer. Each customization that a customization developer defines will be tied to a specific layer and value—there might be a customization that applies when org has the value xyztravel. For the OrgLayerCC class, the value is static and is defined when DMC Solutions installs the application for XYZ Travel—for example, in a property file. For the RoleLayerCC class, the value is dynamic, depending on the current user, and can be retrieved from the ADF security context. The RoleLayerCC class could look like the following:

package com.dmcsol.xdm.customization;

import ...

public class RoleLayerCC extends CustomizationClass {

  public CacheHint getCacheHint() {
    return CacheHint.MULTI_USER;
  }

  public String getName() {
    return "role";
  }

  public String[] getValue(RestrictedSession restrictedSession,
                           MetadataObject metadataObject) {
    String[] roleValue = new String[1];
    SecurityContext sec = ADFContext.getCurrent().getSecurityContext();
    if (sec.isUserInRole("superuser")) {
      roleValue[0] = "superuser";
    } else {
      roleValue[0] = "normal";
    }
    return roleValue;
  }
}

The getCacheHint() method returns MULTI_USER because this is a dynamic customization—it will return different values for different users. The getName() method simply returns the name of the layer. The getValue() method uses oracle.adf.share.security.SecurityContext to look up whether the user has the superuser role and returns the value superuser or normal.

Deploying the customization classes

Because you place your customization class in the Common Code project, you need to deploy the Common Code project to an ADF library and have the build/configuration manager copy it to your common library directory.
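The article describes OrgLayerCC as static, with its value defined at installation time, for example in a property file, but only shows RoleLayerCC. The following is a hedged sketch of what OrgLayerCC might look like under those assumptions; the property file path and key are hypothetical, and the oracle.mds import locations are stated from the MDS API rather than the article:

package com.dmcsol.xdm.customization;

import java.io.FileInputStream;
import java.util.Properties;
import oracle.mds.core.MetadataObject;
import oracle.mds.core.RestrictedSession;
import oracle.mds.cust.CacheHint;
import oracle.mds.cust.CustomizationClass;

public class OrgLayerCC extends CustomizationClass {

  public CacheHint getCacheHint() {
    // Static: the organization is the same for every user of this installation
    return CacheHint.ALL_USERS;
  }

  public String getName() {
    return "org";
  }

  public String[] getValue(RestrictedSession restrictedSession,
                           MetadataObject metadataObject) {
    // Hypothetical: read the org value set at installation time from a property file
    String org = "base";
    try {
      Properties props = new Properties();
      props.load(new FileInputStream("/etc/xdm/customization.properties"));
      org = props.getProperty("org", "base");
    } catch (Exception e) {
      // No property file found: fall back to the base application
    }
    return new String[] { org };
  }
}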
Communicating from Dynamics CRM to BizTalk Server

Packt
20 Jul 2011
6 min read
Microsoft BizTalk 2010: Line of Business Systems Integration
A practical guide to integrating Line of Business systems with BizTalk Server 2010

There are three viable places where Dynamics CRM can communicate with BizTalk Server. First, a Dynamics CRM form is capable of executing client-side JavaScript at various points in the form lifecycle. One can definitely use JavaScript to invoke web services, including web services exposed by BizTalk Server. However, note that JavaScript invocation of web services is typically synchronous and could have a negative impact on the user experience if a form must constantly wait for responses from distributed services. Also, JavaScript that runs within Dynamics CRM is client-side and tied directly to the page on which it resides. If we programmatically interact with a Dynamics CRM entity, then any code existing in the client-side script will not get invoked. For instance, if after an "account" record is created we send a message, via JavaScript, to BizTalk, this logic would not fire if we created an "account" record programmatically.

The second place where Dynamics CRM can communicate with BizTalk Server is through workflows. A workflow in Dynamics CRM is an automated process where a set of steps is executed according to rules that we define. For example, when a sales opportunity is closed, we run a workflow that adds a message to the customer record, notifies all parties tied to the opportunity, and sends a polite email to the lost prospect. Workflows are based on Windows Workflow 4.0 technology and can be built either in the Dynamics CRM application itself or within Visual Studio 2010. The Dynamics CRM web application allows us to piece together workflows using previously registered workflow steps. If we need new workflow steps or need to construct something complex, we can jump into Visual Studio 2010 and define the workflow there. Why would we choose a workflow to send a message to BizTalk Server? If you have a long-running process that can either be scheduled or executed on demand and want the option for users to modify the process, the workflow may be the right choice.

The final strategy for communicating between Dynamics CRM and BizTalk Server is to use plugins. Plugins are server-based application extensions that execute business logic and are tied directly to an entity. This means that they are invoked whether we work in the Dynamics CRM web interface or through the API. A plugin can run both synchronously and asynchronously, depending on the situation. For instance, if we need to validate the data on a record prior to saving it, we can set a plugin to run before the "save" operation is committed and provide some user feedback on the invalid information. Or, we could choose to asynchronously call a plugin after a record is saved and transmit data to our service bus, BizTalk Server. In the following exercise, we will leverage plugins to send data from Dynamics CRM to BizTalk Server.

Integration with BizTalk Server

In this first walkthrough, we will build a plugin that communicates from Dynamics CRM to BizTalk Server. An event message will be sent to BizTalk whenever a change occurs on an Account record in Dynamics CRM.

Setup

This exercise leverages a BizTalk Server project already present in your Visual Studio 2010 solution. We are going to publish a web service from BizTalk Server that takes in a message and routes it to a BizTalk send port that writes the message to the file system.
If you have not already done so, go to the code package, navigate to C:\LOBIntegration\Chapter03\Chapter3-DynamicsCRM, and open the Visual Studio 2010 solution file named Chapter3-DynamicsCRM.sln. Find the BizTalk Server project named Chapter3-DynamicsCRM.AcctRouting and open it. The code package includes a custom schema named AccountChangeEvent_XML.xsd; notice which elements we want from Dynamics CRM 2011 when an account changes. The first element, EventSource, is used to designate the source of the change event, as there may be multiple systems that share changes in an organization's accounts. This BizTalk project should be set to deploy to a BizTalk application named Chapter3. Build and deploy the project to the designated BizTalk Server. After confirming a successful deployment, launch the BizTalk WCF Service Publishing Wizard. We are going to use this schema to expose a web service entry point into BizTalk Server that Dynamics CRM 2011 can invoke.

1. On the WCF Service Type wizard page, select the WCF-BasicHttp adapter, set the service to expose metadata, and have the wizard generate a receive location for us in the Chapter3 application.
2. On the Create WCF Service wizard page, choose to Publish schemas as WCF service. This option gives us fine-grained control over the naming associated with our service.
3. On the next page, delete the two-way operation already present in the service definition. Rename the topmost service definition to AccountChangeService and assign the service the same name. Right-click the service and create a new one-way operation named PublishAccountChange. Right-click the Request message of the operation and choose the AccountChangeEvent message from our BizTalk project's DLL.
4. On the following wizard page, set the namespace of the service to http://Chapter3/AccountServices.
5. Next, set the location of our service to http://localhost/AccountChangeService and select the option to allow anonymous access to the generated service.
6. Finally, complete the wizard by clicking the Create button on the final wizard page.

Confirm that the wizard successfully created both an IIS-hosted web service and a BizTalk receive port/location. Ensure that the IIS web service is running under an Application Pool that has permission to access the BizTalk databases. In order to test this service, first go to the BizTalk Server Administration Console and locate the Chapter3 application. Right-click the Send Ports folder and create a new, static one-way send port named Chapter3.SendAccountChange.FILE. Set the send port to use the FILE adapter and select the FileDrop\DropCustomerChangeEvent folder that is present in the code package. This send port should listen for all account change event messages, regardless of which receive location (and system) they came from. Go to the Filters tab of this send port. Set the filter Property to BTS.MessageType and the filter Value to http://Chapter3-DynamicsCRM.AcctRouting.AccountChangeEvent_XML#AccountChangeEvent. All that remains is to test our service. Open the WCF Test Client application and add a new service reference to http://localhost/AccountChangeService/AccountChangeService.svc. Invoke the PublishAccountChange method and, if everything is configured correctly, we will see a message emitted by BizTalk Server that matches our service input parameters. We are now ready to author the Dynamics CRM plugin, which calls this BizTalk service.
Microsoft Enterprise Library: Security Application Block

Packt
09 Dec 2010
5 min read
Microsoft Enterprise Library 5.0
Develop Enterprise applications using reusable software components of Microsoft Enterprise Library 5.0:

- Develop Enterprise Applications using the Enterprise Library Application Blocks
- Set up the initial infrastructure configuration of the Application Blocks using the configuration editor
- A step-by-step tutorial to gradually configure each Application Block and implement its functions to develop the required Enterprise Application

The first step is the process of validating an identity against a store (Active Directory, a database, and so on); this is commonly called Authentication. The second step is the process of verifying whether the validated identity is allowed to perform certain actions; this is commonly known as Authorization. These two security mechanisms take care of allowing only known identities to access the application and perform their respective actions. Although, with the advent of new tools and technologies, it is not difficult to safeguard an application using these authentication and authorization mechanisms, implementing security correctly across different types of applications, or across different layers, and in a consistent manner is pretty challenging for developers. Also, while security is an important factor, it's of no use if the application's performance is dismal. So, a good design should also consider performance and cache the outcome of authentication and authorization for repeated use.

The Security Application Block provides a very simple and consistent way to implement authorization and credential caching functionality in our applications. Authorization doesn't belong to one particular layer; it is a best practice to authorize user actions not only in the UI layer but also in the business logic layer. As Enterprise Library application blocks are layer-agnostic, we can leverage the same authorization rules and expect the same outcome across different layers, bringing consistency. Authorization of user actions can be performed using an Authorization Provider; the block provides the Authorization Rule Provider and the AzMan Authorization Provider, and it also provides the flexibility of implementing a custom authorization provider. Caching of security credentials is provided by the SecurityCacheProvider, which leverages the Caching Application Block; a custom caching provider can also be implemented using extension points. Both Authorization and Security Cache providers are configured in the configuration file; this allows changing the provider at any time without re-compilation.

The following are the key features of the Security block:

- The Security Application Block provides a simple and consistent API to implement authorization.
- It abstracts the application code from security providers through configuration.
- It provides the Authorization Rule Provider to store rules in a configuration file, and the Windows Authorization Manager (AzMan) Authorization Provider to authorize against Active Directory, an XML file, or a database.
- Flexibility to implement custom Authorization Providers.
- It provides token generation and caching of authenticated IIdentity, IPrincipal, and Profile objects.
- It provides user identity cache management, which improves performance while repeatedly authenticating users using cached security credentials.
- Flexibility to extend and implement custom Security Cache Providers.

Developing an application

We will explore each individual Security block feature, and along the way we will understand the concepts behind the individual elements.
This will help us get up to speed with the basics. To get started, we will do the following:

- Reference the Security block assemblies
- Add the required namespaces
- Set up the initial configuration

To complement the concepts and allow you to gain quick hands-on experience of different features of the Security Application Block, we have created a sample web application project with three additional projects, DataProvider, BusinessLayer, and BusinessEntities, to demonstrate the features. The application leverages the SQL Membership, Role, and Profile providers for authentication, role management, and profiling needs. Before running the web application, you will have to run the database generation script provided in the DBScript folder of the solution and update the connection string in web.config appropriately. You might have to open the solution in "Administrator" mode, based on your development environment. Also, create an application pool with an identity that has the required privileges to access the development SQL Server database, and map the application pool to the website.

Referencing required/optional assemblies

For the purposes of this demonstration we will be referencing non-strong-named assemblies, but based on individual requirements, Microsoft strong-named assemblies or a modified set of custom assemblies can be referenced as well. The list of Enterprise Library assemblies that are required to leverage the Security Application Block functionality is given next. A few assemblies are optional, based on the Authorization Provider and cache storage mechanism used. The following table lists the required/optional assemblies:

Assembly | Required/Optional
Microsoft.Practices.EnterpriseLibrary.Common.dll | Required
Microsoft.Practices.ServiceLocation.dll | Required
Microsoft.Practices.Unity.dll | Required
Microsoft.Practices.Unity.Interception.dll | Required
Microsoft.Practices.Unity.Configuration.dll | Optional; useful while utilizing Unity configuration classes in our code
Microsoft.Practices.EnterpriseLibrary.Security.dll | Required
Microsoft.Practices.EnterpriseLibrary.Security.AzMan.dll | Optional; used for the Windows Authorization Manager Provider
Microsoft.Practices.EnterpriseLibrary.Security.Cache.CachingStore.dll | Optional; used for caching the user identity
Microsoft.Practices.EnterpriseLibrary.Data.dll | Optional; used for caching in Database Cache Storage

Open Visual Studio 2008/2010 and create a new ASP.NET Web Application project by selecting File | New | Project | ASP.NET Web Application; provide an appropriate name for the solution and the desired project location. Currently, the application will have a default web form and assembly references. In the Solution Explorer, right-click on the References section, click on Add Reference, and go to the Browse tab. Next, navigate to the Enterprise Library 5.0 installation location; the default install location is %Program Files%\Microsoft Enterprise Library 5.0\Bin. Now select all the assemblies listed in the previous table, excluding the AzMan-related assembly (Microsoft.Practices.EnterpriseLibrary.Security.AzMan.dll). Once selected, the reference list will include all of the required assemblies listed in the table above.
Prepare and Build

Packt
10 Dec 2012
13 min read
(For more resources related to this topic, see here.)

Let's take a look at the history and background of APEX.

History and background

APEX is a very powerful development tool used to create web-based, database-centric applications. The tool itself consists of a schema in the database with a lot of tables, views, and PL/SQL code. It's available for every edition of the database. The techniques used with this tool are PL/SQL, HTML, CSS, and JavaScript. Before APEX there was WebDB, which was based on the same techniques. WebDB became part of Oracle Portal and disappeared in silence. The difference between APEX and WebDB is that WebDB generates packages that generate the HTML pages, while APEX generates the HTML pages at runtime from the repository. Despite this approach, APEX is amazingly fast. Because the database is doing all the hard work, the architecture is fairly simple. We only have to add a web server. We can choose one of the following web servers:

- Oracle HTTP Server (OHS)
- Embedded PL/SQL Gateway (EPG)
- APEX Listener

APEX became available to the public in 2004, when it was part of version 10g of the database. At that time it was called HTMLDB, and the first version was 1.5. Before HTMLDB, it was called Oracle Flows, Oracle Platform, and Project Marvel. Throughout the years many versions have come out, and at the time of writing the current version is 4.1.1. These many versions prove that Oracle has continuously invested in the development and support of APEX. This is important for the developers and companies who have to make a decision about which techniques to use in the future. According to Oracle, as written in their statement of direction, new versions of APEX will be released at least annually.

Home screen of APEX

For the last few years, there has been increasing interest in the use of APEX from developers. The popularity came mainly from developers who found themselves comfortable with PL/SQL and wanted an easy way into the world of web-based applications. Oracle gave ADF a higher priority, because APEX was a no-cost option of the database, while with ADF (and all the related techniques and frameworks from Java) additional licenses could be sold. Especially now that Oracle has pointed out APEX as one of the important tools for building applications in their Oracle Database Cloud Service, this interest will only grow. APEX shared a lot of the characteristics of cloud computing, even before cloud computing became popular. These characteristics include:

- Elasticity
- Roles and authorization
- Browser-based development and runtime
- RESTful web services (REST stands for Representational State Transfer)
- Multi-tenant
- Simple and fast to join

APEX has outstanding community support, witnessed by the number of posts and threads on the Oracle forum. This forum is the most popular one after those for the database and PL/SQL. Oracle itself has some websites based on APEX, among others the following:

- http://asktom.oracle.com
- http://shop.oracle.com
- http://cloud.oracle.com

Oracle uses quite a few internal APEX applications. Oracle also provides a hosted version of APEX at http://apex.oracle.com. Users can sign up for free for a workspace to evaluate and experiment with the latest version of APEX. This environment is for evaluations and demonstrations only; there are no guarantees! Apex.oracle.com is a very popular service—more than 16,000 workspaces are active.
To give an idea of the performance of APEX, the server used for this service used to be a Dell PowerEdge 1950 with two dual-core Xeon processors and 16 GB of memory.

Installing APEX

In this section, we will discuss some additional considerations to take care of while installing APEX. The best source for the installation process is the APEX Installation Guide.

Runtime or full development environment

On a production database, the runtime environment of APEX should be installed. This installation lacks the Application Builder and the SQL Workshop. Users can run applications, but the applications cannot be modified. The runtime environment of APEX can be administered using SQL*Plus and SQL Developer. The (web interface) options for importing an application, which are only available in a full development environment, can be used manually with the APEX_INSTANCE_ADMIN API. Using a runtime environment for production is recommended for security purposes, so that we can be certain that installed applications cannot be modified by anyone. On a development environment, the full development environment can be installed, with all the features available to the developers.

Build status

Besides the environment of APEX itself, applications can also be installed in a similar way. When importing or exporting an application, the Run Application Only or Run and Build Application options can be selected. Changing an application to Run Application Only can be done in the Application Builder by choosing Edit Application Properties. Changing the Build Status to Run and Build Application can only be done as the admin user of the workspace internal. In the APEX Administration Services, choose Manage Workspaces and then select Manage Applications | Build Status. Another setting related to the Runtime Only option can be used in the APEX Administration Services at instance level. Select Manage Instance and then select Security. Setting the property Disable Workspace Login to Yes acts like a Runtime Only environment, while still allowing instance administrators to log in to the APEX Administration Services.

Tablespaces

Following the install guide for the full development environment, at a certain moment we have to run the following command, when logged in as SYS with the SYSDBA role, on the command line:

@apexins tablespace_apex tablespace_files tablespace_temp images

The command is explained as follows:

- tablespace_apex is the name of the tablespace that contains all the objects for the APEX application user.
- tablespace_files is the name of the tablespace that contains all the objects for the APEX files user.
- tablespace_temp is the name of the temporary tablespace of the database.
- images is the virtual directory for APEX images. Oracle recommends using /i/ to support future APEX upgrades.

For the runtime environment, the command is as follows:

@apxrtins tablespace_apex tablespace_files tablespace_temp images

In the documentation, SYSAUX is given as an example for both tablespace_apex and tablespace_files.
There are several reasons for not using SYSAUX for these tablespaces, but to use our own instead:

- SYSAUX is an important tablespace of the database itself
- We have more control over sizing and growth
- It is easier for a DBA to manage tablespace placement
- Contention in the SYSAUX tablespace occurs less often
- It's easier to clean up older versions of APEX
- And last but not least, it's only an example

Converting a runtime environment into a full development environment and vice versa

It's always possible to switch from a runtime to a full development environment and vice versa. If you want to convert a runtime to a full development environment, log in as SYS with the SYSDBA role and on the command line type @apxdvins.sql. For converting a full development to a runtime environment, type @apxdevrm—but export websheet applications first. Another way to restrict user access can be accomplished by logging in to the APEX Administration Services, where we can (among other things) manage the APEX instance settings and all the workspaces. We can do that in two ways:

- http://server:port/apex/apex_admin: log in with the administrator credentials
- http://server:port/apex/: log in to the workspace internal, with the administrator credentials

After logging in, perform the following steps:

1. Go to Manage Instance.
2. Select Security.
3. Select the appropriate settings for Disable Administrator Login and Disable Workspace Login.

These settings can also be set manually with the APEX_INSTANCE_ADMIN API.

Choosing a web server

When using a web-based development and runtime environment, we have to use a web server. The choice of a web server and the underlying architecture of the system has a direct impact on performance and scalability. Oracle provides us with three choices:

- Oracle HTTP Server (OHS)
- Embedded PL/SQL Gateway (EPG)
- APEX Listener

Simply put, the web server maps the URL in a web browser to a procedure in the database. Everything the procedure prints with the sys.htp package is sent to the browser of the user. This is the concept used by tools such as WebDB and APEX.

Architecture of APEX

OHS

The OHS is the oldest of the three. It's based on the Apache HTTP Server and uses a custom Apache module named mod_plsql.

Oracle HTTP Server

In release 10g of the database, OHS was installed with the database on the same machine. From release 11g onwards, this is not the case anymore. If you want to install the OHS, you have to install the web tier part of WebLogic. If you install it on the same machine as the database, it's free of extra licence costs. This installation takes up a lot of space and is rather complex compared with the other two. On the other hand, it's very flexible and it has a proven track record. Configuration is done with text files.

EPG

The EPG is part of XML DB and lives inside the database. Because everything is in the database, we have to use the dbms_xdb and dbms_epg PL/SQL packages to configure the EPG. Another implication is that all images and other files are stored inside the database, and can be accessed with PL/SQL or FTP, for example.

Embedded PL/SQL gateway

The architecture is very simple. It's not possible to install the EPG on a different machine than the database. From a security point of view, this is not the recommended architecture for real-life Internet applications, and in most cases the EPG is used in development, test, or other internal environments with few users.
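As a small illustration of the EPG configuration mentioned above, the embedded gateway's HTTP port can be set from SQL*Plus with the XML DB package. This is a minimal sketch only, and the port number is an arbitrary example:

-- Log in as SYS with the SYSDBA role, then enable the embedded gateway's HTTP listener
EXEC DBMS_XDB.SETHTTPPORT(8080);

-- Verify the port currently in use (0 means the listener is disabled)
SELECT DBMS_XDB.GETHTTPPORT FROM dual;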
APEX Listener

The APEX Listener is the newest of the three; it's still in active development, and with every new release more features are added. In the latest version, RESTful APIs can be created by configuring resource templates. The APEX Listener is a Java application with a very small footprint. It can be installed in standalone mode, which is ideal for development and testing purposes. For production environments, the APEX Listener can be deployed using a J2EE-compliant application server such as GlassFish, WebLogic, or Oracle Containers for J2EE.

Configuration of the APEX Listener is done in a browser. With some extra configuration, uploading of Excel files into APEX collections can be achieved. For future releases, other functionality, such as OAuth 2.0 and ICAP virus scanner integration, has been announced.

Configuration options of the APEX Listener

Like OHS, an architectural choice can be made whether to install the APEX Listener on the same machine as the database. For large public applications, it's better to use a separate web server. Many documents and articles have been written about choosing the right web server. If you read between the lines, you'll see that Oracle more or less recommends the use of the APEX Listener. Its functionality, enhanced security, file caching, flexibility of deployment possibilities, and announced features make it the best choice.

Creating a second administrator

When installing APEX, by default the workspace Internal with the administrator user Admin is created. Some users know more than the average end user. Also, developers have more knowledge than the average user. Imagine that such users try to log in to either the APEX Administration Services or the normal login page with the workspace Internal and administrator Admin, and consequently use the wrong password. The Admin account would then be locked after a number of login attempts. This is a very annoying situation, especially when it happens often. Big companies and APEX hosting companies with many workspaces and a lot of anonymous users or developers may suffer from this. Fortunately there is an easy solution: creating a second administrator account.

If the account is already locked, we have to unlock it first. This can be easily done by running the apxchpwd.sql script, which can be found in the main apex directory of the unzipped APEX installation file:

1. Start SQL*Plus and connect as SYS with the SYSDBA role.
2. Run the script by entering @apxchpwd.sql.
3. Follow the instructions and enter a new password.

Now we are ready to create a second administrator account. This can be done in two ways: using the web interface or the command line.

APEX web interface

Follow these steps to create a new administrator using the browser. First, we need to log in to the APEX Administration Services at http://server:port/apex/. Log in to the workspace Internal with the administrator credentials. After logging in, perform the following steps:

1. Go to Manage Workspaces.
2. Select Existing Workspaces. You can also select the edit icon of the workspace Internal to inspect the settings. You cannot change them. Select Cancel to return to the previous screen.
3. Select the workspace Internal by clicking on the name.
4. Select Manage Users. Here you can see the user Admin. You can also select the user Admin to change the password. Other settings cannot be changed. Select Cancel or Apply Changes to return to the previous screen.
5. Select Create User.
6. Make sure that Internal is selected in the Workspace field and APEX_xxxxxx is selected in Default Schema, and that the new user is an administrator. xxxxxx has to match your APEX schema version in the database, for instance APEX_040100.
7. Click on Create to finish.

Settings for the new administrator

Command line

When we still have access, we can use the web interface of APEX. If not, we can use the command line:

1. Start SQL*Plus and connect as SYS with the SYSDBA role.
2. Unlock the APEX_xxxxxx account by issuing the following command:

   alter user APEX_xxxxxx account unlock;

3. Connect to the APEX_xxxxxx account. If you don't remember your password, you can just reset it, without impacting the APEX instance.
4. Execute the following (use your own username, e-mail, and password):

   BEGIN
     wwv_flow_api.set_security_group_id(p_security_group_id => 10);
     wwv_flow_fnd_user_api.create_fnd_user(
       p_user_name     => 'second_admin',
       p_email_address => 'email@company.com',
       p_web_password  => 'second_admin_password');
   END;
   /
   COMMIT
   /

5. The new administrator is created. Connect again as SYS with the SYSDBA role and lock the account again with the following command:

   alter user APEX_xxxxxx account lock;

Now you can log in to the Internal workspace with your newly created account, and you'll be asked to change your password.

Other accounts

When an administrator of a developer workspace loses his/her password or has a locked account, you can bring that account back to life by following these steps:

1. Log in to the APEX Administration Services.
2. Go to Manage Workspaces.
3. Select Existing Workspaces.
4. Select the workspace.
5. Select Manage Users.
6. Select the user, change the password, and unlock the user.

A developer or an APEX end user account can be managed by the administrator of the workspace from the workspace itself. Follow these steps to do so:

1. Log in to the workspace.
2. Go to Administration.
3. Select the user, change the password, and unlock the user.
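The unlocking steps above can also be scripted. The following is a hedged sketch using the APEX_UTIL API; the workspace and user names are placeholders, and you should verify these calls against your APEX version:

-- Run from SQL*Plus with sufficient privileges (for example, as SYS with the SYSDBA role)
BEGIN
  -- Point the session at the workspace that owns the account (name is a placeholder)
  apex_util.set_security_group_id(
    p_security_group_id => apex_util.find_security_group_id(p_workspace => 'MY_WORKSPACE'));

  -- Unlock the developer or end user account (name is a placeholder)
  apex_util.unlock_account(p_user_name => 'SOME_DEVELOPER');
END;
/
COMMIT
/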

Building JSF/EJB3 Applications

Packt
22 Oct 2009
15 min read
This practical article shows you how to create a simple data-driven application using JSF and EJB3 technologies. The article also shows you how to effectively use NetBeans IDE when building enterprise applications.

What We Are Going to Build

The sample application we are building throughout the article is very straightforward. It offers just a few pages. When you click the Ask us a question link on the welcomeJSF.jsp page, you will be taken to a page on which you can submit a question. Once you're done with your question, you click the Submit button. As a result, the application persists your question along with your email in the database. The web tier of the application is built using the JavaServer Faces technology, while EJB is used to implement the database-related code.

Software You Need to Follow the Article Exercise

To build the sample discussed here, you will need the following software components installed on your computer:

- Java Standard Development Kit (JDK) 5.0 or higher
- Sun Java System Application Server Platform Edition 9
- MySQL
- NetBeans IDE 5.5

Setting Up the Database

The first step in building our application is to set up the database to interact with. In fact, you could choose any database you like as the application's backend database. For the purpose of this article, though, we will discuss how to use MySQL. To keep things simple, let's create a questions table that contains just three columns, outlined in the following table:

Column | Type | Description
trackno | INTEGER AUTO_INCREMENT PRIMARY KEY | Stores a track number generated automatically when a row is inserted
user_email | VARCHAR(50) NOT NULL | The email address of the user submitting the question
question | VARCHAR(2000) NOT NULL | The text of the submitted question

Of course, a real-world questions table would contain a few more columns, for example, dateOfSubmission containing the date and time of submitting the question. To create the questions table, you first have to create a database and grant the required privileges to the user with which you are going to connect to that database. For example, you might create database my_db and user usr identified by password pswd. To do this, you should issue the following SQL commands from the MySQL Command Line Client:

CREATE DATABASE my_db;
GRANT CREATE, DROP, SELECT, INSERT, UPDATE, DELETE ON my_db.*
  TO 'usr'@'localhost' IDENTIFIED BY 'pswd';

In order to use the newly created database for subsequent statements, you should issue the following statement:

USE my_db;

Finally, create the questions table in the database as follows:

CREATE TABLE questions(
  trackno INTEGER AUTO_INCREMENT PRIMARY KEY,
  user_email VARCHAR(50) NOT NULL,
  question VARCHAR(2000) NOT NULL
) ENGINE = InnoDB;

Once you're done, you have the database with the questions table required to store incoming users' questions.

Setting Up a Data Source for Your MySQL Database

Since the application we are going to build will interact with MySQL, you need to have an appropriate MySQL driver installed on your application server. For example, you might want to install MySQL Connector/J, which is the official JDBC driver for MySQL. You can pick up this software from the "downloads" page of the MySQL AB website at http://mysql.org/downloads/.
Install the driver on your GlassFish application server as follows:

1. Unpack the downloaded archive containing the driver to any directory on your machine.
2. Add mysql-connector-java-xxx-bin.jar to the CLASSPATH environment variable.
3. Make sure that your GlassFish application server is up and running.
4. Launch the Application Server Admin Console by pointing your browser at http://localhost:4848/.
5. Within the Common Tasks frame, find and double-click the Resources/JDBC/New Connection Pool node.
6. On the New Connection Pool page, click the New… button.
7. On the first step of the New Connection Pool master, set the fields as shown in the following table:

Setting | Value
Name | jdbc/mysqlPool
Resource type | javax.sql.DataSource
Database Vendor | mysql

8. Click Next to move on to the second page of the master.
9. On the second page of New Connection Pool, set the properties to reflect your database settings, as shown in the following table:

Name | Value
databaseName | my_db
serverName | localhost
port | 3306
user | usr
password | pswd

10. Once you are done with setting the properties, click Finish.

The newly created jdbc/mysqlPool connection pool should appear on the list. To check it, click its link to open it in a window, and then click the Ping button. If everything is okay, you should see a message telling you Ping succeeded.

Creating the Project

The next step is to create an application project with NetBeans. To do this, follow the steps below:

1. Choose File/New Project and then choose the Enterprise/Enterprise Application template for the project. Click Next.
2. On the Name and Location page of the master, specify the name for the project: JSF_EJB_App. Also make sure that Create EJB Module and Create Web Application Module are checked. Then click Finish.

As a result, NetBeans generates a new enterprise application in a standard project, actually containing two projects: an EJB module project and a Web application project. In this particular example, you will use the first project for EJBs and the second one for JSF pages.

Creating Entity Beans and Persistent Unit

You create entity beans and the persistent unit in the EJB module project—in this example, this is the JSF_EJB_App-ejb project. In fact, the sample discussed here will contain only one entity bean: Question. You might automatically generate it and then edit it as needed. To generate it with NetBeans, follow the steps below:

1. Make sure that your Sun Java System Application Server is up and running.
2. In the Project window, right-click the JSF_EJB_App-ejb project, and then choose New/Entity Classes From Database. As a result, you'll be prompted to connect to your Sun Java System Application Server. Do it by entering the appropriate credentials.
3. In the New Entity Classes from Database window, select jdbc/mysqlPool from the Data Source combobox. If you recall from the Setting up a Data Source for your MySQL database section discussed earlier in this article, jdbc/mysqlPool is a JDBC connection pool created on your application server.
4. In the Connect dialog that appears, you'll be prompted to connect to your MySQL database. Enter password pswd, set the Remember password during this session checkbox, and then click OK.
5. In the Available Tables listbox, choose questions, and click the Add button to move it to the Selected Tables listbox. After that, click Next.
6. On the next page of the New Entity Classes from Database master, fill in the Package field. For example, you might choose the following name: myappejb.entities. And change the class name from Questions to Question in the Class Names box.
7. Next, click the Create Persistence Unit button.
8. In the Create Persistence Unit window, just click the Create button, leaving the default values of the fields.
9. In the New Entity Classes from Database dialog, click Finish.

As a result, NetBeans will generate the Question entity class, which you should edit so that the resultant class looks like the following:

package myappejb.entities;

import java.io.Serializable;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "questions")
public class Question implements Serializable {

    @Id
    @Column(name = "trackno")
    private Integer trackno;

    @Column(name = "user_email", nullable = false)
    private String userEmail;

    @Column(name = "question", nullable = false)
    private String question;

    public Question() {
    }

    public Integer getTrackno() {
        return this.trackno;
    }

    public void setTrackno(Integer trackno) {
        this.trackno = trackno;
    }

    public String getUserEmail() {
        return this.userEmail;
    }

    public void setUserEmail(String userEmail) {
        this.userEmail = userEmail;
    }

    public String getQuestion() {
        return this.question;
    }

    public void setQuestion(String question) {
        this.question = question;
    }
}

Once you're done, make sure to save all the changes made by choosing File/Save All.

Having the above code in hand, you might of course do without first generating the Question entity from the database: simply create an empty Java file in the myappejb.entities package, and then insert the above code there. You could then create the persistence unit separately. However, the idea behind building the Question entity with the wizard here is to show how you can quickly get a required piece of code to be then edited as needed, rather than creating it from scratch.

Creating Session Beans

To finish with the JSF_EJB_App-ejb project, let's proceed to creating the session bean that will be used by the web tier. In particular, you need to create the QuestionSessionBean session bean that will be responsible for persisting the data a user enters on the askquestion page. To generate the bean's frame with a wizard, follow the steps below:

1. In the Project window, right-click the JSF_EJB_App-ejb project, and then choose New/Session Bean.
2. In the New Session Bean window, enter the EJB name: QuestionSessionBean. Then specify the package: myappejb.ejb. Make sure that the Session Type is set to Stateless and Create Interface is set to Remote. Click Finish.

As a result, NetBeans should generate two Java files: QuestionSessionBean.java and QuestionSessionRemote.java.
You should modify QuestionSessionBean.java so that it contains the following code:

package myappejb.ejb;

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;
import javax.transaction.UserTransaction;
import myappejb.entities.Question;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class QuestionSessionBean implements myappejb.ejb.QuestionSessionRemote {

    /** Creates a new instance of QuestionSessionBean */
    public QuestionSessionBean() {
    }

    @Resource
    private UserTransaction utx;

    @PersistenceUnit(unitName = "JSF_EJB_App-ejbPU")
    private EntityManagerFactory emf;

    private EntityManager getEntityManager() {
        return emf.createEntityManager();
    }

    public void save(Question question) throws Exception {
        EntityManager em = getEntityManager();
        try {
            utx.begin();
            em.joinTransaction();
            em.persist(question);
            utx.commit();
        } catch (Exception ex) {
            try {
                utx.rollback();
                throw new Exception(ex.getLocalizedMessage());
            } catch (Exception e) {
                throw new Exception(e.getLocalizedMessage());
            }
        } finally {
            em.close();
        }
    }
}

Next, modify QuestionSessionRemote.java so that it looks like this:

package myappejb.ejb;

import javax.ejb.Remote;
import myappejb.entities.Question;

@Remote
public interface QuestionSessionRemote {
    void save(Question question) throws Exception;
}

Choose File/Save All to save the changes made. That's it. You just finished with your EJB module project.

Adding the JSF Framework to the Project

Now that you have the entity and session beans created, let's switch to the JSF_EJB_App-war project, where you're building the web tier for the application. Before you can proceed to building JSF pages, you need to add the JavaServer Faces framework to the JSF_EJB_App-war project. To do this, follow the steps below:

1. In the Project window, right-click the JSF_EJB_App-war project, and then choose Properties.
2. In the Project Properties window, select Frameworks from Categories, and click the Add button. As a result, the Add a Framework dialog should appear.
3. In the Add a Framework dialog, choose JavaServer Faces and click OK.
4. Then click OK in the Project Properties dialog.

As a result, NetBeans adds the JavaServer Faces framework to the JSF_EJB_App-war project. Now if you expand the Configuration Files folder under the JSF_EJB_App-war project node in the Project window, you should see, among other configuration files, faces-config.xml there. Also notice the appearance of the welcomeJSF.jsp page in the Web Pages folder.

Creating JSF Managed Beans

The next step is to create managed beans whose methods will be called from within the JSF pages. In this particular example, you need to create only one such bean: let's call it QuestionController. This can be achieved by following the steps below:

1. In the Project window, right-click the JSF_EJB_App-war project, and then choose New/Empty Java File.
2. In the New Empty Java File window, enter QuestionController as the class name and enter myappjsf.jsf in the Package field.
3. Then, click Finish.
4. In the generated empty Java file, insert the following code:

package myappjsf.jsf;

import javax.ejb.EJB;
import javax.faces.application.FacesMessage;
import javax.faces.context.FacesContext;
import myappejb.entities.Question;
import myappejb.ejb.QuestionSessionRemote;

public class QuestionController {

    @EJB
    private QuestionSessionRemote sbean;

    private Question question;

    public QuestionController() {
    }

    public Question getQuestion() {
        return question;
    }

    public void setQuestion(Question question) {
        this.question = question;
    }

    public String createSetup() {
        this.question = new Question();
        this.question.setTrackno(null);
        return "question_create";
    }

    public String create() {
        try {
            sbean.save(question);
            addSuccessMessage("Your question was successfully submitted.");
        } catch (Exception ex) {
            addErrorMessage(ex.getLocalizedMessage());
        }
        return "created";
    }

    public static void addErrorMessage(String msg) {
        FacesMessage facesMsg = new FacesMessage(FacesMessage.SEVERITY_ERROR, msg, msg);
        FacesContext fc = FacesContext.getCurrentInstance();
        fc.addMessage(null, facesMsg);
    }

    public static void addSuccessMessage(String msg) {
        FacesMessage facesMsg = new FacesMessage(FacesMessage.SEVERITY_INFO, msg, msg);
        FacesContext fc = FacesContext.getCurrentInstance();
        fc.addMessage("successInfo", facesMsg);
    }
}

Note that the bean is injected through its remote interface, QuestionSessionRemote, and that save(...) is declared void, so its result is not assigned to anything.

Next, you need to add information about the newly created JSF managed bean to the faces-config.xml configuration file automatically generated when adding the JSF framework to the project. Find this file in the JSF_EJB_App-war/Web Pages/WEB-INF folder in the Project window, and then insert the following tag between the <faces-config> and </faces-config> tags:

<managed-bean>
    <managed-bean-name>questionJSFBean</managed-bean-name>
    <managed-bean-class>myappjsf.jsf.QuestionController</managed-bean-class>
    <managed-bean-scope>session</managed-bean-scope>
</managed-bean>

Finally, make sure to choose File/Save All to save the changes made in faces-config.xml as well as in QuestionController.java.

Creating JSF Pages

To keep things simple, you create just one more JSF page: askquestion.jsp, where a user can submit a question. First, though, let's modify the welcomeJSF.jsp page so that you can use it to move on to askquestion.jsp and then return to it once a question has been submitted. To achieve this, modify welcomeJSF.jsp as follows:

<%@page contentType="text/html"%>
<%@page pageEncoding="UTF-8"%>
<%@taglib prefix="f" uri="http://java.sun.com/jsf/core"%>
<%@taglib prefix="h" uri="http://java.sun.com/jsf/html"%>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <title>JSP Page</title>
</head>
<body>
    <f:view>
        <h:messages errorStyle="color: red" infoStyle="color: green" layout="table"/>
        <h:form>
            <h1><h:outputText value="Ask us a question" /></h1>
            <h:commandLink action="#{questionJSFBean.createSetup}" value="New question"/>
            <br>
        </h:form>
    </f:view>
</body>
</html>

Now you can move on and create askquestion.jsp.
To do this, follow the steps below:

1. In the Project window, right-click the JSF_EJB_App-war project, and then choose New/JSP.
2. In the New JSP File window, enter askquestion as the name for the page, and click Finish.
3. Modify the newly created askquestion.jsp so that it finally looks like this:

<%@page contentType="text/html"%>
<%@page pageEncoding="UTF-8"%>
<%@taglib uri="http://java.sun.com/jsf/core" prefix="f" %>
<%@taglib uri="http://java.sun.com/jsf/html" prefix="h" %>
<html>
    <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
        <title>New Question</title>
    </head>
    <body>
        <f:view>
            <h:messages errorStyle="color: red" infoStyle="color: green" layout="table"/>
            <h1>Your question, please!</h1>
            <h:form>
                <h:panelGrid columns="2">
                    <h:outputText value="Your Email:"/>
                    <h:inputText id="userEmail" value="#{questionJSFBean.question.userEmail}" title="Your Email" />
                    <h:outputText value="Your Question:"/>
                    <h:inputTextarea id="question" value="#{questionJSFBean.question.question}" title="Your Question" rows="5" cols="35" />
                </h:panelGrid>
                <h:commandLink action="#{questionJSFBean.create}" value="Create"/>
                <br>
                <a href="/JSF_EJB_App-war/index.jsp">Back to index</a>
            </h:form>
        </f:view>
    </body>
</html>

The next step is to set up page navigation. Turning back to the faces-config.xml configuration file, insert the following code there:

<navigation-rule>
    <navigation-case>
        <from-outcome>question_create</from-outcome>
        <to-view-id>/askquestion.jsp</to-view-id>
    </navigation-case>
</navigation-rule>
<navigation-rule>
    <navigation-case>
        <from-outcome>created</from-outcome>
        <to-view-id>/welcomeJSF.jsp</to-view-id>
    </navigation-case>
</navigation-rule>

Make sure that the above tags are within the <faces-config> and </faces-config> root tags.

Check It

You are now ready to check the application you just created. To do this, right-click the JSF_EJB_App-ejb project in the Project window and choose Deploy Project. After the JSF_EJB_App-ejb project is successfully deployed, right-click the JSF_EJB_App-war project and choose Run Project. As a result, the newly created application will open in a browser.

As mentioned earlier, the application contains very few pages, just three in fact. For testing purposes, you can submit a question, and then check the questions database table (for example, by issuing SELECT * FROM questions; from the MySQL Command Line Client) to make sure that everything went as planned.

Summary

Both JSF and EJB 3 are popular technologies when it comes to building enterprise applications. This simple example illustrates how you can use these technologies together in a complementary way.

Developing Applications with JBoss and Hibernate: Part 1

Packt
19 Jan 2010
4 min read
Introducing Hibernate

Hibernate provides a bridge between the database and the application by persisting application objects in the database, rather than requiring the developer to write and maintain lots of code to store and retrieve objects. The main configuration file, hibernate.cfg.xml, specifies how Hibernate obtains database connections, either from a JNDI DataSource or from a JDBC connection pool. Additionally, the configuration file defines the persistent classes, which are backed by mapping definition files.

This is a sample hibernate.cfg.xml configuration file that is used to handle connections to a MySQL database, mapping the com.sample.MyClass class:

<hibernate-configuration>
    <session-factory>
        <property name="connection.username">user</property>
        <property name="connection.password">password</property>
        <property name="connection.url">jdbc:mysql://localhost/database</property>
        <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
        <property name="dialect">org.hibernate.dialect.MySQLDialect</property>
        <mapping resource="com/sample/MyClass.hbm.xml"/>
    </session-factory>
</hibernate-configuration>

From our point of view, it is important to know that Hibernate applications can coexist in both managed and non-managed environments. An application server is a typical example of a managed environment that provides services to hosted applications, such as connection pooling and transactions. On the other hand, a non-managed environment refers to standalone applications, such as Swing Java clients, that typically lack any built-in service. In this article, we will focus on managed environment applications, installed on JBoss Application Server.

You will not need to download any library to your JBoss installation. As a matter of fact, the JBoss persistence layer is designed around the Hibernate API, so it already contains all the core libraries.

Creating a Hibernate application

You can choose different strategies for building a Hibernate application. For example, you could start building Java classes and mapping files from scratch, and then let Hibernate generate the database schema accordingly. You can also start from a database schema and reverse engineer it into Java classes and Hibernate mapping files. We will choose the latter option, which is also the fastest.

Here's an overview of our application. In this example, we will design an employee agenda divided into departments. The persistence model will be developed with Hibernate, using the reverse engineering facet of JBoss Tools. We will then need an interface for recording our employees and departments, and for querying them as well. The web interface will be developed using a simple Model-View-Controller (MVC) pattern and basic JSP 2.0 and servlet features.

The overall architecture of this system resembles the AppStore application that has been used to introduce JPA. As a matter of fact, this example can be used to compare the two persistence models and to decide which option best suits your project needs. We have added a short section at the end of this example to stress a few important points about this choice.

Setting up the database schema

As the first step, we create the database schema. Connect to MySQL and issue the following DDL commands:
CREATE SCHEMA hibernate;

GRANT ALL PRIVILEGES ON hibernate.* TO 'jboss'@'localhost' WITH GRANT OPTION;

CREATE TABLE `hibernate`.`department` (
    `department_id` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
    `department_name` VARCHAR(45) NOT NULL,
    PRIMARY KEY (`department_id`)
) ENGINE = InnoDB;

CREATE TABLE `hibernate`.`employee` (
    `employee_id` INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
    `employee_name` VARCHAR(45) NOT NULL,
    `employee_salary` INTEGER UNSIGNED NOT NULL,
    `employee_department_id` INTEGER UNSIGNED NOT NULL,
    PRIMARY KEY (`employee_id`),
    CONSTRAINT `FK_employee_1` FOREIGN KEY `FK_employee_1` (`employee_department_id`)
        REFERENCES `department` (`department_id`)
        ON DELETE CASCADE
        ON UPDATE CASCADE
) ENGINE = InnoDB;

With the first Data Definition Language (DDL) command, we have created a schema named hibernate that will be used to store our tables. Then, we have assigned the necessary privileges on the hibernate schema to the user jboss. Finally, we created a table named department that contains the list of company units, and another table named employee that contains the list of workers. The employee table references the department table with a foreign key constraint.
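As a preview of what the reverse-engineering step will produce from this schema, a Hibernate mapping file for the department table might look roughly like the following. This is only a sketch: the class and file names (com.sample.Department, Department.hbm.xml) are illustrative, and the file actually generated by JBoss Tools may differ in detail:

<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
    "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
    <class name="com.sample.Department" table="department" schema="hibernate">
        <id name="departmentId" type="java.lang.Integer" column="department_id">
            <generator class="identity"/>
        </id>
        <property name="departmentName" type="string" column="department_name"
                  not-null="true" length="45"/>
        <!-- one department holds many employees -->
        <set name="employees" inverse="true">
            <key column="employee_department_id"/>
            <one-to-many class="com.sample.Employee"/>
        </set>
    </class>
</hibernate-mapping>

Each persistent property is tied to a column, while the <set> element captures the foreign key relationship between employee and department as a collection on the Java side.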

Basic Coding with HornetQ: Creating and Consuming Messages

Packt
28 Nov 2012
4 min read
(For more resources related to this topic, see here.)

Installing Eclipse on Windows

You can download the Eclipse IDE for Java EE developers (in our case the ZIP file eclipse-jee-indigo-SR1-win32.zip) from http://www.eclipse.org/downloads/. Once downloaded, you have to unzip the eclipse folder inside the archive to the destination folder. Now a double-click on the eclipse.exe file will fire the first run of Eclipse.

Installing NetBeans on Windows

NetBeans is one of the most frequently used IDEs for Java development. It mimics Eclipse's plugin-based installation, so you can download the J2EE version from http://netbeans.org/downloads/. But remember that this version also comes with an integrated GlassFish application server and a Tomcat server. Even in this case, you only need to download the .exe file (java_ee_sdk-6u3-jdk7-windows.exe, in our case) and launch the installer. Once finished, you should be able to run the IDE by clicking on the NetBeans icon in your Windows Start menu.

Installing NetBeans on Linux

If you are using a Debian-based version of Linux like Ubuntu, installing both NetBeans and Eclipse is nothing more than typing a command from the bash shell and waiting for the installation process to finish. As we are using Ubuntu version 11, we will type the following command from a non-root user account to install Eclipse:

sudo apt-get install eclipse

The NetBeans installation procedure is slightly different, because the Ubuntu repositories do not have a package for NetBeans. So, to install NetBeans you have to download a script and then run it. If you are using a non-root user account, type the following commands in a terminal:

sudo wget http://download.netbeans.org/netbeans/7.1.1/final/bundles/netbeans-7.1.1-ml-javaee-linux.sh
sudo chmod +x netbeans-7.1.1-ml-javaee-linux.sh
./netbeans-7.1.1-ml-javaee-linux.sh

During the first run of the IDE, Eclipse will ask which default workspace new projects should be stored in. Choose the one suggested, and in case you are not planning to change it, check the Use this as the default and do not ask again checkbox so that the question is not asked again. The same happens with NetBeans, but during the installation procedure.

Post installation

Both Eclipse and NetBeans have an integrated system for upgrading them to the latest version, so once your IDE is up and running, keep it updated. For Eclipse, you can access the Update window through the menu Help | Check for Updates. NetBeans has the same functionality, which can be launched from the menu.

A 10,000-foot view of HornetQ

Before moving on with the coding phase, it is time to review some concepts that help the user and the coder better understand how HornetQ manages messages. HornetQ is only a set of Plain Old Java Objects (POJOs) compiled and grouped into JAR files. A software developer can easily grasp that this characteristic means HornetQ has no dependencies on third-party libraries. It is possible to use and even start HornetQ from any Java class; this is a great advantage over other frameworks. HornetQ deals internally only with its own set of classes, called the HornetQ core, avoiding any dependency on the JMS dialect and specifications.
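Because the broker is just POJOs, embedding it in any Java class takes only a few lines. The following is a minimal sketch, assuming the HornetQ core JARs and its standard configuration files (hornetq-configuration.xml and friends) are on the classpath; the class name is arbitrary:

import org.hornetq.core.server.embedded.EmbeddedHornetQ;

public class EmbeddedServerExample {
    public static void main(String[] args) throws Exception {
        // Boots a HornetQ broker inside the current JVM, reading
        // its configuration files from the classpath by default
        EmbeddedHornetQ server = new EmbeddedHornetQ();
        server.start();
        System.out.println("Embedded HornetQ server started");

        // ... application code would run here ...

        server.stop(); // shut the broker down cleanly
    }
}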
Nevertheless, a client that connects to the HornetQ server can speak the JMS language, so the HornetQ server also includes a translator from JMS to the core HornetQ API. This means that when you send a JMS message to a HornetQ server, it is received as JMS and then translated into the core API dialect to be managed internally by HornetQ.

The core messaging concepts of HornetQ are somewhat simpler than those of JMS:

- Message: This is a unit of data that is sent by a producer and delivered to a consumer. A message has various attributes, among them durability, priority, expiry time, a timestamp, and a size.
- Address: HornetQ maintains an association between an address (a logical routing destination on the server, not a network address) and the queues available at that address. A message is sent to an address.
- Queue: This is nothing more than a set of messages. Like messages, queues have attributes such as durability, whether they are temporary, and filter expressions.
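To make these concepts concrete, here is a sketch of sending a message through the core API. The address and queue names are hypothetical, and the example assumes the HornetQ client JARs are available and a server is listening with a Netty acceptor on the default port:

import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.core.client.*;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;

public class CoreSenderExample {
    public static void main(String[] args) throws Exception {
        // Locate the server through a Netty connector
        ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(
                new TransportConfiguration(NettyConnectorFactory.class.getName()));
        ClientSessionFactory factory = locator.createSessionFactory();
        ClientSession session = factory.createSession();
        try {
            // Bind a durable queue to the address (fails if it already exists)
            session.createQueue("example.address", "example.queue", true);
            ClientProducer producer = session.createProducer("example.address");
            ClientMessage message = session.createMessage(true); // durable message
            message.getBodyBuffer().writeString("Hello, HornetQ!");
            producer.send(message); // routed to every queue bound to the address
        } finally {
            session.close();
            factory.close();
            locator.close();
        }
    }
}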

Creating a WCF Service, Business Object and Data Submission with Silverlight 4

Packt
23 Apr 2010
10 min read
Data applications

When building applications that utilize data, it is important to start with defining what data you are going to collect and how it will be stored once collected. In the last chapter, we created a Silverlight application to post a collection of ink strokes to the server. We are going to expand the inkPresenter control to allow a user to submit additional information. Most developers will have had experience building business object layers, and with Silverlight we can still make use of these objects, either by using referenced class projects/libraries or by consuming WCF services and utilizing the associated data contracts.

Time for action – creating a business object

We'll create a business object that can be used by both Silverlight and our ASP.NET application. To accomplish this, we'll create the business object in our ASP.NET application, define it as a data contract, and expose it to Silverlight via our WCF service.

Start Visual Studio and open the CakeORamaData solution. When we created the solution, we originally created a Silverlight application and an ASP.NET web project. In the web project, add a reference to the System.Runtime.Serialization assembly. Right-click on the web project and choose to add a new class. Name this class ServiceObjects and click OK. In the ServiceObjects class file, replace the existing code with the following code:

using System;
using System.Runtime.Serialization;

namespace CakeORamaData.Web
{
    [DataContract]
    public class CustomerCakeIdea
    {
        [DataMember]
        public string CustomerName { get; set; }

        [DataMember]
        public string PhoneNumber { get; set; }

        [DataMember]
        public string Email { get; set; }

        [DataMember]
        public DateTime EventDate { get; set; }

        [DataMember]
        public StrokeInfo[] Strokes { get; set; }
    }

    [DataContract]
    public class StrokeInfo
    {
        [DataMember]
        public double Width { get; set; }

        [DataMember]
        public double Height { get; set; }

        [DataMember]
        public byte[] Color { get; set; }

        [DataMember]
        public byte[] OutlineColor { get; set; }

        [DataMember]
        public StylusPointInfo[] Points { get; set; }
    }

    [DataContract]
    public class StylusPointInfo
    {
        [DataMember]
        public double X { get; set; }

        [DataMember]
        public double Y { get; set; }
    }
}

What we are doing here is defining the data that we'll be collecting from the customer.

What just happened?

We just added a business object that will be used by our WCF service and our Silverlight application. We added serialization attributes to our class so that it can be serialized with WCF and consumed by Silverlight. The [DataContract] and [DataMember] attributes are the serialization attributes that WCF will use when serializing our business object for transmission. WCF provides an opt-in model, meaning that types used with WCF must include these attributes in order to participate in serialization. The [DataContract] attribute is required; however, if you wish to, you can use the [DataMember] attribute on any of the properties of the class.

By default, WCF will use the System.Runtime.Serialization.DataContractSerializer to serialize the DataContract classes into XML. The .NET Framework also provides a NetDataContractSerializer, which includes CLR information in the XML, and the DataContractJsonSerializer, which converts the object into JavaScript Object Notation (JSON). The WebGet attribute provides an easy way to define which serializer is used.
For more information on these serializers and the WebGet attribute, visit the following MSDN web sites:

- http://msdn.microsoft.com/en-us/library/system.runtime.serialization.datacontractserializer.aspx
- http://msdn.microsoft.com/en-us/library/system.runtime.serialization.netdatacontractserializer.aspx
- http://msdn.microsoft.com/en-us/library/system.runtime.serialization.json.datacontractjsonserializer.aspx
- http://msdn.microsoft.com/en-us/library/system.servicemodel.web.webgetattribute.aspx

Windows Communication Foundation (WCF)

Windows Communication Foundation (WCF) provides a simplified development experience for connected applications using the service-oriented programming model. WCF builds upon and improves the web service model by providing flexible channels with which to connect and communicate with a web service. By utilizing these channels, developers can expose their services to a wide variety of client applications such as Silverlight, Windows Presentation Foundation, and Windows Forms.

Service-oriented applications provide a scalable and reusable programming model, allowing applications to expose limited and controlled functionality to a variety of consuming clients such as web sites, enterprise applications, smart clients, and Silverlight applications. When building WCF applications, the service contract is typically defined by an interface decorated with attributes that declare the service and the operations. Using an interface allows the contract to be separated from the implementation and is the standard practice with WCF. You can read more about Windows Communication Foundation on the MSDN website at http://msdn.microsoft.com/en-us/netframework/aa663324.aspx.

Time for action – creating a Silverlight-enabled WCF service

Now that we have our business object, we need to define a WCF service that can accept the business object and save the data to an XML file.

With the CakeORamaData solution open, right-click on the web project and choose to add a new folder; rename it to Services. Right-click on the web project again and choose to add a new item. Add a new WCF Service named CakeService.svc to the Services folder. This will create interface and implementation files for our WCF service. Avoid adding the Silverlight-enabled WCF service, as this adds a service that goes against the standard design patterns used with WCF.

The standard design practice with WCF is to create an interface that defines the ServiceContract and OperationContracts of the service. The interface is then given a default implementation on the server. When the service is exposed through metadata, the interface will be used to define the operations of the service and generate the client classes. The Silverlight-enabled WCF service does not create an interface, just an implementation; it is there as a quick entry point into WCF for developers new to the technology.

Replace the code in the ICakeService.cs file with the definition below. We are defining a contract with one operation that allows a client application to submit a CustomerCakeIdea instance:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;

namespace CakeORamaData.Web.Services
{
    // NOTE: If you change the interface name "ICakeService" here, you must also update the reference to "ICakeService" in Web.config.
    [ServiceContract]
    public interface ICakeService
    {
        [OperationContract]
        void SubmitCakeIdea(CustomerCakeIdea idea);
    }
}

The CakeService.svc.cs file will contain the implementation of our service interface. Add the following code to the body of the CakeService.svc.cs file to save the customer information to an XML file:

using System;
using System.ServiceModel.Activation;
using System.Xml;

namespace CakeORamaData.Web.Services
{
    // NOTE: If you change the class name "CakeService" here, you must also update the reference to "CakeService" in Web.config.
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class CakeService : ICakeService
    {
        public void SubmitCakeIdea(CustomerCakeIdea idea)
        {
            if (idea == null)
                return;

            using (var writer = XmlWriter.Create(String.Format(@"C:\Projects\CakeORama\CustomerData\{0}.xml", idea.CustomerName)))
            {
                writer.WriteStartDocument();

                // <customer>
                writer.WriteStartElement("customer");
                writer.WriteAttributeString("name", idea.CustomerName);
                writer.WriteAttributeString("phone", idea.PhoneNumber);
                writer.WriteAttributeString("email", idea.Email);

                // <eventDate></eventDate>
                writer.WriteStartElement("eventDate");
                writer.WriteValue(idea.EventDate);
                writer.WriteEndElement();

                // <strokes>
                writer.WriteStartElement("strokes");
                if (idea.Strokes != null && idea.Strokes.Length > 0)
                {
                    foreach (var stroke in idea.Strokes)
                    {
                        // <stroke>
                        writer.WriteStartElement("stroke");
                        writer.WriteAttributeString("width", stroke.Width.ToString());
                        writer.WriteAttributeString("height", stroke.Height.ToString());

                        writer.WriteStartElement("color");
                        writer.WriteAttributeString("a", stroke.Color[0].ToString());
                        writer.WriteAttributeString("r", stroke.Color[1].ToString());
                        writer.WriteAttributeString("g", stroke.Color[2].ToString());
                        writer.WriteAttributeString("b", stroke.Color[3].ToString());
                        writer.WriteEndElement();

                        writer.WriteStartElement("outlineColor");
                        writer.WriteAttributeString("a", stroke.OutlineColor[0].ToString());
                        writer.WriteAttributeString("r", stroke.OutlineColor[1].ToString());
                        writer.WriteAttributeString("g", stroke.OutlineColor[2].ToString());
                        writer.WriteAttributeString("b", stroke.OutlineColor[3].ToString());
                        writer.WriteEndElement();

                        if (stroke.Points != null && stroke.Points.Length > 0)
                        {
                            writer.WriteStartElement("points");
                            foreach (var point in stroke.Points)
                            {
                                writer.WriteStartElement("point");
                                writer.WriteAttributeString("x", point.X.ToString());
                                writer.WriteAttributeString("y", point.Y.ToString());
                                writer.WriteEndElement();
                            }
                            writer.WriteEndElement();
                        }

                        // </stroke>
                        writer.WriteEndElement();
                    }
                }
                // </strokes>
                writer.WriteEndElement();

                // </customer>
                writer.WriteEndElement();
                writer.WriteEndDocument();
            }
        }
    }
}

We added the AspNetCompatibilityRequirements attribute to our CakeService implementation. This attribute is required in order to use a WCF service from within ASP.NET. Open Windows Explorer and create the path C:\Projects\CakeORama\CustomerData on your hard drive to store the customer XML files. One thing to note is that you will need to grant write permission on this directory to the ASP.NET user account in a production environment.

When adding a WCF service through Visual Studio, binding information is added to the web.config file. The default binding for WCF is wsHttpBinding, which is not a valid binding for Silverlight. The valid bindings for Silverlight are basicHttpBinding, binaryHttpBinding (implemented with a customBinding), and netTcpBinding.
We need to modify the web.config so that Silverlight can consume the service. Open the web.config file and add this customBinding section to the <system.serviceModel> node:

<bindings>
    <customBinding>
        <binding name="customBinding0">
            <binaryMessageEncoding />
            <httpTransport>
                <extendedProtectionPolicy policyEnforcement="Never" />
            </httpTransport>
        </binding>
    </customBinding>
</bindings>

We'll need to change the <service> node in the web.config to use our new customBinding (we use the customBinding to implement binary HTTP, which sends the information as a binary stream to the service) rather than the wsHttpBinding. Change it from:

<service behaviorConfiguration="CakeORamaData.Web.Services.CakeServiceBehavior" name="CakeORamaData.Web.Services.CakeService">
    <endpoint address="" binding="wsHttpBinding" contract="CakeORamaData.Web.Services.ICakeService">
        <identity>
            <dns value="localhost" />
        </identity>
    </endpoint>
    <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
</service>

To the following:

<service behaviorConfiguration="CakeORamaData.Web.Services.CakeServiceBehavior" name="CakeORamaData.Web.Services.CakeService">
    <endpoint address="" binding="customBinding" bindingConfiguration="customBinding0" contract="CakeORamaData.Web.Services.ICakeService" />
    <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
</service>

Set the start page to the CakeService.svc file, then build and run the solution. A service information page will be displayed, which lets us know that the service and bindings are set up correctly.

Our next step is to add the service reference to Silverlight. On the Silverlight project, right-click on the References node and choose Add a Service Reference. On the dialog that opens, click the Discover button and choose the Services in Solution option; Visual Studio will search the current solution for any services. Visual Studio will find our CakeService, and all we have to do is change the Namespace to something that makes sense, such as Services, and click the OK button.

We can see that Visual Studio has added some additional references and files to our project. Developers used to WCF or Web Services will notice the assembly references and the Service References folder.

Silverlight creates a ServiceReferences.ClientConfig file that stores the configuration for the service bindings. If we open this file, we can take a look at the client-side bindings to our WCF service. These bindings tell our Silverlight application how to connect to the WCF service and the URL where it is located:

<configuration>
    <system.serviceModel>
        <bindings>
            <customBinding>
                <binding name="CustomBinding_ICakeService">
                    <binaryMessageEncoding />
                    <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647">
                        <extendedProtectionPolicy policyEnforcement="Never" />
                    </httpTransport>
                </binding>
            </customBinding>
        </bindings>
        <client>
            <endpoint address="http://localhost:2268/Services/CakeService.svc"
                      binding="customBinding" bindingConfiguration="CustomBinding_ICakeService"
                      contract="Services.ICakeService" name="CustomBinding_ICakeService" />
        </client>
    </system.serviceModel>
</configuration>

An Introduction to Hibernate and Spring: Part 2

Packt
29 Dec 2009
6 min read
Object relational mapping

As the previous discussion shows, we are looking for a solution that enables applications to work with the object representation of the data in database tables, rather than dealing directly with that data. This approach isolates the business logic from any relational issues that might arise in the persistence layer. The strategy to carry out this isolation is generally called object/relational mapping (O/R Mapping, or simply ORM).

A broad range of ORM solutions have been developed. At the basic level, each ORM framework maps entity objects to JDBC statement parameters when the objects are persisted, and maps the JDBC query results back to the object representation when they are retrieved. Developers typically implement this framework approach when they use pure JDBC. Furthermore, ORM frameworks often provide more sophisticated object mappings, such as the mapping of inheritance hierarchies and object associations, lazy loading, and caching of the persistent objects. Caching enables ORM frameworks to hold repeatedly fetched data in memory: instead of being fetched from the database on subsequent requests, which causes inefficiency and delayed responses, the objects are returned to the application from memory. Lazy loading, another great feature of ORM frameworks, allows an object to be loaded without initializing its associated objects until those objects are accessed.

ORM frameworks usually use mapping definitions, such as metadata, XML files, or Java annotations, to determine how each class and its persistent fields should be mapped onto database tables and columns. These frameworks are usually configured declaratively, which allows the production of more flexible code. Many ORM solutions provide an object query language, which allows querying the persistent objects in an object-oriented form, rather than working directly with tables and columns through SQL. This behavior allows the application to be more isolated from the database properties.

Hibernate as an O/R Mapping solution

For a long time, Hibernate has been the most popular persistence framework in the Java community. Hibernate aims to overcome the already mentioned impedance mismatch between object-oriented applications and relational databases. With Hibernate, we can treat the database as an object-oriented store, thereby eliminating the mismatch between the object-oriented and relational environments. Hibernate is a mediator that connects the object-oriented environment to the relational environment. It provides persistence services for an application by performing all of the required operations in the communication between the object-oriented and relational environments. Storing, updating, removing, and loading can be done regardless of the objects' persistent form. In addition, Hibernate increases the application's effectiveness and performance, makes the code less verbose, and allows the code to be more focused on business rules than persistence logic.

Hibernate fully supports object orientation, meaning all aspects of objects, such as association and inheritance, are properly persisted. Hibernate can also persist object navigation, that is, how an object is navigable through its associated objects. It caches data that is fetched repeatedly and provides lazy loading, which notably enhances database performance. As you will see, Hibernate provides caches at two levels: a built-in first-level cache and pluggable second-level cache strategies.
The first-level cache is a required property for any ORM to preserve object consistency. It guarantees that the application always works with consistent objects. This stems from the fact that many threads in the application use the ORM to persist objects that might be associated with the same table rows in the database.

Hibernate provides its own query language, the Hibernate Query Language (HQL). At runtime, HQL expressions are transformed to their corresponding SQL statements, based on the database used. Because databases may use different versions of SQL and may expose different features, Hibernate presents a new concept, called an SQL dialect, to distinguish how databases differ. Furthermore, Hibernate allows SQL expressions to be used either declaratively or programmatically, which is useful in specific situations when Hibernate does not satisfy application persistence requirements. Hibernate keeps track of object changes through snapshot comparisons to prevent unnecessary updating.

Other O/R Mapping solutions

Although Hibernate is the most popular persistence framework, many other frameworks do exist. Some of these are explained as follows:

- Enterprise JavaBeans (EJB): It is a standard J2EE (Java 2 Enterprise Edition) technology that defines a different type of persistence by presenting entity beans. EJB may be preferred in architectures that rely on declarative middleware services provided by the application server, such as transactions. However, due to its complexity, nontransparent persistence, and need for a container (all of which make it difficult to implement, test, and maintain), EJB is less often used than other persistence frameworks.
- iBatis SQL Map: It is a result set mapping framework which works at the SQL level, allowing SQL string definitions with parameter placeholders in XML files. At runtime, the placeholders are filled with runtime values, either from simple parameter objects, JavaBeans properties, or a parameter map. To their advantage, SQL maps allow SQL to be fully customized for a specific database. To their disadvantage, however, these maps do not provide an abstraction from the specific features of the target database.
- Java Data Objects (JDO): It is a specification for general object persistence in any kind of data store, including relational databases and object-oriented databases. Most JDO implementations support using metadata mapping definitions. JDO provides its own query language, JDOQL, and its own strategy for change detection.
- TopLink: It provides a visual mapping editor (Mapping Workbench) and offers a particularly wide range of object-relational mappings, including a complete set of direct and relational mappings, object-to-XML mappings, and JAXB (Java API for XML Binding) support. TopLink provides a rich query framework that supports an object-oriented expression framework, EJB QL, SQL, and stored procedures. It can be used in either a JSE or a JEE environment.

Hibernate's designers have borrowed many concepts and useful features from these ancestors.

Hibernate versus other frameworks

Unlike the frameworks just mentioned, Hibernate is easy to learn, simple to use, comprehensive, and (unlike EJB) does not need an application server. Hibernate is well documented, and many resources are available for it. Downloaded more than three million times, Hibernate is used in many applications around the world.
To use Hibernate, you need only J2SE 1.2 or later, and it can be used in standalone or distributed applications. The current version of Hibernate is 3, but the usage and configuration of this version are very similar to version 2, and most of the changes in Hibernate 3 are compatible with Hibernate 2. Hibernate solves many of the problems of mapping objects to a relational environment, isolating the application from many persistence issues. Keep in mind that Hibernate is not a replacement for JDBC. Rather, it can be thought of as a tool that connects to the database through JDBC and presents an object-oriented, application-level view of the database.
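To make that "application-level view" concrete, here is a short sketch of typical Hibernate usage. The Employee class and the hibernate.cfg.xml it relies on are hypothetical; what matters is the pattern of building a SessionFactory, opening a Session, and querying the object model with HQL:

import java.util.List;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class HibernateQuickStart {
    public static void main(String[] args) {
        // Reads hibernate.cfg.xml from the classpath and builds the factory
        SessionFactory factory = new Configuration().configure().buildSessionFactory();
        Session session = factory.openSession();
        try {
            // Save a new object; Hibernate issues the INSERT through JDBC
            Employee e = new Employee();
            e.setName("John Smith");
            session.beginTransaction();
            session.save(e);
            session.getTransaction().commit();

            // HQL queries the object model, not tables or columns
            List employees = session
                    .createQuery("from Employee e where e.name like :n")
                    .setParameter("n", "John%")
                    .list();
            System.out.println("Matches: " + employees.size());
        } finally {
            session.close();
            factory.close();
        }
    }
}

Note how no SQL appears in the application code: the HQL expression refers to the Employee class and its name property, and Hibernate translates it to the dialect of the configured database.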