How-To Tutorials - Application Development

357 Articles
Vaadin project with Spring and Handling login with Spring

Packt
22 Jul 2013
16 min read
(For more resources related to this topic, see here.)

Setting up a Vaadin project with Spring in Maven

We will set up a new Maven project for a Vaadin application that uses the Spring framework. We will use a Java annotation-driven approach for the Spring configuration instead of XML configuration files. This means we will keep XML usage to the necessary minimum (XML fans, don't worry: there will still be enough XML to edit). In this recipe, we will set up a Spring project where we define a bean that can be obtained from the Spring application context in the Vaadin code. As the final result, we will greet a lady named Adela by displaying the text "Hi Adela!" on the screen. The brilliant thing about this is that the greeting text comes from a bean that we define via Spring.

Getting ready

First, we create a new Maven project:

mvn archetype:generate -DarchetypeGroupId=com.vaadin -DarchetypeArtifactId=vaadin-archetype-application -DarchetypeVersion=LATEST -Dpackaging=war -DgroupId=com.packtpub.vaadin -DartifactId=vaadin-with-spring -Dversion=1.0

More information about Maven and Vaadin can be found at https://vaadin.com/book/-/page/getting-started.maven.html.

How to do it...

Carry out the following steps in order to set up a Vaadin project with Spring in Maven:

1. First, we need to add the necessary dependencies. Add the following Maven dependencies to the pom.xml file:

<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-core</artifactId>
  <version>${spring.version}</version>
</dependency>
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-context</artifactId>
  <version>${spring.version}</version>
</dependency>
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-web</artifactId>
  <version>${spring.version}</version>
</dependency>
<dependency>
  <groupId>cglib</groupId>
  <artifactId>cglib</artifactId>
  <version>2.2.2</version>
</dependency>

In the preceding code, we refer to the spring.version property. Make sure the Spring version is defined inside the properties tag in the pom.xml file:

<properties>
  ...
  <spring.version>3.1.2.RELEASE</spring.version>
</properties>

At the time of writing, the latest version of Spring was 3.1.2. Check the latest version of the Spring framework at http://www.springsource.org/spring-framework.

2. The last step in the Maven configuration file is to add a new repository to pom.xml, so Maven knows where to download the Spring dependencies from:

<repositories>
  ...
  <repository>
    <id>springsource-repo</id>
    <name>SpringSource Repository</name>
    <url>http://repo.springsource.org/release</url>
  </repository>
</repositories>

3. Now we need to add a few lines of XML to the src/main/webapp/WEB-INF/web.xml deployment descriptor. At this point, we take the first step in connecting Spring with Vaadin. The contextConfigLocation value needs to match the full class name of the configuration class:

<context-param>
  <param-name>contextClass</param-name>
  <param-value>
    org.springframework.web.context.support.AnnotationConfigWebApplicationContext
  </param-value>
</context-param>
<context-param>
  <param-name>contextConfigLocation</param-name>
  <param-value>com.packtpub.vaadin.AppConfig</param-value>
</context-param>
<listener>
  <listener-class>
    org.springframework.web.context.ContextLoaderListener
  </listener-class>
</listener>

4. Create a new class AppConfig inside the com.packtpub.vaadin package and annotate it with the @Configuration annotation.
Then create a new @Bean definition, as shown:

package com.packtpub.vaadin;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class AppConfig {

  @Bean(name="userService")
  public UserService helloWorld() {
    return new UserServiceImpl();
  }
}

5. In order to complete the recipe, we need a class that represents a domain class. Create a new class called User:

public class User {
  private String name;
  // generate getters and setters for the name field
}

6. UserService is a simple interface defining a single method called getUser(). When the getUser() method is called in this recipe, we always create and return a new instance of the user (in the future, we could add parameters, for example a login, and fetch the user from the database). UserServiceImpl is the implementation of this interface. As mentioned, we could replace that implementation with something smarter than just returning a new instance of the same user every time the getUser() method is called.

public interface UserService {
  public User getUser();
}

public class UserServiceImpl implements UserService {
  @Override
  public User getUser() {
    User user = new User();
    user.setName("Adela");
    return user;
  }
}

7. Almost everything is ready now. We just make a new UI and get the application context, from which we get the bean. Then, we call the service and obtain a user that we show in the browser. After we are done with the UI, we can run the application.

public class AppUI extends UI {

  private ApplicationContext context;

  @Override
  protected void init(VaadinRequest request) {
    UserService service = getUserService(request);
    User user = service.getUser();
    String name = user.getName();

    Label lblUserName = new Label("Hi " + name + "!");
    VerticalLayout layout = new VerticalLayout();
    layout.setMargin(true);
    setContent(layout);
    layout.addComponent(lblUserName);
  }

  private UserService getUserService(VaadinRequest request) {
    WrappedSession session = request.getWrappedSession();
    HttpSession httpSession = ((WrappedHttpSession) session).getHttpSession();
    ServletContext servletContext = httpSession.getServletContext();
    context = WebApplicationContextUtils.getRequiredWebApplicationContext(servletContext);
    return (UserService) context.getBean("userService");
  }
}

8. Run the following Maven commands in order to compile the widget set and run the application:

mvn package
mvn jetty:run

How it works...

In the first step, we added dependencies on Spring. There was one additional dependency on cglib, the Code Generation Library. This library is required by the @Configuration annotation and is used by Spring for making proxy objects. More information about cglib can be found at http://cglib.sourceforge.net.

Then, we added contextClass, contextConfigLocation, and ContextLoaderListener to the web.xml file. All of these are needed in order to initialize the application context properly. Due to this, we are able to get the application context by calling the following code:

WebApplicationContextUtils.getRequiredWebApplicationContext(servletContext);

Then, we made UserService, which is actually not a real service in this case (we did so because a real service was not in the scope of this recipe). We will have a look at how to declare Spring services in the following recipes.

In the last step, we got the application context by using the WebApplicationContextUtils class from Spring:
WrappedSession session = request.getWrappedSession();
HttpSession httpSession = ((WrappedHttpSession) session).getHttpSession();
ServletContext servletContext = httpSession.getServletContext();
context = WebApplicationContextUtils.getRequiredWebApplicationContext(servletContext);

Then, we obtained an instance of UserService from the Spring application context:

UserService service = (UserService) context.getBean("userService");
User user = service.getUser();

We can also obtain a bean without knowing the bean name, because it can be obtained by the bean type, like this: context.getBean(UserService.class).

There's more...

Using the @Autowired annotation in classes that are not managed by Spring (classes that are not defined in AppConfig, in our case) will not work, so no instances will be set via the @Autowired annotation.

Handling login with Spring

We will create login functionality in this recipe. The user will be able to log in as an admin or a client. We will not use a database in this recipe; instead, we will use a dummy service where we just hardcode two users. The first user will be "admin" and the second user will be "client". There will also be two authorities (or roles), ADMIN and CLIENT. We will use Java annotation-driven Spring configuration.

Getting ready

Create a new Maven project from the Vaadin archetype:

mvn archetype:generate -DarchetypeGroupId=com.vaadin -DarchetypeArtifactId=vaadin-archetype-application -DarchetypeVersion=LATEST -Dpackaging=war -DgroupId=com.app -DartifactId=vaadin-spring-login -Dversion=1.0

The Maven archetype generates the basic structure of the project. We will add the packages and classes in the following steps.

How to do it...

Carry out the following steps in order to create a login with the Spring framework:

1. We need to add Maven dependencies in pom.xml on spring-core, spring-context, spring-web, spring-security-core, spring-security-config, and cglib (cglib is required by the @Configuration annotation from Spring):

<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-core</artifactId>
  <version>${spring.version}</version>
</dependency>
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-context</artifactId>
  <version>${spring.version}</version>
</dependency>
<dependency>
  <groupId>org.springframework</groupId>
  <artifactId>spring-web</artifactId>
  <version>${spring.version}</version>
</dependency>
<dependency>
  <groupId>org.springframework.security</groupId>
  <artifactId>spring-security-core</artifactId>
  <version>${spring.version}</version>
</dependency>
<dependency>
  <groupId>org.springframework.security</groupId>
  <artifactId>spring-security-config</artifactId>
  <version>${spring.version}</version>
</dependency>
<dependency>
  <groupId>cglib</groupId>
  <artifactId>cglib</artifactId>
  <version>2.2.2</version>
</dependency>

2. Now we edit the web.xml file, so Spring knows we want to use the annotation-driven configuration approach. The path to the AppConfig class must be the full class name (together with the package name):

<context-param>
  <param-name>contextClass</param-name>
  <param-value>
    org.springframework.web.context.support.AnnotationConfigWebApplicationContext
  </param-value>
</context-param>
<context-param>
  <param-name>contextConfigLocation</param-name>
  <param-value>com.app.config.AppConfig</param-value>
</context-param>
<listener>
  <listener-class>
    org.springframework.web.context.ContextLoaderListener
  </listener-class>
</listener>

We are referring to the AppConfig class in the previous step.
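Before implementing AppConfig, it is worth making the earlier "There's more" note concrete: @Autowired is honored only inside beans that Spring itself manages (here, the ones found by AppConfig's component scanning). A minimal illustrative sketch; the class names are hypothetical, not part of the recipe:

import com.vaadin.ui.VerticalLayout;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Component;

@Component
class ManagedBean {
  @Autowired
  private UserService userService; // injected: Spring manages this bean
}

class PlainVaadinComponent extends VerticalLayout {
  @Autowired
  private UserService userService; // stays null: Spring never sees this instance

  PlainVaadinComponent(ApplicationContext context) {
    // fetch the bean manually from the application context instead
    this.userService = context.getBean(UserService.class);
  }
}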
3. Let's now implement the AppConfig class. AppConfig needs to be annotated with the @Configuration annotation, so Spring accepts it as the context configuration class. We also add the @ComponentScan annotation, which makes sure that Spring scans the specified packages for Spring components. The package names inside the @ComponentScan annotation need to match the packages that we want to include for scanning. When a component (a class annotated with the @Component annotation) is found and there is an @Autowired annotation inside it, the auto wiring happens automatically.

package com.app.config;

import com.app.auth.AuthManager;
import com.app.service.UserService;
import com.app.ui.LoginFormListener;
import com.app.ui.LoginView;
import com.app.ui.UserView;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;

@Configuration
@ComponentScan(basePackages = {"com.app.ui", "com.app.auth", "com.app.service"})
public class AppConfig {

  @Bean
  public AuthManager authManager() {
    AuthManager res = new AuthManager();
    return res;
  }

  @Bean
  public UserService userService() {
    UserService res = new UserService();
    return res;
  }

  @Bean
  public LoginFormListener loginFormListener() {
    return new LoginFormListener();
  }
}

4. We are defining three beans in AppConfig; we will implement them in this step. AuthManager will take care of the login process:

package com.app.auth;

import com.app.service.UserService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.authentication.AuthenticationManager;
import org.springframework.security.authentication.BadCredentialsException;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.AuthenticationException;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.stereotype.Component;

import java.util.Collection;

@Component
public class AuthManager implements AuthenticationManager {

  @Autowired
  private UserService userService;

  public Authentication authenticate(Authentication auth) throws AuthenticationException {
    String username = (String) auth.getPrincipal();
    String password = (String) auth.getCredentials();

    UserDetails user = userService.loadUserByUsername(username);
    if (user != null && user.getPassword().equals(password)) {
      Collection<? extends GrantedAuthority> authorities = user.getAuthorities();
      return new UsernamePasswordAuthenticationToken(username, password, authorities);
    }
    throw new BadCredentialsException("Bad Credentials");
  }
}
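One Spring Security detail worth flagging (it is not spelled out in the article): the two constructors of UsernamePasswordAuthenticationToken behave differently, which is why authenticate() uses the three-argument form only on the success path. A short fragment to illustrate:

// Two-argument form: an authentication *request*, not yet trusted
Authentication request = new UsernamePasswordAuthenticationToken(username, password);
assert !request.isAuthenticated();

// Three-argument form: a successful result, marked as authenticated
Authentication result = new UsernamePasswordAuthenticationToken(username, password, authorities);
assert result.isAuthenticated();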
UserService will fetch a user based on the passed login. UserService will be used by AuthManager:

package com.app.service;

import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.authority.SimpleGrantedAuthority;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;
import org.springframework.security.core.userdetails.User;
import org.springframework.stereotype.Service;

import java.util.ArrayList;
import java.util.List;

public class UserService implements UserDetailsService {

  @Override
  public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
    List<GrantedAuthority> authorities = new ArrayList<GrantedAuthority>();
    // fetch the user from, for example, a database
    if ("client".equals(username)) {
      authorities.add(new SimpleGrantedAuthority("CLIENT"));
      User user = new User(username, "pass", true, true, false, false, authorities);
      return user;
    }
    if ("admin".equals(username)) {
      authorities.add(new SimpleGrantedAuthority("ADMIN"));
      User user = new User(username, "pass", true, true, false, false, authorities);
      return user;
    } else {
      return null;
    }
  }
}

LoginFormListener is a listener that initiates the login process, cooperating with AuthManager:

package com.app.ui;

import com.app.auth.AuthManager;
import com.vaadin.navigator.Navigator;
import com.vaadin.ui.*;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken;
import org.springframework.security.core.Authentication;
import org.springframework.security.core.AuthenticationException;
import org.springframework.security.core.context.SecurityContextHolder;
import org.springframework.stereotype.Component;

@Component
public class LoginFormListener implements Button.ClickListener {

  @Autowired
  private AuthManager authManager;

  @Override
  public void buttonClick(Button.ClickEvent event) {
    try {
      Button source = event.getButton();
      LoginForm parent = (LoginForm) source.getParent();
      String username = parent.getTxtLogin().getValue();
      String password = parent.getTxtPassword().getValue();

      UsernamePasswordAuthenticationToken request =
          new UsernamePasswordAuthenticationToken(username, password);
      Authentication result = authManager.authenticate(request);
      SecurityContextHolder.getContext().setAuthentication(result);

      AppUI current = (AppUI) UI.getCurrent();
      Navigator navigator = current.getNavigator();
      navigator.navigateTo("user");
    } catch (AuthenticationException e) {
      Notification.show("Authentication failed: " + e.getMessage());
    }
  }
}

5. The login form will be made as a separate Vaadin component. We use the application context directly, and in that way we get the bean from the application context ourselves; so, we are not using auto wiring in LoginForm:

package com.app.ui;

import com.vaadin.ui.*;
import org.springframework.context.ApplicationContext;

public class LoginForm extends VerticalLayout {

  private TextField txtLogin = new TextField("Login: ");
  private PasswordField txtPassword = new PasswordField("Password: ");
  private Button btnLogin = new Button("Login");

  public LoginForm() {
    addComponent(txtLogin);
    addComponent(txtPassword);
    addComponent(btnLogin);
    LoginFormListener loginFormListener = getLoginFormListener();
    btnLogin.addClickListener(loginFormListener);
  }

  public LoginFormListener getLoginFormListener() {
    AppUI ui = (AppUI) UI.getCurrent();
    ApplicationContext context = ui.getApplicationContext();
    return context.getBean(LoginFormListener.class);
  }

  public TextField getTxtLogin() {
    return txtLogin;
  }

  public PasswordField getTxtPassword() {
    return txtPassword;
  }
}

6. We will use Navigator for navigating between different views in our Vaadin application. We make two views: the first is for login and the second shows the user detail when the user is logged into the application. Both classes will be in the com.app.ui package.
LoginView will contain just the components that enable a user to log in (text fields and a button):

public class LoginView extends VerticalLayout implements View {

  public LoginView() {
    LoginForm loginForm = new LoginForm();
    addComponent(loginForm);
  }

  @Override
  public void enter(ViewChangeListener.ViewChangeEvent event) {
  }
}

UserView needs to identify whether the user is logged in or not. For this, we use SecurityContextHolder to obtain the SecurityContext that holds the authentication data. If the user is logged in, we display some data about him/her. If not, we navigate him/her to the login form.

public class UserView extends VerticalLayout implements View {

  public void enter(ViewChangeListener.ViewChangeEvent event) {
    removeAllComponents();
    SecurityContext context = SecurityContextHolder.getContext();
    Authentication authentication = context.getAuthentication();
    if (authentication != null && authentication.isAuthenticated()) {
      String name = authentication.getName();
      Label labelLogin = new Label("Username: " + name);
      addComponent(labelLogin);

      Collection<? extends GrantedAuthority> authorities = authentication.getAuthorities();
      for (GrantedAuthority ga : authorities) {
        String authority = ga.getAuthority();
        if ("ADMIN".equals(authority)) {
          Label lblAuthority = new Label("You are the administrator.");
          addComponent(lblAuthority);
        } else {
          Label lblAuthority = new Label("Granted Authority: " + authority);
          addComponent(lblAuthority);
        }
      }

      Button logout = new Button("Logout");
      LogoutListener logoutListener = new LogoutListener();
      logout.addClickListener(logoutListener);
      addComponent(logout);
    } else {
      Navigator navigator = UI.getCurrent().getNavigator();
      navigator.navigateTo("login");
    }
  }
}

7. We have mentioned LogoutListener in the previous step. Here is how that class could look:

public class LogoutListener implements Button.ClickListener {

  @Override
  public void buttonClick(Button.ClickEvent clickEvent) {
    SecurityContextHolder.clearContext();
    UI.getCurrent().close();
    Navigator navigator = UI.getCurrent().getNavigator();
    navigator.navigateTo("login");
  }
}

8. Everything is ready for the final AppUI class. In this class, we put into practice all that we have created in the previous steps. We need to get the application context; that is done in the first lines of code in the init method. In order to obtain the application context, we get the session from the request, and from the session we get the servlet context. Then, we use the Spring utility class WebApplicationContextUtils to find the application context by using the previously obtained servlet context. After that, we set up the navigator.

@PreserveOnRefresh
public class AppUI extends UI {

  private ApplicationContext applicationContext;

  @Override
  protected void init(VaadinRequest request) {
    WrappedSession session = request.getWrappedSession();
    HttpSession httpSession = ((WrappedHttpSession) session).getHttpSession();
    ServletContext servletContext = httpSession.getServletContext();
    applicationContext = WebApplicationContextUtils.getRequiredWebApplicationContext(servletContext);

    Navigator navigator = new Navigator(this, this);
    navigator.addView("login", LoginView.class);
    navigator.addView("user", UserView.class);
    navigator.navigateTo("login");
    setNavigator(navigator);
  }

  public ApplicationContext getApplicationContext() {
    return applicationContext;
  }
}

9. Now we can run the application. The password for the usernames client and admin is pass.

mvn package
mvn jetty:run

How it works...
There are two tricky parts, from the development point of view, in making this application.

The first is how to get the Spring application context in Vaadin. For this, we need to make sure that contextClass, contextConfigLocation, and ContextLoaderListener are defined in the web.xml file. Then we need to know how to get the Spring application context from the VaadinRequest. We certainly need a reference to the application context in the UI, so we define the applicationContext class field together with a public getter (because we need access to the application context from other classes, in order to get Spring beans).

The second part, which is a bit tricky, is the AppConfig class. That class represents the annotated Spring application configuration (which is referenced from the web.xml file). We needed to define what packages Spring should scan for components; for this, we used the @ComponentScan annotation. The important thing to keep in mind is that the @Autowired annotation will work only for the Spring-managed beans that we have defined in AppConfig. When we try to add the @Autowired annotation to a plain Vaadin component, the autowired reference will remain empty, because no auto wiring happens. It is up to us to decide which instances should be managed by Spring and where we use the Spring application context to retrieve the beans.

Summary

In this article, we saw how to add Spring to a Maven project. We also took a look at handling login with Spring.

Ubuntu User Interface Tweaks

Packt
01 Oct 2009
10 min read
I have spent time on all of the major desktop operating systems, and Linux is by far the most customizable. The GNOME desktop environment, the default environment of Ubuntu and many other Linux distributions, is a very simple yet very customizable interface. I have spent a lot of time around a lot of Linux users, and rarely do I find two desktops that look the same. Whether it is a simple desktop background customization or a much more complex UI alteration, GNOME allows you to make your desktop your own.

Just like any other environment that you're going to find yourself in for an extended period of time, you're going to want to make it your own. The GNOME desktop offers this ability in a number of ways. First of all, I'll cover perhaps the more obvious methods, and then I'll move to the more complex. As mentioned in the introduction, by the end of this article you'll know how to automate (script) the customization of your desktop down to the very last detail. This is perfect for those who find themselves reinstalling their machines on a regular basis.

Appearance

GNOME offers a number of basic customizations within the Applications menu. To use the "Appearance Preferences" tool, simply navigate to:

System > Preferences > Appearance

You'll find that the main screen allows you to change your basic theme. The theme includes the environment color scheme, icon set, and window bordering. This is often one of the very first things that users will change on a new installation. Of the default theme selections, I generally prefer "Clearlooks" over the default Ubuntu brown color.

The next tab allows you to set your background. This is the graphic, color, or gradient that you want to appear on your desktop. This is also a very common customization. More often than not, users will find third-party graphics for this section. A great place to find user-generated desktop content is the http://gnome-look.org website, which is dedicated to user-generated artwork for the GNOME and Ubuntu desktop.

On the third tab you'll find Fonts. I have found that fonts play a very important role in the look of your desktop. For the longest time I didn't bother with customizing my fonts, but after being introduced to a few that I like, it is a must-have in my desktop customization list. My personal preference is to use the "Droid Sans" font, at 10pt, for all settings. I think this is a very clean, crisp font design that really changes the desktop look and feel. If you'd like to try out this font set, you'll have to install it first:

sudo aptitude install ttf-droid

Another noticeable customization to your desktop on the Fonts tab is the Rendering option. For laptops, you'll definitely want to select the bottom-right option, "Subpixel Smoothing (LCDs)". You should notice a change right away when you check the box.

Finally, the Details button on the Fonts tab can make great improvements to the overall look. This is where you can set your font resolution. I highly suggest setting this value to 96 dots per inch (dpi). Recent versions of Ubuntu try to dynamically detect the preferred dpi; unfortunately, I haven't had the best of luck with this new feature, so I've continued to set it manually. I think you'll notice a change if your system is on something other than 96. Setting the font to "Droid Sans" 10pt and the resolution to 96 dpi is one of the biggest visual changes that I make to my system!
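If you would rather apply these font tweaks from a terminal (anticipating the scripting approach later in this article), the same settings can be written directly to GConf. A small sketch; the dpi key name and value type are my assumption based on the GNOME 2-era schema, so verify them in gconf-editor first:

# Set the interface font and the font resolution without opening the dialog
gconftool-2 -s --type string /desktop/gnome/interface/font_name "Droid Sans 10"
gconftool-2 -s --type float /desktop/gnome/font_rendering/dpi 96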
The final tab in the Appearances tool is the Interface. This tab allows you to customize simple things, such as whether or not your Applications menu should display icons. Personally, I have found that I like the default settings, but I would suggest trying a few customizations and finding out what you like.

If you've followed the suggestions so far, I'm sure your desktop already looks a lot different than it did out of the box. By changing the theme, desktop background, font, and dpi, you may have already made drastic changes. I'd also like to share some of the additional changes that I make, which will help demonstrate some of the more advanced, little-known features of the GNOME desktop.

gconf-editor

A default Ubuntu system comes with a behind-the-scenes tool called the gconf-editor. This is basically a graphical editor for your entire GNOME configuration settings. At first use it can be a bit confusing, but once you figure out where and how to find your preferred settings, it becomes much easier. To launch the gconf-editor, press the key combination ALT-F2 and type:

gconf-editor

I have heard people compare this tool to Microsoft's registry tool, but I assure you that it is far less complicated! It simply stores GNOME configuration and application settings. It even includes the changes that you made above! Any time you make a change to the graphical interface it gets stored, and this tool is a graphical way to view those changes.

Let's change something else, this time using the gconf-editor. Another of my favorite interface customizations involves the panels. By default you have two panels, one at the top and one at the bottom of your screen. I prefer to have both panels at the top of my screen, and I like them to be a bit smaller than they are out of the box. Here is how we would make that change using the gconf-editor.

Navigate to Edit > Search and search for bottom_panel or top_panel. I will start with bottom_panel. You should come up with a few results, the first one being /apps/panel/toplevels/bottom_panel_screen0. You can now customize the color, size, auto-hide feature, and much more of your panel. If you find orientation, double-click the entry, and change the value to "top", you'll find that your panel instantly moves to the top of the screen. You may want to alter the size entry while you're in there as well. Make a note of the Key name that you see for each item; these will come in handy a little bit later.

A few other settings that you might find interesting are the Nautilus desktop settings, such as:

computer_icon_visible
home_icon_visible
network_icon_visible
trash_icon_visible
volumes_visible

These are simple checkbox settings, activating or deactivating an option when clicked. Basically, they allow you to toggle the computer, home, network, or trash icons on your desktop. I prefer to make sure each of these is turned off. The only one that I do like to keep on is volumes_visible. Try this out yourself and see what you prefer.

Automation

Earlier I mentioned that you'll want to make note of the Key name for the settings that you're playing with. It is these names that allow us to automate, or script, the customization of our desktop environment. After putting a little bit of time into finding the key names for each of the customizations that I like, I am now able to completely customize every aspect of my desktop by running a simple script! Let me give you a few examples.
Above, we found that the key name for the bottom panel was:

/apps/panel/toplevels/bottom_panel_screen0

The key name specifically for the orientation was:

/apps/panel/toplevels/bottom_panel_screen0/orientation

The value we changed was top or bottom. We can now make this change from the command line by typing:

gconftool-2 -s --type string /apps/panel/toplevels/bottom_panel_screen0/orientation top

Let us see a few more examples. These will change the font settings for each entry that we saw in the Appearances menu:

gconftool-2 -s --type string /apps/nautilus/preferences/desktop_font "Droid Sans 10"
gconftool-2 -s --type string /apps/metacity/general/titlebar_font "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/monospace_font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/document_font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/font_name "Droid Sans 10"

You may or may not have made these changes manually, but just think about the time you could save on your next Ubuntu installation by pasting in these five commands instead! I will warn you though, once you start making a list of gconftool-2 commands, it's hard to stop. Considering how simple it is to make environment changes using simple commands, why not list everything? I'd like to share the script that I use to make my preferred changes. You'll likely want to edit the values to match your preferences.

#!/bin/bash
#
# customize GNOME interface
# (christer@rootcertified.com)
#
gconftool-2 -s --type string /apps/nautilus/preferences/desktop_font "Droid Sans 10"
gconftool-2 -s --type string /apps/metacity/general/titlebar_font "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/monospace_font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/document_font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/icon_theme "gnome-brave"
gconftool-2 -s --type bool /apps/nautilus/preferences/always_use_browser true
gconftool-2 -s --type bool /apps/nautilus/desktop/computer_icon_visible false
gconftool-2 -s --type bool /apps/nautilus/desktop/home_icon_visible false
gconftool-2 -s --type bool /apps/nautilus/desktop/network_icon_visible false
gconftool-2 -s --type bool /apps/nautilus/desktop/trash_icon_visible false
gconftool-2 -s --type bool /apps/nautilus/desktop/volumes_visible true
gconftool-2 -s --type bool /apps/nautilus-open-terminal/desktop_opens_home_dir true
gconftool-2 -s --type bool /apps/gnome-do/preferences/Do/Platform/Linux/TrayIconPreferences/StatusIconVisible true
gconftool-2 -s --type bool /apps/gnome-do/preferences/Do/CorePreferences/QuietStart true
gconftool-2 -s --type bool /apps/gnome-terminal/profiles/Default/default_show_menubar false
gconftool-2 -s --type string /apps/gnome-terminal/profiles/Default/font "Droid Sans Mono 10"
gconftool-2 -s --type string /apps/gnome-terminal/profiles/Default/scrollbar_position "hidden"
gconftool-2 -s --type string /apps/gnome/interface/gtk_theme "Shiki-Brave"
gconftool-2 -s --type string /apps/gnome/interface/icon_theme "gnome-brave"
gconftool-2 -s --type integer /apps/panel/toplevels/bottom_panel_screen0/size 23
gconftool-2 -s --type integer /apps/panel/toplevels/top_panel_screen0/size 23

Summary

By saving the above script into a file called "gnome-setup" and running it after a fresh installation, I'm able to update my theme, fonts, visible and non-visible icons, gnome-do preferences, gnome-terminal preferences, and much more within seconds. My desktop actually feels like my desktop again! I find that maintaining a simple file like this greatly eases the customization of my desktop environment and lets me focus on getting things done. I no longer spend an hour tweaking each little setting to make my machine my home again. I install, run my script, and get to work!
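To use the script exactly as the summary describes, save it as gnome-setup, make it executable, and run it:

chmod +x gnome-setup
./gnome-setup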

An Introduction to Hibernate and Spring: Part 1

Packt
29 Dec 2009
4 min read
This article by Ahmad Seddighi introduces Spring and Hibernate, explaining what persistence is, why it is important, and how it is implemented in Java applications. It provides a theoretical discussion of Hibernate and how Hibernate solves problems related to persistence. Finally, we take a look at Spring and the role of Spring in persistence.

Hibernate and Spring are open-source Java frameworks that simplify developing Java/JEE applications, from simple stand-alone applications running on a single JVM to complex enterprise applications running on full-blown application servers. Hibernate and Spring allow developers to produce scalable, reliable, and effective code. Both frameworks support declarative configuration and work with a POJO (Plain Old Java Object) programming model (discussed later in this article), minimizing the dependence of application code on the frameworks and making development more productive and portable.

Although the aims of these frameworks partially overlap, for the most part each is used for a different purpose. The Hibernate framework aims to solve the problems of managing data in Java: those problems which are not fully solved by the Java persistence API, JDBC (Java Database Connectivity), persistence providers, DBMS (Database Management Systems), and their mediator language, SQL (Structured Query Language). In contrast, Spring is a multitier framework that is not dedicated to a particular area of application architecture. However, Spring does not provide its own solution for issues such as persistence, for which there are already good solutions; rather, Spring unifies preexisting solutions under its consistent API and makes them easier to use. As mentioned, one of these areas is persistence. Spring can be integrated with a persistence solution, such as Hibernate, to provide an abstraction layer over the persistence technology and produce more portable, manageable, and effective code. Furthermore, Spring provides other services spread over the application architecture, such as inversion of control and aspect-oriented programming (explained later in this article), decoupling the application's components and modularizing common behaviors.

This article looks at the motivation and goals for Hibernate and Spring. It begins with an explanation of why Hibernate is needed, where it can be used, and what it can do. We'll take a quick look at Hibernate's alternatives, exploring their advantages and disadvantages. I'll outline the valuable features that Hibernate offers and explain how it can solve the problems of the traditional approach to Java persistence. The discussion continues with Spring. I'll explain what Spring is, what services it offers, and how it can help to develop a high-quality data-access layer with Hibernate.

Persistence management in Java

Persistence has long been a challenge in the enterprise community. Many persistence solutions, from primitive file-based approaches to modern object-oriented databases, have been presented. For any of these approaches, the goal is to provide reliable, efficient, flexible, and scalable persistence. Among these competing solutions, relational databases (because of certain advantages) have been most widely accepted in the IT world. Today, almost all enterprise applications use relational databases. A relational database is an application that provides the persistence service.
It provides many persistence features, such as indexing data to provide speedy searches; solves relevant problems, such as protecting data from unauthorized access; and handles many complications, such as preserving relationships among data. Creating, modifying, and accessing relational databases is fairly simple. All such databases present data in two-dimensional tables and support SQL, which is relatively easy to learn and understand. Moreover, they provide other services, such as transactions and replication. These advantages are enough to ensure the popularity of relational databases.

To provide support for relational databases in Java, the JDBC API was developed. JDBC allows Java applications to connect to relational databases, express their persistence purpose as SQL expressions, and transmit data to and from databases. (The original article illustrates this with a diagram of the application talking to the database through a JDBC driver.) Using this API, SQL statements can be passed to the database, and the results can be returned to the application, all through the driver.

The mismatch problem

JDBC handles many persistence issues and problems in communicating with relational databases, and it provides the needed functionality for this purpose. However, there remains an unsolved problem in Java applications: Java applications are essentially object-oriented programs, whereas relational databases store data in relational form. While applications use object-oriented forms of data, databases represent data in two-dimensional table form. This situation leads to the so-called object-relational paradigm mismatch, which (as we will see later) causes many problems in communication between object-oriented and relational environments. For many reasons, including ease of understanding, simplicity of use, efficiency, robustness, and even popularity, we cannot discard relational databases. However, the mismatch cannot be eliminated in an effortless and straightforward manner.
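To make the mismatch concrete, here is a minimal sketch of the traditional JDBC style the article describes: SQL lives in strings, and each column has to be copied into an object field by hand. The Customer class and table are my own illustration, not from the article:

import java.sql.*;

public class CustomerDao {
    // Traditional JDBC: hand-written SQL plus manual row-to-object mapping
    public Customer findById(Connection conn, long id) throws SQLException {
        String sql = "SELECT first_name, last_name FROM customer WHERE id = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) return null;
                Customer c = new Customer();                  // object-oriented side
                c.setFirstName(rs.getString("first_name"));   // relational side, copied field by field
                c.setLastName(rs.getString("last_name"));
                return c;
            }
        }
    }
}

Every entity needs this kind of repetitive mapping code; that repetition is exactly what Hibernate is designed to remove.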

Presenting Data Using ADF Faces

Packt
20 Mar 2014
7 min read
(For more resources related to this topic, see here.)

In this article, you will learn how to present a single record, multiple records, and master-detail records on your page using different components and methodologies. You will also learn how to enable internationalization and localization in your application by using a resource bundle, and the different bundle options you can have.

From this article onward, we will not use the HR schema. We will instead use the FacerHR schema in the Git repository under the BookDatabaseSchema folder; read the README.txt file for information on how to create the database schema. This schema will be used for the whole book, so you need to do this only once. Make sure you validate your database connection information for your recipes to work without problems.

Presenting single records on your page

In this recipe, we will address the need for presenting a single record in a page, which is useful when you want to focus on a specific record in a table of your database; for example, a user's profile can be represented by a single record in an employees table.

The application and its model have been created for you; you can see it by cloning the PresentingSingleRecord application from the Git repository.

How to do it...

In order to present a single record in a page, follow these steps:

1. Open the PresentingSingleRecord application.

2. Create a bounded task flow by right-clicking on ViewController and navigating to New | ADF Task Flow. Name the task flow single-employee-info and uncheck the Create with Page Fragments option.

You can create a task flow with page fragments, but you will need a page to host it in the end; alternatively, you can create a whole page if the task flow holds only one activity and is not reusable. In this case, however, I prefer to create a page-based task flow, for fast deployment cycles and to train you to always start from a task flow.

3. Add a View activity inside the task flow and name it singleEmployee.

4. Double-click on the newly created activity to create the page; this page will be based on the Oracle Three Column layout. Close the dialog by pressing the OK button.

5. Navigate to Data Controls pane | HrAppModuleDataControl, drag-and-drop EmployeesView1 into the white area of the page template, and select ADF Form from the drop-down list that appears as you drop the view object.

6. Check the Row Navigation option so that the form has first, previous, next, and last buttons for navigating through the records.

7. Group the attributes based on their category: the Personal Information group should include the EmployeeId, FirstName, LastName, Email, and PhoneNumber attributes; the Job Information group should include HireDate, Job, Salary, and CommissionPct; and the last group, Department Information, includes both the ManagerId and DepartmentId attributes. Select multiple components by holding the Ctrl key and click on the Group button at the top-right corner.

8. Change the Display Label values of the three groups to eInfo, jInfo, and dInfo respectively.

The Display Label option is a little misleading when it comes to groups in a form, as groups don't have titles. Due to this, Display Label will be assigned to the Id attribute of the af:group component that wraps the components, which can't have a space and should be reasonably small; for Input Text w/Label or Output Text w/Label components, however, Display Label ends up in the Label attribute of the panelLabelAndMessage component.
9. Change the Component to Use option of all attributes from ADF Input Text w/Label to ADF Output Text w/Label.

You might think that checking the Read-Only Form option would have the same effect, but it won't. What happens is that the readOnly attribute of the input text changes to true, which makes the input text non-updateable; however, it won't change the component type.

10. Change the Display Label option for the attributes to have more human-readable labels for the end user.

11. Finish by pressing the OK button.

You can save yourself the trouble of editing the Display Label option every time you create a component based on a view object by changing the Label attribute in UI Hints on the entity object or view object. More information can be found in the documentation at http://docs.oracle.com/middleware/1212/adf/ADFFD/bcentities.htm#sm0140.

12. Examine the page structure from the Structure pane in the bottom-left corner. A panel form layout can be found inside the center facet of the page template. This panel form layout represents an ADF form, and inside it there are three group components; each group has a panel label and message for each field of the view object. At the bottom of the panel form layout, you can locate a footer facet; expand it to see a panel group layout that holds all the navigation buttons. The footer facet identifies the location of the buttons, which will be at the bottom of this panel form layout even if some components appear inside the page markup after this facet.

13. Examine the panel form layout properties by clicking on the Properties pane, which is usually located in the bottom-right corner. It allows you to change attributes such as Max Columns, Rows, Field Width, or Label Width. Change these attributes to give the form more than one column.

If you can't see the Structure or Properties pane, you can show it again by navigating to Window menu | Structure or Window menu | Properties.

14. Save everything and run the page, placing it inside the adf-config task flow.

How it works...

The best component to represent a single record is a panel form layout, which presents the user with an organized form layout for different input/output components. If you examine the page source code, you can see an expression like #{bindings.FirstName.inputValue}, which is related to the FirstName binding inside the Bindings section of the page definition, where it points to EmployeesView1Iterator. An iterator implies multiple records, so why is FirstName presenting only a single record? It's because the iterator is aware of the current row, which represents the row in focus; this row will always point to the first row of the view object's select statement when you render the page. By pressing the different buttons on the form, the current row changes, and thus the point of focus changes to reflect a different row, based on the button you pressed.

When you are dealing with a single record, you can show it as input text or any of the user-input components; alternatively, you can show it as output text if you are just viewing it. In this recipe, you can see that the Group component is represented as a line in the user interface when you run the page. If you change the panel form layout's attributes, such as Max Columns or Rows, you will see a different view.
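To make the generated structure concrete, here is a rough, abridged sketch of what the page markup looks like after these steps. The attribute values and ids are illustrative; the exact markup JDeveloper generates will differ:

<af:panelFormLayout id="pfl1" maxColumns="2" rows="5" labelWidth="120">
  <af:group id="eInfo">
    <af:panelLabelAndMessage label="First Name" id="plam1">
      <af:outputText value="#{bindings.FirstName.inputValue}" id="ot1"/>
    </af:panelLabelAndMessage>
    <!-- ...the other Personal Information fields... -->
  </af:group>
  <!-- ...the jInfo and dInfo groups... -->
  <f:facet name="footer">
    <af:panelGroupLayout id="pgl1">
      <!-- first / previous / next / last navigation buttons -->
    </af:panelGroupLayout>
  </f:facet>
</af:panelFormLayout>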
Max Columns represents the maximum number of columns to show in a form; it defaults to 3 on desktops and 2 on PDAs. However, if this panel form layout is inside another panel form layout, the Max Columns value will always be 1. The Rows attribute represents the number of rows after which a new column starts; it has a default value of 2^31 - 1. You can learn more about each attribute by clicking on the gear icon that appears when you hover over an attribute and reading the information on the property's Help page.

The benefit of having a panel form layout is that all labels are aligned properly; it organizes everything for you, similar to the HTML table element.

See also

Check the following reference for more information about arranging content in forms: http://docs.oracle.com/middleware/1212/adf/ADFUI/af_orgpage.htm#CDEHDJEA

New SOA Capabilities in BizTalk Server 2009: WCF SQL Server Adapter

Packt
26 Oct 2009
3 min read
Do not go where the path may lead; go instead where there is no path and leave a trail.
- Ralph Waldo Emerson

Many of the patterns and capabilities shown in this article are compatible with the last few versions of the BizTalk Server product. So what's new in BizTalk Server 2009?

BizTalk Server 2009 is the sixth formal release of the BizTalk Server product. This release has a heavy focus on platform modernization, through new support for Windows Server 2008, Visual Studio 2008, SQL Server 2008, and the .NET Framework 3.5. This will surely help developers who have already moved to these platforms in their day-to-day activities but have been forced to maintain separate environments solely for BizTalk development efforts.

Let's get started.

What is the WCF SQL Adapter?

The BizTalk Adapter Pack 2.0 now contains five system and data adapters, including SAP, Siebel, Oracle databases, Oracle applications, and SQL Server. What are these adapters, and how are they different from the adapters available for previous versions of BizTalk?

Up until recently, BizTalk adapters were built using a commonly defined BizTalk Adapter Framework. This framework prescribed interfaces and APIs for adapter developers in order to elicit a common look and feel for the users of the adapters. Moving forward, adapter developers are encouraged by Microsoft to use the new WCF LOB Adapter SDK. As you can guess from the name, this new adapter framework, which can be considered an evolution of the BizTalk Adapter Framework, is based on WCF technologies.

All of the adapters in the BizTalk Adapter Pack 2.0 are built upon the WCF LOB Adapter SDK. What this means is that all of the adapters are built as reusable, metadata-rich components that are surfaced to users as WCF bindings. So, much like you have a wsHttp or netTcp binding, now you have a sqlBinding or sapBinding. As you would expect from a WCF binding, there is a rich set of configuration attributes for these adapters, and they are no longer tightly coupled to BizTalk itself. Microsoft has made connection a commodity, and no longer do organizations have to spend tens of thousands of dollars to connect to line-of-business systems like SAP through expensive, BizTalk-only adapters.

This latest version of the BizTalk Adapter Pack includes a SQL Server adapter, which replaces the legacy BizTalk-only SQL Server adapter. What do we get from this SQL Server adapter that makes it so much better than the old one?

Feature | Classic SQL Adapter | WCF SQL Adapter
Execute create-read-update-delete statements on tables and views; execute stored procedures and generic T-SQL statements | Partial (send operations only support stored procedures and updategrams) | Yes
Database polling via FOR XML | Yes | Yes
Database polling via traditional tabular results | No | Yes
Proactive database push via SQL Query Notification | No | Yes
Expansive adapter configuration which impacts connection management and transaction behavior | No | Yes
Support for composite transactions which allow aggregation of operations across tables or procedures into a single atomic transaction | No | Yes
Rich metadata browsing and retrieval for finding and selecting database operations | No | Yes
Support for the latest data types (e.g. XML) and the SQL Server 2008 platform | No | Yes
Reusable outside of BizTalk applications by WCF or basic HTTP clients | No | Yes
Adapter extension and configuration through out-of-the-box WCF components or custom WCF behaviors | No | Yes
Dynamic WSDL generation which always reflects the current state of the system, instead of a fixed contract which requires explicit updates | No | Yes
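Because the new adapter is just a WCF binding, a plain WCF client can consume it without BizTalk at all, which is what the "reusable outside of BizTalk" row above refers to. A configuration sketch: the address follows the adapter's mssql:// URI scheme (server, instance, database), while the endpoint and contract names here are hypothetical examples of what the metadata wizard might generate:

<system.serviceModel>
  <client>
    <!-- mssql://<server>/<instance>/<database> ; an empty instance segment means the default instance -->
    <endpoint address="mssql://DBSERVER//Northwind?"
              binding="sqlBinding"
              contract="TypedProcedures_dbo"
              name="SqlAdapterEndpoint" />
  </client>
</system.serviceModel>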

Modeling Relationships with GORM

Packt
09 Jun 2010
6 min read
(For more resources on Groovy DSL, see here.)

Storing and retrieving simple objects is all very well, but the real power of GORM is that it allows us to model the relationships between objects, as we will now see. The main types of relationships that we want to model are associations, where one object has an associated relationship with another (for example, Customer and Account); composition relationships, where we want to build an object from subcomponents; and inheritance, where we want to model similar objects by describing their common properties in a base class.

Associations

Every business system involves some sort of association between the main business objects. Relationships between objects can be one-to-one, one-to-many, or many-to-many. Relationships may also imply ownership, where one object only has relevance in relation to another parent object.

If we model our domain directly in the database, we need to build and manage tables and make associations between the tables by using foreign keys. For complex relationships, including many-to-many relationships, we may need to build special tables whose sole function is to contain the foreign keys needed to track the relationships between objects.

Using GORM, we can model all of the various associations that we need to establish between objects directly within the GORM class definitions. GORM takes care of all of the complex mappings to tables and foreign keys through a Hibernate persistence layer.

One-to-one

The simplest association that we need to model in GORM is a one-to-one association. Suppose our customer can have a single address; we would create a new Address domain class using the grails create-domain-class command, as before:

class Address {
    String street
    String city
    static constraints = {
    }
}

To create the simplest one-to-one relationship with Customer, we just add an Address field to the Customer class:

class Customer {
    String firstName
    String lastName
    Address address
    static constraints = {
    }
}

When we rerun the Grails application, GORM will create a new address table. It will also recognize the address field of Customer as an association with the Address class, and create a foreign key relationship between the customer and address tables accordingly.

This is a one-directional relationship. We are saying that a Customer "has an" Address, but an Address does not necessarily "have a" Customer. We can model bi-directional associations by simply adding a Customer field to the Address. This will then be reflected in the relational model by GORM adding a customer_id field to the address table:

class Address {
    String street
    String city
    Customer customer
    static constraints = {
    }
}

mysql> describe address;
+-------------+--------------+------+-----+---------+----------------+
| Field       | Type         | Null | Key | Default | Extra          |
+-------------+--------------+------+-----+---------+----------------+
| id          | bigint(20)   | NO   | PRI | NULL    | auto_increment |
| version     | bigint(20)   | NO   |     |         |                |
| city        | varchar(255) | NO   |     |         |                |
| customer_id | bigint(20)   | YES  | MUL | NULL    |                |
| street      | varchar(255) | NO   |     |         |                |
+-------------+--------------+------+-----+---------+----------------+
5 rows in set (0.01 sec)

mysql>

These basic one-to-one associations can be inferred by GORM just by interrogating the fields in each domain class via reflection and the Groovy metaclasses.
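A quick sketch of exercising the bidirectional association from code (my own illustration, consistent with the classes above; note that both sides of the relationship need to be wired up explicitly):

// Wire both directions of the Customer <-> Address association, then persist
def addr = new Address(street: "1 Rock Road", city: "Bedrock")
def fred = new Customer(firstName: "Fred", lastName: "Flintstone", address: addr)
addr.customer = fred
addr.save(flush: true)
fred.save(flush: true)

assert Customer.get(fred.id).address.city == "Bedrock"
assert Address.get(addr.id).customer.lastName == "Flintstone"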
To denote ownership in a relationship, GORM uses an optional static field applied to a domain class, called belongsTo. Suppose we add an Identity class to retain the login identity of a customer in the application. We would then use:

class Customer {
    String firstName
    String lastName
    Identity ident
}

class Address {
    String street
    String city
}

class Identity {
    String email
    String password
    static belongsTo = Customer
}

Classes are first-class citizens in the Groovy language. When we declare static belongsTo = Customer, what we are actually doing is storing a static instance of a java.lang.Class object for the Customer class in the belongsTo field. Grails can interrogate this static field at load time to infer the ownership relation between Identity and Customer.

Here we have three classes: Customer, Address, and Identity. Customer has a one-to-one association with both Address and Identity through the address and ident fields. However, the ident field is "owned" by Customer, as indicated by the belongsTo setting. What this means is that saves, updates, and deletes will be cascaded to the identity but not to the address, as we can see below. The addr object needs to be saved and deleted independently of Customer, but id is automatically saved and deleted in sync with Customer:

def addr = new Address(street: "1 Rock Road", city: "Bedrock")
def id = new Identity(email: "email", password: "password")
def fred = new Customer(firstName: "Fred", lastName: "Flintstone", address: addr, ident: id)

addr.save(flush: true)
assert Customer.list().size == 0
assert Address.list().size == 1
assert Identity.list().size == 0

fred.save(flush: true)
assert Customer.list().size == 1
assert Address.list().size == 1
assert Identity.list().size == 1

fred.delete(flush: true)
assert Customer.list().size == 0
assert Address.list().size == 1
assert Identity.list().size == 0

addr.delete(flush: true)
assert Customer.list().size == 0
assert Address.list().size == 0
assert Identity.list().size == 0

Constraints

You will have noticed that every domain class produced by the grails create-domain-class command contains an empty static closure, constraints. We can use this closure to set the constraints on any field in our model. Here we apply constraints to the email and password fields of Identity. We want the email field to be unique, not blank, and not nullable. The password field should be 6 to 200 characters long, not blank, and not nullable:

class Identity {
    String email
    String password

    static constraints = {
        email(unique: true, blank: false, nullable: false)
        password(blank: false, nullable: false, size: 6..200)
    }
}

From our knowledge of builders and the markup pattern, we can see that GORM could be using a similar strategy here to apply constraints to the domain class. It looks like a pretend method is provided for each field in the class that accepts a map as an argument. The map entries are interpreted as constraints to apply to the model field.

The Builder pattern turns out to be a good guess as to how GORM implements this. GORM actually implements constraints through a builder class called ConstrainedPropertyBuilder. The closure that gets assigned to constraints is in fact some markup-style closure code for this builder. Before executing the constraints closure, GORM sets an instance of ConstrainedPropertyBuilder to be the delegate for the closure. We are more accustomed to seeing builder code where the builder instance is visible:

def builder = new ConstrainedPropertyBuilder()
builder.constraints {
}

Setting the builder as a delegate of any closure allows us to execute the closure as if it was coded in the above style.
The constraints closure can be run at any time by Grails, and as it executes the ConstrainedPropertyBuilder, it will build a HashMap of the constraints it encounters for each field. We can illustrate the same technique by using MarkupBuilder and NodeBuilder. The Markup class in the following code snippet just declares a static closure named markup. Later on, we can use this closure with whatever builder we want, by setting the delegate of the markup closure to the builder that we would like to use.

    class Markup {
        static markup = {
            customers {
                customer(id:1001) {
                    name(firstName:"Fred", surname:"Flintstone")
                    address(street:"1 Rock Road", city:"Bedrock")
                }
                customer(id:1002) {
                    name(firstName:"Barney", surname:"Rubble")
                    address(street:"2 Rock Road", city:"Bedrock")
                }
            }
        }
    }

    Markup.markup.setDelegate(new groovy.xml.MarkupBuilder())
    Markup.markup()   // outputs XML

    Markup.markup.setDelegate(new groovy.util.NodeBuilder())
    def nodes = Markup.markup()   // builds a node tree
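As a closing aside, the constraint names GORM accepts map quite closely onto standard Java Bean Validation annotations. Purely as a hedged comparison (this is not something Grails generates for you, and it assumes Bean Validation 2.0 for @NotBlank), the Identity constraints above could be restated in plain Java as:

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.validation.constraints.NotBlank;
    import javax.validation.constraints.Size;

    // Illustrative only: the unique/blank/size rules from the GORM
    // constraints block, restated with JPA plus Bean Validation.
    @Entity
    public class Identity {
        @Id
        private Long id;

        @NotBlank                  // blank: false, nullable: false
        @Column(unique = true)     // unique: true
        private String email;

        @NotBlank                  // blank: false, nullable: false
        @Size(min = 6, max = 200)  // size: 6..200
        private String password;
    }

The GORM closure carries the same information, but as executable builder markup rather than annotations.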
Applying LINQ to Entities to a WCF Service

Packt
05 Feb 2013
15 min read
(For more resources related to this topic, see here.)

Creating the LINQNorthwind solution

The first thing we need to do is create a test solution. In this article, we will start from the data access layer. Perform the following steps:

1. Start Visual Studio.
2. Create a new class library project LINQNorthwindDAL with the solution name LINQNorthwind (make sure Create directory for solution is checked so that you can specify the solution name).
3. Delete the Class1.cs file.
4. Add a new class ProductDAO to the project.
5. Change the new class ProductDAO to be public.

Now you should have a new solution with the empty data access layer class. Next, we will add a model to this layer and create the business logic layer and the service interface layer.

Modeling the Northwind database

In the previous section, we created the LINQNorthwind solution. Next, we will apply LINQ to Entities to this new solution. For the data access layer, we will use LINQ to Entities instead of the raw ADO.NET data adapters. As you will see in the next section, we will use one LINQ statement to retrieve product information from the database, and the update LINQ statements will handle the concurrency control for us easily and reliably.

As you may recall, to use LINQ to Entities in the data access layer of our WCF service, we first need to add an entity data model to the project. In the Solution Explorer, right-click on the project item LINQNorthwindDAL, select menu options Add | New Item..., and then choose Visual C# Items | ADO.NET Entity Data Model as the template and enter Northwind.edmx as the name. Select Generate from database, choose the existing Northwind connection, and add the Products table to the model. Click on the Finish button to add the model to the project.

The new column RowVersion should be in the Product entity. If it is not there, add it to the database table with a type of Timestamp and refresh the entity data model from the database. In the EDM designer, select the RowVersion property of the Product entity and change its Concurrency Mode from None to Fixed. Note that its StoreGeneratedPattern should remain as Computed.

This will generate a file called Northwind.Context.cs, which contains the DbContext for the Northwind database. Another file called Product.cs is also generated, which contains the Product entity class. You need to save the data model in order to see these two files in the Solution Explorer.

In the Visual Studio Solution Explorer, the Northwind.Context.cs file is under the template file Northwind.Context.tt and Product.cs is under Northwind.tt. However, in Windows Explorer, they are two separate files from the template files.

Creating the business domain object project

During Implementing a WCF Service in the Real World, we created a business domain object (BDO) project to hold the intermediate data between the data access objects and the service interface objects. In this section, we will also add such a project to the solution for the same purpose:

1. In the Solution Explorer, right-click on the LINQNorthwind solution.
2. Select Add | New Project... to add a new class library project named LINQNorthwindBDO.
3. Delete the Class1.cs file.
4. Add a new class file ProductBDO.cs.
5. Change the new class ProductBDO to be public.
Add the following properties to this class: ProductID, ProductName, QuantityPerUnit, UnitPrice, Discontinued, UnitsInStock, UnitsOnOrder, ReorderLevel, and RowVersion.

The following is the code listing of the ProductBDO class:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading.Tasks;

    namespace LINQNorthwindBDO
    {
        public class ProductBDO
        {
            public int ProductID { get; set; }
            public string ProductName { get; set; }
            public string QuantityPerUnit { get; set; }
            public decimal UnitPrice { get; set; }
            public int UnitsInStock { get; set; }
            public int ReorderLevel { get; set; }
            public int UnitsOnOrder { get; set; }
            public bool Discontinued { get; set; }
            public byte[] RowVersion { get; set; }
        }
    }

As noted earlier, in this article we will use BDOs to hold the intermediate data between the data access objects and the data contract objects. Besides this approach, there are some other ways to pass data back and forth between the data access layer and the service interface layer; two of them are listed as follows:

- The first one is to expose the Entity Framework context objects from the data access layer up to the service interface layer. In this way, both the service interface layer and the business logic layer (we will implement them soon in the following sections) can interact directly with the Entity Framework. This approach is not recommended, as it goes against the best practice of service layering.
- Another approach is to use self-tracking entities. Self-tracking entities are entities that know how to do their own change tracking regardless of which tier those changes are made on. You can expose self-tracking entities from the data access layer to the business logic layer, then to the service interface layer, and even share the entities with the clients. Because self-tracking entities are independent of the entity context, you don't need to expose the entity context objects. The problem with this approach is that you have to share the binary files with all the clients, so it is the least interoperable approach for a WCF service. This approach is no longer recommended by Microsoft, so in this book we will not discuss it.

Using LINQ to Entities in the data access layer

Next we will modify the data access layer to use LINQ to Entities to retrieve and update products. We will first create GetProduct to retrieve a product from the database and then create UpdateProduct to update a product in the database.

Adding a reference to the BDO project

Now that we have the BDO project in the solution, we need to modify the data access layer project to reference it:

1. In the Solution Explorer, right-click on the LINQNorthwindDAL project.
2. Select Add Reference....
3. Select the LINQNorthwindBDO project from the Projects tab under Solution.
4. Click on the OK button to add the reference to the project.

Creating GetProduct in the data access layer

We can now create the GetProduct method in the data access layer class ProductDAO, to use LINQ to Entities to retrieve a product from the database. We will first create an entity DbContext object and then use LINQ to Entities to get the product from the DbContext object. The product we get from the DbContext will be a conceptual entity model object. However, we don't want to pass this product object back to the upper-level layer, because we don't want to tightly couple the business logic layer with the data access layer.
Therefore, we will convert this entity model product object to a ProductBDO object and then pass this ProductBDO object back to the upper-level layers.

To create the new method, first add the following using statement to the ProductDAO class:

    using LINQNorthwindBDO;

Then add the following method to the ProductDAO class:

    public ProductBDO GetProduct(int id)
    {
        ProductBDO productBDO = null;
        using (var NWEntities = new NorthwindEntities())
        {
            Product product = (from p in NWEntities.Products
                               where p.ProductID == id
                               select p).FirstOrDefault();
            if (product != null)
                productBDO = new ProductBDO()
                {
                    ProductID = product.ProductID,
                    ProductName = product.ProductName,
                    QuantityPerUnit = product.QuantityPerUnit,
                    UnitPrice = (decimal)product.UnitPrice,
                    UnitsInStock = (int)product.UnitsInStock,
                    ReorderLevel = (int)product.ReorderLevel,
                    UnitsOnOrder = (int)product.UnitsOnOrder,
                    Discontinued = product.Discontinued,
                    RowVersion = product.RowVersion
                };
        }
        return productBDO;
    }

With the old raw ADO.NET approach, inside the GetProduct method we had to create an ADO.NET connection, create an ADO.NET command object with that connection, specify the command text, connect to the Northwind database, and send the SQL statement to the database for execution. After the result was returned from the database, we had to loop through the DataReader and cast the columns to our entity object one by one.

With LINQ to Entities, we only construct one LINQ to Entities statement and everything else is handled by LINQ to Entities. Not only do we need to write less code, but the statement is also strongly typed. We won't get a runtime error such as invalid query syntax or an invalid column name. Also, a SQL injection attack is no longer an issue, as LINQ to Entities takes care of this when translating LINQ expressions to the underlying SQL statements.

Creating UpdateProduct in the data access layer

In the previous section, we created the GetProduct method in the data access layer using LINQ to Entities instead of ADO.NET. Now in this section, we will create the UpdateProduct method, again using LINQ to Entities instead of ADO.NET.

Let's create the UpdateProduct method in the data access layer class ProductDAO, as follows:

    public bool UpdateProduct(
        ref ProductBDO productBDO, ref string message)
    {
        message = "product updated successfully";
        bool ret = true;
        using (var NWEntities = new NorthwindEntities())
        {
            var productID = productBDO.ProductID;
            Product productInDB = (from p in NWEntities.Products
                                   where p.ProductID == productID
                                   select p).FirstOrDefault();
            // check product
            if (productInDB == null)
            {
                throw new Exception("No product with ID " +
                    productBDO.ProductID);
            }
            NWEntities.Products.Remove(productInDB);
            // update product
            productInDB.ProductName = productBDO.ProductName;
            productInDB.QuantityPerUnit = productBDO.QuantityPerUnit;
            productInDB.UnitPrice = productBDO.UnitPrice;
            productInDB.Discontinued = productBDO.Discontinued;
            productInDB.RowVersion = productBDO.RowVersion;
            NWEntities.Products.Attach(productInDB);
            NWEntities.Entry(productInDB).State =
                System.Data.EntityState.Modified;
            int num = NWEntities.SaveChanges();
            productBDO.RowVersion = productInDB.RowVersion;
            if (num != 1)
            {
                ret = false;
                message = "no product is updated";
            }
        }
        return ret;
    }

Within this method, we first get the product from the database, making sure the product ID is a valid value in the database. Then, we apply the changes from the passed-in object to the object we have just retrieved from the database, and submit the changes back to the database.
Let's go through a few notes about this method:

- You have to save productID in a new variable and then use it in the LINQ query. Otherwise, you will get an error saying "Cannot use ref or out parameter 'productBDO' inside an anonymous method, lambda expression, or query expression".
- If Remove and Attach are not called, the RowVersion from the database (not from the client) will be used when submitting to the database, even though you have updated its value before submitting. An update will then always succeed, but without concurrency control.
- If Remove is not called and you call the Attach method, you will get an error saying "The object cannot be attached because it is already in the object context".
- If the object state is not set to Modified, Entity Framework will not honor your changes to the entity object and you will not be able to save any change to the database.

Creating the business logic layer

Now let's create the business logic layer:

1. Right-click on the solution item and select Add | New Project.... Add a class library project with the name LINQNorthwindLogic.
2. Add a project reference to LINQNorthwindDAL and LINQNorthwindBDO to this new project.
3. Delete the Class1.cs file.
4. Add a new class file ProductLogic.cs.
5. Change the new class ProductLogic to be public.
6. Add the following two using statements to the ProductLogic.cs class file:

    using LINQNorthwindDAL;
    using LINQNorthwindBDO;

7. Add the following class member variable to the ProductLogic class:

    ProductDAO productDAO = new ProductDAO();

8. Add the following new method GetProduct to the ProductLogic class:

    public ProductBDO GetProduct(int id)
    {
        return productDAO.GetProduct(id);
    }

9. Add the following new method UpdateProduct to the ProductLogic class:

    public bool UpdateProduct(
        ref ProductBDO productBDO, ref string message)
    {
        var productInDB = GetProduct(productBDO.ProductID);
        // invalid product to update
        if (productInDB == null)
        {
            message = "cannot get product for this ID";
            return false;
        }
        // a product cannot be discontinued
        // if there are non-fulfilled orders
        if (productBDO.Discontinued == true
            && productInDB.UnitsOnOrder > 0)
        {
            message = "cannot discontinue this product";
            return false;
        }
        else
        {
            return productDAO.UpdateProduct(ref productBDO,
                ref message);
        }
    }

10. Build the solution. We now have only one more step to go, that is, adding the service interface layer.

Creating the service interface layer

The last step is to create the service interface layer:

1. Right-click on the solution item and select Add | New Project.... Add a WCF service library project with the name LINQNorthwindService.
2. Add a project reference to LINQNorthwindLogic and LINQNorthwindBDO to this new service interface project.
3. Change the service interface file IService1.cs, as follows:
   - Change its filename from IService1.cs to IProductService.cs.
   - Change the interface name from IService1 to IProductService, if it is not done for you.
   - Remove the original two service operations and add the following two new operations:

    [OperationContract]
    [FaultContract(typeof(ProductFault))]
    Product GetProduct(int id);

    [OperationContract]
    [FaultContract(typeof(ProductFault))]
    bool UpdateProduct(ref Product product, ref string message);

   - Remove the original CompositeType and add the following data contract classes:

    [DataContract]
    public class Product
    {
        [DataMember]
        public int ProductID { get; set; }
        [DataMember]
        public string ProductName { get; set; }
        [DataMember]
        public string QuantityPerUnit { get; set; }
        [DataMember]
        public decimal UnitPrice { get; set; }
        [DataMember]
        public bool Discontinued { get; set; }
        [DataMember]
        public byte[] RowVersion { get; set; }
    }

    [DataContract]
    public class ProductFault
    {
        public ProductFault(string msg)
        {
            FaultMessage = msg;
        }
        [DataMember]
        public string FaultMessage;
    }

The following is the full content of the IProductService.cs file:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Runtime.Serialization;
    using System.ServiceModel;
    using System.Text;

    namespace LINQNorthwindService
    {
        [ServiceContract]
        public interface IProductService
        {
            [OperationContract]
            [FaultContract(typeof(ProductFault))]
            Product GetProduct(int id);

            [OperationContract]
            [FaultContract(typeof(ProductFault))]
            bool UpdateProduct(ref Product product,
                ref string message);
        }

        [DataContract]
        public class Product
        {
            [DataMember]
            public int ProductID { get; set; }
            [DataMember]
            public string ProductName { get; set; }
            [DataMember]
            public string QuantityPerUnit { get; set; }
            [DataMember]
            public decimal UnitPrice { get; set; }
            [DataMember]
            public bool Discontinued { get; set; }
            [DataMember]
            public byte[] RowVersion { get; set; }
        }

        [DataContract]
        public class ProductFault
        {
            public ProductFault(string msg)
            {
                FaultMessage = msg;
            }
            [DataMember]
            public string FaultMessage;
        }
    }

4. Change the service implementation file Service1.cs, as follows:
   - Change its filename from Service1.cs to ProductService.cs.
   - Change its class name from Service1 to ProductService, if it is not done for you.
   - Add the following two using statements to the ProductService.cs file:

    using LINQNorthwindLogic;
    using LINQNorthwindBDO;

   - Add the following class member variable:

    ProductLogic productLogic = new ProductLogic();

   - Remove the original two methods and add the following two methods:

    public Product GetProduct(int id)
    {
        ProductBDO productBDO = null;
        try
        {
            productBDO = productLogic.GetProduct(id);
        }
        catch (Exception e)
        {
            string msg = e.Message;
            string reason = "GetProduct Exception";
            throw new FaultException<ProductFault>
                (new ProductFault(msg), reason);
        }
        if (productBDO == null)
        {
            string msg =
                string.Format("No product found for id {0}", id);
            string reason = "GetProduct Empty Product";
            throw new FaultException<ProductFault>
                (new ProductFault(msg), reason);
        }
        Product product = new Product();
        TranslateProductBDOToProductDTO(productBDO, product);
        return product;
    }

    public bool UpdateProduct(ref Product product,
        ref string message)
    {
        bool result = true;
        // first check to see if it is a valid price
        if (product.UnitPrice <= 0)
        {
            message = "Price cannot be <= 0";
            result = false;
        }
        // ProductName can't be empty
        else if (string.IsNullOrEmpty(product.ProductName))
        {
            message = "Product name cannot be empty";
            result = false;
        }
        // QuantityPerUnit can't be empty
        else if (string.IsNullOrEmpty(product.QuantityPerUnit))
        {
            message = "Quantity cannot be empty";
            result = false;
        }
        else
        {
            try
            {
                var productBDO = new ProductBDO();
                TranslateProductDTOToProductBDO(product, productBDO);
                result = productLogic.UpdateProduct(
                    ref productBDO, ref message);
                product.RowVersion = productBDO.RowVersion;
            }
            catch (Exception e)
            {
                string msg = e.Message;
                throw new FaultException<ProductFault>
                    (new ProductFault(msg), msg);
            }
        }
        return result;
    }

Because we have to convert between the data contract objects and the business domain objects, we need to add the following two methods:

    private void TranslateProductBDOToProductDTO(
        ProductBDO productBDO, Product product)
    {
        product.ProductID = productBDO.ProductID;
        product.ProductName = productBDO.ProductName;
        product.QuantityPerUnit = productBDO.QuantityPerUnit;
        product.UnitPrice = productBDO.UnitPrice;
        product.Discontinued = productBDO.Discontinued;
        product.RowVersion = productBDO.RowVersion;
    }

    private void TranslateProductDTOToProductBDO(
        Product product, ProductBDO productBDO)
    {
        productBDO.ProductID = product.ProductID;
        productBDO.ProductName = product.ProductName;
        productBDO.QuantityPerUnit = product.QuantityPerUnit;
        productBDO.UnitPrice = product.UnitPrice;
        productBDO.Discontinued = product.Discontinued;
        productBDO.RowVersion = product.RowVersion;
    }

5. Change the config file App.config, as follows:
   - Change Service1 to ProductService.
   - Remove the word Design_Time_Addresses.
   - Change the port to 8080. Now, BaseAddress should be as follows:

    http://localhost:8080/LINQNorthwindService/ProductService/

   - Copy the connection string from the App.config file in the LINQNorthwindDAL project to this project's App.config file:

    <connectionStrings>
      <add name="NorthwindEntities"
           connectionString="metadata=res://*/Northwind.csdl|res://*/Northwind.ssdl|res://*/Northwind.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=localhost;initial catalog=Northwind;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework&quot;"
           providerName="System.Data.EntityClient" />
    </connectionStrings>

You should leave the original connection string untouched in the App.config file in the data access layer project. This connection string is used by the Entity Model Designer at design time.
It is not used at all during runtime, but if you remove it, whenever you open the entity model designer in Visual Studio, you will be prompted to specify a connection to your database.

Now build the solution and there should be no errors.

Testing the service with the WCF Test Client

Now we can run the program to test the GetProduct and UpdateProduct operations with the WCF Test Client. You may need to run Visual Studio as administrator to start the WCF Test Client.

First set LINQNorthwindService as the startup project and then press Ctrl + F5 to start the WCF Test Client. Double-click on the GetProduct operation, enter a valid product ID, and click on the Invoke button. The detailed product information should be retrieved and displayed on the screen, as shown in the following screenshot:

Now double-click on the UpdateProduct operation, enter a valid product ID, specify a name, price, and quantity per unit, and then click on Invoke. This time you will get an exception, as shown in the following screenshot:

From this image we can see that the update failed. The error details, shown in HTML View in the preceding screenshot, actually tell us it is a concurrency error. This is because, from the WCF Test Client, we can't enter a row version, as it is not a simple-datatype parameter. We therefore didn't pass in the original RowVersion for the object to be updated, so when updating the object in the database, Entity Framework thinks this product has been updated by some other user.
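The RowVersion column with Concurrency Mode set to Fixed is Entity Framework's flavor of optimistic concurrency control. As a hedged cross-platform aside (not part of the WCF sample itself), the same idea in the Java world is usually expressed with a JPA @Version field; the entity below is purely illustrative:

    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Version;

    @Entity
    public class Product {
        @Id
        private int productId;

        private String productName;

        // The provider increments this on every successful flush. An
        // UPDATE whose WHERE clause no longer matches the stored version
        // fails with an OptimisticLockException -- the same effect the
        // RowVersion timestamp gives the Entity Framework service above.
        @Version
        private long version;
    }

In both stacks the client must hand the original version value back with its update, which is exactly what the WCF Test Client cannot do here.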
Java Hibernate Collections, Associations, and Advanced Concepts

Packt
15 Sep 2015
16 min read
In this article by Yogesh Prajapati and Vishal Ranapariya, the authors of the book Java Hibernate Cookbook, we get a complete guide to the following recipes:

- Working with a first-level cache
- One-to-one mapping using a common join table
- Persisting Map

(For more resources related to this topic, see here.)

Working with a first-level cache

Once we execute a particular query using Hibernate, it always hits the database. As this process may be very expensive, Hibernate provides the facility to cache objects within a certain boundary.

The basic actions performed in each database transaction are as follows:

1. The request reaches the database server via the network.
2. The database server processes the query in the query plan.
3. Now the database server executes the processed query.
4. Again, the database server returns the result to the querying application through the network.
5. At last, the application processes the results.

This process is repeated every time we request a database operation, even if it is for a simple or small query. It is always a costly transaction to hit the database for the same records multiple times. Sometimes, we also face some delay in receiving the results because of network routing issues. There may be other parameters that contribute to the delay, but network routing issues play a major role in this cycle.

To overcome this issue, the database uses a mechanism that stores the result of a query that is executed repeatedly, and uses this result again when the data is requested using the same query. These operations are done on the database side.

Hibernate provides an in-built caching mechanism known as the first-level cache (L1 cache). Following are some properties of the first-level cache:

- It is enabled by default. We cannot disable it even if we want to.
- The scope of the first-level cache is limited to a particular Session object only; the other Session objects cannot access it.
- All cached objects are destroyed once the session is closed.
- If we request a particular object, Hibernate returns the object from the cache only if the requested object is found in the cache; otherwise, a database call is initiated.
- We can use Session.evict(Object object) to remove single objects from the session cache.
- The Session.clear() method is used to clear all the cached objects from the session.

Getting ready

Let's take a look at how the L1 cache works.

Creating the classes

For this recipe, we will create an Employee class and also insert some records into the table:

Source file: Employee.java

    @Entity
    @Table
    public class Employee {

        @Id
        @GeneratedValue
        private long id;

        @Column(name = "name")
        private String name;

        // getters and setters

        @Override
        public String toString() {
            return "Employee: " +
                "\n\t Id: " + this.id +
                "\n\t Name: " + this.name;
        }
    }

Creating the tables

Use the following table script if the hibernate.hbm2ddl.auto configuration property is not set to create:

    CREATE TABLE `employee` (
        `id` bigint(20) NOT NULL AUTO_INCREMENT,
        `name` varchar(255) DEFAULT NULL,
        PRIMARY KEY (`id`)
    );

We will assume that two records are already inserted, as shown in the following employee table:

    id    name
    1     Yogesh
    2     Aarush

Now, let's take a look at some scenarios that show how the first-level cache works.

How to do it…

Here is the code to see how caching works.
In the code, we will load employee#1 and employee#2 once; after that, we will try to load the same employees again and see what happens:

Code

    System.out.println("\nLoading employee#1...");
    /* Line 2 */ Employee employee1 = (Employee)
        session.load(Employee.class, new Long(1));
    System.out.println(employee1.toString());

    System.out.println("\nLoading employee#2...");
    /* Line 6 */ Employee employee2 = (Employee)
        session.load(Employee.class, new Long(2));
    System.out.println(employee2.toString());

    System.out.println("\nLoading employee#1 again...");
    /* Line 10 */ Employee employee1_dummy = (Employee)
        session.load(Employee.class, new Long(1));
    System.out.println(employee1_dummy.toString());

    System.out.println("\nLoading employee#2 again...");
    /* Line 15 */ Employee employee2_dummy = (Employee)
        session.load(Employee.class, new Long(2));
    System.out.println(employee2_dummy.toString());

Output

    Loading employee#1...
    Hibernate: select employee0_.id as id0_0_, employee0_.name as
    name0_0_ from Employee employee0_ where employee0_.id=?
    Employee:
        Id: 1
        Name: Yogesh

    Loading employee#2...
    Hibernate: select employee0_.id as id0_0_, employee0_.name as
    name0_0_ from Employee employee0_ where employee0_.id=?
    Employee:
        Id: 2
        Name: Aarush

    Loading employee#1 again...
    Employee:
        Id: 1
        Name: Yogesh

    Loading employee#2 again...
    Employee:
        Id: 2
        Name: Aarush

How it works…

Here, we loaded Employee#1 and Employee#2, as shown in Line 2 and Line 6 respectively, and printed the output for both. It's clear from the output that Hibernate hits the database to load Employee#1 and Employee#2, because at startup no object is cached in Hibernate.

Now, in Line 10, we tried to load Employee#1 again. This time, Hibernate did not hit the database but simply used the cached object, because Employee#1 was already loaded and the object is still in the session. The same thing happened with Employee#2.

Hibernate stores an object in the cache only if one of the following operations is completed:

- save
- update
- get
- load
- list

There's more…

In the previous section, we took a look at how caching works. Now, we will discuss the two methods used to remove a cached object from the session:

- evict(Object object): This method removes a particular object from the session
- clear(): This method removes all the objects from the session

evict(Object object)

This method is used to remove a particular object from the session. It is very useful. The object is no longer available in the session once this method is invoked, and a subsequent request for the object hits the database:

Code

    System.out.println("\nLoading employee#1...");
    /* Line 2 */ Employee employee1 = (Employee)
        session.load(Employee.class, new Long(1));
    System.out.println(employee1.toString());

    /* Line 5 */ session.evict(employee1);
    System.out.println("\nEmployee#1 removed using evict(…)...");

    System.out.println("\nLoading employee#1 again...");
    /* Line 9 */ Employee employee1_dummy = (Employee)
        session.load(Employee.class, new Long(1));
    System.out.println(employee1_dummy.toString());

Output

    Loading employee#1...
    Hibernate: select employee0_.id as id0_0_, employee0_.name as
    name0_0_ from Employee employee0_ where employee0_.id=?
    Employee:
        Id: 1
        Name: Yogesh

    Employee#1 removed using evict(…)...

    Loading employee#1 again...
    Hibernate: select employee0_.id as id0_0_, employee0_.name as
    name0_0_ from Employee employee0_ where employee0_.id=?
    Employee:
        Id: 1
        Name: Yogesh

Here, we loaded Employee#1, as shown in Line 2.
This object was then cached in the session, but we explicitly removed it from the session cache in Line 5. So, loading Employee#1 again hits the database.

clear()

This method is used to remove all the cached objects from the session cache. They will no longer be available in the session once this method is invoked, and subsequent requests for the objects hit the database:

Code

    System.out.println("\nLoading employee#1...");
    /* Line 2 */ Employee employee1 = (Employee)
        session.load(Employee.class, new Long(1));
    System.out.println(employee1.toString());

    System.out.println("\nLoading employee#2...");
    /* Line 6 */ Employee employee2 = (Employee)
        session.load(Employee.class, new Long(2));
    System.out.println(employee2.toString());

    /* Line 9 */ session.clear();
    System.out.println("\nAll objects removed from session cache using clear()...");

    System.out.println("\nLoading employee#1 again...");
    /* Line 13 */ Employee employee1_dummy = (Employee)
        session.load(Employee.class, new Long(1));
    System.out.println(employee1_dummy.toString());

    System.out.println("\nLoading employee#2 again...");
    /* Line 17 */ Employee employee2_dummy = (Employee)
        session.load(Employee.class, new Long(2));
    System.out.println(employee2_dummy.toString());

Output

    Loading employee#1...
    Hibernate: select employee0_.id as id0_0_, employee0_.name as
    name0_0_ from Employee employee0_ where employee0_.id=?
    Employee:
        Id: 1
        Name: Yogesh

    Loading employee#2...
    Hibernate: select employee0_.id as id0_0_, employee0_.name as
    name0_0_ from Employee employee0_ where employee0_.id=?
    Employee:
        Id: 2
        Name: Aarush

    All objects removed from session cache using clear()...

    Loading employee#1 again...
    Hibernate: select employee0_.id as id0_0_, employee0_.name as
    name0_0_ from Employee employee0_ where employee0_.id=?
    Employee:
        Id: 1
        Name: Yogesh

    Loading employee#2 again...
    Hibernate: select employee0_.id as id0_0_, employee0_.name as
    name0_0_ from Employee employee0_ where employee0_.id=?
    Employee:
        Id: 2
        Name: Aarush

Here, Line 2 and Line 6 show how to load Employee#1 and Employee#2 respectively. We then removed all the objects from the session cache using the clear() method. As a result, loading both Employee#1 and Employee#2 again results in database hits, as shown in Line 13 and Line 17.
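If you would rather verify the cache state directly than count the SQL statements in the log, Session.contains() reports whether a given instance currently sits in the first-level cache. A minimal sketch, reusing the session and Employee class from this recipe:

    Employee e = (Employee) session.load(Employee.class, 1L);
    e.getName();                                // force proxy initialization

    System.out.println(session.contains(e));    // true: held in this session

    session.evict(e);
    System.out.println(session.contains(e));    // false: evicted

    e = (Employee) session.load(Employee.class, 1L);  // hits the database again
    session.clear();
    System.out.println(session.contains(e));    // false: cache emptied

This is handy in tests, where asserting on session.contains() is much more robust than parsing Hibernate's SQL output.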
One-to-one mapping using a common join table

In this method, we will use a third table that contains the relationship between the employee and detail tables. In other words, the third table holds the primary key values of both tables to represent a relationship between them.

Getting ready

Use the following scripts and code to create the tables and classes. Here, we use Employee and EmployeeDetail to show a one-to-one mapping using a common join table:

Creating the tables

Use the following scripts to create the tables if you are not using hbm2ddl=create|update:

Use the following script to create the detail table:

    CREATE TABLE `detail` (
        `detail_id` bigint(20) NOT NULL AUTO_INCREMENT,
        `city` varchar(255) DEFAULT NULL,
        PRIMARY KEY (`detail_id`)
    );

Use the following script to create the employee table:

    CREATE TABLE `employee` (
        `employee_id` BIGINT(20) NOT NULL AUTO_INCREMENT,
        `name` VARCHAR(255) DEFAULT NULL,
        PRIMARY KEY (`employee_id`)
    );

Use the following script to create the employee_detail table:

    CREATE TABLE `employee_detail` (
        `detail_id` BIGINT(20) DEFAULT NULL,
        `employee_id` BIGINT(20) NOT NULL,
        PRIMARY KEY (`employee_id`),
        KEY `FK_DETAIL_ID` (`detail_id`),
        KEY `FK_EMPLOYEE_ID` (`employee_id`),
        CONSTRAINT `FK_EMPLOYEE_ID` FOREIGN KEY (`employee_id`)
            REFERENCES `employee` (`employee_id`),
        CONSTRAINT `FK_DETAIL_ID` FOREIGN KEY (`detail_id`)
            REFERENCES `detail` (`detail_id`)
    );

Creating the classes

Use the following code to create the classes:

Source file: Employee.java

    @Entity
    @Table(name = "employee")
    public class Employee {

        @Id
        @GeneratedValue
        @Column(name = "employee_id")
        private long id;

        @Column(name = "name")
        private String name;

        @OneToOne(cascade = CascadeType.ALL)
        @JoinTable(
            name = "employee_detail",
            joinColumns = @JoinColumn(name = "employee_id"),
            inverseJoinColumns = @JoinColumn(name = "detail_id")
        )
        private Detail employeeDetail;

        public long getId() {
            return id;
        }

        public void setId(long id) {
            this.id = id;
        }

        public String getName() {
            return name;
        }

        public void setName(String name) {
            this.name = name;
        }

        public Detail getEmployeeDetail() {
            return employeeDetail;
        }

        public void setEmployeeDetail(Detail employeeDetail) {
            this.employeeDetail = employeeDetail;
        }

        @Override
        public String toString() {
            return "Employee" +
                "\n Id: " + this.id +
                "\n Name: " + this.name +
                "\n Employee Detail " +
                "\n\t Id: " + this.employeeDetail.getId() +
                "\n\t City: " + this.employeeDetail.getCity();
        }
    }

Source file: Detail.java

    @Entity
    @Table(name = "detail")
    public class Detail {

        @Id
        @GeneratedValue
        @Column(name = "detail_id")
        private long id;

        @Column(name = "city")
        private String city;

        @OneToOne(cascade = CascadeType.ALL)
        @JoinTable(
            name = "employee_detail",
            joinColumns = @JoinColumn(name = "detail_id"),
            inverseJoinColumns = @JoinColumn(name = "employee_id")
        )
        private Employee employee;

        public Employee getEmployee() {
            return employee;
        }

        public void setEmployee(Employee employee) {
            this.employee = employee;
        }

        public String getCity() {
            return city;
        }

        public void setCity(String city) {
            this.city = city;
        }

        public long getId() {
            return id;
        }

        public void setId(long id) {
            this.id = id;
        }

        @Override
        public String toString() {
            return "Employee Detail" +
                "\n Id: " + this.id +
                "\n City: " + this.city +
                "\n Employee " +
                "\n\t Id: " + this.employee.getId() +
                "\n\t Name: " + this.employee.getName();
        }
    }

How to do it…

In this section, we will take a look at how to insert a record step by step.

Inserting a record

Using the following code, we will insert an Employee record with a Detail object:

Code

    Detail detail = new Detail();
    detail.setCity("AHM");

    Employee employee = new Employee();
    employee.setName("vishal");
    employee.setEmployeeDetail(detail);

    Transaction transaction = session.getTransaction();
    transaction.begin();
    session.save(employee);
    transaction.commit();

Output

    Hibernate: insert into detail (city) values (?)
    Hibernate: insert into employee (name) values (?)
    Hibernate: insert into employee_detail (detail_id, employee_id)
    values (?,?)

Hibernate saves one record in the detail table and one in the employee table, and then inserts a record into the third table, employee_detail, using the primary key values of the detail and employee tables.

How it works…

From the output, it's clear how this method works. The code is the same as in the other methods of configuring a one-to-one relationship, but here Hibernate reacts differently. The first two statements of the output insert the records into the detail and employee tables respectively, and the third statement inserts the mapping record into the third table, employee_detail, using the primary key column values of both tables.

Let's take a look at the options used in the previous code in detail:

- @JoinTable: This annotation, written on the Employee class, contains the name="employee_detail" attribute and shows that a new intermediate table is created with the name "employee_detail"
- joinColumns=@JoinColumn(name="employee_id"): This shows that a reference column is created in employee_detail with the name "employee_id", which is the primary key of the employee table
- inverseJoinColumns=@JoinColumn(name="detail_id"): This shows that a reference column is created in the employee_detail table with the name "detail_id", which is the primary key of the detail table

Ultimately, the third table, employee_detail, is created with two columns: one is "employee_id" and the other is "detail_id".

Persisting Map

Map is used when we want to persist a collection of key/value pairs where the key is always unique. Some common implementations of java.util.Map are java.util.HashMap, java.util.LinkedHashMap, and so on. For this recipe, we will use java.util.HashMap.

Getting ready

Now, let's assume that we have a scenario where we are going to implement Map<String, String>; here, the String key is the e-mail address label, and the String value is the e-mail address. For example, we will try to construct a data structure similar to <"Personal e-mail", "emailaddress2@provider2.com">, <"Business e-mail", "emailaddress1@provider1.com">. This means that we will create an alias for the actual e-mail address so that we can easily get the e-mail address using the alias and can document it in a more readable form. This type of implementation depends on the custom requirement; here, we can easily get the business e-mail address using the Business email key.

Use the following code to create the required tables and classes.

Creating tables

Use the following script to create the tables if you are not using hbm2ddl=create|update.
This script is for the tables that are otherwise generated by Hibernate:

Use the following code to create the email table:

    CREATE TABLE `email` (
        `Employee_id` BIGINT(20) NOT NULL,
        `emails` VARCHAR(255) DEFAULT NULL,
        `emails_KEY` VARCHAR(255) NOT NULL DEFAULT '',
        PRIMARY KEY (`Employee_id`,`emails_KEY`),
        KEY `FK5C24B9C38F47B40` (`Employee_id`),
        CONSTRAINT `FK5C24B9C38F47B40` FOREIGN KEY (`Employee_id`)
            REFERENCES `employee` (`id`)
    );

Use the following code to create the employee table:

    CREATE TABLE `employee` (
        `id` BIGINT(20) NOT NULL AUTO_INCREMENT,
        `name` VARCHAR(255) DEFAULT NULL,
        PRIMARY KEY (`id`)
    );

Creating a class

Source file: Employee.java

    @Entity
    @Table(name = "employee")
    public class Employee {

        @Id
        @GeneratedValue
        @Column(name = "id")
        private long id;

        @Column(name = "name")
        private String name;

        @ElementCollection
        @CollectionTable(name = "email")
        private Map<String, String> emails;

        public long getId() {
            return id;
        }

        public void setId(long id) {
            this.id = id;
        }

        public String getName() {
            return name;
        }

        public void setName(String name) {
            this.name = name;
        }

        public Map<String, String> getEmails() {
            return emails;
        }

        public void setEmails(Map<String, String> emails) {
            this.emails = emails;
        }

        @Override
        public String toString() {
            return "Employee" +
                "\n\tId: " + this.id +
                "\n\tName: " + this.name +
                "\n\tEmails: " + this.emails;
        }
    }

How to do it…

Here, we will consider how to work with Map and its manipulation operations, such as inserting, retrieving, deleting, and updating.

Inserting a record

Here, we will create one employee record with two e-mail addresses:

Code

    Employee employee = new Employee();
    employee.setName("yogesh");

    Map<String, String> emails = new HashMap<String, String>();
    emails.put("Business email", "emailaddress1@provider1.com");
    emails.put("Personal email", "emailaddress2@provider2.com");
    employee.setEmails(emails);

    session.getTransaction().begin();
    session.save(employee);
    session.getTransaction().commit();

Output

    Hibernate: insert into employee (name) values (?)
    Hibernate: insert into email (Employee_id, emails_KEY, emails)
    values (?,?,?)
    Hibernate: insert into email (Employee_id, emails_KEY, emails)
    values (?,?,?)

When the code is executed, it inserts one record into the employee table and two records into the email table, and also sets the primary key value of the employee record in each email record as a reference.

Retrieving a record

Here, we know that our record was inserted with id 1. So, we will try to get only that record and understand how Map works in our case.

Code

    Employee employee = (Employee) session.get(Employee.class, 1l);
    System.out.println(employee.toString());
    System.out.println("Business email: "
        + employee.getEmails().get("Business email"));

Output

    Hibernate: select employee0_.id as id0_0_, employee0_.name as
    name0_0_ from employee employee0_ where employee0_.id=?
    Hibernate: select emails0_.Employee_id as Employee1_0_0_,
    emails0_.emails as emails0_, emails0_.emails_KEY as emails3_0_
    from email emails0_ where emails0_.Employee_id=?
    Employee
        Id: 1
        Name: yogesh
        Emails: {Personal email=emailaddress2@provider2.com,
    Business email=emailaddress1@provider1.com}
    Business email: emailaddress1@provider1.com

Here, we can easily get the business e-mail address using the Business email key from the map of e-mail addresses. This is just a simple scenario created to demonstrate how to persist a Map in Hibernate.
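Because the loaded collection is an ordinary java.util.Map once the entity is fetched, you can iterate it like any other map. A small sketch (assuming java.util.Map is imported and the session from this recipe):

    Employee employee = (Employee) session.get(Employee.class, 1L);

    // Walk every alias/address pair Hibernate loaded from the email table
    for (Map.Entry<String, String> entry : employee.getEmails().entrySet()) {
        System.out.println(entry.getKey() + " -> " + entry.getValue());
    }

This is useful when you want to display all stored aliases rather than look one up by key.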
Updating a record

Here, we will try to add one more e-mail address for Employee#1:

Code

    Employee employee = (Employee) session.get(Employee.class, 1l);
    Map<String, String> emails = employee.getEmails();
    emails.put("Personal email 1", "emailaddress3@provider3.com");

    session.getTransaction().begin();
    session.saveOrUpdate(employee);
    session.getTransaction().commit();
    System.out.println(employee.toString());

Output

    Hibernate: select employee0_.id as id0_0_, employee0_.name as
    name0_0_ from employee employee0_ where employee0_.id=?
    Hibernate: select emails0_.Employee_id as Employee1_0_0_,
    emails0_.emails as emails0_, emails0_.emails_KEY as emails3_0_
    from email emails0_ where emails0_.Employee_id=?
    Hibernate: insert into email (Employee_id, emails_KEY, emails)
    values (?, ?, ?)
    Employee
        Id: 1
        Name: yogesh
        Emails: {Personal email 1=emailaddress3@provider3.com,
    Personal email=emailaddress2@provider2.com,
    Business email=emailaddress1@provider1.com}

Here, we added a new e-mail address with the Personal email 1 key and the value emailaddress3@provider3.com.

Deleting a record

Here again, we will try to delete the records of Employee#1 using the following code:

Code

    Employee employee = new Employee();
    employee.setId(1);

    session.getTransaction().begin();
    session.delete(employee);
    session.getTransaction().commit();

Output

    Hibernate: delete from email where Employee_id=?
    Hibernate: delete from employee where id=?

While deleting the object, Hibernate deletes the child records (here, the e-mail addresses) as well.

How it works…

Here again, we need to understand the table structure created by Hibernate: Hibernate creates a composite primary key in the email table using two fields, Employee_id and emails_KEY.

Summary

In this article you familiarized yourself with recipes such as working with a first-level cache, one-to-one mapping using a common join table, and persisting Map.

Resources for Article:

Further resources on this subject:

- PostgreSQL in Action [article]
- OpenShift for Java Developers [article]
- Oracle 12c SQL and PL/SQL New Features [article]
OData on Mobile Devices

Packt
02 Aug 2012
8 min read
With the continuous evolution of mobile operating systems, smart mobile devices (such as smartphones or tablets) play increasingly important roles in everyone's daily work and life. The iOS (from Apple Inc., for iPhone, iPad, and iPod Touch devices), Android (from Google), and Windows Phone 7 (from Microsoft) operating systems have shown us the great power and potential of modern mobile systems.

In the early days of the Internet, web access was mostly limited to fixed-line devices. However, with the rapid development of wireless network technology (such as 3G), Internet access has become a common feature for mobile or portable devices. Modern mobile OSes, such as iOS, Android, and Windows Phone, all provide rich APIs for network access (especially Internet-based web access). For example, it is quite convenient for mobile developers to create a native iPhone program that uses a network API to access remote RSS feeds from the Internet and present the retrieved data items on the phone screen. And to make Internet-based data access and communication more convenient and standardized, we often leverage existing protocols, such as XML or JSON, to help us. Thus, it is also a good idea to incorporate OData services in mobile application development so as to concentrate our effort on the main application logic instead of the details of the underlying data exchange and manipulation.

In this article, we will discuss several cases of building OData client applications for various kinds of mobile device platforms. The first four recipes focus on how to deal with OData in applications running on Microsoft Windows Phone 7. They are followed by two recipes that discuss consuming an OData service in mobile applications running on the iOS and Android platforms. Although this book is .NET developer-oriented, since iOS and Android are the most popular and dominant mobile OSes on the market, the last two recipes should still be helpful (especially when the OData service is built upon WCF Data Services on the server side).

Accessing an OData service with the OData WP7 client library

What is the best way to consume an OData service in a Windows Phone 7 application? The answer is by using the OData client library for Windows Phone 7 (the OData WP7 client library). Just like the WCF Data Services client library for standard .NET Framework based applications, the OData WP7 client library allows developers to communicate with OData services via strongly-typed proxy and entity classes in Windows Phone 7 applications. Also, the latest Windows Phone SDK 7.1 includes the OData WP7 client library and the associated developer tools.

In this recipe, we will demonstrate how to use the OData WP7 client library in a standard Windows Phone 7 application.

Getting ready

The sample WP7 application we will build here provides a simple UI for users to view and edit the Categories data by using the Northwind OData service. The application consists of two phone screens, shown in the following screenshot:

Make sure you have installed Windows Phone SDK 7.1 (which contains the OData WP7 client library and tools) on the development machine. You can get the SDK from the following website:

http://create.msdn.com/en-us/home/getting_started

The source code for this recipe can be found in the ch05ODataWP7ClientLibrarySln directory.

How to do it...

1. Create a new ASP.NET web application that contains the Northwind OData service.
2. Add a new Windows Phone Application project in the same solution (see the following screenshot).
3. Select Windows Phone OS 7.1 as the Target Windows Phone OS Version in the New Windows Phone Application dialog box (see the following screenshot).
4. Click on the OK button to finish the WP7 project creation. The following screenshot shows the default WP7 project structure created by Visual Studio:
5. Create a new Windows Phone Portrait Page (see the following screenshot) and name it EditCategory.xaml.
6. Create the OData client proxy (against the Northwind OData service) by using the Visual Studio Add Service Reference wizard.
7. Add the XAML content for the MainPage.xaml page (see the following XAML fragment).

    <Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
      <ListBox x:Name="lstCategories" ItemsSource="{Binding}">
        <ListBox.ItemTemplate>
          <DataTemplate>
            <Grid>
              <Grid.ColumnDefinitions>
                <ColumnDefinition Width="60" />
                <ColumnDefinition Width="260" />
                <ColumnDefinition Width="140" />
              </Grid.ColumnDefinitions>
              <TextBlock Grid.Column="0"
                  Text="{Binding Path=CategoryID}"
                  FontSize="36" Margin="5"/>
              <TextBlock Grid.Column="1"
                  Text="{Binding Path=CategoryName}"
                  FontSize="36" Margin="5" TextWrapping="Wrap"/>
              <HyperlinkButton Grid.Column="2" Content="Edit"
                  HorizontalAlignment="Right"
                  NavigateUri="{Binding Path=CategoryID,
                    StringFormat='/EditCategory.xaml?ID={0}'}"
                  FontSize="36" Margin="5"/>
            </Grid>
          </DataTemplate>
        </ListBox.ItemTemplate>
      </ListBox>
    </Grid>

8. Add the code for loading the Category list in the code-behind file of the MainPage.xaml page (see the following code snippet).

    public partial class MainPage : PhoneApplicationPage
    {
        ODataSvc.NorthwindEntities _ctx = null;
        DataServiceCollection<ODataSvc.Category> _categories = null;

        ......

        private void PhoneApplicationPage_Loaded(object sender,
            RoutedEventArgs e)
        {
            Uri svcUri =
                new Uri("http://localhost:9188/NorthwindOData.svc");
            _ctx = new ODataSvc.NorthwindEntities(svcUri);

            _categories =
                new DataServiceCollection<ODataSvc.Category>(_ctx);
            _categories.LoadCompleted += (o, args) =>
            {
                if (_categories.Continuation != null)
                    _categories.LoadNextPartialSetAsync();
                else
                {
                    this.Dispatcher.BeginInvoke(
                        () =>
                        {
                            ContentPanel.DataContext = _categories;
                            ContentPanel.UpdateLayout();
                        }
                    );
                }
            };

            var query = from c in _ctx.Categories select c;
            _categories.LoadAsync(query);
        }
    }

9. Add the XAML content for the EditCategory.xaml page (see the following XAML fragment).

    <Grid x:Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0">
      <StackPanel>
        <TextBlock
            Text="{Binding Path=CategoryID,
              StringFormat='Fields of Categories({0})'}"
            FontSize="40" Margin="5" />
        <Border>
          <StackPanel>
            <TextBlock Text="Category Name:" FontSize="24" Margin="10" />
            <TextBox x:Name="txtCategoryName"
                Text="{Binding Path=CategoryName, Mode=TwoWay}" />
            <TextBlock Text="Description:" FontSize="24" Margin="10" />
            <TextBox x:Name="txtDescription"
                Text="{Binding Path=Description, Mode=TwoWay}" />
          </StackPanel>
        </Border>
        <StackPanel Orientation="Horizontal"
            HorizontalAlignment="Center">
          <Button x:Name="btnUpdate" Content="Update"
              HorizontalAlignment="Center" Click="btnUpdate_Click" />
          <Button x:Name="btnCancel" Content="Cancel"
              HorizontalAlignment="Center" Click="btnCancel_Click" />
        </StackPanel>
      </StackPanel>
    </Grid>

10. Add the code for editing the selected Category item in the code-behind file of the EditCategory.xaml page. In the PhoneApplicationPage_Loaded event, we will load the properties of the selected Category item and display them on the screen (see the following code snippet).
    private void PhoneApplicationPage_Loaded(object sender,
        RoutedEventArgs e)
    {
        EnableControls(false);

        Uri svcUri =
            new Uri("http://localhost:9188/NorthwindOData.svc");
        _ctx = new ODataSvc.NorthwindEntities(svcUri);

        var id = int.Parse(NavigationContext.QueryString["ID"]);
        var query = _ctx.Categories.Where(c => c.CategoryID == id);

        _categories =
            new DataServiceCollection<ODataSvc.Category>(_ctx);
        _categories.LoadCompleted += (o, args) =>
        {
            if (_categories.Count <= 0)
            {
                MessageBox.Show("Failed to retrieve Category item.");
                NavigationService.GoBack();
            }
            else
            {
                EnableControls(true);
                ContentPanel.DataContext = _categories[0];
                ContentPanel.UpdateLayout();
            }
        };
        _categories.LoadAsync(query);
    }

The code for updating changes (against the Category item) is put in the Click event of the Update button (see the following code snippet).

    private void btnUpdate_Click(object sender, RoutedEventArgs e)
    {
        EnableControls(false);

        _ctx.UpdateObject(_categories[0]);
        _ctx.BeginSaveChanges(
            (ar) =>
            {
                this.Dispatcher.BeginInvoke(
                    () =>
                    {
                        try
                        {
                            var response = _ctx.EndSaveChanges(ar);
                            NavigationService.Navigate(
                                new Uri("/MainPage.xaml",
                                    UriKind.Relative));
                        }
                        catch (Exception ex)
                        {
                            MessageBox.Show("Failed to save changes.");
                            EnableControls(true);
                        }
                    }
                );
            },
            null
        );
    }

11. Select the WP7 project and launch it in the Windows Phone Emulator (see the following screenshot). Depending on the performance of the development machine, it might take a while to start the emulator.

Running a WP7 application in the Windows Phone Emulator is very helpful, especially when the phone application needs to access web services (such as a WCF Data Service) hosted on the local machine (via the Visual Studio test web server).

How it works...

Since the OData WP7 client library (and tools) is installed together with Windows Phone SDK 7.1, we can directly use the Visual Studio Add Service Reference wizard to generate the OData client proxy in Windows Phone applications. The generated OData proxy is the same as the one we use in standard .NET applications. Similarly, all network access code (such as the OData service consumption code in this recipe) has to follow the asynchronous programming pattern in Windows Phone applications.

There's more...

In this recipe, we use the Windows Phone Emulator for testing. If you want to deploy and test your Windows Phone application on a real device, you need to obtain a Windows Phone developer account so as to unlock your Windows Phone device. Refer to the walkthrough App Hub - windows phone developer registration walkthrough, available at http://go.microsoft.com/fwlink/?LinkID=202697.
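Because OData is just HTTP plus Atom or JSON payloads, platforms without a dedicated client library (the Android case mentioned in this article's introduction, for instance) can always fall back on a raw HTTP stack. The following is a hedged Java sketch, not code from the book's recipes: the endpoint URL mirrors the local Northwind service used above, and parsing the returned JSON is left to whatever parser you prefer.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ODataGetDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint; matches the Northwind OData service
            // hosted by the sample solution in this recipe.
            URL url = new URL(
                "http://localhost:9188/NorthwindOData.svc/Categories(1)");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            // Ask the service for JSON instead of the default Atom feed.
            conn.setRequestProperty("Accept", "application/json");

            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
                System.out.println(body);  // raw JSON payload for Categories(1)
            } finally {
                conn.disconnect();
            }
        }
    }

The strongly-typed proxy approach of the WP7 client library is far more convenient, but this sketch shows why an OData back end remains reachable from essentially any connected platform.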
Images, colors, and backgrounds

Packt
07 Oct 2013
5 min read
(For more resources related to this topic, see here.) The following screenshot (Images and colors) shows the final result of this article:

Images and colors

The following is the corresponding drawing.kv code:

    64. # File name: drawing.kv (Images and colors)
    65. <DrawingSpace>:
    66.     canvas:
    67.         Ellipse:
    68.             pos: 10,10
    69.             size: 80,80
    70.             source: 'kivy.png'
    71.         Rectangle:
    72.             pos: 110,10
    73.             size: 80,80
    74.             source: 'kivy.png'
    75.         Color:
    76.             rgba: 0,0,1,.75
    77.         Line:
    78.             points: 10,10,390,10
    79.             width: 10
    80.             cap: 'square'
    81.         Color:
    82.             rgba: 0,1,0,1
    83.         Rectangle:
    84.             pos: 210,10
    85.             size: 80,80
    86.             source: 'kivy.png'
    87.         Rectangle:
    88.             pos: 310,10
    89.             size: 80,80

This code starts with an Ellipse (line 67) and a Rectangle (line 71). We use the source property, which inserts an image to decorate the polygon. The image kivy.png is 80 x 80 pixels with a white background (without any alpha/transparency channel). The result is shown in the first two columns of the previous screenshot (Images and colors).

In line 75, we use the context instruction Color to change the color (with the rgba property: red, green, blue, and alpha) of the coordinate space context. This means that the following VertexInstructions will be drawn with the color set by rgba. A ContextInstruction changes the current coordinate space context. In the previous screenshot, the blue bar at the bottom (line 77) has a transparent blue (line 76) instead of the default white (1,1,1,1) seen in the previous examples. We set the shape of the line's ends to a square with the cap property (line 80).

We change the color again in line 81. After that, we draw two more rectangles, one with the kivy.png image and one without it. In the previous screenshot (Images and colors) you can see that the white part of the image has become as green as the basic Rectangle on the left. Be very careful with this. The Color instruction acts like a light illuminating the kivy.png image. This is why you can still see the Kivy logo on the background instead of it being completely covered by the color.

There is another important detail to notice in the previous screenshot. There is a blue line that crosses the first two polygons in front and then crosses behind the last two. This illustrates the fact that the instructions are executed in order, and this might bring some unwanted results. In this example we have full control of the order, but for more complicated scenarios Kivy provides an alternative.

We can specify three Canvas instances (canvas.before, canvas, and canvas.after) for each Widget. They are useful for organizing the order of execution, to guarantee that the background component remains in the background, or to bring some of the elements to the foreground. The following drawing.kv file shows an example of these three sets (lines 92, 98, and 104) of instructions:

    90.  # File name: drawing.kv (Before and After Canvas)
    91.  <DrawingSpace>:
    92.      canvas.before:
    93.          Color:
    94.              rgba: 1,0,0,1
    95.          Rectangle:
    96.              pos: 0,0
    97.              size: 100,100
    98.      canvas:
    99.          Color:
    100.             rgba: 0,1,0,1
    101.         Rectangle:
    102.             pos: 100,0
    103.             size: 100,100
    104.     canvas.after:
    105.         Color:
    106.             rgba: 0,0,1,1
    107.         Rectangle:
    108.             pos: 200,0
    109.             size: 100,100
    110.     Button:
    111.         text: 'A very very very long button'
    112.         pos_hint: {'center_x': .5, 'center_y': .5}
    113.         size_hint: .9,.1

In each set, a Rectangle of a different color is drawn (lines 95, 101, and 107). The following diagram illustrates the execution order of the canvas.
The number on the top-left margin of each code block indicates the order of execution:

Execution order of the canvas

Please note that we didn't define any canvas, canvas.before, or canvas.after for the Button, but Kivy does. The Button is a Widget, and it displays graphics on the screen. For example, the gray background is just a Rectangle. That means that it has instructions in its internal Canvas instances. The following screenshot shows the result (executed with python drawing.py --size=300x100):

Before and after canvas

The graphics of the Button (the child) are covered up by the graphics of the instructions in canvas.after. But what is executed between canvas.before and canvas? It could be code of a base class when we are working with inheritance and we want to add instructions in the subclass that should be executed before the base class Canvas instances. A practical example of this will be covered when we apply them in the last section of this article, in the comic creator project. The canvas.before will also be useful when we study how to dynamically add instructions to Canvas instances.

For now, it is sufficient to understand that there are three sets of instructions (Canvas instances) that provide some flexibility when we are displaying graphics on the screen. We will now explore some more context instructions related to three basic transformations.

Summary

In this article we learned how to add images and colors to shapes and how to position graphics at a front or back level.

Resources for Article:

Further resources on this subject:

- Easily Writing SQL Queries with Spring Python [Article]
- Python Testing: Installing the Robot Framework [Article]
- Advanced Output Formats in Python 2.6 Text Processing [Article]

RESTful Services JAX-RS 2.0

Packt
26 Sep 2013
16 min read
Representational State Transfer

Representational State Transfer (REST) is a style of application architecture that aligns distributed applications with the HTTP request and response protocol, in particular matching Hypermedia to the HTTP request methods and Uniform Resource Identifiers (URI). Hypermedia is the term that describes the ability of a system to deliver self-referential content, where related contextual links point to downloadable or streamable digital media, such as photographs, movies, documents, and other data. Modern systems, especially web applications, demonstrate through display of text that a certain fragment of text is a link to the media.

Hypermedia is the logical extension of the term hypertext, which is text that contains embedded references to other text. These embedded references are called links, and they immediately transfer the user to the other text when they are invoked. Hypermedia is a property of media, including hypertext, to immediately link other media and text. In HTML, the anchor tag <a> accepts a href attribute, the so-called hyperlink parameter.

The World Wide Web is built on the HTTP standards, Versions 1.0 and 1.1, which define specific enumerations to retrieve data from a web resource. These operations, sometimes called Web Methods, are GET, POST, PUT, and DELETE. Representational State Transfer also reuses these operations to form a semantic interface to a URI.

Representational State Transfer, then, is both a style and an architecture for building network-enabled distributed applications. It is governed by the following constraints:

- Client/Server: A REST application encourages the architecturally robust principle of separation of concerns by dividing the solution into clients and servers. A standalone application, therefore, cannot be a RESTful application. This constraint ensures the distributed nature of RESTful applications over the network.
- Stateless: A REST application exhibits stateless communication. Clients cannot and should not take advantage of any stored context information in the server, and therefore the full data of each request must be sent to the server for processing.
- Cache: A REST application is able to declare which data is cacheable or not cacheable. This constraint allows the architect to set the performance level for the solution, in other words, a trade-off. Caching data close to the web resource allows the business to achieve improvements in latency, scalability, and availability. The counterpoint to improved performance through caching is the issue of expiring the cache at the correct time and sequence: when do we delete stale data? The cache constraint also permits successful implementation providers to develop optimal frameworks and servers.
- Uniform Interface: A REST application emphasizes and maintains a unique identifier for each component, and there is a set of protocols for accessing data. This constraint allows general interaction with any REST component, and therefore anyone or anything can manipulate a REST component to access data. The drawback is that the Uniform Interface may be suboptimal in ease of use and cognitive load compared to directly providing a data structure and a remote procedure function call.
- Layered Style: A REST application can be composed of functional processing layers in order to simplify complex flows of data between clients and servers. The layered style constraint permits modularization of function with the data and is in itself another sufficient example of separation of concerns.
The layered style is an approach that benefits load-balancing servers, caching content, and scalability.
- Code-on-Demand: A REST application can optionally supply downloadable code on demand for the client to execute. The code could be byte-codes from the JVM, such as a Java Applet or a JavaFX WebStart application, or it could be JavaScript code with, say, JSON data. Downloadable code is definitely a clear security risk, which means that the solution architect must assume responsibility for sandboxing Java classes, profiling data, and applying certificate signing in all instances. Therefore, code-on-demand is a disadvantage in a public domain service, and this constraint in REST applications is only seen inside the firewalls of corporations.

In terms of the Java platform, the Java EE standard covers REST applications through the JAX-RS specification, and this article covers Version 2.0.

JAX-RS 2.0 features

For Java EE 7, the JAX-RS 2.0 specification has the following new features:

- Client-side API for invoking RESTful server-side remote endpoints
- Support for Hypermedia linkage
- Tighter integration with the Bean Validation framework
- Asynchronous API for both server and client-side invocations
- Container filters on the server side for processing incoming requests and outbound responses
- Client filters on the client side for processing outgoing requests and incoming responses
- Reader and writer interceptors to handle specific content types

Architectural style

The REST style is simply a Uniform Resource Identifier and the application of the HTTP request methods, which invoke resources that generate an HTTP response. Fielding himself says that REST does not necessarily require HTTP as the network layer; the style of architecture can be built on any other network protocol. Let's look at those methods again with a fictional URL (http://fizzbuzz.com/):

POST: A REST style application creates or inserts an entity with the supplied data. The client can assume new data has been inserted into the underlying backend database, and the server returns a new URI to reference the data.
PUT: A REST style application replaces the entity in the database with the supplied data.
GET: A REST style application retrieves the entity associated with the URI; it can be a collection of URIs representing entities, or it can be the actual properties of the entity.
DELETE: A REST style application deletes the entity associated with the URI from the backend database.

The user should note that PUT and DELETE are idempotent operations, meaning they can be repeated endlessly and the result is the same in steady state conditions. The GET operation is a safe operation; it has no side effects on the server-side data.

REST style for collections of entities

Let's take a real example with the URL http://fizzbuzz.com/resources/, which represents the URI of a collection of resources. Resources could be anything, such as books, products, or cast iron widgets.

GET: Retrieves the collection of entities by URI under the link http://fizzbuzz.com/resources, and they may include other data.
POST: Creates a new entity in the collection under the URI http://fizzbuzz.com/resources. The URI is automatically assigned and returned by this service call, which could be something like http://fizzbuzz.com/resources/WKT54321.
PUT: Replaces the entire collection of entities under the URI http://fizzbuzz.com/resources.
DELETE: Deletes the entire collection of entities under the URI http://fizzbuzz.com/resources.

As a reminder, a URI is a series of characters that identifies a particular resource on the World Wide Web. A URI, then, allows different clients to uniquely identify a resource on the web, or a representation. A URI is a combination of a Uniform Resource Name (URN) and a Uniform Resource Locator (URL). You can think of a URN like a person's name, as a way of naming an individual, and a URL is similar to a person's home address, which is the way to go and visit them sometime. In the modern world, non-technical people are accustomed to the URL through desktop web browsing. However, the web URL is a special case of a generalized URI.

A diagram that illustrates HTML5 RESTful communication between a JAX-RS 2.0 client and server is as follows:

REST style for single entities

Assume we have a URI reference to a single entity, such as http://fizzbuzz.com/resources/WKT54321.

GET: Retrieves the entity with reference to the URI under the link http://fizzbuzz.com/resources/WKT54321.
POST: Creates a new sub-entity under the URI http://fizzbuzz.com/resources/WKT54321. There is a subtle difference here, as this call does something else. It is not often used, except to create Master-Detail records. The URI of the sub-entity is automatically assigned and returned by this service call, which could be something like http://fizzbuzz.com/resources/WKT54321/D1023.
PUT: Replaces the entire entity referenced by the URI http://fizzbuzz.com/resources/WKT54321. If the entity does not exist, then the service creates it.
DELETE: Deletes the entity under the URI reference http://fizzbuzz.com/resources/WKT54321.

Now that we understand the REST style, we can move on to the JAX-RS API properly.

Consider carefully your REST hierarchy of resources

The key to building a REST application is to target the users of the application instead of blindly converting the business domain into an exposed middleware. Does the user need the whole detail of every object and responsibility in the application? On the other hand, is the design not spreading enough information for the intended audience to do their work?

Servlet mapping

In order to enable JAX-RS in a Java EE application, the developer must set up the configuration in the web deployment descriptor file. JAX-RS requires a specific servlet mapping to be enabled, which triggers the provider to search for annotated classes. The standard API for JAX-RS lies in the Java package javax.ws.rs and in its subpackages. Interestingly, the Java API for REST style interfaces sits underneath the Java Web Services package javax.ws.

For your web applications, you must configure a Java servlet with just the fully qualified name of the class javax.ws.rs.core.Application. The servlet must be mapped to a root URL pattern in order to intercept REST style requests. The following is an example of such a configuration:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
             http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
         version="3.1" metadata-complete="false">
    <display-name>JavaEE Handbook 7 JAX RS Basic</display-name>
    <servlet>
        <servlet-name>javax.ws.rs.core.Application</servlet-name>
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>javax.ws.rs.core.Application</servlet-name>
        <url-pattern>/rest/*</url-pattern>
    </servlet-mapping>
</web-app>

In the web deployment descriptor we just saw, only the servlet name is defined, javax.ws.rs.core.Application. Do not define the servlet class.
The second step maps the URL pattern to the REST endpoint path. In this example, any URL that matches the simplified Glob pattern /rest/* is mapped. This is known as the application path.

A JAX-RS application is normally packaged up as a WAR file and deployed to a Java EE application server or a servlet container that conforms to the Web Profile. The application classes are stored under /WEB-INF/classes and required libraries are found under the /WEB-INF/lib folder. Therefore, JAX-RS applications share the consistency of configuration with servlet applications.

The JAX-RS specification does recommend, but does not enforce, that conformant providers follow the general principles of Servlet 3.0 pluggability and discoverability of REST style endpoints. Discoverability is achieved in the recommendation through class scanning of the WAR file, and it is up to the provider to make this feature available.

An application can create a subclass of the javax.ws.rs.core.Application class. The Application class is a concrete class and looks like the following code:

public class Application {
    public Application() { /* ... */ }
    public java.util.Set<Class<?>> getClasses() { /* ... */ }
    public java.util.Set<Object> getSingletons() { /* ... */ }
    public java.util.Map<String,Object> getProperties() { /* ... */ }
}

Implementing a custom Application subclass is a special case of providing maximum control of RESTful services for your business application. The developer must provide a set collection of classes that represent JAX-RS endpoints. The engineer must supply a list of singleton objects, if any, and do something useful with the properties.

The default implementation of javax.ws.rs.core.Application and the methods getClasses() and getSingletons() return empty sets. The getProperties() method returns an empty map collection. By returning empty sets, the provider assumes that all relevant JAX-RS resource and provider classes that are scanned and found will be added to the JAX-RS application.

The majority of the time, I suspect, you, the developer, will want to rely on annotations to specify the REST endpoints, and there are Servlet Context listeners and Servlet Filters to configure application-wide behavior, for example the startup sequence of a web application. So how can we register our own custom Application subclass? The answer is to just subclass the core class with your own class. The following code explains what we just read:

package com.fizbuzz.services;

@javax.ws.rs.ApplicationPath("rest")
public class GreatApp extends javax.ws.rs.core.Application {
    // Do your custom thing here
}

In the custom GreatApp class, you can now configure logic for initialization. Note the use of the @ApplicationPath annotation to configure the REST style URL. You still have to associate your custom Application subclass into the web deployment descriptor with an XML servlet name definition.

Remember the Application Configuration

A very common error for first-time JAX-RS developers is to forget that the web deployment descriptor really requires a servlet mapping to a javax.ws.rs.core.Application type.

Now that we know how to initialize a JAX-RS application, let us dig deeper and look at defining REST style endpoints.

Mapping JAX-RS resources

JAX-RS resources are configured through the resource path, which suffixes the application path. Here is the constitution of the URL:

http://<hostname>:<port>/<web_context>/<application_path>/<resource_path>

The <hostname> is the host name of the server.
The <port> refers to the port number, which is optional; the default port is 80. The <web_context> is the servlet context, which is the context for the deployed web application. The <application_path> is the configured URI pattern as specified in the web deployment descriptor, either with @ApplicationPath or with the servlet configuration of the Application type. The <resource_path> is the resource path to the REST style resource.

The final fragment <resource_path> defines the URL pattern that maps a REST style resource. The resource path is configured by the annotation javax.ws.rs.Path.

Test-Driven Development with JAX-RS

Let us write a unit test to verify the simplest JAX-RS service. It follows a REST style resource around a list of books. There are only four books in this endpoint, and the only thing the user/client can do at the start is to access the list of books by author and title. The client invokes the REST style endpoint, otherwise known as a resource, with an HTTP GET request. The following is the code for the class RestfulBookService:

package je7hb.jaxrs.basic;

import javax.annotation.*;
import javax.ws.rs.*;
import java.util.*;

@Path("/books")
public class RestfulBookService {

    private List<Book> products = Arrays.asList(
        new Book("Sir Arthur Dolan Coyle",
            "Sherlock Holmes and the Hounds of the Baskervilles"),
        new Book("Dan Brown", "Da Vinci Code"),
        new Book("Charles Dickens", "Great Expectations"),
        new Book("Robert Louis Stevenson", "Treasure Island"));

    @GET
    @Produces("text/plain")
    public String getList() {
        StringBuffer buf = new StringBuffer();
        for (Book b: products) {
            buf.append(b.title);
            buf.append('\n');
        }
        return buf.toString();
    }

    @PostConstruct
    public void acquireResource() { /* ... */ }

    @PreDestroy
    public void releaseResource() { /* ... */ }

    static class Book {
        public final String author;
        public final String title;

        Book(String author, String title) {
            this.author = author;
            this.title = title;
        }
    }
}

The annotation @javax.ws.rs.Path declares the class as a REST style endpoint for a resource. The @Path annotation is assigned to the class itself. The path argument defines the relative URL pattern for this resource, namely /books.

The method getList() is the interesting one. It is annotated with both @javax.ws.rs.GET and @javax.ws.rs.Produces. The @GET annotation is one of the six annotations that conform to the HTTP web request methods. It indicates that the method is associated with the HTTP GET protocol request. The @Produces annotation indicates the MIME content that this resource will generate. In this example, the MIME content is text/plain.

The other methods on the resource bring CDI into the picture. In the example, we are injecting post-construction and pre-destruction methods into the bean.

This is the only class that we require for a simple REST style application from the server side. Invoking the web resource with an HTTP GET request like http://localhost:8080/mywebapp/rest/books should give us a plain text output with the list of titles, like the following:

Sherlock Holmes and the Hounds of the Baskervilles
Da Vinci Code
Great Expectations
Treasure Island

So how do we test this REST style interface? We could use the Arquillian Framework directly, but this means our tests have to be built specifically in a project, and it complicates the build process. Arquillian uses another open source project in the JBoss repository called ShrinkWrap. The framework allows the construction of various types of virtual Java archives in a programmatic fashion.
Let's look at the unit test class RestfulBookServiceTest in the following code:

package je7hb.jaxrs.basic;
// imports omitted

public class RestfulBookServiceTest {

    @Test
    public void shouldAssembleAndRetrieveBookList() throws Exception {
        WebArchive webArchive =
            ShrinkWrap.create(WebArchive.class, "test.war")
                .addClasses(RestfulBookService.class)
                .setWebXML(new File("src/main/webapp/WEB-INF/web.xml"))
                .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");
        File warFile = new File(webArchive.getName());
        new ZipExporterImpl(webArchive).exportTo(warFile, true);

        SimpleEmbeddedRunner runner =
            SimpleEmbeddedRunner.launchDeployWarFile(
                warFile, "mywebapp", 8080);
        try {
            URL url = new URL("http://localhost:8080/mywebapp/rest/books");
            InputStream inputStream = url.openStream();
            BufferedReader reader = new BufferedReader(
                new InputStreamReader(inputStream));
            List<String> lines = new ArrayList<>();
            String text = null;
            int count = 0;
            while ((text = reader.readLine()) != null) {
                lines.add(text);
                ++count;
                System.out.printf("**** OUTPUT **** text[%d] = %s\n", count, text);
            }
            assertFalse(lines.isEmpty());
            assertEquals("Sherlock Holmes and the Hounds of the Baskervilles",
                lines.get(0));
            assertEquals("Da Vinci Code", lines.get(1));
            assertEquals("Great Expectations", lines.get(2));
            assertEquals("Treasure Island", lines.get(3));
        } finally {
            runner.stop();
        }
    }
}

In the unit test method shouldAssembleAndRetrieveBookList(), we first assemble a virtual web archive with the explicit name test.war. The WAR file contains the RestfulBookService service, the web deployment descriptor file web.xml, and an empty beans.xml file, which, if you remember, is only there to trigger the CDI container into life for this web application. With the virtual web archive, we export the WAR as a physical file with the utility ZipExporterImpl class from the ShrinkWrap library, which creates the file test.war in the project root folder.

Next, we fire up the SimpleEmbeddedRunner utility. It deploys the web archive to an embedded GlassFish container. Essentially, this is the boilerplate needed to get to a test result. We then get to the heart of the test itself: we construct a URL to invoke the REST style endpoint, which is http://localhost:8080/mywebapp/rest/books. We read the output from the service endpoint with standard Java I/O, line by line, into a list collection of Strings. Once we have the list, we assert each line against the expected output from the REST style service.

Because we acquired an expensive resource, like an embedded GlassFish container, we are careful to release it, which is the reason why we surround the critical code with a try-finally block statement. When the execution comes to the end of the test method, we ensure the embedded GlassFish container is shut down.
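As a quick, JVM-independent smoke test, you can also fetch the resource with a few lines of Python (a sketch only; it assumes the application is already deployed and listening at the localhost URL used above, and it uses the Python 3 standard library):

# Quick smoke test of the /books endpoint from outside the JVM.
# Assumes the web application above is deployed and listening at
# http://localhost:8080/mywebapp (adjust the URL as needed).
import urllib.request

url = "http://localhost:8080/mywebapp/rest/books"
with urllib.request.urlopen(url) as response:
    body = response.read().decode("utf-8")

print(body)
# The plain text response should list the four titles in order.
assert body.splitlines()[0] == \
    "Sherlock Holmes and the Hounds of the Baskervilles"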

Maximizing everyday debugging

Packt
14 Mar 2014
5 min read
(For more resources related to this topic, see here.)

Getting ready

For this article, you will just need a premium version of VS2013, or you may use VS Express for Windows Desktop. Be sure to run your choice on a machine using a 64-bit edition of Windows. Note that Edit and Continue previously existed for 32-bit code.

How to do it…

Both features are now supported by C#/VB, but we will be using C# for our examples. The features being demonstrated are compiler-based features, so feel free to use code from one of your own projects if you prefer. To see how Edit and Continue can benefit 64-bit development, perform the following steps:

Create a new C# Console Application using the default name.

To ensure the demonstration is running with 64-bit code, we need to change the default solution platform. Click on the drop-down arrow next to Any CPU and select Configuration Manager…:

When the Configuration Manager dialog opens, we can create a new Project Platform targeting 64-bit code. To do this, click on the drop-down menu for Platform and select <New...>:

When <New...> is selected, it will present the New Project Platform dialog box. Select x64 as the new platform type:

Once x64 has been selected, you will return to the Configuration Manager. Verify that x64 remains active under Platform and then click on Close to close this dialog. The main IDE window will now indicate that x64 is active:

Now, let's add some code to demonstrate the new behavior. Replace the existing code in your blank class file so that it looks like the following listing:

class Program
{
    static void Main(string[] args)
    {
        int w = 16;
        int h = 8;
        int area = calcArea(w, h);
        Console.WriteLine("Area: " + area);
    }

    private static int calcArea(int width, int height)
    {
        return width / height;
    }
}

Let's set some breakpoints so that we are able to inspect during execution. First, add a breakpoint to the Main method's Console line. Add a second breakpoint to the calcArea method's return line. You can do this by either clicking on the left side of the editor window's border or by right-clicking on the line and selecting Breakpoint | Insert Breakpoint:

If you are not sure where to click, use the right-click method and then practice toggling the breakpoint by left-clicking on the breakpoint marker. Feel free to use any method that you find most convenient.

Once the two breakpoints are added, Visual Studio will mark their location as shown in the following screenshot (the arrow indicates where you may click to toggle the breakpoint):

With the breakpoint markers now set, let's debug the program. Begin debugging by either pressing F5 or clicking on the Start button on the toolbar:

Once debugging starts, the program will quickly execute until stopped by the first breakpoint. Let's first take a look at Edit and Continue. Visual Studio will stop at the calcArea method's return line. Astute readers will notice an error (marked by 1 in the following screenshot) present in the calculation, as the area value returned should be width * height. Make the correction.

Before continuing, note the variables listed in the Autos window (marked by 2 in the following screenshot). If you don't see Autos, it can be made visible by pressing Ctrl + D, A or through Debug | Windows | Autos while debugging.

After correcting the area calculation, advance the debugging step by pressing F10 twice. (Alternatively, make the advancement by selecting the menu item Debug | Step Over twice). Visual Studio will advance to the declaration for area.
Note that you were able to edit your code and continue debugging without restarting. The Autos window will update to display the function's return value, which is 128 (the value for area has not been assigned yet):

There's more…

Programmers who write C++ already have the ability to see the return values of functions; this just brings .NET developers into the fold. Your development experience won't have to suffer based on the languages chosen for your projects.

The Edit and Continue functionality is also available for ASP.NET projects. New projects created in VS2013 will have Edit and Continue enabled by default. Existing projects imported to VS2013 will usually need this to be enabled if it hasn't been already. To do so, right-click on your ASP.NET project in Solution Explorer and select Properties (alternatively, it is also available via Project | <Project Name> Properties…). Navigate to the Web option and scroll to the bottom to check the Enable Edit and Continue checkbox. The following screenshot shows where this option is located on the properties page:

Summary

In this article, we learned how to use the Edit and Continue feature. Using this feature enables you to make changes to your project without having to immediately recompile your project. This simplifies debugging and enables a bit of exploration. You also saw how the Autos window can display the values of variables as you step through your program's execution.

The Software Task Management Tool - Rake

Packt
16 Apr 2014
5 min read
(For more resources related to this topic, see here.)

Installing Rake

As Rake is a Ruby library, you should first install Ruby on the system if you don't have it installed already. The installation process is different for each operating system. However, we will see the installation example only for the Debian operating system family. Just open the terminal and write the following installation command:

$ sudo apt-get install ruby

If you have an operating system that doesn't contain the apt-get utility and if you have problems with the Ruby installation, please refer to the official instructions at https://www.ruby-lang.org/en/installation. There are a lot of ways to install Ruby, so please choose your operating system from the list on this page and select your desired installation method.

Rake is included in the Ruby core as of Ruby 1.9, so you don't have to install it as a separate gem. However, if you still use Ruby 1.8 or an older version, you will have to install Rake as a gem. Use the following command to install the gem:

$ gem install rake

The Ruby release cycle is slower than that of Rake and sometimes you need to install it as a gem to work around some special issues. So you can still install Rake as a gem and, in some cases, this is a requirement even for Ruby Version 1.9 and higher.

To check if you have installed it correctly, open your terminal and type the following command:

$ rake --version

This should return the installed Rake version.

The next sign that Rake is installed and working correctly is an error that you see after typing the rake command in the terminal:

$ mkdir ~/test-rake
$ cd ~/test-rake
$ rake
rake aborted!
No Rakefile found (looking for: rakefile, Rakefile, rakefile.rb, Rakefile.rb)
(See full trace by running task with --trace)

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Introducing rake tasks

From the previous error message, it's clear that you first need to have a Rakefile. As you can see, there are four variants of its name: rakefile, Rakefile, rakefile.rb, and Rakefile.rb. The most popularly used variant is Rakefile. Rails also uses it. However, you can choose any variant for your project. There is no convention that prohibits the user from using any of the four suggested variants.

A Rakefile is a file that is required for any Rake-based project. Apart from the fact that its content is usually written in the Rake DSL, it is also a general Ruby file, so you can write any Ruby code in it. Perform the following steps to get started:

Let's create a Rakefile in the current folder, which will just say Hello Rake, using the following commands:

$ echo "puts 'Hello Rake'" > Rakefile
$ cat Rakefile
puts 'Hello Rake'

Here, the first line creates a Rakefile with the content puts 'Hello Rake', and the second line just shows us its content to make sure that we've done everything correctly.

Now, run rake as we tried it before, using the following command:

$ rake
Hello Rake
rake aborted!
Don't know how to build task 'default'
(See full trace by running task with --trace)

The message has changed and it says Hello Rake. Then, it gets aborted because of another error message. At this moment, we have made the first step in learning Rake.
Now, we have to define a default rake task that will be executed when you try to start Rake without any arguments. To do so, open your editor and change the created Rakefile with the following content:

task :default do
  puts 'Hello, Rake'
end

Now, run rake again:

$ rake
Hello, Rake

The output that says Hello, Rake demonstrates that the task works correctly.

The command-line arguments

The most commonly used rake command-line argument is -T. It shows us a list of available rake tasks that you have already defined. We have defined the default rake task, and if we try to show the list of all rake tasks, it should be there. However, take a look at what happens in real life, using the following command:

$ rake -T

The list is empty. Why? The answer lies within Rake. Run the rake command with the -h option to get the whole list of arguments. Pay attention to the description of the -T option, as shown in the following command-line output:

-T, --tasks [PATTERN]    Display the tasks (matching optional PATTERN) with descriptions, then exit.

You can get more information on Rake in the repository at the following GitHub link: https://github.com/jimweirich/rake.

The word description is the cornerstone here. It's a new term that we should know. A description is an optional label for a named rake task. However, it's recommended that you define it, because a task without a description won't show up in the list of defined rake tasks that we just tried to print. It would be inconvenient for you to read your Rakefile every time you try to run some rake task. Just accept it as a rule: always leave a description for the defined rake tasks.

Now, add a description to your rake task with the desc method call, as shown in the following lines of code:

desc "Says 'Hello, Rake'"
task :default do
  puts 'Hello, Rake.'
end

As you see, it's rather easy. Run the rake -T command again and you will see an output as shown:

$ rake -T
rake default  # Says 'Hello, Rake'

If you want to list all the tasks even if they don't have descriptions, you can pass an -A option with the -T option to the rake command. The resulting command will look like this: rake -T -A.
An Overview of Tomcat 6 Servlet Container: Part 2

Packt
18 Jan 2010
8 min read
Nested components

These components are specific to the Tomcat implementation, and their primary purpose is to enable the various Tomcat containers to perform their tasks.

Valve

A valve is a processing element that can be placed within the processing path of each of Tomcat's containers: engine, host, context, or servlet wrapper. A Valve is added to a container using the <Valve> element in server.xml. Valves are executed in the order in which they are encountered within the server.xml file.

The Tomcat distribution comes with a number of pre-rolled valves. These include:

- A valve that logs specific elements of a request (such as the remote client's IP address) to a log file or database
- A valve that lets you control access to a particular web application based on the remote client's IP address or host name
- A valve that lets you log every request and response header
- A valve that lets you configure single sign-on access across multiple web applications on a specific virtual host

If these don't meet your needs, you can write your own implementations of org.apache.catalina.Valve and place them into service.

A container does not hold references to individual valves. Instead, it holds a reference to a single entity known as the Pipeline, which represents a chain of valves associated with that container. When a container is invoked to process a request, it delegates the processing to its associated pipeline.

The valves in a pipeline are arranged as a sequence, based on how they are defined within the server.xml file. The final valve in this sequence is known as the pipeline's basic valve. This valve performs the task that embodies the core purpose of a given container. Unlike individual valves, the pipeline is not an explicit element in server.xml; instead, it is implicitly defined in terms of the sequence of valves that are associated with a given container.

Each Valve is aware of the next valve in the pipeline. After it performs its pre-processing, it invokes the next Valve in the chain, and when the call returns, it performs its own post-processing before returning. This is very similar to what happens in filter chains within the servlet specification.

In this image, the engine's configured valve(s) fire when an incoming request is received. An engine's basic valve determines the destination host and delegates processing to that host. The destination host's (www.host1.com) valves now fire in sequence. The host's basic valve then determines the destination context (here, Context1) and delegates processing to it. The valves configured for Context1 now fire, and processing is then delegated by the context's basic valve to the appropriate wrapper, whose basic valve hands off processing to its wrapped servlet. The response then returns over the same path in reverse.

A Valve becomes part of the Tomcat server's implementation and provides a way for developers to inject custom code into the servlet container's processing of a request. As a result, the class files for custom valves must be deployed to CATALINA_HOME/lib, rather than to the WEB-INF/classes of a deployed application.

As they are not part of the servlet specification, valves are non-portable elements of your enterprise application. Therefore, if you rely on a particular valve, you will need to find equivalent alternatives in a different application server. It is important to note that valves are required to be very efficient in order not to introduce inordinate delays into the processing of a request.
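The pre-process, delegate, post-process flow is a chain of responsibility. The following is a purely illustrative model of that mechanic in Python, not Tomcat's actual Java API; every class name in the sketch is invented for the illustration:

# Illustrative model of a Tomcat-style valve pipeline (not Tomcat's
# real API): each valve pre-processes the request, delegates to the
# next valve in the chain, and post-processes when the call returns.

class Valve(object):
    def __init__(self):
        self.next_valve = None  # wired up by the pipeline

    def invoke(self, request):
        print("pre-processing in %s" % type(self).__name__)
        if self.next_valve is not None:
            self.next_valve.invoke(request)
        print("post-processing in %s" % type(self).__name__)


class RequestLoggingValve(Valve):
    pass  # pre/post hooks inherited from Valve


class BasicValve(Valve):
    # The basic valve embodies the container's core purpose, for
    # example an engine's basic valve selects the destination host.
    def invoke(self, request):
        print("core processing of %r" % request)


class Pipeline(object):
    def __init__(self, basic_valve):
        self.valves = []
        self.basic_valve = basic_valve

    def add_valve(self, valve):
        self.valves.append(valve)

    def invoke(self, request):
        # Chain the configured valves in order, ending at the basic valve.
        chain = self.valves + [self.basic_valve]
        for current, following in zip(chain, chain[1:]):
            current.next_valve = following
        chain[0].invoke(request)


pipeline = Pipeline(BasicValve())
pipeline.add_valve(RequestLoggingValve())
pipeline.invoke("GET /index.html")

Running the sketch prints the pre-processing line, then the core processing, then the post-processing line, mirroring how a request descends through the configured valves to the basic valve and the response returns back up the same chain.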
Realm

Container-managed security works by having the container handle the authentication and authorization aspects of an application. Authentication is defined as the task of ensuring that the user is who she says she is, and authorization is the task of determining whether the user may perform some specific action within an application.

The advantage of container-managed security is that security can be configured declaratively by the application's deployer. That is, the assignment of passwords to users and the mapping of users to roles can all be done through configuration, which can then be applied across multiple web applications without any coding changes being required to those web applications.

Application Managed Security

The alternative is having the application manage security. In this case, your web application code is the sole arbiter of whether a user may access some specific functionality or resource within your application.

For container-managed security to work, you need to assemble the following components:

- Security constraints: Within your web application's deployment descriptor, web.xml, you must identify the URL patterns for restricted resources, as well as the user roles that would be permitted to access these resources.
- Credential input mechanism: In the web.xml deployment descriptor, you specify how the container should prompt the user for authentication credentials. This is usually accomplished by showing the user a dialog that prompts the user for a user name and password, but it can also be configured to use other mechanisms such as a custom login form.
- Realm: This is a data store that holds user names, passwords, and roles, against which the user-supplied credentials are checked. It can be a simple XML file, a table in a relational database that is accessed using the JDBC API, or a Lightweight Directory Access Protocol (LDAP) server that can be accessed through the JNDI API. A realm provides Tomcat with a consistent mechanism for accessing these disparate data sources.

All three of the above components are technically independent of each other. The power of container-based security is that you can assemble your own security solution by mixing and matching selections from each of these groups.

Now, when a user requests a resource, Tomcat will check to see whether a security constraint exists for this resource. For a restricted resource, Tomcat will then automatically request the user for her credentials and will then check these credentials against the configured realm. Access to the resource will be allowed only if the user's credentials are valid and if the user is a member of the role that is configured to access that resource.

Executor

This is a new element, available only since 6.0.11. It allows you to configure a shared thread pool that is available to all your connectors. This places an upper limit on the number of concurrent threads that may be started by your connectors. Note that this limit applies even if a particular connector has not used up all the threads configured for it.

Listener

Every major Tomcat component implements the org.apache.catalina.Lifecycle interface. This interface lets interested listeners register with a component to be notified of lifecycle events, such as the starting or stopping of that component. A listener implements the org.apache.catalina.LifecycleListener interface and implements its lifecycleEvent() method, which takes a LifecycleEvent that represents the event that has occurred.
This gives you an opportunity to inject your own custom processing into Tomcat's lifecycle.

Manager

Sessions are what make 'applications' possible over the stateless HTTP protocol. A session represents a conversation between a client and a server and is implemented by a javax.servlet.http.HttpSession instance that is stored on the server and is associated with a unique identifier that is passed back by the client on each interaction.

A new session is created on request and remains alive on the server either until it times out after a period of inactivity by its associated client, or until it is explicitly invalidated, for instance, by the client choosing to log out.

The above image shows a very simplistic view of the session mechanism within Tomcat. An org.apache.catalina.Manager component is used by the Catalina engine to create, find, or invalidate sessions. This component is responsible for the sessions that are created for a context and for their life cycles.

The default Manager implementation simply retains sessions in memory, but supports session survival across server restarts. It writes out all active sessions to disk when the server is stopped and will reload them into memory when the server is started up again.

A <Manager> must be a child of a <Context> element and is responsible for managing the sessions associated with that web application context. The default Manager takes attributes such as the algorithm that is used to generate its session identifiers, the frequency in seconds with which the manager should check for expired sessions, the maximum number of active sessions supported, and the file in which the sessions should be stored. Other implementations of Manager are provided that let you persist sessions to a durable data store such as a file or a JDBC database.
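You can watch this conversation from the client side. The sketch below assumes a session-creating web application is deployed at a hypothetical localhost URL; JSESSIONID is the servlet specification's default session cookie name, and the sketch uses the Python 3 standard library:

# Observing Tomcat's session identifier from a client's point of view.
# The URL below is a placeholder for whatever web application you have
# deployed; adjust it to match your setup.
import http.cookiejar
import urllib.request

jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(jar))

# First request: the server creates a session and sends its identifier.
opener.open("http://localhost:8080/mywebapp/")
for cookie in jar:
    # Expect a JSESSIONID cookie; its value is the unique identifier
    # the Manager uses to look up the server-side HttpSession.
    print(cookie.name, "=", cookie.value)

# Subsequent requests through the same opener send the cookie back,
# continuing the same conversation (session) with the server.
opener.open("http://localhost:8080/mywebapp/")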

Python Multimedia: Working with Audios

Packt
30 Aug 2010
14 min read
(For more resources on Python, see here.)

So let's get on with it!

Installation prerequisites

Since we are going to use an external multimedia framework, it is necessary to install the packages mentioned in this section.

GStreamer

GStreamer is a popular open source multimedia framework that supports audio/video manipulation of a wide range of multimedia formats. It is written in the C programming language and provides bindings for other programming languages, including Python. Several open source projects use the GStreamer framework to develop their own multimedia applications. Throughout this article, we will make use of the GStreamer framework for audio handling. In order to get this working with Python, we need to install both GStreamer and the Python bindings for GStreamer.

Windows platform

The binary distribution of GStreamer is not provided on the project website http://www.gstreamer.net/. Installing it from source may require considerable effort on the part of Windows users. Fortunately, the GStreamer WinBuilds project provides pre-compiled binary distributions. Here is the URL to the project website: http://www.gstreamer-winbuild.ylatuya.es

The binary distributions for GStreamer as well as its Python bindings (Python 2.6) are available in the Download area of the website: http://www.gstreamer-winbuild.ylatuya.es/doku.php?id=download

You need to install two packages: first GStreamer, and then the Python bindings to GStreamer. Download and install the GPL distribution of GStreamer available on the GStreamer WinBuilds project website. The name of the GStreamer executable is GStreamerWinBuild-0.10.5.1.exe. The version should be 0.10.5 or higher. By default, this installation will create the folder C:\gstreamer on your machine. The bin directory within this folder contains the runtime libraries needed while using GStreamer.

Next, install the Python bindings for GStreamer. The binary distribution is available on the same website. Use the executable Pygst-0.10.15.1-Python2.6.exe pertaining to Python 2.6. The version should be 0.10.15 or higher.

GStreamer WinBuilds appears to be an independent project. It is based on the OSSBuild developing suite. Visit http://code.google.com/p/ossbuild/ for more information. It could happen that the GStreamer binary built with Python 2.6 is no longer available on the mentioned website at the time you are reading this book. Therefore, it is advised that you contact the developer community of OSSBuild. Perhaps they might help you out!

Alternatively, you can build GStreamer from source on the Windows platform, using a Linux-like environment for Windows such as Cygwin (http://www.cygwin.com/). Under this environment, you can first install dependent software packages such as Python 2.6, the gcc compiler, and others. Download the gst-python-0.10.17.2.tar.gz package from the GStreamer website http://www.gstreamer.net/. Then extract this package and install it from sources using the Cygwin environment. The INSTALL file within this package will have installation instructions.

Other platforms

Many of the Linux distributions provide a GStreamer package. You can search for the appropriate gst-python distribution (for Python 2.6) in the package repository. If such a package is not available, install gst-python from source as discussed earlier in the Windows platform section. If you are a Mac OS X user, visit http://py26-gst-python.darwinports.com/.
It has detailed instructions on how to download and install the package Py26-gst-python version 0.10.17 (or higher).

Mac OS X 10.5.x (Leopard) comes with the Python 2.5 distribution. If you are using packages with this default version of Python, GStreamer Python bindings built for Python 2.5 are available on the darwinports website: http://gst-python.darwinports.com/

PyGObject

There is a free multiplatform software utility library called GLib. It provides data structures such as hash maps, linked lists, and so on. It also supports the creation of threads. The 'object system' of GLib is called GObject. Here, we need to install the Python bindings for GObject. The Python bindings are available on the PyGTK website at http://www.pygtk.org/downloads.html.

Windows platform

The binary installer is available on the PyGTK website. The complete URL is: http://ftp.acc.umu.se/pub/GNOME/binaries/win32/pygobject/2.20/. Download and install version 2.20 for Python 2.6.

Other platforms

For Linux, the source tarball is available on the PyGTK website. There could even be a binary distribution in the package repository of your Linux operating system. The direct link to Version 2.21 of PyGObject (source tarball) is: http://ftp.gnome.org/pub/GNOME/sources/pygobject/2.21/

If you are a Mac user and you have Python 2.6 installed, a distribution of PyGObject is available at http://py26-gobject.darwinports.com/. Install version 2.14 or later.

Summary of installation prerequisites

The following summarizes the packages needed for this article.

Package: GStreamer
Download location: http://www.gstreamer.net/
Version: 0.10.5 or later
Windows platform: Install using the binary distribution available on the GStreamer WinBuild website: http://www.gstreamer-winbuild.ylatuya.es/doku.php?id=download. Use GStreamerWinBuild-0.10.5.1.exe (or a later version, if available).
Linux/Unix/OS X platforms: Linux: use the GStreamer distribution in the package repository. Mac OS X: download and install by following the instructions on the website http://gstreamer.darwinports.com/.

Package: Python bindings for GStreamer
Download location: http://www.gstreamer.net/
Version: 0.10.15 or later for Python 2.6
Windows platform: Use the binary provided by the GStreamer WinBuild project. See http://www.gstreamer-winbuild.ylatuya.es for details pertaining to Python 2.6.
Linux/Unix/OS X platforms: Linux: use the gst-python distribution in the package repository. Mac OS X: use the package at http://py26-gst-python.darwinports.com/ (if you are using Python 2.6). Linux/Mac: build and install from the source tarball.

Package: Python bindings for GObject ("PyGObject")
Download location: source distribution at http://www.pygtk.org/downloads.html
Version: 2.14 or later for Python 2.6
Windows platform: Use the binary package pygobject-2.20.0.win32-py2.6.exe.
Linux/Unix/OS X platforms: Linux: install from source if pygobject is not available in the package repository. Mac: use the package on darwinports (if you are using Python 2.6); see http://py26-gobject.darwinports.com/ for details.

Testing the installation

Ensure that GStreamer and its Python bindings are properly installed. It is simple to test this. Just start Python from the command line and type the following:

>>> import pygst

If there is no error, it means the Python bindings are installed properly. Next, type the following:

>>> pygst.require("0.10")
>>> import gst

If this import is successful, we are all set to use GStreamer for processing audio and video! If import gst fails, it will probably complain that it is unable to load some required DLL/shared object.
In this case, check your environment variables and make sure that the PATH variable has the correct path to the gstreamer/bin directory. The following lines of code in a Python interpreter show the typical location of the pygst and gst modules on the Windows platform:

>>> import pygst
>>> pygst
<module 'pygst' from 'C:\Python26\lib\site-packages\pygst.pyc'>
>>> pygst.require('0.10')
>>> import gst
>>> gst
<module 'gst' from 'C:\Python26\lib\site-packages\gst-0.10\gst\__init__.pyc'>

Next, test if PyGObject is successfully installed. Start the Python interpreter and try importing the gobject module:

>>> import gobject

If this works, we are all set to proceed!

A primer on GStreamer

In this article, we will be using the GStreamer multimedia framework extensively. Before we move on to the topics that teach us various audio processing techniques, a primer on GStreamer is necessary. So what is GStreamer? It is a framework on top of which one can develop multimedia applications. The rich set of libraries it provides makes it easier to develop applications with complex audio/video processing capabilities. Fundamental components of GStreamer are briefly explained in the coming sub-sections.

Comprehensive documentation is available on the GStreamer project website. The GStreamer Application Development Manual is a very good starting point. In this section, we will briefly cover some of the important aspects of GStreamer. For further reading, you are recommended to visit the GStreamer project website: http://www.gstreamer.net/documentation/

gst-inspect and gst-launch

We will start by learning two important GStreamer commands. GStreamer can be run from the command line by calling gst-launch-0.10.exe (on Windows) or gst-launch-0.10 (on other platforms). The following command shows a typical execution of GStreamer on Linux. We will see what a pipeline means in the next sub-section:

$ gst-launch-0.10 pipeline_description

GStreamer has a plugin architecture. It supports a huge number of plugins. To see more details about any plugin in your GStreamer installation, use the command gst-inspect-0.10 (gst-inspect-0.10.exe on Windows). We will use this command quite often. Use of this command is illustrated here:

$ gst-inspect-0.10 decodebin

Here, decodebin is a plugin. Upon execution of the preceding command, it prints detailed information about the plugin decodebin.

Elements and pipeline

In GStreamer, the data flows in a pipeline. Various elements are connected together forming a pipeline, such that the output of the previous element is the input to the next one. A pipeline can be logically represented as follows:

Element1 ! Element2 ! Element3 ! Element4 ! Element5

Here, Element1 through Element5 are the element objects chained together by the symbol !. Each of the elements performs a specific task. One of the element objects performs the task of reading input data, such as an audio or a video. Another element decodes the file read by the first element, whereas another element performs the job of converting this data into some other format and saving the output. As stated earlier, linking these element objects in a proper manner creates a pipeline.

The concept of a pipeline is similar to the one used in Unix. The following is a Unix example of a pipeline, where the vertical separator | defines the pipe:

$ ls -la | more

Here, ls -la lists all the files in a directory. However, sometimes this list is too long to be displayed in the shell window. So, adding | more allows a user to navigate the data.
Now let's see a realistic example of running GStreamer from the command prompt:

$ gst-launch-0.10 -v filesrc location=path/to/file.ogg ! decodebin ! audioconvert ! fakesink

For a Windows user, the gst command name would be gst-launch-0.10.exe. The pipeline is constructed by specifying different elements. The ! symbol links the adjacent elements, thereby forming the whole pipeline for the data to flow. For the Python bindings of GStreamer, the abstract base class for pipeline elements is gst.Element, whereas the gst.Pipeline class can be used to create a pipeline instance. In a pipeline, the data is sent to a separate thread, where it is processed until it reaches the end or a termination signal is sent.

Plugins

GStreamer is a plugin-based framework. There are several plugins available. A plugin is used to encapsulate the functionality of one or more GStreamer elements. Thus we can have a plugin where multiple elements work together to create the desired output. The plugin itself can then be used as an abstract element in the GStreamer pipeline. An example is decodebin. We will learn about it in the upcoming sections. A comprehensive list of available plugins is available at the GStreamer website http://gstreamer.freedesktop.org. In almost all applications to be developed, the decodebin plugin will be used. For audio processing, the functionality provided by plugins such as gnonlin, audioecho, monoscope, interleave, and so on will be used.

Bins

In GStreamer, a bin is a container that manages the element objects added to it. A bin instance can be created using the gst.Bin class. It inherits from gst.Element and can act as an abstract element representing a bunch of elements within it. A GStreamer plugin, decodebin, is a good example representing a bin. The decodebin contains decoder elements. It auto-plugs the decoders to create the decoding pipeline.

Pads

Each element has some sort of connection points to handle data input and output. GStreamer refers to them as pads. Thus an element object can have one or more "receiver pads", termed sink pads, that accept data from the previous element in the pipeline. Similarly, there are "source pads" that take the data out of the element as an input to the next element (if any) in the pipeline. The following is a very simple example that shows how source and sink pads are specified:

> gst-launch-0.10.exe fakesrc num-buffers=1 ! fakesink

The fakesrc is the first element in the pipeline. Therefore, it only has a source pad. It transmits the data to the next linked element, that is, fakesink, which only has a sink pad to accept elements. Note that, in this case, since these are fakesrc and fakesink, just empty buffers are exchanged. A pad is defined by the class gst.Pad. A pad can be attached to an element object using the gst.Element.add_pad() method. The following is a diagrammatic representation of a GStreamer element with a pad. It illustrates two GStreamer elements within a pipeline, having a single source and sink pad.

Now that we know how the pads operate, let's discuss some special types of pads. In the example, we assumed that the pads for the element are always 'out there'. However, there are some situations where the element doesn't have the pads available all the time. Such elements request the pads they need at runtime. Such a pad is called a dynamic pad. Another type of pad is called a ghost pad. These types are discussed in this section.

Dynamic pads

Some objects, such as decodebin, do not have pads defined when they are created.
Such elements determine the type of pad to be used at runtime. For example, depending on the media file input being processed, decodebin will create a pad. This is often referred to as a dynamic pad, or sometimes the available pad, as it is not always available in elements such as decodebin.

Ghost pads

As stated in the Bins section, a bin object can act as an abstract element. How is this achieved? For that, the bin uses 'ghost pads' or 'pseudo link pads'. The ghost pads of a bin are used to connect an appropriate element inside it. A ghost pad can be created using the gst.GhostPad class.

Caps

The element objects send and receive data by using the pads. The type of media data that an element object will handle is determined by the caps (a short form for capabilities). It is a structure that describes the media formats supported by the element. Caps are defined by the class gst.Caps.

Bus

A bus refers to the object that delivers the messages generated by GStreamer. A message is a gst.Message object that informs the application about an event within the pipeline. A message is put on the bus using the gst.Bus.gst_bus_post() method. The following code shows an example usage of the bus:

1 bus = pipeline.get_bus()
2 bus.add_signal_watch()
3 bus.connect("message", message_handler)

The first line in the code creates a gst.Bus instance. Here, pipeline is an instance of gst.Pipeline. On the next line, we add a signal watch so that the bus gives out all the messages posted on that bus. Line 3 connects the signal to a Python method. In this example, message is the signal string and the method it calls is message_handler.

Playbin/Playbin2

Playbin is a GStreamer plugin that provides a high-level audio/video player. It can handle a number of things, such as automatic detection of the input media file format, auto-determination of decoders, audio visualization, volume control, and so on. The following line of code creates a playbin element:

playbin = gst.element_factory_make("playbin")

It defines a property called uri. The URI (Uniform Resource Identifier) should be an absolute path to a file on your computer or on the Web. According to the GStreamer documentation, Playbin2 is just the latest unstable version, but once stable, it will replace Playbin. A Playbin2 instance can be created the same way as a Playbin instance, and you can inspect the plugin with the following command:

gst-inspect-0.10 playbin2

With this basic understanding, let us learn about various audio processing techniques using GStreamer and Python.
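To tie the primer together, the following is a minimal sketch (not from the article) that plays an audio file with playbin, using only the pieces introduced above: element_factory_make, the uri property, the bus, and a GObject main loop. The file path is a placeholder, and the script targets the gst 0.10 Python bindings installed earlier (so it runs under Python 2):

#!/usr/bin/env python
# Minimal playbin sketch built from the pieces introduced above.
# Requires the Python 2 gst 0.10 bindings; the uri below is a
# placeholder, so point it at a real audio file on your disk.
import gobject
import pygst
pygst.require("0.10")
import gst

gobject.threads_init()
loop = gobject.MainLoop()

playbin = gst.element_factory_make("playbin")
playbin.set_property("uri", "file:///path/to/file.ogg")


def message_handler(bus, message):
    # Stop the main loop on end-of-stream or on error.
    if message.type in (gst.MESSAGE_EOS, gst.MESSAGE_ERROR):
        playbin.set_state(gst.STATE_NULL)
        loop.quit()


bus = playbin.get_bus()
bus.add_signal_watch()
bus.connect("message", message_handler)

playbin.set_state(gst.STATE_PLAYING)
loop.run()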