
How-To Tutorials - Application Development


Hello OpenCL

Packt
18 Dec 2013
16 min read
(For more resources related to this topic, see here.)

Wikipedia defines Parallel Computing as "a form of computation in which many calculations are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (in parallel)." There are many parallel computing programming standards and API specifications, such as OpenMP, OpenMPI, and Pthreads. This book is all about OpenCL parallel programming. In this article, we will start with a discussion of the different types of parallel programming. We will introduce OpenCL and its components, take a look at the various hardware and software vendors of OpenCL and their OpenCL installation steps, and finally examine an OpenCL example program, SAXPY, and its implementation in detail.

Advances in computer architecture

Throughout the 20th century computer architectures advanced many times over, a trend that continues in the 21st century and will remain for a long time to come. Many of these advances follow Moore's Law: "Moore's law is the observation that, over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years." Many devices in the computer industry are linked to Moore's law, whether they are DSPs, memory devices, or digital cameras.

All these hardware advances would be of no use without corresponding software advances. Algorithms and software applications grow in complexity as more and more user interaction comes into play. An algorithm can be highly sequential, or it can be parallelized using a parallel computing framework. Amdahl's Law predicts the speedup an algorithm can obtain given n threads, as a function of the fraction of strictly serial, non-parallelizable code, B. The time T(n) an algorithm takes to finish when executed on n threads of execution is:

T(n) = T(1) * (B + (1 - B)/n)

Therefore, the theoretical speedup which can be obtained for a given algorithm is:

Speedup(n) = 1 / (B + (1 - B)/n)

Amdahl's Law has a limitation: it does not fully exploit the computing power that becomes available as the number of processing cores increases. Gustafson's Law takes into account the scaling of the platform by adding more processing elements, and assumes that the total amount of work that can be done in parallel varies linearly with the number of processing elements. Let an algorithm's execution time be decomposed into (a + b), where a is the serial execution time and b is the parallel execution time. Then the corresponding speedup for P parallel elements is:

Speedup(P) = (a + P*b) / (a + b)

Defining the sequential fraction as α = a/(a + b) gives the speedup for P processing elements:

Speedup(P) = P - α*(P - 1)

Given a problem that can be solved using OpenCL, the same problem can also be solved on different hardware with different capabilities. Gustafson's law suggests that with more computing units, the data set should also increase; that is, "fixed work per processor". Amdahl's law, by contrast, gives the speedup that can be obtained for the existing data set if more computing units are added; that is, "fixed work for all processors".
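Both laws are easy to experiment with numerically. The following is a minimal C sketch (illustrative code, not from the book's sources) that plugs a serial fraction of 0.5 into both formulas; it reproduces the numbers of the worked example that follows:

#include <stdio.h>

/* Amdahl's Law: speedup for n processors with serial fraction B */
static double amdahl(double B, int n)
{
    return 1.0 / (B + (1.0 - B) / n);
}

/* Gustafson's Law: speedup for P processors with sequential fraction alpha */
static double gustafson(double alpha, int P)
{
    return P - alpha * (P - 1);
}

int main(void)
{
    const double serial = 0.5;  /* serial fraction used in the example below */
    for (int n = 2; n <= 8; n *= 2)
        printf("n=%d  Amdahl=%.2f  Gustafson=%.2f\n",
               n, amdahl(serial, n), gustafson(serial, n));
    return 0;
}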
Let's take the following example, in which the serial component and the parallel component of execution are one unit each. In Amdahl's Law the strictly serial fraction of code is B (equal to 0.5). For two processors, the speedup is:

Speedup(2) = 1 / (0.5 + (1 - 0.5)/2) = 1.33

Similarly, for four and eight processors the speedup is Speedup(4) = 1.6 and Speedup(8) = 1.77. Adding more processors, for example letting n tend to infinity, the maximum speedup obtained is only 2. On the other hand, in Gustafson's law, α = 1/(1 + 1) = 0.5 (which is also the serial fraction of code). The speedup for two processors is:

Speedup(2) = 2 - 0.5*(2 - 1) = 1.5

Similarly, for four and eight processors the speedup is Speedup(4) = 2.5 and Speedup(8) = 4.5. The following figure shows the workload scaling of Gustafson's law compared to Amdahl's law with a constant workload:

Comparison of Amdahl's and Gustafson's Law

OpenCL is all about parallel programming, and Gustafson's law fits this book well because we will be dealing with OpenCL for data parallel applications. Workloads which are data parallel in nature can easily increase the data set and take advantage of scalable platforms by adding more compute units. For example, more pixels can be computed as more compute units are added.

Different parallel programming techniques

There are several different forms of parallel computing, such as bit-level, instruction-level, data, and task parallelism. This book will largely focus on data and task parallelism using heterogeneous devices. We just coined a term, heterogeneous devices. How do we tackle complex tasks "in parallel" using different types of computer architecture? Why do we need OpenCL when there are already many open standards for parallel computing? To answer this question, let us discuss the pros and cons of the different parallel computing frameworks.

OpenMP

OpenMP is an API that supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran. It is prevalent only on multi-core computer platforms with a shared memory subsystem. A basic use of the OpenMP parallel directive is as follows:

#pragma omp parallel
{
    body;
}

When you build the preceding code with an OpenMP-aware compiler, the runtime library (libgomp in GCC) expands it to something similar to the following:

void subfunction (void *data)
{
    use data;
    body;
}

setup data;
GOMP_parallel_start (subfunction, &data, num_threads);
subfunction (&data);
GOMP_parallel_end ();

void GOMP_parallel_start (void (*fn)(void *), void *data, unsigned num_threads)

The OpenMP directives make it easy for the developer to modify existing code to exploit a multicore architecture. OpenMP, though a great parallel programming tool, does not support parallel execution on heterogeneous devices, and its reliance on a multicore architecture with a shared memory subsystem makes it less cost effective.

MPI

Message Passing Interface (MPI) has an advantage over OpenMP in that it can run on either shared or distributed memory architectures. Distributed memory computers are less expensive than large shared memory computers, but MPI has its own drawbacks, with inherent programming and debugging challenges. One major disadvantage of the MPI parallel framework is that performance is limited by the communication network between the nodes.
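For a flavor of what MPI code looks like, here is a minimal hedged sketch in C (it assumes an MPI implementation such as Open MPI or MPICH is installed; it is typically compiled with mpicc and launched with mpirun):

/* Minimal MPI sketch: each process prints its rank. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}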
Supercomputers have a massive number of processors interconnected by a high-speed network, or are built as computer clusters in which the processors are in close proximity to each other. In clusters, there is an expensive, dedicated data bus for data transfers across the computers. MPI is extensively used in most of these compute monsters called supercomputers.

OpenACC

The OpenACC Application Program Interface (API) describes a collection of compiler directives to specify loops and regions of code in standard C, C++, and Fortran to be offloaded from a host CPU to an attached accelerator, providing portability across operating systems, host CPUs, and accelerators. OpenACC is similar to OpenMP in terms of program annotation, but unlike OpenMP, whose code can be accelerated only on CPUs, OpenACC programs can also be accelerated on a GPU or on other accelerators. OpenACC aims to overcome the drawbacks of OpenMP by making parallel programming possible across heterogeneous devices. The OpenACC standard describes the directives and APIs used to accelerate applications. The ease of programming and the ability to scale existing code to heterogeneous processors promise a great future for OpenACC programming.

CUDA

Compute Unified Device Architecture (CUDA) is a parallel computing architecture developed by NVIDIA for graphics processing and general-purpose GPU (GPGPU) programming. There is a fairly good developer community following the CUDA software framework. Unlike OpenCL, which is supported on GPUs from many vendors and even on other devices such as IBM's Cell B.E. processor or TI's DSPs, CUDA is supported only on NVIDIA GPUs. Due to this lack of generalization, and its focus on a very specific hardware platform from a single vendor, OpenCL is gaining traction.

CUDA or OpenCL?

CUDA is proprietary and vendor specific, but it has its own advantages. It is easier to learn and to start writing code in CUDA than in OpenCL, due to its simplicity. Optimization of CUDA is more deterministic across a platform, since fewer platforms, from a single vendor, are supported. It has simplified a few programming constructs and mechanisms. So for a quick start, and if you are sure you can stick to one device (GPU) from a single vendor, namely NVIDIA, CUDA can be a good choice. OpenCL, on the other hand, is supported on a wide range of hardware from several vendors, and that hardware varies extensively even in its basic architecture, which requires you to understand a few more complicated concepts before starting OpenCL programming. Also, because of the huge range of supported hardware, an OpenCL program, although portable, may lose optimization when ported from one platform to another.

The kernel development where most of the effort goes is practically identical between the two languages, so one should not worry too much about which one to choose. Choose the language which is convenient, but remember that your OpenCL application will be vendor agnostic. This book aims at attracting more developers to OpenCL. There are many libraries which use OpenCL programming for acceleration. Some of them are MAGMA, clAMDBLAS, clAMDFFT, the BOLT C++ template library, and JACKET, which accelerates MATLAB on GPUs. Besides these, there are also C++ and Java bindings available for OpenCL. Once you have figured out how to write your important "kernels", it is trivial to port them to either OpenCL or CUDA. A kernel is the computation code that is executed by an array of threads.
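To make the notion of a kernel concrete, the following is a minimal sketch of the SAXPY computation (y = a*x + y) mentioned at the beginning of this article, written in OpenCL C (the kernel and parameter names are illustrative, not taken from the book's listing):

/* Each work item computes one element of y = a*x + y. */
__kernel void saxpy(const float a,
                    __global const float *x,
                    __global float *y)
{
    size_t i = get_global_id(0);  /* unique index of this work item */
    y[i] = a * x[i] + y[i];
}

The host application would enqueue this kernel over an N-element global work size, so that the N work items together compute the whole vector.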
CUDA also has a vast set of CUDA-accelerated libraries, such as CUBLAS, CUFFT, CUSPARSE, Thrust, and so on, but it may not take long for these libraries to be ported to OpenCL.

RenderScript

RenderScript is an API specification targeted at 3D rendering and general-purpose compute operations on the Android platform, and Android apps can use these APIs to accelerate their performance. Within Android it is a cross-platform solution: when an app is run, the scripts are compiled into machine code for the device, which can be a CPU, a GPU, or a DSP. The choice of which device to run on is made at runtime, and if a platform does not have a GPU, the code may fall back to the CPU. Only Android supports this API specification as of now. The execution model in RenderScript is similar to that of OpenCL.

Hybrid parallel computing model

Parallel programming models have their own advantages and disadvantages. With the advent of many different types of computer architectures, there is a need to use multiple programming models together to achieve high performance. For example, one may want to use MPI as the message passing framework between nodes, and then at each node level use OpenCL, CUDA, OpenMP, or OpenACC.

Besides all the programming models above, many compilers, such as Intel ICC, GCC, and Open64, provide auto-parallelization options, which make the programmer's job easier and exploit the underlying hardware architecture without requiring knowledge of any parallel computing framework. Compilers are known to be good at providing instruction-level parallelism, but tackling data-level or task-level auto-parallelism has its own limitations and complexities.

Introduction to OpenCL

The OpenCL standard was first introduced by Apple, and later became part of the open standards organization "Khronos Group", a non-profit industry consortium creating open standards for the authoring and acceleration of parallel computing, graphics, dynamic media, computer vision, and sensor processing on a wide variety of platforms and devices. The goal of OpenCL is to make certain types of parallel programming easier, and to provide vendor-agnostic, hardware-accelerated parallel execution of code.

OpenCL (Open Computing Language) is the first open, royalty-free standard for general-purpose parallel programming of heterogeneous systems. It provides a uniform programming environment for software developers to write efficient, portable code for high-performance compute servers, desktop computer systems, and handheld devices using a diverse mix of multi-core CPUs, GPUs, and DSPs. OpenCL gives developers a common set of easy-to-use tools to take advantage of any device with an OpenCL driver (processors, graphics cards, and so on) for the processing of parallel code. By creating an efficient, close-to-the-metal programming interface, OpenCL forms the foundation layer of a parallel computing ecosystem of platform-independent tools, middleware, and applications.

We mentioned vendor agnostic: yes, that is what OpenCL is about. The different vendors here can be AMD, Intel, NVIDIA, ARM, TI, and so on. The following diagram shows the different vendors and hardware architectures which use the OpenCL specification to leverage their hardware capabilities:

The heterogeneous system

The OpenCL framework defines a language to write "kernels". These kernels are functions capable of running on different compute devices.
OpenCL defines an extended C language for writing compute kernels, and a set of APIs for creating and managing these kernels. The compute kernels are compiled with a runtime compiler, which compiles them on the fly during host application execution for the targeted device. This enables the host application to take advantage of all the compute devices in the system with a single set of portable compute kernels.

Based on your interest and hardware availability, you might want to do OpenCL programming with a "host and device" combination of "CPU and CPU" or "CPU and GPU". Each has its own programming strategy. On CPUs you can run very large kernels, as the CPU architecture supports out-of-order instruction-level parallelism and has large caches. For GPUs you will be better off writing small kernels for better performance.

Hardware and software vendors

There are various hardware vendors who support OpenCL. Every OpenCL vendor provides OpenCL runtime libraries, and these runtimes are capable of running only on that vendor's specific hardware architectures. Not only across different vendors, but even within a single vendor, there may be different types of architectures which need a different approach to OpenCL programming. Now let's discuss the various hardware vendors who provide an implementation of OpenCL to exploit their underlying hardware.

Advanced Micro Devices, Inc. (AMD)

With the launch of the AMD A-Series APU, one of the industry's first Accelerated Processing Units (APUs), AMD is leading the effort of integrating both the x86_64 CPU and the GPU dies on one chip. It has four cores of CPU processing power, and also a four or five SIMD-engine graphics core, depending on the silicon part you buy. The following figure shows the block diagram of the AMD APU architecture:

AMD architecture diagram—© 2011, Advanced Micro Devices, Inc.

An AMD GPU consists of a number of Compute Units (CUs), and each CU has 16 ALUs. Further, each ALU is a VLIW4 SIMD processor that can execute a bundle of four or five independent instructions. Each CU can be issued a group of 64 work items, which forms the work group (a wavefront). AMD Radeon™ HD 6XXX graphics processors use this design. The following figure shows the HD 6XXX series compute unit, which has 16 SIMD engines, each with four processing elements:

AMD Radeon HD 6xxx Series SIMD Engine—© 2011, Advanced Micro Devices, Inc.

Starting with the AMD Radeon HD 7XXX series of graphics processors, there were significant architectural changes: AMD introduced the new Graphics Core Next (GCN) architecture. The following figure shows a GCN compute unit, which has four SIMD engines, each 16 lanes wide:

GCN Compute Unit—© 2011, Advanced Micro Devices, Inc.

A group of these compute units forms an AMD HD 7XXX graphics processor. In GCN, each CU includes four separate SIMD units for vector processing. Each of these SIMD units simultaneously executes a single operation across 16 work items, but each can be working on a separate wavefront. Apart from the APUs, AMD also provides discrete graphics cards; the latest family of graphics cards, HD 7XXX and beyond, uses the GCN architecture.

NVIDIA®

One of NVIDIA's GPU architectures is codenamed "Kepler"; the GeForce® GTX 680 is one Kepler silicon part. Each Kepler GPU consists of different configurations of Graphics Processing Clusters (GPCs) and streaming multiprocessors.
The GTX 680 consists of four GPCs and eight SMXs, as shown in the following figure:

NVIDIA Kepler architecture—GTX 680, © NVIDIA®

The Kepler architecture is part of the GTX 6XX and GTX 7XX families of NVIDIA discrete cards. Prior to Kepler, NVIDIA had the Fermi architecture, which was part of the GTX 5XX family of discrete and mobile graphics processing units.

Intel®

Intel's OpenCL implementation is supported on the Sandy Bridge and Ivy Bridge processor families. The Sandy Bridge architecture is analogous to AMD's APU: these processors also integrate a GPU on the same silicon as the CPU. Intel changed the design of the L3 cache and allowed the graphics cores access to the L3, which is also called the last-level cache. It is because of this L3 sharing that graphics performance is good on these processors. Each of the CPU cores, including the graphics execution units, is connected via a ring bus, and each execution unit is a true parallel scalar processor. Sandy Bridge provides the graphics engines HD 2000, with six Execution Units (EUs), and HD 3000 (12 EUs); Ivy Bridge provides HD 2500 (six EUs) and HD 4000 (16 EUs). The following figure shows the Sandy Bridge architecture with the ring bus, which acts as an interconnect between the cores and the HD graphics:

Intel Sandy Bridge architecture—© Intel®

ARM Mali™ GPUs

ARM also provides GPUs, under the name Mali graphics processors. The Mali T6XX series of processors comes with two, four, or eight graphics cores. These graphics engines deliver graphics and compute capability to entry-level smartphones, tablets, and Smart TVs. The following diagram shows the Mali T628 graphics processor:

ARM Mali—T628 graphics processor, © ARM

The Mali T628 has eight shader cores, or graphics cores, which support the RenderScript APIs besides OpenCL.

Besides these four key competitors, companies such as TI (DSPs), Altera (FPGAs), and Oracle provide OpenCL implementations for their respective hardware. We suggest you get hold of the benchmark performance numbers for the different processor architectures we discussed and try to compare them. This is an important first step towards comparing different architectures, and in the future you may want to select a particular OpenCL platform based on your application workload.


Implementing a Basic HelloWorld WCF (Windows Communication Foundation) Service

Packt
27 Oct 2009
7 min read
We will build a HelloWorld WCF service by carrying out the following steps:

1. Create the solution and project.
2. Create the WCF service contract interface.
3. Implement the WCF service.
4. Host the WCF service in the ASP.NET Development Server.
5. Create a client application to consume this WCF service.

Creating the HelloWorld solution and project

Before we can build the WCF service, we need to create a solution for our service projects. We also need a directory in which to save all the files. Throughout this article, we will save our project source code in the D:\SOAwithWCFandLINQ\Projects directory. We will have a subfolder for each solution we create, and under each solution folder, one subfolder for each project. For this HelloWorld solution, the final directory structure is shown in the following image. You don't need to create these directories manually via Windows Explorer; Visual Studio will create them automatically when you create the solutions and projects.

Now, follow these steps to create our first solution and the HelloWorld project:

1. Start Visual Studio 2008. If the Open Project dialog box pops up, click Cancel to close it.
2. Go to menu File | New | Project. The New Project dialog window will appear.
3. From the left-hand side of the window (Project types), expand Other Project Types and then select Visual Studio Solutions as the project type. From the right-hand side of the window (Templates), select Blank Solution as the template.
4. At the bottom of the window, type HelloWorld as the Name, and D:\SOAwithWCFandLINQ\Projects as the Location. Note that you should not enter HelloWorld within the location, because Visual Studio will automatically create a folder for a new solution.
5. Click the OK button to close this window, and your screen should look like the following image, with an empty solution. Depending on your settings, the layout may be different, but you should still have an empty solution in your Solution Explorer. If you don't see Solution Explorer, go to menu View | Solution Explorer, or press Ctrl+Alt+L to bring it up.
6. In the Solution Explorer, right-click on the solution and select Add | New Project… from the context menu. You can also go to menu File | Add | New Project… to get the same result. The following image shows the context menu for adding a new project.
7. The Add New Project window should now appear on your screen. On the left-hand side of this window (Project types), select Visual C# as the project type, and on the right-hand side of the window (Templates), select Class Library as the template. At the bottom of the window, type HelloWorldService as the Name. Leave D:\SOAwithWCFandLINQ\Projects\HelloWorld as the Location. Again, don't add HelloWorldService to the location, as Visual Studio will create a subfolder for this new project (Visual Studio uses the solution folder as the default base folder for all new projects added to the solution).

You may have noticed that there is already a template for a WCF Service Application in Visual Studio 2008. For this very first example, we will not use this template; instead, we will create everything ourselves so that you know what the purpose of each template is. This is an excellent way for you to understand and master this new technology.

Now you can click the OK button to close this window. Once you do, Visual Studio will create several files for you. The first file is the project file, an XML file under the project directory called HelloWorldService.csproj.
Visual Studio also creates an empty class file, called Class1.cs. Later, we will change this default name to a more meaningful one, and change its namespace to our own. Three directories are created automatically under the project folder—one to hold the binary files, another to hold the object files, and a third for the properties files of the project. The window on your screen should now look like the following image.

We now have a new solution and project created. Next, we will develop and build this service. But before we go any further, we need to do two things to this project:

1. Click the Show All Files button on the Solution Explorer toolbar. It is the second button from the left, just above the word Solution inside the Solution Explorer. If you let your mouse hover over this button, you will see the hint Show All Files, as shown in the above image. Clicking this button shows all files and directories on your hard disk under the project folder, even those items that are not included in the project. Make sure that you don't have the solution item selected, otherwise you won't see the Show All Files button.
2. Change the default namespace of the project. From the Solution Explorer, right-click on the HelloWorldService project and select Properties from the context menu, or go to menu item Project | HelloWorldService Properties…. You will see the project properties dialog window. On the Application tab, change the Default namespace to MyWCFServices.

Lastly, in order to develop a WCF service, we need to add a reference to the ServiceModel namespace. In the Solution Explorer window, right-click on the HelloWorldService project and select Add Reference… from the context menu. You can also go to the menu item Project | Add Reference… to do this. The Add Reference dialog window should appear on your screen. Select System.ServiceModel from the .NET tab, and click OK. Now, in the Solution Explorer, if you expand the references of the HelloWorldService project, you will see that System.ServiceModel has been added. Also note that System.Xml.Linq is added by default; we will use this later when we query a database.

Creating the HelloWorldService service contract interface

In the previous section, we created the solution and the project for the HelloWorld WCF service. From this section on, we will start building the HelloWorld WCF service. First, we need to create the service contract interface.

1. In the Solution Explorer, right-click on the HelloWorldService project, and select Add | New Item… from the context menu. The Add New Item - HelloWorldService dialog window should appear on your screen.
2. On the left-hand side of the window (Categories), select Visual C# Items as the category, and on the right-hand side of the window (Templates), select Interface as the template. At the bottom of the window, change the Name from Interface1.cs to IHelloWorldService.cs.
3. Click the Add button.

Now an empty service interface file has been added to the project. Follow the steps below to customize it:

1. Add a using statement:

using System.ServiceModel;

2. Add a ServiceContract attribute to the interface. This designates the interface as a WCF service contract interface.

[ServiceContract]

3. Add a GetMessage method to the interface. This method takes a string as input and returns another string as the result. It also has an attribute, OperationContract:

[OperationContract]
String GetMessage(String name);

4. Change the interface to public.
The final content of the file IHelloWorldService.cs should look like the following:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.ServiceModel;

namespace MyWCFServices
{
    [ServiceContract]
    public interface IHelloWorldService
    {
        [OperationContract]
        String GetMessage(String name);
    }
}
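Step 3 in the list at the beginning of this article is to implement this contract. As a hedged sketch (the class name HelloWorldService and the greeting text are illustrative assumptions, not code from this article), an implementation could look like this:

using System;

namespace MyWCFServices
{
    // A minimal sketch of a class implementing the IHelloWorldService contract.
    public class HelloWorldService : IHelloWorldService
    {
        public String GetMessage(String name)
        {
            // Return a greeting built from the caller-supplied name.
            return "Hello world from " + name + "!";
        }
    }
}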


Introducing Xcode Tools for iPhone Development

Packt
28 Sep 2011
9 min read
(For more resources on iPhone Development, see here.)

There is a lot of fun stuff to cover, so let's get started.

Development using the Xcode Tools

If you are running Mac OS X 10.5, chances are your machine already has the Xcode tools installed; they are located within the /Developer/Applications folder. Apple also makes them freely available through the Apple Developer Connection at http://developer.apple.com/. The iPhone SDK includes a suite of development tools to assist you with the development of your iPhone and other iOS device applications. We describe these below.

iPhone SDK Core Components:

- Xcode: the main Integrated Development Environment (IDE) that enables you to manage, edit, and debug your projects.
- DashCode: enables you to develop web-based iPhone and iPad applications, and Dashboard widgets.
- iPhone Simulator: a Cocoa-based application which provides a software simulator to simulate an iPhone or iPad on your Mac OS X machine.
- Interface Builder: the visual design tool for building your application's user interfaces.
- Instruments: the analysis tools, which help you optimize your applications and monitor for memory leaks in real time.

The Xcode tools require an Intel-based Mac running Mac OS X version 10.6.4 or later in order to function correctly.

Inside Xcode, Cocoa, and Objective-C

Xcode 4 is a complete toolset for building Mac OS X (Cocoa-based) and iOS applications. The new single-windowed development interface has been redesigned to be a lot easier and even more helpful to use than in previous releases. It can now identify mistakes in both syntax and logic, and can even fix your code for you. It provides you with the tools to speed up your development process and become more productive, and it also takes care of the deployment of both your Mac OS X and iOS applications.

The Integrated Development Environment (IDE) allows you to do the following:

- Create and manage projects, including specifying platforms, target requirements, dependencies, and build configurations.
- Write code with syntax coloring and automatic indenting.
- Navigate and search through the components of a project, including header files and documentation.
- Build and run your project.
- Debug your project locally, within the iOS Simulator, or remotely, within a graphical source-level debugger.

Xcode incorporates many new features and improvements apart from the redesigned user interface. It features an improved debugger, LLDB, which runs up to three times faster and is 2.5 times more memory-efficient than GDB, and the new LLVM (Low Level Virtual Machine) compiler, the next-generation compiler technology designed for high-performance projects, with complete support for C, Objective-C, and now C++. The LLVM compiler is incorporated into the Xcode IDE, compiles twice as fast as GCC, and your applications will run faster too. The following list includes many of the improvements made in this release:

- The interface has been completely redesigned and features a single-window integrated development interface.
- Interface Builder has been fully integrated within the Xcode development IDE.
- Code Assistant opens in a second pane that shows the file you are working on, and can automatically find and open the corresponding header file(s).
- Fix-it checks the syntax of your code and validates symbol names as you type. It highlights any errors that it finds and can even fix them for you.
- The new Version editor works with the GIT (free, open source) version control software or Subversion. It shows you a file's entire SCM (software configuration management) history and can compare any two versions of the file.
- The new LLVM 2.0 compiler includes full support for C, Objective-C, and C++.
- The LLDB debugger has been improved to be even faster, and it uses less memory than the GDB debugging engine.
- The new Xcode 4 development IDE lets you work on several interdependent projects within the same window. It automatically determines their dependencies so that it builds the projects in the right order.

Xcode allows you to customize an unlimited number of build and debugging tools and executable packaging. It supports several source-code management tools, namely CVS (version control software, an important component of Source Configuration Management (SCM)) and Subversion, which allow you to add files to a repository, commit changes, get updated versions, and compare versions using the Version editor tool.

The iPhone Simulator

The iPhone Simulator is a very useful tool that enables you to test your applications without using your actual device, whether that is an iPhone or any other iOS device. You do not need to launch this application manually, as this is done when you build and run your application within the Xcode IDE; Xcode installs your application on the iPhone Simulator for you automatically. The iPhone Simulator can also simulate different versions of the iPhone OS, which becomes extremely useful if your application needs to be installed on different iOS platforms, and when testing and debugging errors reported in your application under different versions of the iOS. While the iPhone Simulator acts as a good test bed for your applications, it is recommended that you test your application on the actual device rather than relying on the iPhone Simulator alone. The iPhone Simulator can be found at the following location: /Developer/Platforms/iPhoneSimulator.Platform/Developer/Applications.

Layers of the iOS Architecture

Apple describes the set of frameworks and technologies currently implemented within the iOS operating system as a series of layers. Each of these layers is made up of a variety of different frameworks that can be used and incorporated into your applications.

Layers of the iOS Architecture

We shall now explain each of the different layers of the iOS architecture in detail; this will give you a better understanding of what is covered within each of the core layers.

The Core OS Layer

This is the bottom layer of the hierarchy and is responsible for the foundation of the operating system, which the other layers sit on top of. This important layer manages memory (allocating and releasing memory once it has finished with it), takes care of file system tasks, handles networking, and performs other operating system tasks. It also interacts directly with the hardware. The Core OS layer consists of the following components: the OS X Kernel, Mach 3.0, BSD, Sockets, Security, Power Management, Keychain, Certificates, File System, and Bonjour.

The Core Services Layer

The Core Services layer provides an abstraction over the services provided in the Core OS layer. It provides fundamental access to iPhone OS services.
The Core Services layer consists of the following components: Collections, Address Book, Networking, File Access, SQLite, Core Location, Net Services, Threading, Preferences, and URL Utilities.

The Media Layer

The Media layer provides the multimedia services that you can use within your iPhone and other iOS devices. It is made up of the following components: Core Audio, OpenGL, Audio Mixing, Audio Recording, Video Playback, Image Formats (JPG, PNG, and TIFF), PDF, Quartz, Core Animation, and OpenGL ES.

The Cocoa-Touch Layer

The Cocoa-Touch layer provides an abstraction layer to expose the various libraries for programming the iPhone and other iOS devices. You can probably understand why Cocoa-Touch is located at the top of the hierarchy, given its support for Multi-Touch capabilities. The Cocoa-Touch layer is made up of the following components: Multi-Touch Events, Multi-Touch Controls, Accelerometer/Gyroscope, View Hierarchy, Localization/Geographical, Alerts, Web Views, People Picker, Image Picker, and Controllers.

Understanding Cocoa, the language of the Mac

Cocoa is the development framework used for the development of most native Mac OS X applications; good examples of Cocoa applications are Mail and TextEdit. This framework consists of a collection of shared object-code libraries known as the Cocoa frameworks, together with a runtime system and a development environment. These frameworks provide you with a consistent and optimized set of prebuilt code modules that will speed up your development process. Cocoa provides a rich layer of functionality, as well as a comprehensive object-oriented structure and APIs on which you can build your applications. Cocoa uses the Model-View-Controller (MVC) design pattern.

What are Design Patterns?

Design patterns represent specific solutions to problems that arise when developing software within a particular context. They can be either a description or a template for how to solve a problem in a variety of different situations.

What is the difference between Cocoa and Cocoa-Touch?

Cocoa-Touch is the framework that drives user interaction on iOS. It consists of and uses technology derived from the Cocoa framework, redesigned to handle multi-touch capabilities. The power of the iPhone and its user interface are available to developers through the Cocoa-Touch frameworks. Cocoa-Touch is built upon the Model-View-Controller structure and provides a solid, stable foundation for creating mind-blowing applications. Using the Interface Builder developer tool, developers will find it both easy and fun to use the drag-and-drop method when designing their next great masterpiece application on iOS.

The Model-View-Controller

The Model-View-Controller (MVC) pattern is a logical way of dividing up the code that makes up the GUI (graphical user interface) of an application. Object-oriented platforms such as Java and .NET have adopted the MVC design pattern. The MVC model comprises three distinct categories:

- Model: defines your application's underlying data engine and is responsible for maintaining the integrity of that data.
- View: defines the user interface of your application and has no explicit knowledge of the origin of the data displayed in that interface. It is made up of windows, controls, and other elements that the user can see and interact with.
- Controller: acts as a bridge between the model and the view and facilitates updates between them. It binds the model and view together, and its application logic decides how to handle the user's inputs.
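As a concrete, hedged illustration of this split, the following minimal Objective-C sketch (assuming ARC) uses invented class names, CounterModel and CounterViewController; it is not code from any Apple framework or from this article:

#import <UIKit/UIKit.h>

// Model: owns the data and nothing else.
@interface CounterModel : NSObject
@property (nonatomic) NSInteger count;
@end

@implementation CounterModel
@end

// Controller: bridges the model and the view.
@interface CounterViewController : UIViewController
@property (nonatomic, strong) CounterModel *model;
@property (nonatomic, strong) UILabel *countLabel; // View: what the user sees
- (IBAction)increment:(id)sender;
@end

@implementation CounterViewController
- (IBAction)increment:(id)sender
{
    self.model.count += 1;                           // update the model...
    self.countLabel.text = [NSString stringWithFormat:@"%ld",
                            (long)self.model.count]; // ...then refresh the view
}
@end

The view never touches the data directly, and the model knows nothing about labels or buttons; only the controller connects the two.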


Creating a Web Application on JBoss AS 5

Packt
31 Dec 2009
7 min read
Ever wondered what the first message sent through the Internet was? At 22:30 hours on October 29, 1969, a message was transmitted using ARPANET (the predecessor of the global Internet) on a host-to-host connection. It was meant to transmit "login"; however, it transmitted just "lo" and crashed.

Developing the web layout

The basic component of any Java web application is the servlet. Born in the middle of the 90s, servlets quickly gained success against their competitors, the CGI scripts, because of some innovative features, especially the ability to execute requests concurrently without the overhead of creating a new process for each request. However, a few things were missing. For example, the servlet API did not address any APIs specifically for creating the client GUI, which resulted in multiple ways of creating the presentation tier, generally with tag libraries that differed from job to job and from developer to developer.

The second thing missing in the servlet specification was a clear distinction between the presentation tier and the backend. A plethora of web frameworks tried to fill this gap; in particular, the Struts framework effectively realized a clean separation of the model (application logic that interacts with a database) from the view (HTML pages presented to the client) and the controller (the instance that passes information between view and model). However, the limitation of these frameworks was that even though they realized a complete modular abstraction, they still failed in that they always exposed the HttpServletRequest and HttpSession objects to their actions. Their actions, in turn, needed to accept the interface contracts such as ActionForm, ActionMapping, and so on.

JavaServer Faces, which emerged on the stage a few years later, pursued a different approach. Unlike request-driven Model-View-Controller (MVC) web frameworks, JSF chose a component-based approach that ties the user interface components to a well-defined request processing lifecycle. This greatly simplifies the development of web applications. The JSF specification allows presentation components to be POJOs, which creates a cleaner separation from the servlet layer and makes testing easier by not requiring the POJOs to be dependent on the servlet classes.

In the following sections, we will describe how to create a web layout for our application store using the JSF technology. For an exhaustive explanation of the JSF framework, we suggest you visit the JSF homepage at http://java.sun.com/javaee/javaserverfaces/.

Installing JSF on JBoss AS

JBoss AS already ships with the JSF libraries, so the good news is that you don't need to download or install them in the application server. There are different implementations of the JSF libraries. Earlier JBoss releases adopted the Apache MyFaces library. JBoss AS 4.2 and 5.x ship with the Common Development and Distribution License (CDDL) implementation (now called "Project Mojarra") of the JSF 1.2 specification, which is available from the java.net open source community.

Switching to another JSF implementation is possible anyway. All you have to do is package your JSF libraries with your web application and configure your web.xml to ignore the JBoss built-in implementation:

<context-param>
  <param-name>org.jboss.jbossfaces.WAR_BUNDLES_JSF_IMPL</param-name>
  <param-value>true</param-value>
</context-param>

We will start by creating a new JSF project. From the File menu, select New | Other | JBoss Tools Web | JSF | JSF Web project.
The JSF project wizard will display, requesting the Project Name, the JSF Environment, and the default starting Template. Choose AppStoreWeb as the project name, and check that the JSF Environment used is JSF 1.2. You can leave all other options at their defaults and click Finish. Eclipse will now suggest that you switch to the Web Projects view, which logically assembles all JSF components. (It seems that the current release of the plugin doesn't register your choice, so you have to manually click on the Web Projects tab.)

The key configuration file of a JSF application is faces-config.xml, contained in the Configuration folder. Here you declare all the navigation rules of the application and the JSF managed beans. Managed beans are simple POJOs that provide the logic for initializing and controlling JSF components, and for managing data across page requests, user sessions, or the application as a whole.

Adding JSF functionality also requires adding some information to your web.xml file so that all requests ending with a certain suffix are intercepted by the Faces Servlet. Let's have a look at the web.xml configuration file:

<?xml version="1.0"?>
<web-app version="2.5"
         xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
  <display-name>AppStoreWeb</display-name>
  <context-param>
    <param-name>javax.faces.STATE_SAVING_METHOD</param-name>
    <param-value>server</param-value>
  </context-param>
  <context-param>                                                      [1]
    <param-name>com.sun.faces.enableRestoreView11Compatibility</param-name>
    <param-value>true</param-value>
  </context-param>
  <listener>
    <listener-class>com.sun.faces.config.ConfigureListener</listener-class>
  </listener>
  <!-- Faces Servlet -->
  <servlet>
    <servlet-name>Faces Servlet</servlet-name>
    <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <!-- Faces Servlet Mapping -->
  <servlet-mapping>
    <servlet-name>Faces Servlet</servlet-name>
    <url-pattern>*.jsf</url-pattern>
  </servlet-mapping>
  <login-config>
    <auth-method>BASIC</auth-method>
  </login-config>
</web-app>

The context-param pointed out here [1] is not added by default when you create a JSF application. However, it needs to be added, or else you'll stumble into an annoying ViewExpiredException when your session expires (JSF 1.2).

Setting up navigation rules

In the first step, we will define the navigation rules for our AppStore. A minimalist approach requires a homepage that displays the orders, along with two additional pages for inserting new customers and new orders respectively. Let's add the following navigation rules to faces-config.xml:

<faces-config>
  <navigation-rule>
    <from-view-id>/home.jsp</from-view-id>                             [1]
    <navigation-case>
      <from-outcome>newCustomer</from-outcome>                         [2]
      <to-view-id>/newCustomer.jsp</to-view-id>
    </navigation-case>
    <navigation-case>
      <from-outcome>newOrder</from-outcome>                            [3]
      <to-view-id>/newOrder.jsp</to-view-id>
    </navigation-case>
  </navigation-rule>
  <navigation-rule>
    <from-view-id></from-view-id>                                      [4]
    <navigation-case>
      <from-outcome>home</from-outcome>
      <to-view-id>/home.jsp</to-view-id>
    </navigation-case>
  </navigation-rule>
</faces-config>

In a navigation rule, you can have one from-view-id, which is the (optional) starting page, and one or more landing pages, tagged as to-view-id. The from-outcome determines the navigation flow. Think of this parameter as a Struts forward: instead of embedding the landing page in the JSP/servlet, you simply declare a virtual path in your JSF beans.
Therefore, our starting page will be home.jsp [1], which has two possible links—the newCustomer.jsp form [2] and the newOrder.jsp form [3]. At the bottom, there is a navigation rule that is valid across all pages [4]: every page requesting the home outcome will be redirected to the homepage of the application. The JSP pages will be created in a minute, so don't worry if the Eclipse validator complains about the missing pages. This configuration can also be examined from the Diagram tab of your faces-config.xml.

The next piece of code that we will add to the configuration is the JSF managed bean declaration. You need to declare each bean here that will be referenced by JSF pages. Add the following code snippet at the top of your faces-config.xml (just before the navigation rules):

<managed-bean>
  <managed-bean-name>manager</managed-bean-name>                       [1]
  <managed-bean-class>com.packpub.web.StoreManagerJSFBean</managed-bean-class>  [2]
  <managed-bean-scope>request</managed-bean-scope>                     [3]
</managed-bean>

The <managed-bean-name> [1] element will be used by your JSF pages to reference your bean. The <managed-bean-class> [2] is, obviously, the corresponding class. Managed beans can be stored within the request, session, or application scope, depending on the value of the <managed-bean-scope> element [3].
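To round off the picture, here is a minimal sketch of what such a managed bean class could look like. Only the class name and package come from the declaration above; the action method names and their outcome strings are illustrative assumptions tied to the navigation rules, not code from this article:

package com.packpub.web;

// A minimal sketch of the managed bean declared above. Action methods
// return the from-outcome strings used in the navigation rules of
// faces-config.xml.
public class StoreManagerJSFBean {

    public String newCustomer() {
        return "newCustomer";   // navigates to /newCustomer.jsp
    }

    public String newOrder() {
        return "newOrder";      // navigates to /newOrder.jsp
    }

    public String home() {
        return "home";          // navigates to /home.jsp
    }
}

A page would then reference the bean through its declared name, for example with a button such as <h:commandButton value="New Customer" action="#{manager.newCustomer}"/> (a hypothetical usage).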


Chatroom Application using DWR Java Framework

Packt
23 Oct 2009
19 min read
Starting the Project and Configuration

We start by creating a new project for our chat room, with the project name DWRChatRoom. We also need to add the dwr.jar file to the lib directory and enable DWR in the web.xml file. The following is the source code of the dwr.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE dwr PUBLIC
    "-//GetAhead Limited//DTD Direct Web Remoting 2.0//EN"
    "http://getahead.org/dwr/dwr20.dtd">
<dwr>
  <allow>
    <create creator="new" javascript="Login">
      <param name="class" value="chatroom.Login" />
    </create>
    <create creator="new" javascript="ChatRoomDatabase">
      <param name="class" value="chatroom.ChatRoomDatabase" />
    </create>
  </allow>
</dwr>

The source code for web.xml is as follows:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         id="WebApp_ID" version="2.5">
  <display-name>DWRChatRoom</display-name>
  <servlet>
    <display-name>DWR Servlet</display-name>
    <servlet-name>dwr-invoker</servlet-name>
    <servlet-class>
      org.directwebremoting.servlet.DwrServlet
    </servlet-class>
    <init-param>
      <param-name>debug</param-name>
      <param-value>true</param-value>
    </init-param>
    <init-param>
      <param-name>activeReverseAjaxEnabled</param-name>
      <param-value>true</param-value>
    </init-param>
  </servlet>
  <servlet-mapping>
    <servlet-name>dwr-invoker</servlet-name>
    <url-pattern>/dwr/*</url-pattern>
  </servlet-mapping>
  <welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.htm</welcome-file>
    <welcome-file>index.jsp</welcome-file>
    <welcome-file>default.html</welcome-file>
    <welcome-file>default.htm</welcome-file>
    <welcome-file>default.jsp</welcome-file>
  </welcome-file-list>
</web-app>

Developing the User Interface

The next step is to create the files for the presentation: the style sheet and the HTML/JSP files. The style sheet, loginFailed.html, and index.jsp files are required for the application. The source code of the style sheet is as follows:

body {
  margin: 0;
  padding: 0;
  line-height: 1.5em;
}
b {
  font-size: 110%;
}
em {
  color: red;
}
#topsection {
  background: #EAEAEA;
  height: 90px; /* Height of top section */
}
#topsection h1 {
  margin: 0;
  padding-top: 15px;
}
#contentwrapper {
  float: left;
  width: 100%;
}
#contentcolumn {
  margin-left: 200px; /* Set left margin to LeftColumnWidth */
}
#leftcolumn {
  float: left;
  width: 200px; /* Width of left column */
  margin-left: -100%;
  background: #C8FC98;
}
#footer {
  clear: left;
  width: 100%;
  background: black;
  color: #FFF;
  text-align: center;
  padding: 4px 0;
}
#footer a {
  color: #FFFF80;
}
.innertube {
  margin: 10px; /* Margins for inner DIV inside each column (to provide padding) */
  margin-top: 0;
}

Our first page is the login page. It is located in the WebContent directory and is named index.jsp.
The source code for the page is given as follows:

<%@ page language="java" contentType="text/html; charset=ISO-8859-1"
    pageEncoding="ISO-8859-1"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
    "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Book Authoring</title>
<script type='text/javascript' src='/DWRChatroom/dwr/interface/Login.js'></script>
<script type='text/javascript' src='/DWRChatroom/dwr/engine.js'></script>
<script type='text/javascript' src='/DWRChatroom/dwr/util.js'></script>
<script type="text/javascript">
function login() {
  var userNameInput = dwr.util.byId('userName');
  var userName = userNameInput.value;
  Login.doLogin(userName, loginResult);
}
function loginResult(newPage) {
  window.location.href = newPage;
}
</script>
</head>
<body>
<h1>Book Authoring Sample</h1>
<table cellpadding="0" cellspacing="0">
  <tr>
    <td>User name:</td>
    <td><input id="userName" type="text" size="30"></td>
  </tr>
  <tr>
    <td>&nbsp;</td>
    <td><input type="button" value="Login" onclick="login();return false;"></td>
  </tr>
</table>
</body>
</html>

The login screen uses the DWR functionality to process the user login (the Java classes are presented after the web pages). The loginResult function opens either the failure page or the main page, based on the result of the login operation. If the login was unsuccessful, a very simple loginFailed.html page is shown to the user, the source code for which is as follows:

<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
    "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Login failed</title>
</head>
<body>
<h2>Login failed.</h2>
</body>
</html>

The main page, mainpage.jsp, includes all the client-side logic of our chat room application.
The source code for the page is as follows:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html lang="en" xml:lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
<title>Chatroom</title>
<link href="styles.css" rel="stylesheet" type="text/css" />
<%
   if (session.getAttribute("username") == null
         || session.getAttribute("username").equals("")) {
      //if not logged in and trying to access this page
      //do nothing, browser shows empty page
      return;
   }
%>
<script type='text/javascript' src='/DWRChatRoom/dwr/interface/Login.js'></script>
<script type='text/javascript' src='/DWRChatRoom/dwr/interface/ChatRoomDatabase.js'></script>
<script type='text/javascript' src='/DWRChatRoom/dwr/engine.js'></script>
<script type='text/javascript' src='/DWRChatRoom/dwr/util.js'></script>
<script type="text/javascript">
dwr.engine.setActiveReverseAjax(true);
function logout() {
  Login.doLogout(showLoginScreen);
}
function showLoginScreen() {
  window.location.href = 'index.jsp';
}
function showUsersOnline() {
  var cellFuncs = [
      function(user) {
        return '<i>' + user + '</i>';
      }
  ];
  Login.getUsersOnline({
    callback:function(users)
    {
      dwr.util.removeAllRows('usersOnline');
      dwr.util.addRows("usersOnline", users, cellFuncs,
                       { escapeHtml:false });
    }
  });
}
function getPreviousMessages() {
  ChatRoomDatabase.getChatContent({
    callback:function(messages)
    {
      var chatArea = dwr.util.byId('chatArea');
      var html = "";
      for (index in messages)
      {
        var msg = messages[index];
        html += msg;
      }
      chatArea.innerHTML = html;
      var chatAreaHeight = chatArea.scrollHeight;
      chatArea.scrollTop = chatAreaHeight;
    }
  });
}
function newMessage(message) {
  var chatArea = dwr.util.byId('chatArea');
  var oldMessages = chatArea.innerHTML;
  chatArea.innerHTML = oldMessages + message;
  var chatAreaHeight = chatArea.scrollHeight;
  chatArea.scrollTop = chatAreaHeight;
}
function sendMessageIfEnter(event) {
  if (event.keyCode == 13)
  {
    sendMessage();
  }
}
function sendMessage() {
  var message = dwr.util.byId('messageText');
  var messageText = message.value;
  ChatRoomDatabase.postMessage(messageText);
  message.value = '';
}
</script>
</head>
<body onload="showUsersOnline();">
<div id="maincontainer">
<div id="topsection">
<div class="innertube">
<h1>Chatroom</h1>
<h4>Welcome <i><%=(String) session.getAttribute("username")%></i></h4>
</div>
</div>
<div id="contentwrapper">
<div id="contentcolumn">
<div id="chatArea" style="width: 600px; height: 300px; overflow: auto"></div>
<div id="inputArea">
<h4>Send message</h4>
<input id="messageText" type="text" size="50"
  onkeyup="sendMessageIfEnter(event);">
<input type="button" value="Send msg" onclick="sendMessage();">
</div>
</div>
</div>
<div id="leftcolumn">
<div class="innertube">
<table cellpadding="0" cellspacing="0">
  <thead>
    <tr>
      <td><b>Users online</b></td>
    </tr>
  </thead>
  <tbody id="usersOnline">
  </tbody>
</table>
<input id="logoutButton" type="button" value="Logout"
  onclick="logout();return false;">
</div>
</div>
<div id="footer">Stylesheet by <a
  href="http://www.dynamicdrive.com/style/">Dynamic Drive CSS
Library</a></div>
</div>
<script type="text/javascript">
getPreviousMessages();
</script>
</body>
</html>

The first chat-room-specific JavaScript function is getPreviousMessages().
This function is called at the end of mainpage.jsp, and it retrieves previous chat messages for this chat room. The newMessage() function is called by the server-side Java code when a new message is posted to the chat room. The function also scrolls the chat area automatically to show the latest message. The sendMessageIfEnter() and sendMessage() functions are used to send user messages to the server. There is the input field for the message text in the HTML code, and the sendMessageIfEnter() function listens to onkeyup events in the input field. If the user presses enter, the sendMessage() function is called to send the message to the server. The HTML code includes the chat area of specified size and with automatic scrolling. Developing the Java Code There are several Java classes in the application. The Login class handles the user login and logout and also keeps track of the logged-in users. The source code of the Login class is as follows: package chatroom;import java.util.Collection;import java.util.List;import javax.servlet.ServletContext;import javax.servlet.http.HttpServletRequest;import javax.servlet.http.HttpSession;import org.directwebremoting.ScriptSession;import org.directwebremoting.ServerContext;import org.directwebremoting.ServerContextFactory;import org.directwebremoting.WebContext;import org.directwebremoting.WebContextFactory;import org.directwebremoting.proxy.ScriptProxy;public class Login {   public Login() {   }      public String doLogin(String userName) {      UserDatabase userDb=UserDatabase.getInstance();      if(!userDb.isUserLogged(userName)) {         userDb.login(userName);         WebContext webContext= WebContextFactory.get();         HttpServletRequest request = webContext.getHttpServletRequest();         HttpSession session=request.getSession();         session.setAttribute("username", userName);         String scriptId = webContext.getScriptSession().getId();         session.setAttribute("scriptSessionId", scriptId);         updateUsersOnline();         return "mainpage.jsp";      }      else {         return "loginFailed.html";      }   }      public void doLogout() {      try {         WebContext ctx = WebContextFactory.get();         HttpServletRequest request = ctx.getHttpServletRequest();         HttpSession session = request.getSession();         Util util = new Util();         String userName = util.getCurrentUserName(session);         UserDatabase.getInstance().logout(userName);         session.removeAttribute("username");         session.removeAttribute("scriptSessionId");         session.invalidate();      } catch (Exception e) {         System.out.println(e.toString());      }      updateUsersOnline();   }      private void updateUsersOnline() {      WebContext webContext= WebContextFactory.get();      ServletContext servletContext = webContext.getServletContext();      ServerContext serverContext = ServerContextFactory.get(servletContext);      webContext.getScriptSessionsByPage("");      String contextPath = servletContext.getContextPath();      if (contextPath != null) {         Collection<ScriptSession> sessions =                            serverContext.getScriptSessionsByPage                                            (contextPath + "/mainpage.jsp");         ScriptProxy proxy = new ScriptProxy(sessions);         proxy.addFunctionCall("showUsersOnline");      }   }      public List<String> getUsersOnline() {      UserDatabase userDb=UserDatabase.getInstance();      return userDb.getLoggedInUsers();   }} The following is the source code of 
the UserDatabase class:

package chatroom;

import java.util.List;
import java.util.Vector;

// this class holds currently logged-in users
// there is no persistence
public class UserDatabase {

   private static UserDatabase userDatabase = new UserDatabase();
   private List<String> loggedInUsers = new Vector<String>();

   private UserDatabase() {}

   public static UserDatabase getInstance() {
      return userDatabase;
   }

   public List<String> getLoggedInUsers() {
      return loggedInUsers;
   }

   public boolean isUserLogged(String userName) {
      return loggedInUsers.contains(userName);
   }

   public void login(String userName) {
      loggedInUsers.add(userName);
   }

   public void logout(String userName) {
      loggedInUsers.remove(userName);
   }
}

The Util class is used by the Login class, and it provides helper methods for the sample application. The source code for the Util class is as follows:

package chatroom;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;
import org.directwebremoting.WebContext;
import org.directwebremoting.WebContextFactory;

public class Util {

   public Util() {
   }

   public String getCurrentUserName() {
      // get the user name from the session
      WebContext ctx = WebContextFactory.get();
      HttpServletRequest request = ctx.getHttpServletRequest();
      HttpSession session = request.getSession();
      return getCurrentUserName(session);
   }

   public String getCurrentUserName(HttpSession session) {
      return (String) session.getAttribute("username");
   }
}

The logic for the server-side chat room functionality is in the ChatRoomDatabase class. The source code for ChatRoomDatabase is as follows:

package chatroom;

import java.util.Collection;
import java.util.Date;
import java.util.List;
import java.util.Vector;
import javax.servlet.ServletContext;
import org.directwebremoting.ScriptSession;
import org.directwebremoting.ServerContext;
import org.directwebremoting.ServerContextFactory;
import org.directwebremoting.WebContext;
import org.directwebremoting.WebContextFactory;
import org.directwebremoting.proxy.ScriptProxy;

public class ChatRoomDatabase {

   private static List<String> chatContent = new Vector<String>();

   public ChatRoomDatabase() {
   }

   public void postMessage(String message) {
      String user = (new Util()).getCurrentUserName();
      if (user != null) {
         Date time = new Date();
         StringBuffer sb = new StringBuffer();
         sb.append(time.toString());
         sb.append(" <b><i>");
         sb.append(user);
         sb.append("</i></b>:  ");
         sb.append(message);
         sb.append("<br/>");
         String newMessage = sb.toString();
         chatContent.add(newMessage);
         postNewMessage(newMessage);
      }
   }

   public List<String> getChatContent() {
      return chatContent;
   }

   private ScriptProxy getScriptProxyForSessions() {
      WebContext webContext = WebContextFactory.get();
      ServletContext servletContext = webContext.getServletContext();
      ServerContext serverContext = ServerContextFactory.get(servletContext);
      webContext.getScriptSessionsByPage("");
      String contextPath = servletContext.getContextPath();
      if (contextPath != null) {
         Collection<ScriptSession> sessions = serverContext
               .getScriptSessionsByPage(contextPath + "/mainpage.jsp");
         return new ScriptProxy(sessions);
      }
      return null;
   }
   public void postNewMessage(String newMessage) {
      ScriptProxy proxy = getScriptProxyForSessions();
      if (proxy != null) {
         proxy.addFunctionCall("newMessage", newMessage);
      }
   }
}

The chat room code is surprisingly simple. The chat content is stored in a Vector of Strings. The getChatContent() method just returns the chat content Vector to the browser. The postMessage() method is called when the user sends a new chat message. The method verifies that the user is logged in, adds the current time and username to the chat message, and then appends the message to the chat content. The method also calls the postNewMessage() method, which is used to show the new chat content to all logged-in users. Note that the postMessage() method does not return any value. We let DWR and the reverse AJAX functionality show the chat message to all users, including the user who sent the message. The getScriptProxyForSessions() and postNewMessage() methods use reverse AJAX to update the chat areas of all logged-in users with the new message.
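The same reverse AJAX plumbing can be driven from any server-side class, not just from ChatRoomDatabase. As a minimal sketch, reusing only the DWR calls already shown above, a hypothetical Announcer class (the class and its announce() method are our own illustrative names) could push a server-generated notice to every open chat page through the client-side newMessage() function:

package chatroom;

import java.util.Collection;
import javax.servlet.ServletContext;
import org.directwebremoting.ScriptSession;
import org.directwebremoting.ServerContext;
import org.directwebremoting.ServerContextFactory;
import org.directwebremoting.proxy.ScriptProxy;

// hypothetical helper, not part of the sample application
public class Announcer {

   public void announce(ServletContext servletContext, String notice) {
      ServerContext serverContext = ServerContextFactory.get(servletContext);
      String contextPath = servletContext.getContextPath();
      if (contextPath != null) {
         Collection<ScriptSession> sessions =
               serverContext.getScriptSessionsByPage(contextPath + "/mainpage.jsp");
         // reuse the client-side newMessage() function to render the notice
         new ScriptProxy(sessions).addFunctionCall("newMessage", notice);
      }
   }
}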
And that is it! The chat room sample is very straightforward: the basic functionality is already in place, and the application is ready for further development.

Testing the Chat

We test the chat room application with three users: Smith, Brown, and Jones. We have given some screenshots of a typical scenario in a chat room here. Both Smith and Brown log into the system and exchange some messages. Both users see empty chat rooms when they log in and start chatting. The empty area above the send-message input field is reserved for chat content. Smith and Brown exchange some messages, as seen in the following screenshot:

The third user, Jones, joins the chat and sees all the previous messages in the chat room. Jones then exchanges some messages with Smith and Brown. Smith and Brown log out from the system, leaving Jones alone in the chat room (until she also logs out). This is visible in the following screenshot:

Summary

This sample application showed how to use DWR in a chat room application. It makes it clear that DWR renders the development of this kind of collaborative application very easy. DWR itself does not even play a big part in the application; it is just a transparent feature of it. Developers can therefore concentrate on the actual project, and on aspects such as persistence of data and a neat user interface, instead of the low-level details of AJAX.

Getting Started With Spring MVC - Developing the MVC components

Packt
31 Dec 2009
5 min read
In the world of networked applications, thin clients (also known as web applications) are more in demand than thick clients. Due to this demand, every language provides frameworks that try to make web-application development simpler. The simplicity is not provided just through setting up the basic application structure or generating boilerplate code. These frameworks also try to provide simplicity through pluggability, i.e., the components of different frameworks can be brought together without much difficulty. Among such frameworks, the Spring Framework is one of the most used. Its support for multiple data-access frameworks/libraries and its lightweight IoC container make it suitable for scenarios where one would like to mix and match multiple frameworks, a different one for each layer. This aspect of the Spring Framework makes it especially suitable for the development of web applications, where the UI does not need to know which framework it is dealing with for business processing or data access. The component of the Spring Framework stack that caters to the web UI is Spring MVC. In this discussion, we will focus on the basics of Spring MVC. The first section will deal with the terms and terminology related to Spring MVC and MVC. The second section will detail the steps for developing the components of a web application using Spring MVC. That is the agenda for this discussion.

Spring MVC

Spring MVC, as the name suggests, is a framework based on the Model (M), View (V), Controller (C) pattern. Currently there are more than seven well-known web-application frameworks that implement the MVC pattern. What, then, are the features of Spring MVC that set it apart from other frameworks? The two main features are:

Pluggable View technology
Injection of services into controllers

The former provides a way to use different UI frameworks instead of Spring MVC's UI library, and the latter removes the need to develop a new way to access the functionality of the business layer.

Pluggable View technology

Various View technologies are available in the market (including Tiles, Velocity, etc.) with which Spring MVC can quite easily be integrated. In other words, JSP is not the only template engine supported. The pluggable feature is not limited to templating technologies. By using common configuration functionality, other frameworks such as JSF can be integrated with Spring MVC applications. Thus, it is possible to mix and match different View technologies by using Spring MVC.

Injection of Services into Controllers

This feature comes into the picture when the Spring Framework is used to implement the business layer. Using the IoC capabilities of the Spring Framework, the business-layer services and/or objects can be injected into the controller without explicitly setting up the call to the service or mirroring the business-layer objects in the controller. This helps reduce code duplication between the web UI/process layer and the business-process layer.

The next important aspect of Spring MVC is its components. They are:

Model (M)
View (V)
Controller (C)

The Model deals with the data that the application has to present, the View contains the logic to present the data, and the Controller takes care of the flow of navigation and the application logic. Following are the details.

Model

The Model is an object that holds the data to be displayed. It can be any Java object – from a simple POJO to any type of Collection object.
It can also be a combination of both – an instance of a POJO to hold the detailed data and a collection object to hold all the instances of the POJO – which, in reality, is the most commonly used Model in Spring MVC. The framework also has its own way to hold the data: a Model object that is an instance of org.springframework.ui.ModelMap. Internally, whichever collection class object is used, the framework maps it to the ModelMap class.

View

In MVC, it is the View that presents the data to the user. Spring MVC, just as many other JEE frameworks, uses a combination of JSP and tag libraries to implement the View. Apart from JSP, many kinds of View technologies, like Tiles, Velocity, and Jasper Reports, can be plugged into the framework. The main class behind this pluggability is org.springframework.web.servlet.View. The View class achieves the plug-in functionality by presenting the View as a Logical View instead of an actual/physical View. The physical view corresponds to the page developed using any of the templating technologies. The Logical View corresponds to the name of the View to be used. The name is then mapped to the actual View in the configuration file. One important point to remember about how Spring MVC uses the Logical View is that the Logical View and the Model are treated as one entity, named Model and View, represented by the org.springframework.web.servlet.ModelAndView class.

Controller

The flow of the application and its navigation is directed by the controller. It also processes the user input and transforms it into the Model. In Spring MVC, controllers are developed either by extending the out-of-the-box controller classes or by implementing the Controller interface. The following come under the former category:

SimpleFormController
AbstractController
AbstractCommandController
CancellableFormController
MultiActionController
ParameterizableViewController
ServletForwardingController
ServletWrappingController
UrlFilenameViewController

Of these, the most commonly used are AbstractController, AbstractCommandController, SimpleFormController, and CancellableFormController. That wraps up this section. Let us move on to the next section – the steps for developing an application using Spring MVC.
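Before moving on, here is a minimal sketch that ties the pieces above together: an AbstractController subclass with a business-layer service injected by the IoC container, returning a ModelAndView. The OrderService interface, its searchOrders() method, and the bean wiring shown in the comment are illustrative assumptions, not part of Spring itself:

import java.util.List;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.mvc.AbstractController;

// hypothetical business-layer service
interface OrderService {
   List<String> searchOrders(String customerName);
}

public class OrderListController extends AbstractController {

   private OrderService orderService;

   // called by the Spring IoC container, e.g.:
   // <bean id="orderListController" class="OrderListController">
   //    <property name="orderService" ref="orderService"/>
   // </bean>
   public void setOrderService(OrderService orderService) {
      this.orderService = orderService;
   }

   @Override
   protected ModelAndView handleRequestInternal(HttpServletRequest request,
         HttpServletResponse response) throws Exception {
      List<String> orders = orderService.searchOrders(
            request.getParameter("customer"));
      // "orderList" is the logical view name; the configuration file maps it
      // to the physical page, e.g. /WEB-INF/jsp/orderList.jsp
      return new ModelAndView("orderList", "orders", orders);
   }
}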

Google Guice

Packt
24 Sep 2013
13 min read
(For more resources related to this topic, see here.)

Structure of the flightsweb application

Our application has two servlets: IndexServlet, which is a trivial example of forwarding any request mapped with "/" to index.jsp, and FlightServlet, which processes the request using the functionality we developed in the previous section and forwards the response to response.jsp. Herein, we simply declare FlightEngine and SearchRequest as class attributes and annotate them with @Inject. FlightSearchFilter is a filter whose only responsibility is validating the request parameters. index.jsp is the landing page of this application and presents the user with a form to search the flights, and response.jsp is the results page. The flight search form will look as shown in the following screenshot:

The search page would subsequently lead to the following result page.

In order to build the application, we need to execute the following command in the directory where the pom.xml file for the project resides:

shell> mvn clean package

Being a web application, the project for this article compiles and assembles a WAR file, flightsweb.war, in the target directory. We could deploy this file to Tomcat.

Using GuiceFilter

Let's start with a typical web-application development scenario. We need to write a JSP to render a form for searching flights, and subsequently a response JSP page. The search form would post the request parameters to a processing servlet, which processes the parameters and renders the response. Let's have a look at web.xml. A web.xml file for an application intending to use Guice for dependency injection needs to apply the following filter:

<filter>
   <filter-name>guiceFilter</filter-name>
   <filter-class>com.google.inject.servlet.GuiceFilter</filter-class>
</filter>

<filter-mapping>
   <filter-name>guiceFilter</filter-name>
   <url-pattern>/*</url-pattern>
</filter-mapping>

It simply says that all requests need to pass via the Guice filter. This is essential, since we need to use various servlet scopes in our application, as well as to dispatch various requests to injectable filters and servlets. All other servlet- and filter-related declarations can be made programmatically using the Guice-provided APIs.

Rolling out our ServletContextListener interface

Let's move on to another important piece, a servlet context listener for our application. Why do we need a servlet context listener in the first place? A servlet context listener comes into the picture once the application is deployed. This event is the best time to bind and inject our dependencies. Guice provides an abstract class that implements the ServletContextListener interface. This class basically takes care of initializing the injector once the application is deployed, and destroying it once it is undeployed. Here, we add to that functionality by providing our own configuration for the injector, and leave the initialization and destruction parts to the superclass provided by Guice. To accomplish this, we need to implement the following API in our subclass:
For accomplishing this, we need to implement the following API in our sub class: protected abstract Injector getInjector(); Let's have a look at how the implementation would look like: package org.packt.web.listener; import com.google.inject.servlet.GuiceServletContextListener; import com.google.inject.servlet.ServletModule; public class FlightServletContextListener extends GuiceServletContextListener { @Override protected Injector getInjector() { return Guice.createInjector( new ServletModule(){ @Override protected void configureServlets() { // overridden method contains various // configurations } }); }} Here, we are returning the instance of injector using the API: public static Injector createInjector(Module... modules) Next, we need to provide a declaration of our custom FlightServletContextListener interface in web.xml: <listener> <listener-class> org.packt.web.listener.FlightServletContextListener </listener-class> </listener> ServletModule – the entry point for configurations In the argument for modules, we provide a reference of an anonymous class, which extends the class ServletModule. A ServletModule class configures the servlets and filters programmatically, which is actually a replacement of declaring the servlet and filters and their corresponding mappings in web.xml. Why do we need to have a replacement of web.xml in the first place? Think of it on different terms. We need to provide a singleton scope to our servlet. We need to use various web scopes like RequestScope, SessionScope, and so on for our classes, such as SearchRequest and SearchResponse. These could not be done simply via declarations in web.xml. A programmatic configuration is far more logical choice for this. Let's have a look at a few configurations we write in our anonymous class extending the ServletModule: new ServletModule(){ @Override protected void configureServlets() { install(new MainModule()); serve("/response").with(FlightServlet.class); serve("/").with(IndexServlet.class); } } A servlet module at first provides a way to install our modules using the install() API. Here, we install MainModule, which is reused from the previous section. Rest all other modules are installed from MainModule. Binding language ServletModule presents APIs, which could be used for configuring filters and servlets. Using these expressive APIs known as EDSL, we could configure the mappings between servlets, filters, and respective URLs. Guice uses an embedded domain specific language or EDSL to help us create bindings simply and readably. We are already using this notation while creating various sort of bindings using the bind() APIs. Readers could refer to the Binder javadoc, where EDSL is discussed with several examples. Mapping servlets Here, following statement maps the /response path in the application to the FlightServlet class's instance: serve("/response").with(FlightServlet.class); serve() returns an instance of ServletKeyBindingBuilder. It provides various APIs, using which we could map a URL to an instance of servlet. This API also has a variable argument, which helps to avoid repetition. For example, in order to map /response as well as /response-quick, both the URLs to FlightServlet.class we could use the following statement: serve("/response","/response-quick").with(FlightServlet.class); serveRegex() is similar to serve(), but accepts the regular expression for URL patterns, rather than concrete URLs. 
For instance, an easier way to map both of the preceding URL patterns would be using this API:

serveRegex("^response").with(FlightServlet.class);

ServletKeyBindingBuilder.with() is an overloaded API. Let's have a look at the various signatures:

void with(Class<? extends HttpServlet> servletKey);
void with(Key<? extends HttpServlet> servletKey);

To use the key binding option, we will develop a custom annotation, @FlightServe. FlightServlet will then be annotated with it. The following binding maps a URL pattern to a key:

serve("/response").with(Key.get(HttpServlet.class, FlightServe.class));

Given this, we just need to declare a binding between @FlightServe and FlightServlet, which will go in the modules:

bind(HttpServlet.class).annotatedWith(FlightServe.class).to(FlightServlet.class);

What is the advantage of binding indirectly using a key? First of all, it is the only way we can separate an interface from an implementation. It also helps us to assign scope as a part of the configuration. A servlet or a filter must be in at least singleton scope; in this case we can assign the scope directly in the configuration, though the option of annotating a filter or a servlet with @Singleton is also available. Guice 3.0 provides the following overloaded versions, which even facilitate providing initialization parameters and hence provide type safety:

void with(HttpServlet servlet);
void with(Class<? extends HttpServlet> servletKey, Map<String, String> initParams);
void with(Key<? extends HttpServlet> servletKey, Map<String, String> initParams);
void with(HttpServlet servlet, Map<String, String> initParams);

An important point to be noted here is that ServletModule not only provides a programmatic API to configure the servlets, but also a type-safe, idiomatic API to configure the initialization parameters. It is not possible to ensure type safety while declaring the initialization parameters in web.xml.
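Putting the servlet-mapping pieces together, here is a minimal sketch of a module using the Guice 3.0 APIs listed above. MainModule, FlightServlet, and IndexServlet come from the text; the FlightWebModule name, its package, and the "searchMode" init parameter are purely illustrative:

package org.packt.web.module; // hypothetical package

import java.util.HashMap;
import java.util.Map;
import com.google.inject.servlet.ServletModule;

public class FlightWebModule extends ServletModule {

   @Override
   protected void configureServlets() {
      install(new MainModule());

      // type-safe init parameters (Guice 3.0), instead of <init-param> in web.xml
      Map<String, String> initParams = new HashMap<String, String>();
      initParams.put("searchMode", "quick"); // illustrative parameter

      // FlightServlet is assumed to be bound in singleton scope, e.g. via @Singleton
      serve("/response").with(FlightServlet.class, initParams);
      serve("/").with(IndexServlet.class);
   }
}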
Mapping filters

Similar to servlets, filters can be mapped to URL patterns or regular expressions. Here, the filter() API is used to map a URL pattern to a Filter. For example:

filter("/response").through(FlightSearchFilter.class);

filter() returns an instance of FilterKeyBindingBuilder, which provides various APIs for mapping a URL to an instance of a filter. The filter() and filterRegex() APIs take exactly the same kinds of arguments as serve() and serveRegex() do when it comes to handling pure URLs or regular expressions. Let's have a look at the FilterKeyBindingBuilder.through() APIs. Similar to ServletKeyBindingBuilder.with(), through() also comes in various overloaded versions:

void through(Class<? extends Filter> filterKey);
void through(Key<? extends Filter> filterKey);

A key mapped to a URL, which is then bound via an annotation to an implementation, can be exemplified as:

filter("/response").through(Key.get(Filter.class, FlightFilter.class));

The binding is done through an annotation. Also note that the filter implementation is deemed singleton in scope:

bind(Filter.class).annotatedWith(FlightFilter.class).to(FlightSearchFilter.class).in(Singleton.class);

Guice 3.0 provides the following overloaded versions, which even facilitate providing initialization parameters and provide type safety:

void through(Filter filter);
void through(Class<? extends Filter> filterKey, Map<String, String> initParams);
void through(Key<? extends Filter> filterKey, Map<String, String> initParams);
void through(Filter filter, Map<String, String> initParams);

Again, these type-safe APIs provide a better configuration option than the declaration-driven web.xml.

Web scopes

Aside from dependency injection and configuration facilities via programmatic APIs, Guice provides the feature of scoping various classes, depending on their role in the business logic. As we saw while developing the custom scope, a scope comes into the picture during the binding phase. Later, when the scope API is invoked, it brings the provider into the picture. Actually, it is the provider that is the key to the complete implementation of the scope. The same thing applies to web scopes.

@RequestScoped

Whenever we annotate any class with either of the servlet scopes, @RequestScoped or @SessionScoped, a call to the scope API of the respective scope is made. This results in eager preparation of the Provider<T> instances. So, to harness these providers, we need not configure any binding, as these bindings are implicit. We just need to inject these providers wherever we need instances of the respective types. Let us discuss various examples related to these servlet scopes.

Classes scoped to @RequestScoped are instantiated on every request. A typical example would be to instantiate SearchRequest on every request. We need to annotate SearchRequest with @RequestScoped:

@RequestScoped
public class SearchRequest { …… }

Next, in FlightServlet, we need to inject the implicit provider:

@Inject
private Provider<SearchRequest> searchRQProvider;

The instance can be fetched simply by invoking the .get() API of the provider:

SearchRequest searchRequest = searchRQProvider.get();

@SessionScoped

The same goes for the @SessionScoped annotation. In FlightSearchFilter, we need an instance of RequestCounter (a class for keeping track of the number of requests in a session). The class RequestCounter needs to be annotated with @SessionScoped, and is fetched in the same way as SearchRequest; the provider, however, takes care to instantiate it on every new session creation:

@SessionScoped
public class RequestCounter implements Serializable { …… }

Next, in FlightSearchFilter, we need to inject the implicit provider:

@Inject
private Provider<RequestCounter> sessionCountProvider;

The instance can be fetched simply by invoking the .get() API of the provider.

@RequestParameters

Guice also provides a @RequestParameters annotation. It can be used to inject the request parameters directly. Let's have a look at an example in FlightSearchFilter. Here, we inject the provider for the type Map<String, String[]> in a field:

@Inject
@RequestParameters
private Provider<Map<String, String[]>> reqParamMapProvider;

As the provider is bound internally via InternalServletModule (Guice installs this module internally), we can harness the implicit binding and inject the provider. An important point to note is that if we try to inject classes annotated with servlet scopes, like @RequestScoped or @SessionScoped, outside of the ServletContext, or via a non-HTTP request like RPC, Guice throws the following exception:

SEVERE: Exception starting filter guiceFilter
com.google.inject.ProvisionException: Guice provision errors:
Error in custom provider, com.google.inject.OutOfScopeException: Cannot access scoped object. Either we are not currently inside an HTTP Servlet request, or you may have forgotten to apply com.google.inject.servlet.GuiceFilter as a servlet filter for this request.
This happens because the providers associated with these scopes necessarily work with a ServletContext, and hence Guice cannot complete the dependency injection. We need to make sure that our dependencies annotated with servlet scopes come into the picture only when we are in a web scope. Another way in which the scoped dependencies can be made available is by using the injector.getInstance() API. This, however, requires that we inject the injector itself, using @Inject Injector injector, in the dependent class. This is not advisable, as it mixes dependency-injection logic with the application logic. We need to avoid this approach.

Exercising caution while scoping

Our examples illustrate cases where we inject dependencies of a narrower scope into dependencies of a wider scope. For example, RequestCounter (which is @SessionScoped) is injected into FlightSearchFilter (which is a singleton). This needs to be designed very carefully: we must be absolutely sure that the narrowly scoped dependency is always present, or else it will create a problem. It basically results in scope widening, which means that we are effectively widening the scope of the session-scoped objects to that of the singleton-scoped object, the servlet. If not managed properly, this can result in memory leaks, as the garbage collector cannot collect the references to the narrowly scoped objects that are held in the widely scoped objects. Sometimes this is unavoidable; in such a case we need to make sure we follow two basic rules:

Inject the narrowly scoped dependency using Providers. By following this strategy, we never allow the widely scoped class to hold a reference to the narrowly scoped dependency once it goes out of scope. Do not get the injector instance injected into the widely scoped class to fetch the narrowly scoped dependency directly; that can result in hard-to-debug bugs.

Use the dependent, narrowly scoped objects in APIs only. This lets them live as stack variables rather than heap variables; once method execution finishes, the stack variables are garbage collected. Assigning the object fetched from the provider to a class-level reference can affect garbage collection adversely and result in memory leaks. Here, we use these narrowly scoped dependencies in the doGet() and doFilter() APIs only. This makes sure that they are always available.

Contrarily, injecting widely scoped dependencies into narrowly scoped dependencies works well. For example, injecting a @SessionScoped annotated dependency into a @RequestScoped annotated class is much better, since it is always guaranteed that the dependency will be available for injection, and once the narrowly scoped object goes out of scope it is garbage collected properly.
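Before the recap, here is a minimal sketch of the two rules above in code. AuditFilter is a hypothetical singleton filter of our own; RequestCounter is the session-scoped class from the text, but its increment() method is an illustrative assumption:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import com.google.inject.Inject;
import com.google.inject.Provider;
import com.google.inject.Singleton;

@Singleton
public class AuditFilter implements Filter {

   // rule 1: hold the Provider, never the session-scoped instance itself
   @Inject
   private Provider<RequestCounter> counterProvider;

   public void init(FilterConfig config) throws ServletException {
   }

   public void doFilter(ServletRequest request, ServletResponse response,
         FilterChain chain) throws IOException, ServletException {
      // rule 2: fetch and use the narrowly scoped object inside the method only,
      // so the reference lives on the stack and can be garbage collected
      RequestCounter counter = counterProvider.get();
      counter.increment(); // illustrative API
      chain.doFilter(request, response);
   }

   public void destroy() {
   }
}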
We retrofitted our flight search application in a web environment. In doing so we learned about many aspects of the integration facilities Guice offers us:

We learned how to set up the application to use dependency injection using GuiceFilter and a custom ServletContextListener.
We saw how to avoid servlet and filter mappings in web.xml and follow a safer, programmatic approach using ServletModule, along with the various mapping APIs and certain features newly introduced in Guice 3.0.
We discussed how to use the various web scopes.

Facelets Templating in JSF 2.0

Packt
20 Jun 2011
7 min read
One advantage that Facelets has over JSP is its templating mechanism. Templates allow us to specify the page layout in one place; we can then have template clients that use the layout defined in the template. Since most web applications have a consistent layout across pages, using templates makes our applications much more maintainable, since changes to the layout need to be made in a single place. If at some point we need to change the layout of our pages (add a footer, or move a column from the left side of the page to the right side, for example), we only need to change the template, and the change is reflected in all template clients. NetBeans provides very good support for Facelets templating. It provides several templates "out of the box", using common web-page layouts. We can then select one of several predefined templates to use as a base for our template, or simply use it "out of the box". NetBeans gives us the option of using HTML tables or CSS for layout. For most modern web applications, CSS is the preferred approach. For our example we will pick a layout containing a header area, a single left column, and a main area. After clicking on Finish, NetBeans automatically generates our template, along with the necessary CSS files. The automatically generated template looks like this:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
   "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:ui="http://java.sun.com/jsf/facelets">
   <h:head>
      <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
      <link href="./resources/css/default.css" rel="stylesheet" type="text/css" />
      <link href="./resources/css/cssLayout.css" rel="stylesheet" type="text/css" />
      <title>Facelets Template</title>
   </h:head>
   <h:body>
      <div id="top" class="top">
         <ui:insert name="top">Top</ui:insert>
      </div>
      <div>
         <div id="left">
            <ui:insert name="left">Left</ui:insert>
         </div>
         <div id="content" class="left_content">
            <ui:insert name="content">Content</ui:insert>
         </div>
      </div>
   </h:body>
</html>

As we can see, the template doesn't look much different from a regular Facelets file.

Adding a Facelets template to our project

We can add a Facelets template to our project simply by clicking on File | New File, then selecting the JavaServer Faces category and the Facelets Template file type. Notice that the template uses the http://java.sun.com/jsf/facelets namespace. This namespace allows us to use the <ui:insert> tag; the contents of this tag will be replaced by the content in a corresponding <ui:define> tag in template clients.

Using the template

To use our template, we simply need to create a Facelets template client, which can be done by clicking on File | New File, selecting the JavaServer Faces category and the Facelets Template Client file type. After clicking on Next >, we need to enter a file name (or accept the default) and select the template that we will use for our template client. After clicking on Finish, our template client is created.
<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
   "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
   <body>
      <ui:composition template="./template.xhtml">
         <ui:define name="top">
            top
         </ui:define>
         <ui:define name="left">
            left
         </ui:define>
         <ui:define name="content">
            content
         </ui:define>
      </ui:composition>
   </body>
</html>

As we can see, the template client also uses the http://java.sun.com/jsf/facelets namespace. In a template client, the <ui:composition> tag must be the parent tag of any other tag belonging to this namespace. Any markup outside this tag will not be rendered; the template markup will be rendered instead. The <ui:define> tag is used to insert markup into a corresponding <ui:insert> tag in the template. The value of the name attribute in <ui:define> must match that of the corresponding <ui:insert> tag in the template. After deploying our application, we can see templating in action by pointing the browser to our template client URL. Notice that NetBeans generated a template that allows us to create a fairly elegant page with very little effort on our part. Of course, we should replace the markup in the <ui:define> tags to suit our needs. Here is a modified version of our template client, adding markup to be rendered in the corresponding places in the template:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
   "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://java.sun.com/jsf/html"
      xmlns:ui="http://java.sun.com/jsf/facelets">
   <body>
      <ui:composition template="./template.xhtml">
         <ui:define name="top">
            <h2>Welcome to our Site</h2>
         </ui:define>
         <ui:define name="left">
            <h3>Links</h3>
            <ul>
               <li>
                  <h:outputLink value="http://www.packtpub.com">
                     <h:outputText value="Packt Publishing"/>
                  </h:outputLink>
               </li>
               <li>
                  <h:outputLink value="http://www.ensode.net">
                     <h:outputText value="Ensode.net"/>
                  </h:outputLink>
               </li>
               <li>
                  <h:outputLink value="http://www.ensode.com">
                     <h:outputText value="Ensode Technology, LLC"/>
                  </h:outputLink>
               </li>
               <li>
                  <h:outputLink value="http://www.netbeans.org">
                     <h:outputText value="NetBeans.org"/>
                  </h:outputLink>
               </li>
               <li>
                  <h:outputLink value="http://www.glassfish.org">
                     <h:outputText value="GlassFish.org"/>
                  </h:outputLink>
               </li>
               <li>
                  <h:outputLink value="http://www.oracle.com/technetwork/java/javaee/overview/index.html">
                     <h:outputText value="Java EE 6"/>
                  </h:outputLink>
               </li>
               <li>
                  <h:outputLink value="http://www.oracle.com/technetwork/java/index.html">
                     <h:outputText value="Java"/>
                  </h:outputLink>
               </li>
            </ul>
         </ui:define>
         <ui:define name="content">
            <p>
               In this main area we would put our main text, images, forms,
               etc. In this example we will simply use the typical filler
               text that web designers love to use.
            </p>
            <p>
               Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nunc
               venenatis, diam nec tempor dapibus, lacus erat vehicula mauris,
               id lacinia nisi arcu vitae purus. Nam vestibulum nisi non lacus
               luctus vel ornare nibh pharetra. Aenean non lorem lectus, eu
               tempus lectus. Cras mattis nibh a mi pharetra ultricies. In
               consectetur, tellus sit amet pretium facilisis, enim ipsum
               consectetur magna, a mattis ligula massa vel mi. Maecenas id
               arcu a erat pellentesque vestibulum at vitae nulla. Nullam
               eleifend sodales tincidunt. Donec viverra libero non erat porta
               sit amet convallis enim commodo. Cras eu libero elit, ac
               aliquam ligula. Quisque a elit nec ligula dapibus porta sit
               amet a nulla. Nulla vitae molestie ligula.
               Aliquam interdum, velit at tincidunt ultrices, sapien mauris
               sodales mi, vel rutrum turpis neque id ligula. Donec dictum
               condimentum arcu ut convallis. Maecenas blandit, ante eget
               tempor sollicitudin, ligula eros venenatis justo, sed
               ullamcorper dui leo id nunc. Suspendisse potenti. Ut vel mauris
               sem. Duis lacinia eros laoreet diam cursus nec hendrerit tellus
               pellentesque.
            </p>
         </ui:define>
      </ui:composition>
   </body>
</html>

After making the above changes, our template client now renders as follows:

As we can see, creating Facelets templates and template clients with NetBeans is a breeze.

Using the Fluent NHibernate Persistence Tester and the Ghostbusters Test

Packt
06 Oct 2010
3 min read
NHibernate 3.0 Cookbook

Get solutions to common NHibernate problems to develop high-quality performance-critical data access applications
Master the full range of NHibernate features
Reduce hours of application development time and get better application architecture and performance
Create, maintain, and update your database structure automatically with the help of NHibernate
Written and tested for NHibernate 3.0 with input from the development team, distilled into easily accessible concepts and examples
Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

The reader would benefit from reading the previous article on Testing Using NHibernate Profiler and SQLite.

Using the Fluent NHibernate Persistence Tester

Mappings are a critical part of any NHibernate application. In this recipe, I'll show you how to test those mappings using Fluent NHibernate's Persistence tester.

Getting ready

Complete the Fast testing with SQLite in-memory database recipe mentioned in the previous article.

How to do it...

1. Add a reference to FluentNHibernate.
2. In PersistenceTests.cs, add the following using statement:

using FluentNHibernate.Testing;

3. Add the following three tests to the PersistenceTests fixture:

[Test]
public void Product_persistence_test()
{
   new PersistenceSpecification<Product>(Session)
      .CheckProperty(p => p.Name, "Product Name")
      .CheckProperty(p => p.Description, "Product Description")
      .CheckProperty(p => p.UnitPrice, 300.85M)
      .VerifyTheMappings();
}

[Test]
public void ActorRole_persistence_test()
{
   new PersistenceSpecification<ActorRole>(Session)
      .CheckProperty(p => p.Actor, "Actor Name")
      .CheckProperty(p => p.Role, "Role")
      .VerifyTheMappings();
}

[Test]
public void Movie_persistence_test()
{
   new PersistenceSpecification<Movie>(Session)
      .CheckProperty(p => p.Name, "Movie Name")
      .CheckProperty(p => p.Description, "Movie Description")
      .CheckProperty(p => p.UnitPrice, 25M)
      .CheckProperty(p => p.Director, "Director Name")
      .CheckList(p => p.Actors, new List<ActorRole>()
      {
         new ActorRole() { Actor = "Actor Name", Role = "Role" }
      })
      .VerifyTheMappings();
}

4. Run these tests with NUnit.

How it works...

The Persistence tester in Fluent NHibernate can be used with any mapping method. It performs the following four steps:

1. Create a new instance of the entity (Product, ActorRole, Movie) using the values provided.
2. Save the entity to the database.
3. Get the entity from the database.
4. Verify that the fetched instance matches the original.

At a minimum, each entity type should have a simple persistence test, such as the one shown previously. More information about the Fluent NHibernate Persistence tester can be found on their wiki at http://wiki.fluentnhibernate.org/Persistence_specification_testing

See also

Testing with the SQLite in-memory database
Using the Ghostbusters test

Build your own Application to access Twitter using Java and NetBeans: Part 3

Packt
31 Mar 2010
7 min read
This is the third part of the Twitter Java client tutorial article series! In Build your own Application to access Twitter using Java and NetBeans: Part 2 we:

Created a twitterLogin dialog to take care of the login process
Added functionality to show your 20 most recent tweets right after logging in
Added the functionality to update your Twitter status

Showing your Twitter friends' timeline

1. Open your NetBeans IDE along with your SwingAndTweet project, and make sure you're in the Design View.
2. Select the Tabbed Pane component from the Palette panel and drag it into the SwingAndTweetUI JFrame component. A new JTabbedPane1 container will appear below the JScrollPane1 control in the Inspector panel.
3. Now drag the JScrollPane1 control into the JTabbedPane1 container. The jScrollPane1 control will merge with the jTabbedPane1 and a tab will appear.
4. Double-click on the tab, replace its default name – tab1 – with Home, and press Enter.
5. Resize the jTabbedPane1 control so it takes all the available space in the main window.
6. Now drag a Scroll Pane container from the Palette panel and drop it into the white area of the jTabbedPane1 control. A new tab will appear, containing the new jScrollPane2 object you've just dropped in.
7. Now drag a Panel container from the Palette panel and drop it into the white area of the jTabbedPane1 control.
8. A JPanel1 container will appear inside the jScrollPane2 container, as shown in the next screenshot.
9. Change the name of the new tab to Friends and then click on the Source tab to change to the Source view. Once your app code shows up, locate the btnLoginActionPerformed method and type the following code at the end of this method, right below the jTextArea1.updateUI() line:

//code for the Friends timeline
try {
   java.util.List<Status> statusList = twitter.getFriendsTimeline();
   jPanel1.setLayout(new GridLayout(statusList.size(), 1));
   for (int i = 0; i < statusList.size(); i++) {
      statusText = new JLabel(String.valueOf(statusList.get(i).getText()));
      statusUser = new JLabel(statusList.get(i).getUser().getName());
      JPanel individualStatus = new JPanel(new GridLayout(2, 1));
      individualStatus.add(statusUser);
      individualStatus.add(statusText);
      jPanel1.add(individualStatus);
   }
} catch (TwitterException e) {
   JOptionPane.showMessageDialog(null, "A Twitter error occurred!");
}
jPanel1.updateUI();

The next screenshot shows how the code in your btnLoginActionPerformed method should look after adding this code:

One important thing you should notice is that there will be 6 error icons, due to the fact that we still need to declare some variables and write some import statements.

10. Scroll up the code window until you locate the import twitter4j.*; and import javax.swing.JOptionPane; lines, and add the following lines right after them:

import java.awt.GridLayout;
import javax.swing.JLabel;
import javax.swing.JPanel;

11. Now scroll down the code until you locate the Twitter twitter; line you added in Swinging and Tweeting with Java and NetBeans: Part 2 of this tutorial series, and add the following lines:

JLabel statusText;
JLabel statusUser;

12. If you go back to the buttonUpdateStatusActionPerformed method, you'll notice the errors have disappeared. Now everything is ready for you to test the new functionality in your Twitter client! Press F6 to run your SwingAndTweet application and log in with your Twitter credentials.
The main window will show your last 20 tweets, and if you click on the Friends tab, you will see the last 20 tweets of the people you're following, along with your own tweets:

Close your SwingAndTweet application to return to NetBeans. Let's examine what we did in the previous exercise. In steps 2-5 you added a JTabbedPane container and created a Home tab where the JScrollPane1 and JTextArea1 controls show your latest tweets, and then in steps 6-8 you added the JPanel1 container inside the JScrollPane2 container. In step 9 you changed the name of the new tab to Friends and then added some code to show your friends' latest tweets. As in previous exercises, we need to add the code inside a try-catch block, because we are going to call the Twitter4J API to get the last 20 tweets in your friends' timeline. The first line inside the try block is:

java.util.List<Status> statusList = twitter.getFriendsTimeline();

This line gets the 20 most recent tweets from your friends' timeline and assigns them to the statusList variable. The next line,

jPanel1.setLayout(new GridLayout(statusList.size(), 1));

sets your jPanel1 container to use a layout manager called GridLayout, so the components inside jPanel1 can be arranged into rows and columns. The GridLayout constructor requires two parameters. The first one defines the number of rows, so we use the statusList.size() function to retrieve the number of tweets obtained with the getFriendsTimeline() function in the previous line of code. The second parameter defines the number of columns, and in this case we only need 1 column. The next line,

for (int i = 0; i < statusList.size(); i++) {

starts a for loop that iterates through all the tweets obtained from your friends' timeline. The next 6 lines are executed inside the for loop. The next line in the execution path is

statusText = new JLabel(String.valueOf(statusList.get(i).getText()));

This line assigns the text of an individual tweet to a JLabel control called statusText. You can omit the String.valueOf function in this line because getText() already returns a string value – I used it because at first I was having trouble getting NetBeans to compile this line. I still haven't found out why, but as soon as I have an answer, I'll let you know. As you can see, the statusText JLabel control was created programmatically; this means we didn't use the NetBeans GUI interface. The next line,

statusUser = new JLabel(statusList.get(i).getUser().getName());

creates a JLabel component called statusUser, gets the name of the user that wrote the tweet through the statusList.get(i).getUser().getName() method, and assigns this value to the statusUser component. The next line,

JPanel individualStatus = new JPanel(new GridLayout(2, 1));

creates a JPanel container named individualStatus to contain the two JLabels we created in the last two lines of code. This panel has a GridLayout with 2 rows and one column. The first row will contain the name of the user that wrote the tweet, and the second row will contain the text of that particular tweet. The next two lines,

individualStatus.add(statusUser);
individualStatus.add(statusText);

add the name of the user (statusUser) and the text of the individual tweet (statusText) to the individualStatus container, and the next line,

jPanel1.add(individualStatus);

adds the individualStatus JPanel component – which contains the username and text of one individual tweet – to the jPanel1 container. This is the last line of code inside the for loop. The catch block shows an error message in case an error occurs when executing the getFriendsTimeline() function, and the
jPanel1.updateUI(); line updates the jPanel1 container so it shows the most recent information added to it. Now you can see your friends' latest tweets along with your own tweets, but we need to improve the way tweets are displayed, don't you think so?

Improving the way your friends' tweets are displayed

For starters, let's change some font attributes to show the user name in bold style and the text of the tweet in plain style. Then we'll add a black border to separate each individual tweet.

1. Add the following line below the other import statements in your code:

import java.awt.Font;

2. Scroll down until you locate the btnLoginActionPerformed method and add the following two lines below the statusUser = new JLabel(statusList.get(i).getUser().getName()) line:

Font newLabelFont = new Font(statusUser.getFont().getName(), Font.PLAIN,
      statusUser.getFont().getSize());
statusText.setFont(newLabelFont);

The following screenshot shows the btnLoginActionPerformed method after adding those two lines:

3. Press F6 to run your SwingAndTweet application. Now you will be able to differentiate the user name from the text of your friends' tweets:

And now let's add a black border to each individual tweet.

4. Scroll up the code until you locate the import declarations and add the following lines below the import statement you added in step 1 of this exercise:

import javax.swing.BorderFactory;
import java.awt.Color;

5. Scroll down to the btnLoginActionPerformed method and add the following line right after the individualStatus.add(statusText) line:

individualStatus.setBorder(BorderFactory.createLineBorder(Color.black));

The next screenshot shows the appearance of your friends' timeline tab with a black border separating each individual tweet.
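If you find yourself repeating this label-and-panel setup, the rendering logic can be factored into a small helper method. This is a sketch of one possible refactoring; createTweetPanel() is our own name, not part of the original tutorial code:

// builds one bordered panel for a single tweet, combining the font and
// border changes from the steps above
private JPanel createTweetPanel(String userName, String text) {
   JLabel userLabel = new JLabel(userName); // keeps the default bold style
   JLabel textLabel = new JLabel(text);
   // plain style for the tweet text, as in step 2 above
   textLabel.setFont(new Font(textLabel.getFont().getName(), Font.PLAIN,
         textLabel.getFont().getSize()));
   JPanel panel = new JPanel(new GridLayout(2, 1));
   panel.add(userLabel);
   panel.add(textLabel);
   panel.setBorder(BorderFactory.createLineBorder(Color.black));
   return panel;
}

Inside the for loop, each tweet would then be added with a single call, for example: jPanel1.add(createTweetPanel(statusList.get(i).getUser().getName(), statusList.get(i).getText()));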

Building a Flex Type-Ahead Text Input

Packt
15 Mar 2010
7 min read
Here is an example of how google.com implements the type-ahead list using DHTML:

As you can see, once 'type-ahead' is typed into the text field, the user is given a selection of possible search phrases that Google is already aware of. My intention with this article is to build a type-ahead list in Flex. To start, let's narrow down the scope of the application and make it easy to expand on. We'll create an application that is used primarily for searching for fruits. Our basic Fruit Finder application will consist of a form with a TextInput field. The TextInput field will allow the user to type in a fruit name and will automatically suggest a fruit if one is partially found in our list of fruits.

1. Building a Basic Form

To start, here is what our form looks like:

The XML which creates this user interface is quite simple:

<?xml version="1.0" encoding="utf-8"?>
<mx:Application layout="absolute">
   <mx:Panel title="Fruit Finder">
      <mx:Form>
         <mx:FormHeading label="Fruit Finder"/>
         <mx:FormItem label="Fruit Name">
            <mx:TextInput id="fruit"/>
         </mx:FormItem>
         <mx:FormItem>
            <mx:Button label="Search"/>
         </mx:FormItem>
      </mx:Form>
   </mx:Panel>
</mx:Application>

You'll notice the normal XML version declaration, the Application tag, a Panel tag, and finally the Form tag. Nothing too complicated so far. If you are unfamiliar with the basics of Flex or forms in Flex, you should take this opportunity to visit Adobe's website to explore them. This XML code gives us 90% of our GUI. In the coming steps we will have to define the elements which will make up the fruit list that will appear as a user is typing. Next, we need to define our list of fruits.

2. Adding Data to Our Type-Ahead List

Now that we have the beginnings of our GUI, let's start building our fruit list. Thinking ahead a bit, I know that we will have to display a list of fruits to the user. The simplest Flex control to use for this job is the List control. We will be dynamically adding the List to the application's display list via ActionScript, but for now we just need to define the data which will be displayed in the list. We will start by adding a Script tag and adding an ArrayCollection to it. You will have to use the import statement to make the ArrayCollection class available to you. Our ArrayCollection constructor is passed an array of fruit names. Here is what the code looks like:

<mx:Script>
<![CDATA[
   import mx.collections.ArrayCollection;

   public var fruitList:ArrayCollection = new ArrayCollection(
      ['apple', 'orange', 'banana', 'kiwi', 'avocado',
       'tomato', 'squash', 'cucumber']);
]]>
</mx:Script>

Defining the list of items in this way is not commonly done. For real-world use, getting this list of items from an XML source is more likely (especially in web applications), but this will work for our demonstration. Now that our fruit list is defined, we just need to connect it to a type-ahead list, which we will create in the next step.

Links:
http://livedocs.adobe.com/flex/3/html/help.html?content=databinding_4.html
http://livedocs.adobe.com/flex/3/langref/mx/collections/ArrayCollection.html

3. Triggering the Appearance of Our Type-Ahead List

It is common in modern web applications that the type-ahead list appears automatically as the user types. We will add this functionality to our application by using the KeyUp event. Simply put, when the user begins typing into our TextInput field we will do the following:

Determine if the type-ahead list is already created. For the first key press, there will be no type-ahead list.
In this case we need to create the list, set its data provider to fruitList (step 2), and add it to the UI. We will also need to position the type-ahead list beneath the TextInput field so that the user is properly cued as to what is happening.

To start our implementation of the type-ahead TextInput, we use the KeyUp event. We change the FormItem tag surrounding the TextInput field to look like this:

<mx:FormItem label="Fruit Name" keyUp="filterFruits(event)">

We then define a filterFruits function like so:

public function filterFruits(event:KeyboardEvent):void
{
   // if the type-ahead list is not present, create it
   if (typeAheadList == null) {
      // create the list and assign the data provider
      typeAheadList = new List();
      typeAheadList.dataProvider = fruitList;
      // add the list to the screen
      this.addChild(typeAheadList);
   }
}

In the above code we programmatically create a List control and immediately assign the data provider to it. Lastly, we add the child to the application. Our function does everything that we need it to do for a type-ahead TextInput, with the exception of positioning the type-ahead list in the correct place. Here is what our app currently looks like:

We are making progress, but without the correct positioning, our type-ahead list creates a bad user experience. To move this list to the correct location we need to use the localToGlobal method to translate coordinate systems. This requires a short explanation. Flex has multiple coordinate systems on the Flash stage that you can make use of to position your controls and components properly. The first is called the global coordinate system. This system starts at the upper left-hand corner of the Flash stage and extends down and out. The second is called the local coordinate system, which starts at the upper left-hand corner of a component. There is also a content coordinate system, which encompasses a component's content. For our purposes we only need to focus on the local and global systems.

Link:
http://livedocs.adobe.com/flex/3/html/help.html?content=containers_intro_5.html

Our goal here is to place our list directly beneath the fruit TextInput field. To accomplish this, we must first grab the coordinates of the fruit TextInput field. Here is the code for retrieving them:

var p1:Point = new Point(fruit.x, fruit.y);

We use the Point type, which receives the x and y coordinates of the fruit control. p1 now holds the points in the local coordinate system. You may ask, "what is it local to?". In this case it is local to its parent container, which is the FormItem. In order to convert these points to the global system we need to use the localToGlobal method:

var p2:Point = fruit_form_item.localToGlobal(p1);

p2 now contains the converted coordinates. Note that we added the id fruit_form_item to the FormItem tag, which is the parent of our fruit TextInput. From here we can now place the fruit list in the correct place in our application:

typeAheadList.x = p2.x;
typeAheadList.y = p2.y + fruit.height;
// set the width
typeAheadList.width = fruit.width;

Notice above that we added fruit.height to the y value of the typeAheadList. This is necessary so as not to block the view of the TextInput field. We are moving it down by n pixels, where n is the height of the TextInput field. We also set the x coordinate of our list so that it is in the correct place. Here is what the final result for this step looks like:

Getting Your APEX Components Logic Right

Packt
09 Oct 2009
10 min read
Pre-generation editing

After reading this article, we will understand our project a lot better. Also, to a certain level, we will be able to control the way our application will be generated. Generation is often performed more than once, as you refine the definitions and settings between iterations. In this article we will learn many ways to edit the project in order to generate optimally. But we must understand that we will not cover all the exceptions in the generation process. If we want to do a real Forms to APEX conversion project, it will be very wise to carefully read the help texts in the Migration Documentation provided by Oracle in every APEX instance—especially the appendix called Oracle Forms Generation Capabilities and Workarounds, which will help you to understand the choices that can be made in the generation process. The information in these migration help texts tells us how the different components in Oracle Forms will be converted in APEX and how to implement business logic in the APEX application. For example, when we take a look at the Block to Page Region Mappings, we learn how APEX converts certain blocks to APEX regions during conversion.

Investigating

When we take a look at our conversion project, we must understand what will be generated. In the case of generation, the most important parts are the blocks in our Forms modules. These are, quite literally, the building blocks our pages in APEX will be based upon. Of course, we have our program units, triggers, and much more; but the pages that are defined in the APEX application (which we put into production after the project is finished) will be based on blocks, reports, and menus. This is why we need to adjust them before we generate anything. This might seem like a small part of the project when we look at the count of all the components on our project page, but that doesn't make it less important. We can't adjust reports, as they are defined by the query that they are built upon, but we can alter the blocks. That's why we focus on those components first.

Data blocks

The building blocks of our APEX pages are the blocks and, of course, the reports. The blocks we can generate in our project are the ones that are based on a database block. Non-database blocks, such as those that hold menus and buttons, are not generated by default, as they would be generated as blank pages. On the block overview page, we get the basic information about the blocks in our project. The way the blocks will be generated is determined by APEX based on the contents, the number of items on the block, and, most importantly, the number of records displayed. For further details on the generation rules, refer to the Migration Guide—Appendix A: Forms Generation Capabilities and Workarounds. On the Blocks overview page in our conversion project, we notice that not all the blocks are included; in other words, they aren't checked to be included in the project. This is because they do not originate from a database block. To include or exclude a block during generation, we need to check or uncheck the specific block. Don't confuse this with the applicability of a block. We may also notice that some of the blocks are already set to complete. In our example we see that the S_CUSTOMER1 and S_CUSTOMER blocks are set to complete. If we take a look inside these components and check the annotations, they are indeed set to complete. There's also a note set for us.
As we see in the following screenshot, it states Incorporating Enhanced Query. The Enhanced Query is something that we will use later in this article. But beware of the statement that a component is Complete, as we will see that we might still want to alter the query on which the customer's block is based.

If we look at a block that is not yet set to complete in the overview page (such as the Orders block) and we look at the Application Express Page Query region in the details screen, we see that only the Original Query is present. This is the query that is in the original Forms XML file we uploaded earlier. Although we have the Original Query present in our page, we can alter it and customize the query on which a block is based; this will be done later in the article. In this way, we have better control over the way we will generate our application. Note, however, that we can't alter this particular query, as it is to be implemented as a Master-Detail Form.

Block items

Each block contains a number of items. These items define the fields in our application and are derived from our Forms XML files. In the block details pages, we can find the details of the items on the particular block as well. Here we see the most basic information about the items, namely their Type, Prompt, Column Name, and the Triggers on that particular item. We can also see the Name of the item, whether it is a Database Item, whether the item is complete or not, and whether or not it is Applicable. When a block is set to complete, it is assumed that we have all the information required about the items, as we see in the example shown here:

But there are also cases where we don't get all the information about the items we want. In our case, we might want to customize the query the block is based on, or define the items further. We will cover this later in the article. In the above screenshot, we notice that the Column Name is not known for any of the items. This is an indication that the items will not be generated properly and that we need to take a further look into the query and, maybe, some of the triggers.

When we want to alter the completeness and applicability of the items in our block, there's a useful function available in the upper-right corner of the Block Details page. In the Block Tasks section, we find a link that states: Set All Block Items Completeness and Applicability. This function is used to make bulk changes to the items in the block we are in. It can be useful to change the completeness of all items when we are not sure what more needs to be done. To set the completeness or the applicability with a bulk change on all the items, we click on the link in the Block Tasks region, and this takes us to the following screen:

In the Set Block Item & Trigger Status page, we can select the Attribute (Items, Block Triggers, or Item Triggers), the Set Tracking Attribute (Complete or Applicable), and the Set Value (Yes or No). To make changes, set the correct attribute, tracking attribute, and value, and then click on Apply Changes.

Original versus Enhanced Query

As mentioned earlier, we can encounter both Original and Enhanced Queries in the blocks of our Forms. The Original Query is taken directly from the XML file, as it is stated in the source of the block we are looking at. So where does the Enhanced Query originate from? This is one of the automatically generated parts of the Forms Conversion tool in APEX. If a block contains a POST-QUERY trigger, the Forms Conversion tool generates an Enhanced Query for us.
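To make this concrete, here is a minimal sketch of the kind of transformation involved; the table and column names are hypothetical and are not taken from the sample application:

-- Hypothetical Original Query for a customer block
SELECT customer_id, name, sales_rep_id
FROM s_customer;

-- A POST-QUERY trigger would typically run once per fetched row,
-- for example to look up the sales rep's name for a display item.

-- An Enhanced Query folds that per-row lookup into the statement
-- itself by joining the lookup table with an added WHERE clause:
SELECT c.customer_id, c.name, c.sales_rep_id,
       e.last_name AS sales_rep_name
FROM s_customer c, s_emp e
WHERE e.id = c.sales_rep_id;

As the article notes below, the generated statement still needs to be checked and possibly optimized.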
In the following screenshot, we see both the Enhanced Query and the Original Query in the S_CUSTOMER block. We can clearly notice the additional lines at the bottom of the Enhanced Query. The query in the Enhanced Query section still looks a lot like the one in the Original Query section, but it is slightly altered. The code is generated automatically by taking the code from both the Original Query and the POST-QUERY triggers on this block. Please note that the query is generated by APEX by adding a WHERE clause to the SQL query. This means that we will still need to check it and, probably, optimize it to work properly.

The following screenshot shows us the POST-QUERY trigger. Notice that it's set to both applicable and complete. This is because the code is now embedded in the Enhanced Query, and so the trigger is taken care of for our project.

Triggers

Besides items, blocks also contain triggers. These define the actions in our blocks and are, therefore, equally important. Most of the triggers are very Forms-specific, but it's nice to be the judge of that ourselves. In the Orders block, we have the Block Triggers region that contains the triggers in our orders block. The region tells us the name, applicability, and completeness of each trigger. It gives us a snippet of the code inside the trigger and tells us the level it is set to (ITEM or BLOCK).

A lot of the triggers in our project need to be implemented post-generation, which will be discussed later in this article. But as mentioned above, there is one trigger that we need in the pre-generation stage of our project: the POST-QUERY trigger. In this example, its applicability in the Orders block is set to No. This is also the reason why we have no Enhanced Query to choose from in this block. The reasons behind setting the trigger to not applicable can be many, and you can learn more about them if you read the migration help texts carefully.

We probably want to change the applicability of the trigger ourselves, because the POST-QUERY trigger contains some necessary information on how we need to define our block. If we click on the edit link (the pencil icon) for the POST-QUERY trigger, we can alter the applicability. Set the value for Applicable to Yes and click on Apply Changes. This takes us back to the Block Details screen. In the Triggers region, we can see that the applicability of the POST-QUERY trigger is now set to Yes.

Now if we scroll up to the Application Express Page Query region, we can also see that the Enhanced Query is now in place. As shown in the following screenshot, we automatically generated an extended version of the Original Query, embedding the logic of the POST-QUERY trigger. Developers among us will see that the query produced by the conversion tool in APEX isn't very optimal. We can rewrite the query in the Custom Query section, which we will describe later in this article.

We can set the values for our triggers in the same way we set the applicability and completeness of the items in our blocks. In the upper-right corner of the Block Details screen, we find the Block Tasks region. Here we find the link to the tasks for items as well as triggers. Click on Set All Block Triggers Completeness and Applicability to navigate to the screen where we can set the values. In the Attribute section, we can choose from both the block level triggers and the item level triggers. We can't adjust them all at once, so we may need to adjust them twice.
 

Parse Objects and Queries

Packt
18 Oct 2013
6 min read
(For more resources related to this topic, see here.)

In this article, we will learn how to work with Parse objects, along with writing queries to set and get data from Parse. Every application has a different and specific Application ID associated with the Client Key, which remains the same for all the applications of the same user. Parse is based on object-oriented principles: all the operations on Parse are done in the form of objects. Parse saves your data in the form of the objects you send, and helps you to fetch the data in the same format again. In this article, you will learn about objects and the operations that can be performed on Parse objects.

Parse objects

All the data in Parse is saved in the form of PFObject. When you fetch any data from Parse by firing a query, the result will be in the form of PFObject. The detailed concept of PFObject is explained in the following section.

PFObject

Data stored on Parse is in the form of objects, and it's developed around PFObject. PFObject can be defined as key-value (dictionary format) pairs of JSON data. The Parse data is schemaless, which means that you don't need to specify ahead of time what keys exist on each PFObject. The Parse backend will take care of storing your data simply as a set of whatever key-value pairs you want.

Let's say you are tracking the visited count of a user with a user ID in your application. A single PFObject could contain the following data:

visitedCount:1122, userName:"Jack Samuel", userId:1232333332

Parse accepts only strings as keys. Values can be strings, numbers, Booleans, or even arrays and dictionaries—anything that can be JSON encoded. The class name of PFObject is used to distinguish different sorts of data. Parse recommends that you write your class names NameYourClassLikeThis and your key names nameYourKeysLikeThis, just to provide readability to the code. As you have seen in the previous example, we have used visitedCount to represent the visited count key.

Operations on Parse objects

You can perform save, update, and delete operations on Parse objects. The following is a detailed explanation of the operations that can be performed on Parse objects.

Saving objects

To save your User table on the Parse Cloud with additional fields, you need to follow a coding convention similar to the NSMutableDictionary method. After updating the data, you have to call the saveInBackground method to save it on the Parse Cloud. Here is an example that explains how to save additional data on the Parse Cloud:

// PFUser is a subclass of PFObject, so we can treat the current
// user as a PFObject and set extra fields on it.
PFObject *userObject = [PFUser currentUser];
[userObject setObject:[NSNumber numberWithInt:1122] forKey:@"visitedCount"];
[userObject setObject:@"Jack Samuel" forKey:@"userName"];
[userObject setObject:@"1232333332" forKey:@"userId"];
[userObject saveInBackground];

Just after executing the preceding piece of code, your data is saved on the Parse Cloud. You can check your data in the Data Browser of your application on Parse. It should be something similar to the following:

objectId: "xWMyZ4YEGZ", visitedCount: 1122, userName: "Jack Samuel", userId: "1232333332", createdAt: "2011-06-10T18:33:42Z", updatedAt: "2011-06-10T18:33:42Z"

There are two things to note here:

You don't have to configure or set up a new class called User before running your code. Parse will automatically create the class when it first encounters it.

There are also a few fields you don't need to specify, as they are provided as a convenience: objectId is a unique identifier for each saved object.
createdAt and updatedAt represent the time that each object was created and last modified in the Parse Cloud. Each of these fields is filled in by Parse, so they don't exist on a PFObject until a save operation has completed.

You can provide additional logic after the success or failure of the callback operation by using the saveInBackgroundWithBlock or saveInBackgroundWithTarget:selector: methods provided by Parse:

[userObject saveInBackgroundWithBlock:^(BOOL succeeded, NSError *error) {
    if (succeeded)
        NSLog(@"Success");
    else
        NSLog(@"Error %@", error);
}];

Fetching objects

Fetching the saved data from the Parse Cloud is even easier than saving it. You can fetch a complete object from its objectId using PFQuery. Methods to fetch data from the cloud are asynchronous; you can implement this using either the block-based or the callback-based methods provided by Parse:

PFQuery *query = [PFQuery queryWithClassName:@"GameScore"]; // 1
[query getObjectInBackgroundWithId:@"xWMyZ4YEGZ"
                             block:^(PFObject *gameScore, NSError *error) { // 2
    // Do something with the returned PFObject in the gameScore variable.
    int score = [[gameScore objectForKey:@"score"] intValue];
    NSString *playerName = [gameScore objectForKey:@"playerName"]; // 3
    BOOL cheatMode = [[gameScore objectForKey:@"cheatMode"] boolValue];
    NSLog(@"%@", gameScore);
}];
// The InBackground methods are asynchronous, so the code written after
// this will be executed immediately. Any code that depends on the query
// result should be moved inside the completion block above.

Let's analyze each line here, as follows:

Line 1: It creates a query object pointing to the class name given in the argument.

Line 2: It calls an asynchronous method on the query object created in line 1 to download the complete object for the objectId provided as an argument. As we are using the block-based method, we can provide code inside the block, which will execute on success or failure.

Line 3: It reads data from the PFObject that we got in response to the query.

Parse provides some common values of all Parse objects as properties:

NSString *objectId = gameScore.objectId;
NSDate *updatedAt = gameScore.updatedAt;
NSDate *createdAt = gameScore.createdAt;

To refresh the current Parse object, type:

[myObject refresh];

This method can be called on any Parse object, and it is useful when you want to refresh the data of the object. Let's say you want to re-authenticate a user; you can call the refresh method on the user object to refresh it.

Saving objects offline

Parse provides functions to save your data when the user is offline. When the user is not connected to the Internet, the data will be saved locally in the objects, and as soon as the user is connected to the Internet, the data will be saved automatically on the Parse Cloud. If your application is forcefully closed before the connection is established, Parse will try again to save the object the next time the application is opened. For such operations, Parse provides the saveEventually method, so that you will not lose any data even when the user is not connected to the Internet. saveEventually calls are executed in the order in which the requests are made.
The following code demonstrates the saveEventually call:

// Create the object.
PFObject *gameScore = [PFObject objectWithClassName:@"GameScore"];
[gameScore setObject:[NSNumber numberWithInt:1337] forKey:@"score"];
[gameScore setObject:@"Sean Plott" forKey:@"playerName"];
[gameScore setObject:[NSNumber numberWithBool:NO] forKey:@"cheatMode"];
[gameScore saveEventually];

Summary

In this article, we explored Parse objects and the way to query the data available on Parse. We started by exploring Parse objects and the ways to save these objects on the cloud. Finally, we learned about the queries which will help us to fetch the saved data on Parse.

Resources for Article:

Further resources on this subject:

New iPad Features in iOS 6 [Article]
Creating a New iOS Social Project [Article]
Installing Alfresco Software Development Kit (SDK) [Article]
Enterprise JavaBeans

Packt
22 Oct 2009
10 min read
Readers familiar with previous versions of J2EE will notice that Entity Beans were not mentioned in the above paragraph. In Java EE 5, Entity Beans have been deprecated in favor of the Java Persistence API (JPA). Entity Beans are still supported for backwards compatibility; however, the preferred way of doing Object Relational Mapping with Java EE 5 is through JPA. Refer to Chapter 4 in the book Java EE 5 Development using GlassFish Application Server for a detailed discussion on JPA.

Session Beans

As we previously mentioned, session beans typically encapsulate business logic. In Java EE 5, only two artifacts need to be created in order to create a session bean: the bean itself and a business interface. These artifacts need to be decorated with the proper annotations to let the EJB container know they are session beans. Previous versions of J2EE required application developers to create several artifacts in order to create a session bean: the bean itself, a local or remote interface (or both), a local home or remote home interface (or both), and a deployment descriptor. As we shall see in this article, EJB development has been greatly simplified in Java EE 5.

Simple Session Bean

The following example illustrates a very simple session bean:

package net.ensode.glassfishbook;

import javax.ejb.Stateless;

@Stateless
public class SimpleSessionBean implements SimpleSession
{
    private String message = "If you don't see this, it didn't work!";

    public String getMessage()
    {
        return message;
    }
}

The @Stateless annotation lets the EJB container know that this class is a stateless session bean. There are two types of session beans: stateless and stateful. Before we explain the difference between these two types of session beans, we need to clarify how an instance of an EJB is provided to an EJB client application.

When EJBs (both session beans and message-driven beans) are deployed, the EJB container creates a series of instances of each EJB. This is what is typically referred to as the EJB pool. When an EJB client application obtains an instance of an EJB, one of the instances in the pool is provided to the client application.

The difference between stateful and stateless session beans is that stateful session beans maintain conversational state with the client, whereas stateless session beans do not. In simple terms, this means that when an EJB client application obtains an instance of a stateful session bean, the same instance of the EJB is provided for each method invocation; therefore, it is safe to modify any instance variables on a stateful session bean, as they will retain their value for the next method call. The EJB container may provide any instance of an EJB in the pool when an EJB client application requests an instance of a stateless session bean. As we are not guaranteed the same instance for every method call, values set to any instance variables in a stateless session bean may be "lost" (they are not really lost; the modification is in another instance of the EJB in the pool).

Other than being decorated with the @Stateless annotation, there is nothing special about this class. Notice that it implements an interface called SimpleSession. This interface is the bean's business interface. The SimpleSession interface is shown next:

package net.ensode.glassfishbook;

import javax.ejb.Remote;

@Remote
public interface SimpleSession
{
    public String getMessage();
}

The only peculiar thing about this interface is that it is decorated with the @Remote annotation.
This annotation indicates that this is a remote business interface. This means that the interface may be in a different JVM than the client application invoking it. Remote business interfaces may even be invoked across the network.

Business interfaces may also be decorated with the @Local annotation. This annotation indicates that the business interface is a local business interface. Local business interface implementations must be in the same JVM as the client application invoking their methods.

As remote business interfaces can be invoked either from the same JVM or from a different JVM than the client application, at first glance we might be tempted to make all of our business interfaces remote. Before doing so, we must be aware of the fact that the flexibility provided by remote business interfaces comes with a performance penalty, because method invocations are made under the assumption that they will be made across the network. As a matter of fact, most typical Java EE applications consist of web applications acting as client applications for EJBs; in this case, the client application and the EJB are running on the same JVM, and therefore local interfaces are used a lot more frequently than remote business interfaces.

Once we have compiled the session bean and its corresponding business interface, we need to place them in a JAR file and deploy them. Just as with WAR files, the easiest way to deploy an EJB JAR file is to copy it to [glassfish installation directory]/glassfish/domains/domain1/autodeploy.

Now that we have seen the session bean and its corresponding business interface, let's take a look at a sample client application:

package net.ensode.glassfishbook;

import javax.ejb.EJB;

public class SessionBeanClient
{
    @EJB
    private static SimpleSession simpleSession;

    private void invokeSessionBeanMethods()
    {
        System.out.println(simpleSession.getMessage());
        System.out.println("\nSimpleSession is of type: "
                + simpleSession.getClass().getName());
    }

    public static void main(String[] args)
    {
        new SessionBeanClient().invokeSessionBeanMethods();
    }
}

The above code simply declares an instance variable of type net.ensode.glassfishbook.SimpleSession, which is the business interface for our session bean. The instance variable is decorated with the @EJB annotation; this annotation lets the EJB container know that this variable is a business interface for a session bean. The EJB container then injects an implementation of the business interface for the client code to use.

As our client is a stand-alone application (as opposed to a Java EE artifact such as a WAR file), in order for it to be able to access code deployed in the server, it must be placed in a JAR file and executed through the appclient utility. This utility can be found in [glassfish installation directory]/glassfish/bin/. Assuming this path is in the PATH environment variable, and assuming we placed our client code in a JAR file called simplesessionbeanclient.jar, we would execute the above client code by typing the following command on the command line:

appclient -client simplesessionbeanclient.jar

Executing the above command results in the following console output:

If you don't see this, it didn't work!
SimpleSession is of type: net.ensode.glassfishbook._SimpleSession_Wrapper

which is the output of the SessionBeanClient class. The first line of output is simply the return value of the getMessage() method we implemented in the session bean.
The second line of output displays the fully qualified class name of the class implementing the business interface. Notice that the class name is not the fully qualified name of the session bean we wrote; instead, what is actually provided is an implementation of the business interface created behind the scenes by the EJB container.

A More Realistic Example

In the previous section, we saw a very simple, "Hello world" type of example. In this section, we will show a more realistic example. Session beans are frequently used as Data Access Objects (DAOs). Sometimes they are used as a wrapper for JDBC calls; other times they are used to wrap calls to obtain or modify JPA entities. In this section, we will take the latter approach.

The following example illustrates how to implement the DAO design pattern in a session bean. Before looking at the bean implementation, let's look at the business interface corresponding to it:

package net.ensode.glassfishbook;

import javax.ejb.Remote;

@Remote
public interface CustomerDao
{
    public void saveCustomer(Customer customer);
    public Customer getCustomer(Long customerId);
    public void deleteCustomer(Customer customer);
}

As we can see, the above is a remote interface declaring three methods: the saveCustomer() method saves customer data to the database, the getCustomer() method obtains data for a customer from the database, and the deleteCustomer() method deletes customer data from the database.

Let's now take a look at the session bean implementing the above business interface. As we are about to see, there are some differences between the way JPA code is implemented in a session bean versus a plain old Java object.

package net.ensode.glassfishbook;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.sql.DataSource;

@Stateless
public class CustomerDaoBean implements CustomerDao
{
    @PersistenceContext
    private EntityManager entityManager;

    @Resource(name = "jdbc/__CustomerDBPool")
    private DataSource dataSource;

    public void saveCustomer(Customer customer)
    {
        // New customers have no ID yet; existing ones must be merged.
        if (customer.getCustomerId() == null)
        {
            saveNewCustomer(customer);
        }
        else
        {
            updateCustomer(customer);
        }
    }

    private void saveNewCustomer(Customer customer)
    {
        customer.setCustomerId(getNewCustomerId());
        entityManager.persist(customer);
    }

    private void updateCustomer(Customer customer)
    {
        entityManager.merge(customer);
    }

    public Customer getCustomer(Long customerId)
    {
        Customer customer;
        customer = entityManager.find(Customer.class, customerId);
        return customer;
    }

    public void deleteCustomer(Customer customer)
    {
        entityManager.remove(customer);
    }

    private Long getNewCustomerId()
    {
        Connection connection;
        Long newCustomerId = null;

        try
        {
            connection = dataSource.getConnection();
            PreparedStatement preparedStatement = connection
                    .prepareStatement(
                            "select max(customer_id)+1 as new_customer_id "
                            + "from customers");
            ResultSet resultSet = preparedStatement.executeQuery();

            if (resultSet != null && resultSet.next())
            {
                newCustomerId = resultSet.getLong("new_customer_id");
            }

            connection.close();
        }
        catch (SQLException e)
        {
            e.printStackTrace();
        }

        return newCustomerId;
    }
}

The first difference we should notice is that an instance of javax.persistence.EntityManager is directly injected into the session bean.
In previous JPA examples, we had to inject an instance of javax.persistence.EntityManagerFactory, and then use the injected EntityManagerFactory instance to obtain an instance of EntityManager. The reason we had to do this was that our previous examples were not thread safe, meaning that potentially the same code could be executed concurrently by more than one user. As EntityManager is not designed to be used concurrently by more than one thread, we used an EntityManagerFactory instance to provide each thread with its own instance of EntityManager. Since the EJB container assigns a session bean to a single client at a time, session beans are inherently thread safe; therefore, we can inject an instance of EntityManager directly into a session bean.

The next difference between this session bean and previous JPA examples is that previously, JPA calls were wrapped between calls to UserTransaction.begin() and UserTransaction.commit(). We had to do this because JPA calls are required to be wrapped in a transaction; if they are not in a transaction, most JPA calls will throw a TransactionRequiredException. The reason we don't have to explicitly wrap JPA calls in a transaction as in previous examples is that session bean methods are implicitly transactional; there is nothing we need to do to make them that way. This default behavior is what is known as Container-Managed Transactions. Container-Managed Transactions are discussed in detail later in this article.

When a JPA entity is retrieved in one transaction and updated in a different transaction, the EntityManager.merge() method needs to be invoked to update the data in the database. Invoking EntityManager.persist() in this case will result in a "Cannot persist detached object" exception.
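To see this from a client's point of view, here is a minimal sketch of a stand-alone client for the CustomerDao bean; it would be packaged and run through appclient exactly like the SimpleSession client above. The setFirstName() accessor is an assumption for illustration, as the fields of the Customer entity are not shown here:

package net.ensode.glassfishbook;

import javax.ejb.EJB;

public class CustomerDaoClient
{
    @EJB
    private static CustomerDao customerDao;

    public static void main(String[] args)
    {
        // Each DAO call runs in its own container-managed transaction,
        // so the entity returned here is detached by the time we touch it.
        Customer customer = customerDao.getCustomer(1L);

        // Modify the detached entity (setFirstName() is hypothetical).
        customer.setFirstName("Updated");

        // saveCustomer() sees a non-null customerId and therefore calls
        // entityManager.merge(), which is what detached entities require;
        // calling persist() at this point would fail.
        customerDao.saveCustomer(customer);
    }
}

Because the retrieval and the update happen in two separate transactions, this client exercises exactly the detached-entity case described above, which is why saveCustomer() routes existing customers to merge() rather than persist().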

Microsoft Enterprise Library: Authorization and Security Cache

Packt
09 Dec 2010
6 min read
(For more resources on Microsoft Enterprise Library, see here.)

Understanding Authorization Providers

An Authorization Provider is simply a class that provides authorization logic; technically, it implements either the IAuthorizationProvider interface or an abstract class named AuthorizationProvider, and provides the authorization logic in the Authorize method. As mentioned previously, the Security Application Block provides two Authorization Providers out of the box, AuthorizationRuleProvider and AzManAuthorizationProvider, both implementing the abstract class AuthorizationProvider available in the Microsoft.Practices.EnterpriseLibrary.Security namespace. This abstract class in turn implements the IAuthorizationProvider interface, which defines the basic functionality of an Authorization Provider; it exposes a single method named Authorize, which accepts an instance of the IPrincipal object and the name of the rule to evaluate. Custom providers can be implemented either by implementing the IAuthorizationProvider interface or the abstract class AuthorizationProvider.

An IPrincipal instance (GenericPrincipal, WindowsPrincipal, PassportPrincipal, and so on) represents the security context of the user on whose behalf the code is running; it also includes the user's identity, represented as an instance of IIdentity (GenericIdentity, FormsIdentity, WindowsIdentity, PassportIdentity, and so on). The following diagram shows the members and inheritance hierarchy of the respective class and interface:

Authorization Rule Provider

The AuthorizationRuleProvider class is an implementation that evaluates Boolean expressions to determine whether objects are authorized; these expressions, or rules, are stored in the configuration file. We can create authorization rules using the Rule Expression Editor, part of the Enterprise Library configuration tool, and validate them using the Authorize method of the Authorization Provider. This authorization provider is part of the Microsoft.Practices.EnterpriseLibrary.Security namespace.

Authorizing using Authorization Rule Provider

The Authorization Rule Provider stores authorization rules in the configuration, and this is one of the simplest ways to perform authorization. Basically, we need to configure the application to use the Authorization Rule Provider and provide the rules on which the authorization will be based.

Let us add the Authorization Rule Provider as our Authorization Provider; click on the plus symbol on the right side of Authorization Providers and navigate to the Add Authorization Rule Provider menu item. The following screenshot shows the configuration options of the Add Authorization Rule Provider menu item:

The following screenshot shows the default configuration of the newly added Authorization Provider; in this case, it is the Authorization Rule Provider:

Now we have the Authorization Rule Provider added to the configuration, but we still need to add the authorization rules.
Imagine that we have a business scenario where:

We have to allow only users belonging to the administrator's role to add or delete products.

We should allow all authenticated customers to view the products.

This scenario is quite common, where certain operations can be performed only by specific roles—basically, role-based authorization. To fulfill this requirement, we will have to add three different rules for the add, delete, and view operations. Right-click on the Authorization Rule Provider and click on the Add Authorization Rule menu item as shown in the following screenshot. The following screenshot shows the newly added Authorization Rule:

Let us update the name of the rule to "Product.Add" to represent the operation for which the rule is configured. We will provide the rule using the Rule Expression Editor; click on the button in the right corner to open the Rule Expression Editor. The requirement is to allow only the administrator role to perform this action. The following actions need to be performed to configure the rule:

Click on the Role button to add the role expression: R.

Enter the role name next to the role expression: R:Admin.

Select the checkbox Is Authenticated to allow only authenticated users.

The following screenshot displays the Rule Expression Editor dialog box with the expression configured to R:Admin. The following screenshot shows the Rule Expression property set to R:Admin.

Now let us add the rule for the product delete operation. This rule is configured in a similar fashion. The following screenshot displays the added authorization rule named Product.Delete with the configured Rule Expression:

All right, we now have to allow all authenticated customers to view the products. Basically, we want the authorization to pass if the user is in either the Admin or the Customer role; only then will the user be able to view products. We will add another rule called Product.View and configure the rule expression using the Rule Expression Editor as given next. While configuring the rule, use the OR operator to specify that either Admin or Customer can perform this operation. The following screenshot displays the added authorization rule named Product.View with the configured Rule Expression:

Now that we have the configuration ready, let us get our hands dirty with some code. Before authorizing, we need to authenticate the user; based on the authentication requirement, we could be using either an out-of-the-box authentication mechanism or custom authentication. Assuming that we are using the current Windows identity, the following steps will allow us to authorize specific operations by passing the Windows principal while invoking the Authorize method of the Authorization Provider.

The first step is to get the IIdentity and IPrincipal based on the authentication mechanism. We are using the current Windows identity for this sample:

WindowsIdentity windowsIdentity = WindowsIdentity.GetCurrent();
WindowsPrincipal windowsPrincipal = new WindowsPrincipal(windowsIdentity);

Create an instance of the configured Authorization Provider using the AuthorizationFactory.GetAuthorizationProvider method; in our case we will get an instance of the Authorization Rule Provider.
IAuthorizationProvider authzProvider =
    AuthorizationFactory.GetAuthorizationProvider("Authorization Rule Provider");

Now use the instance of the Authorization Provider to authorize the operation by passing the IPrincipal instance and the rule name:

bool result = authzProvider.Authorize(windowsPrincipal, "Product.Add");

AuthorizationFactory.GetAuthorizationProvider also has an overloaded alternative without any parameter, which gets the default authorization provider configured in the configuration.

AzMan Authorization Provider

The AzManAuthorizationProvider class provides us with the ability to define the individual operations of an application, which can then be grouped together to form a task. Each individual operation or task can then be assigned roles allowed to perform those operations or tasks. The best part of Authorization Manager is that it provides an administration tool as a Microsoft Management Console (MMC) snap-in to manage users, roles, operations, and tasks. Policy administrators can configure an Authorization Manager policy store in Active Directory, an Active Directory Application Mode (ADAM) store, or an XML file. This authorization provider is part of the Microsoft.Practices.EnterpriseLibrary.Security namespace.
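To wrap up the Authorization Rule Provider walkthrough, the three steps above can be combined into a small console program. This is a minimal sketch, assuming the Enterprise Library 5.0 security assemblies are referenced and the three Product rules configured earlier exist in the application's configuration file:

using System;
using System.Security.Principal;
using Microsoft.Practices.EnterpriseLibrary.Security;

class AuthorizationDemo
{
    static void Main()
    {
        // Step 1: build the principal from the current Windows identity.
        WindowsIdentity windowsIdentity = WindowsIdentity.GetCurrent();
        WindowsPrincipal windowsPrincipal =
            new WindowsPrincipal(windowsIdentity);

        // Step 2: resolve the provider configured as
        // "Authorization Rule Provider".
        IAuthorizationProvider authzProvider =
            AuthorizationFactory.GetAuthorizationProvider(
                "Authorization Rule Provider");

        // Step 3: evaluate each configured rule for this principal.
        foreach (string rule in
            new[] { "Product.Add", "Product.Delete", "Product.View" })
        {
            bool allowed = authzProvider.Authorize(windowsPrincipal, rule);
            Console.WriteLine("{0}: {1}", rule, allowed);
        }
    }
}

Note that with Windows authentication, the Admin and Customer role names used in the rules would normally map to Windows group names; they are carried over here from the rules configured earlier purely for illustration.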