
How-To Tutorials - Application Development

357 Articles

Designing your very own ASP.NET MVC Application

Packt
28 Oct 2009
8 min read
When you download and install the ASP.NET MVC framework SDK, a new project template is installed in Visual Studio—the ASP.NET MVC project template. This article by Maarten Balliauw describes how to use this template. We will briefly touch on all aspects of ASP.NET MVC by creating a new ASP.NET MVC web application based on this Visual Studio template. Besides the view, controller, and model, this article also illustrates several new concepts: ViewData (a means of transferring data between controller and view), routing (the link between a web browser URL and a specific action method inside a controller), and unit testing of a controller.

Creating a new ASP.NET MVC web application project

Before we start creating an ASP.NET MVC web application, make sure that you have installed the ASP.NET MVC framework SDK from http://www.asp.net/mvc. After installation, open Visual Studio 2008 and select the menu option File | New | Project. The New Project dialog will be displayed. Make sure that you select .NET Framework 3.5 as the target framework. You will notice a new project template called ASP.NET MVC Web Application; this project template creates the default project structure for an ASP.NET MVC application.

After clicking on OK, Visual Studio will ask you whether you want to create a test project. This dialog offers a choice between several unit testing frameworks that can be used for testing your ASP.NET MVC application. You can decide for yourself whether you want to create a unit testing project right now—you can also add a testing project later on. Letting the ASP.NET MVC project template create a test project now is convenient because it creates all of the project references and contains an example unit test, although this is not required. For this example, continue by adding the default unit test project.

What's inside the box?

After the ASP.NET MVC project has been created, you will notice a default folder structure: a Controllers folder, a Models folder, a Views folder, as well as a Content folder and a Scripts folder. ASP.NET MVC comes with the convention that these folders (and namespaces) are used for locating the different building blocks of an ASP.NET MVC application. The Controllers folder contains all of the controller classes; the Models folder contains the model classes; and the Views folder contains the view pages. Content will typically contain website content such as images and stylesheet files, and Scripts will contain all of the JavaScript files used by the web application. By default, the Scripts folder contains some JavaScript files required for the use of Microsoft AJAX or jQuery.

Locating the different building blocks is done in the request life cycle. One of the first steps in the ASP.NET MVC request life cycle is mapping the requested URL to the correct controller action method. This process is referred to as routing. A default route is initialized in the Global.asax file and describes to the ASP.NET MVC framework how to handle a request.
Double-clicking on the Global.asax file in the MvcApplication1 project will display the following code:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;
    using System.Web.Routing;

    namespace MvcApplication1
    {
        public class GlobalApplication : System.Web.HttpApplication
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

                routes.MapRoute(
                    "Default",                                              // Route name
                    "{controller}/{action}/{id}",                           // URL with parameters
                    new { controller = "Home", action = "Index", id = "" }  // Parameter defaults
                );
            }

            protected void Application_Start()
            {
                RegisterRoutes(RouteTable.Routes);
            }
        }
    }

In the Application_Start() event handler, which is fired whenever the application is compiled or the web server is restarted, a route table is registered. The default route is named Default and responds to a URL in the form of http://www.example.com/{controller}/{action}/{id}. The variables between { and } are populated with actual values from the request URL, or with the default values if no override is present in the URL. According to the default routing parameters, this default route maps to the Home controller and the Index action method when no controller or action is specified. By default, all possible URLs can be mapped through this default route.

It is also possible to create our own routes. For example, let's map the URL http://www.example.com/Employee/Maarten to the Employee controller, the Show action, and the firstname parameter. The following code snippet can be inserted in the Global.asax file we've just opened. Because the ASP.NET MVC framework uses the first matching route, this code snippet should be inserted above the default route; otherwise the route will never be used.

    routes.MapRoute(
        "EmployeeShow",          // Route name
        "Employee/{firstname}",  // URL with parameters
        new {                    // Parameter defaults
            controller = "Employee",
            action = "Show",
            firstname = ""
        }
    );

Now, let's add the necessary components for this route. First of all, create a class named EmployeeController in the Controllers folder. You can do this by adding a new item to the project and selecting the MVC Controller Class template located under the Web | MVC category. Remove the Index action method, and replace it with a method or action named Show. This method accepts a firstname parameter and passes the data into the ViewData dictionary. This dictionary will be used by the view to display data.

The EmployeeController class will pass an Employee object to the view. This Employee class should be added in the Models folder (right-click on this folder and then select Add | Class from the context menu).
Here's the code for the Employee class:

    namespace MvcApplication1.Models
    {
        public class Employee
        {
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string Email { get; set; }
        }
    }

After adding the Employee class, the EmployeeController class looks like this:

    using System.Web.Mvc;
    using MvcApplication1.Models;

    namespace MvcApplication1.Controllers
    {
        public class EmployeeController : Controller
        {
            public ActionResult Show(string firstname)
            {
                if (string.IsNullOrEmpty(firstname))
                {
                    ViewData["ErrorMessage"] = "No firstname provided!";
                }
                else
                {
                    Employee employee = new Employee
                    {
                        FirstName = firstname,
                        LastName = "Example",
                        Email = firstname + "@example.com"
                    };
                    ViewData["FirstName"] = employee.FirstName;
                    ViewData["LastName"] = employee.LastName;
                    ViewData["Email"] = employee.Email;
                }
                return View();
            }
        }
    }

The action method we've just created can be requested by a user via a URL—in this case, something similar to http://www.example.com/Employee/Maarten. This URL is mapped to the action method by the route we created earlier. By default, any public method in a controller class can be requested as an action using the default routing scheme. If you want to prevent a method from being requested, simply make it private or protected, or, if it has to be public, add a [NonAction] attribute to the method.

Note that we are returning an ActionResult (created by the View() method), which can be a view-rendering command, a page redirect, a JSON result, a string, or any other custom class implementation inheriting from ActionResult that you want to return. Returning an ActionResult is not strictly necessary: the controller can write content directly to the response stream if required, but this would break the MVC pattern—the controller should never be responsible for the actual content of the response that is being returned.

Next, create a Show.aspx page in the Views | Employee folder. You can create a view by adding a new item to the project and selecting the MVC View Content Page template, located under the Web | MVC category, as we want this view to render in a master page (located in Views | Shared). There is an alternative way to create a view related to an action method, which will be covered later in this article. In the view, you can display employee information, or display an error message if an employee is not found. Add the following code to the Show.aspx page:

    <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
        AutoEventWireup="true" Inherits="System.Web.Mvc.ViewPage" %>
    <asp:Content ID="Content1" ContentPlaceHolderID="MainContent" runat="server">
        <% if (ViewData["ErrorMessage"] != null) { %>
            <h1><%=ViewData["ErrorMessage"]%></h1>
        <% } else { %>
            <h1><%=ViewData["FirstName"]%> <%=ViewData["LastName"]%></h1>
            <p>E-mail: <%=ViewData["Email"]%></p>
        <% } %>
    </asp:Content>

If the ViewData, set by the controller, contains an ErrorMessage, then the ErrorMessage is displayed on the resulting web page; otherwise, the employee details are displayed. Press F5 on your keyboard to start the development web server. Alter the URL in your browser to something ending in /Employee/Your_Name_Here, and see the action method and the view we've just created in action.


Dispatchers and Routers

Packt
12 Nov 2012
5 min read
Dispatchers

In the real world, dispatchers are the communication coordinators responsible for receiving and passing messages. For the emergency services (for example, 911 in the U.S.), the dispatchers are the people responsible for taking the call and passing on the message to the other departments (medical, police, fire station, or others). The dispatcher coordinates the route and activities of all these departments to make sure that the right help reaches the destination as early as possible.

Another example is how an airport manages airplanes taking off. The air traffic controllers (ATCs) coordinate the use of the runways between the various planes taking off and landing. On one side, air traffic controllers manage the runways (usually ranging from 1 to 3); on the other, aircraft of different sizes and capacities from different airlines are ready to take off and land. An air traffic controller coordinates the various airplanes, gets them lined up, and allocates the runways for take-off and landing. With multiple runways available and multiple airlines, each having a different set of airplanes needing to take off, it is the responsibility of the air traffic controller(s) to coordinate the take-off and landing of planes from each airline, and to do so as fast as possible.

Dispatcher as a pattern

Dispatcher is a well-recognized and widely used pattern in the Java world. Dispatchers are used to control the flow of execution: based on the dispatching policy, they route the incoming message or request to the business process. Dispatchers as a pattern provide the following advantages:

- Centralized control: Dispatchers provide a central place from where various messages/requests are dispatched. "Centralized" means code is reused, leading to improved maintainability and reduced duplication of code.
- Application partitioning: There is a clear separation between the business logic and the display logic; there is no need to intermingle the two.
- Reduced inter-dependencies: Separating the display logic from the business logic reduces the inter-dependencies between the two. Reduced inter-dependencies mean less contention on the same resources, leading to a scalable model.

Dispatcher as a concept provides a centralized control mechanism that decouples different processing logic within the application, which in turn reduces inter-dependencies.

Executor in Java

In Akka, dispatchers are based on the Java Executor framework (part of java.util.concurrent). Executor provides the framework for the execution of asynchronous tasks. It is based on the producer–consumer model, meaning the act of task submission (producer) is decoupled from the act of task execution (consumer): the threads that submit tasks are different from the threads that execute them. Two important implementations of the Executor framework are as follows:

- ThreadPoolExecutor: It executes each submitted task using a thread from a predefined and configured thread pool.
- ForkJoinPool: It uses the same thread pool model, supplemented with work stealing. Threads in the pool will find and execute tasks created by other active tasks, or tasks allocated to other threads in the pool that are pending execution. Fork/join is based on a fine-grained, divide-and-conquer style of parallelism: the idea is to break down large data chunks into smaller chunks and process them in parallel to take advantage of the underlying processor cores.

Executor is backed by constructs that allow you to define and control how tasks are executed. Using these Executor constructs, one can specify the following:

- How many threads will be running (the thread pool size)
- How tasks are queued until they come up for processing
- How many tasks can be executed concurrently
- What happens if the system overloads and tasks must be rejected, and how the tasks to be rejected are selected
- The order of execution of tasks (LIFO, FIFO, and so on)
- Which pre- and post-task execution actions can be run

The book Java Concurrency in Practice (Addison-Wesley) describes the Executor framework and its usage very nicely; it is a useful read for more details on the concurrency constructs provided by the Java language.

Dispatchers in Akka

In the Akka world, the dispatcher controls and coordinates the dispatching of messages to the actors mapped onto the underlying threads. Dispatchers make sure that resources are optimized and messages are processed as fast as possible. Akka provides multiple dispatch policies that can be customized according to the underlying hardware resources (number of cores or memory available) and the type of application workload.

If we take our example of the airport and map it to the Akka world, the runways map to the underlying resources—the threads. The airlines with their planes are analogous to the mailboxes with their messages. The ATC tower employs a dispatch policy to make sure the runways are optimally utilized and the planes spend minimum time waiting for clearance to take off or land. In Akka, the pieces fit together the same way: the dispatchers run on their own threads; they dispatch the actors and messages from the attached mailboxes and allocate them to the executor threads. The executor threads are configured and tuned to the underlying processor cores that are available for processing the messages.
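Going back to the Executor framework described above, the following is a minimal Java sketch of the producer–consumer decoupling that dispatchers build upon (the pool size and task bodies are illustrative, not Akka's actual defaults):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ExecutorExample {
        public static void main(String[] args) throws InterruptedException {
            // A predefined, configured thread pool: the main thread submits
            // tasks (producer); the pool threads execute them (consumer).
            ExecutorService pool = Executors.newFixedThreadPool(4);

            for (int i = 0; i < 10; i++) {
                final int taskId = i;
                pool.submit(() -> System.out.println(
                        "Task " + taskId + " executed on "
                        + Thread.currentThread().getName()));
            }

            pool.shutdown();                             // stop accepting new tasks
            pool.awaitTermination(5, TimeUnit.SECONDS);  // wait for completion
        }
    }

Akka hides this machinery behind its dispatcher configuration, but the same decoupling of submission from execution is what allows a small number of threads to serve a large number of actors.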


Getting Started with Mule

Packt
26 Aug 2013
10 min read
Mule ESB is a lightweight integration platform based on the Java programming language. Through an ESB, you can integrate or communicate with multiple applications. Mule ESB enables easy integration of existing systems, regardless of the different technologies that the applications use, including JMS, web services, JDBC, and HTTP.

Understanding Mule concepts and terminologies

An Enterprise Service Bus (ESB) is an application that gives access to other applications and services. Its main task is to be the messaging and integration backbone of an enterprise. An ESB is a distributed middleware system for integrating different applications; all these applications communicate through the ESB. It consists of a set of service containers that integrate various types of applications, interconnected by a reliable messaging bus.

Getting ready

An ESB is used for integration using a service-oriented approach. Its main features are as follows:

- Polling JMS
- Message transformation and routing services
- Tomcat hot deployment
- Web service security

We often use the abbreviation VETRO to summarize the ESB functionality:

- V – validate (schema validation)
- E – enrich
- T – transform
- R – route (either itinerary or content based)
- O – operate (perform operations; they run at the backend)

Before the introduction of an ESB, developers and integrators had to connect different applications in a point-to-point fashion.

How to do it...

After the introduction of an ESB, you just need to connect each application to the ESB so that every application can communicate with the others through it. You can easily connect multiple applications through the ESB.

Need for the ESB

You can integrate different applications using an ESB; each application communicates through the ESB:

- To integrate more than two or three services and/or applications
- To integrate more applications, services, or technologies in the future
- To use different communication protocols
- To publish services for composition and consumption
- For message transformation and routing

What is Mule ESB?

Mule ESB is a lightweight Java-based enterprise service bus and integration platform that allows developers and integrators to connect applications together quickly and easily, enabling them to exchange data. There are two editions of Mule ESB: Community and Enterprise. Mule ESB Enterprise is the enterprise-class version, with additional features and capabilities that are ideal for clustering and performance tuning, such as DataMapper and the SAP connector. Mule ESB Community and Enterprise editions are built on a common code base, so it is easy to upgrade from Community to Enterprise.

Mule ESB enables easy integration of existing systems, regardless of the different technologies that the applications use, including JMS, web services, JDBC, and HTTP. The key advantage of an ESB is that it allows different applications to communicate with each other by acting as a transit system for carrying data between applications within your enterprise or across the Internet.
Mule ESB includes powerful capabilities such as the following:

- Service creation and hosting: It exposes and hosts reusable services, using Mule ESB as a lightweight service container
- Service mediation: It shields services from message formats and protocols, separates business logic from messaging, and enables location-independent service calls
- Message routing: It routes, filters, aggregates, and re-sequences messages based on content and rules
- Data transformation: It exchanges data across varying formats and transport protocols

Mule ESB is lightweight but highly scalable, allowing you to start small and connect more applications over time. Mule provides a Java-based messaging framework and manages all the interactions between applications and components transparently. Mule provides transformation, routing, filtering, Endpoints, and so on.

How it works...

When you examine how a message flows through Mule ESB, you can see that there are three layers in the architecture:

- Application Layer
- Integration Layer
- Transport Layer

Likewise, there are three general types of tasks you can perform to configure and customize your Mule deployment:

- Service component development: This involves developing or reusing existing POJOs (plain Java classes with attributes and their get and set methods), Cloud connectors, or Spring beans that contain the business logic and will consume, process, or enrich messages.
- Service orchestration: This involves configuring message processors, routers, transformers, and filters that provide the service mediation and orchestration capabilities required to allow composition of loosely coupled services using a Mule flow. New orchestration elements can also be created and dropped into your deployment.
- Integration: A key requirement of service mediation is decoupling services from the underlying protocols. Mule provides transport methods that allow dispatching and receiving messages on different protocol connectors. These connectors are configured in the Mule configuration file and can be referenced from the orchestration layer. Mule supports many existing transport methods and all the popular communication protocols, but you may also develop a custom transport method if you need to extend Mule to support a particular legacy or proprietary system.

The following terms come up repeatedly when configuring Mule:

- Spring beans: You can construct service components from Spring beans and define these Spring components through a configuration file. If you don't have this file, you will need to define it manually in the Mule configuration file.
- Agents: An agent is a service that is created in Mule Studio. When you start the server, an agent is created; when you stop the server, the agent is destroyed.
- Connectors: A Connector is a software component.
- Global configuration: Global configuration is used to set global properties and settings.
- Global Endpoints: Global Endpoints can be used in the Global Elements tab. We can use a global property element as many times in a flow as we want; for that, we must pass the global property's reference name.
- Global message processor: A global message processor observes a message or modifies either a message or the message flow; examples include transformers and filters.
- Transformers: A transformer converts data from one format to another. You can define transformers globally and use them in multiple flows.
- Filters: Filters decide which Mule messages should be processed. They specify the conditions that must be met for a message to be routed to a service or to continue progressing through a flow. Several standard filters come with Mule ESB, and you can also create your own.
- Models: A model is a logical grouping of services, created in Mule Studio. You can start and stop all the services inside a particular model.
- Services: You can define one or more services that wrap your components (business logic) and configure Routers, Endpoints, transformers, and filters specifically for that service.
- Endpoints: Services are connected using Endpoints. An Endpoint is an object on which services receive (inbound) and send (outbound) messages.
- Flow: A flow is used by a message processor to define a message flow between a source and a target.

Setting up the Mule IDE

Developers who were using Mule ESB alongside other technologies, such as Liferay Portal, Alfresco ECM, or Activiti BPM, can use the Mule IDE in Eclipse without configuring the standalone Mule Studio in their existing environment. In recent times, MuleSoft (http://www.mulesoft.org/) only provides Mule Studio from Version 3.3 onwards, but not the Mule IDE. If you are using an older version of Mule ESB, you can get the Mule IDE separately from http://dist.muleforge.org/mule-ide/releases/.

Getting ready

To set up the Mule IDE, we need Java to be installed on the machine and its execution path set in an environment variable. We will now see how to set up Java on our machine:

1. Download JDK 1.6 or a higher version from the following URL: http://www.oracle.com/technetwork/java/javase/downloads/jdk6downloads-1902814.html.
2. In your Windows system, go to Start | Control Panel | System | Advanced.
3. Click on Environment Variables under System Variables, find Path, and click on it.
4. In the Edit window, modify the path by adding the location of the class to its value. If there is no Path item, you may add a new variable with Path as the name and the location of the class as its value.
5. Close the window, reopen the command prompt window, and run your Java code.

How to do it...

If you go with Eclipse, you have to download Mule IDE Standalone 3.3:

1. Download Mule ESB 3.3 Community edition from the following URL: http://www.mulesoft.org/extensions/mule-ide.
2. Unzip the downloaded file and set MULE_HOME as an environment variable.
3. Download the latest version of Eclipse from http://www.eclipse.org/downloads/.

After installing Eclipse, you now have to integrate the Mule IDE into Eclipse. If you are using Eclipse Version 3.4 (Galileo), perform the following steps to install the Mule IDE (if you are not using Version 3.4, the URL for downloading will be different):

1. Open Eclipse IDE.
2. Go to Help | Install New Software....
3. Enter the URL http://dist.muleforge.org/muleide/updates/3.4/ in the Work with: textbox and press Enter.
4. Select the Mule IDE checkbox.
5. Click on the Next button.
6. Read and accept the license agreement terms.
7. Click on the Finish button. This will take some time; when it prompts for a restart, shut Eclipse down and restart it.

Mule configuration

After installing the Mule IDE, you will now have to configure Mule in Eclipse. Perform the following steps:

1. Open Eclipse IDE.
2. Go to Window | Preferences.
3. Select Mule, add the distribution folder (Mule standalone 3.3), click on the Apply button and then on the OK button.

This way, you can configure Mule with Eclipse.
Installing Mule Studio

Mule Studio is a powerful, user-friendly Eclipse-based tool. Mule Studio has three main components: a package tree, a palette, and a canvas. With it, you can easily create flows, as well as edit and test them, in a few minutes. Mule Studio is currently in public beta. It is based on drag-and-drop elements and supports two-way editing.

Getting ready

To install Mule Studio, download it from http://www.mulesoft.org/download-mule-esb-community-edition.

How to do it...

1. Unzip the Mule Studio folder.
2. Set the environment variable for Mule Studio.
3. When starting Mule Studio, the config.xml file will be created automatically by Mule Studio.

The three main components of Mule Studio are as follows:

- A package tree
- A palette
- A canvas

A package tree

A package tree contains the entire structure of your project. In the package explorer tree, under src/main/java, you can store custom Java classes. You can create a graphical flow from src/main/resources. In the app folder, you can store the mule-deploy.properties file. The src/main/app folder contains the flow XML files, and the src/main/test folder contains flow-related test files. The mule-project.xml file contains the project's metadata; you can edit the name, description, and server runtime version used for a specific project. JRE System Library contains the Java runtime libraries, and Mule Runtime contains the Mule runtime libraries.

A palette

The second component is the palette. The palette is the source for accessing Endpoints, components, transformers, and Cloud connectors. You can drag them from the palette and drop them onto the canvas in order to create flows. The palette typically displays buttons indicating the different types of Mule elements. You can view the content of each button by clicking on it; if you do not want to expand an element, click on the button again to hide its content.

A canvas

The third component is the canvas, a graphical editor in which you create flows. The canvas provides a space that facilitates the arrangement of Studio components into Mule flows. In the canvas area, you can configure each and every component, and you can add or remove components.
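To make the "service component development" task described earlier concrete, here is a minimal sketch of a POJO that could serve as a Mule service component (the class and method names are hypothetical; Mule wires such a class into a flow through its configuration):

    // A plain Java class: no Mule-specific imports are required for a
    // simple POJO component. Mule can invoke a method like this one,
    // passing the message payload as the argument and using the return
    // value as the new payload.
    public class GreetingComponent {

        public String greet(String name) {
            return "Hello, " + name + "!";
        }
    }

The point of the POJO model is exactly this separation: the business logic stays in ordinary, testable Java code, while transports, routing, and transformation are handled by Mule's configuration.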


Apache Karaf – Provisioning and Clusters

Packt
18 Jul 2014
12 min read
In this article, we will cover the following topics:

- What is OSGi and what are its key features?
- The role of the OSGi framework
- The OSGi base artifact—the OSGi bundle—and the concept of dependencies between bundles
- The Apache Karaf OSGi container and the provisioning of applications in the container
- How to manage the provisioning on multiple Karaf instances

What is OSGi?

Developers are always looking for very dynamic, flexible, and agile software components. The reasons for doing so are as follows:

- Reuse: This feature states that instead of duplicating the code, a component should be shared by other components, and multiple versions of the same component should be able to cohabit.
- Visibility: This feature specifies that a component should not use the implementation of another component directly. The implementation should be hidden, and the client module should use the interface provided by the other component.
- Agility: This feature specifies that the deployment of a new version of a component should not require you to restart the platform. Moreover, a configuration change should not require a restart. For instance, it's not acceptable to restart a production platform just to change a log level; a minor change such as a log level should be dynamic, and the platform should be agile enough to reload the components that need to be reloaded.
- Discovery: This feature states that a component should be able to discover other components. It's a kind of Plug and Play system: as soon as a component needs another component, it just looks for it and uses it.

OSGi was created to address the preceding points. The core concept is to force developers to use a very modular architecture in order to reduce complexity. As this paradigm is applicable to most modern systems, OSGi is now used for small embedded devices as well as for very large systems. Different applications and systems use OSGi, for example, desktop applications, application servers, frameworks, embedded devices, and so on.

The OSGi framework

OSGi is designed to run in Java. In order to provide these features and deploy OSGi applications, a core layer has to be deployed in the Java Virtual Machine (JVM): the OSGi framework. This framework manages the life cycle of, and the relationships between, the different OSGi components and artifacts.

The OSGi bundle

In OSGi, components are packaged as OSGi bundles. An OSGi bundle is a simple Java JAR (Java ARchive) file that contains additional metadata used by the OSGi framework. This metadata is stored in the manifest file of the JAR file, for example:

    Manifest-Version: 1.0
    Bundle-ManifestVersion: 2
    Bundle-Version: 2.1.6
    Bundle-Name: My Logger
    Bundle-SymbolicName: my_logger
    Export-Package: org.my.osgi.logger;version=2.1
    Import-Package: org.apache.log4j;version="[1.2,2)"
    Private-Package: org.my.osgi.logger.internal

We can see that OSGi is very descriptive and verbose: we explicitly describe all the OSGi metadata (headers), including the packages that we export or import with a specified version or version range. As the OSGi headers are defined in the META-INF/MANIFEST file contained in the JAR file, an OSGi bundle is a regular JAR file that you can also use outside of OSGi. The life cycle layer of the OSGi framework is an API to install, start, stop, update, and uninstall OSGi bundles.

Dependency between bundles

An OSGi bundle can use other bundles from the OSGi framework in two ways. The first way is static code sharing.
When we say that a bundle exports packages, it means the bundle exposes some code to other bundles. On the other hand, when we say that a bundle imports packages, it means the bundle uses code from other bundles. For instance, we have bundle A (packaged as the bundleA.jar file) with the following META-INF/MANIFEST file:

    Manifest-Version: 1.0
    Bundle-ManifestVersion: 2
    Bundle-Version: 1.0.0
    Bundle-Name: Bundle A
    Bundle-SymbolicName: bundle_a
    Export-Package: com.bundle.a;version=1.0

We can see that bundle A exposes (exports) the com.bundle.a package with Version 1.0. On the other hand, we have bundle B (packaged as the bundleB.jar file) with the following META-INF/MANIFEST file:

    Manifest-Version: 1.0
    Bundle-ManifestVersion: 2
    Bundle-Version: 2.0.0
    Bundle-Name: Bundle B
    Bundle-SymbolicName: bundle_b
    Import-Package: com.bundle.a;version="[1.0,2)"

We can see that bundle B imports (and so will use) the com.bundle.a package in any version between 1.0 (inclusive) and 2 (exclusive). This means that the OSGi framework will wire the bundles, as bundle A provides the package used by bundle B (so the constraint is resolved). This mechanism is similar to regular Java applications, but instead of embedding the required JAR files in your application, you can just declare the expected code. The OSGi framework is responsible for the link between the different bundles; this is done by the modules layer of the OSGi framework. This approach is interesting when you want to use code which is not natively designed for OSGi, and it's a step forward for the reuse of components. However, it provides a limited answer to the purposes seen earlier in the article, especially visibility and discovery.

The second way in which an OSGi bundle can use other bundles is more interesting. It uses Service-Oriented Architecture (SOA) for low-level components: rather than exposing code, an OSGi bundle exposes an OSGi service, and another bundle can use that service. The services layer of the OSGi framework provides a service registry and all the plumbing mechanisms to wire the services. The OSGi services provide a very dynamic system, offering a Publish-Find-Bind model for the bundles.

The OSGi container

The OSGi container provides a set of additional features on top of the OSGi framework. Apache Karaf provides the following features:

- Abstraction of the OSGi framework: If you write an OSGi application, you have to package your application tightly coupled with the OSGi framework (such as the Apache Felix framework or Eclipse Equinox). Most of the time, you have to prepare the scripts, configuration files, and so on in order to provide a complete, ready-to-use application. Apache Karaf allows you to focus only on your application: Karaf, by default, provides the packaging (including scripts and so on), and it also abstracts the OSGi framework. Thanks to Karaf, it's very easy to switch from Apache Felix (the default framework in Karaf) to Eclipse Equinox.
- Support for the OSGi Blueprint and Spring frameworks: Apache Karaf allows you to directly use Blueprint or Spring as the dependency framework in your bundles. Newer versions of Karaf (starting from Karaf 3.0.1) also support additional dependency frameworks (such as DS, CDI, and so on).
- Shell console: Apache Karaf provides a complete, Unix-like shell console with a lot of commands available to manage and monitor your running container. This shell console works on any system supporting Java and provides a complete Unix-like environment, including completion, contextual help, key bindings, and more. You can access the shell console using SSH.
- Management layer: Apache Karaf also provides a complete management layer (using JMX) that is remotely accessible, which means you can perform the same actions as with the shell commands using several MBeans.
- Multiple instances: In addition to the default root Apache Karaf container, for convenience, Apache Karaf allows you to manage multiple container instances, providing dedicated commands and MBeans to create and control them.
- Logging: Logging is a key layer for any kind of software container. Apache Karaf provides a powerful and very dynamic logging system powered by Pax Logging. In your OSGi application, you are not coupled to a specific logging framework; you can use the framework of your choice (slf4j, log4j, logback, commons-logging, and so on). Apache Karaf uses a central configuration file irrespective of the logging frameworks in use, and all changes to this configuration file are applied on the fly; there's no need to restart anything. Again, Apache Karaf provides commands and MBeans dedicated to log management (changing the log level, displaying the log directly in the shell console, and so on).
- Hot deployment: By default, the container periodically monitors a deploy folder. When a new file is dropped in the deploy folder, Apache Karaf checks the file type and delegates the deployment logic for this file to a deployer. Apache Karaf provides different deployers by default (spring, blueprint, features, war, and so on).
- Security: While Java Authentication and Authorization Service (JAAS) is the Java implementation of Pluggable Authentication Modules (PAM), it's not very OSGi compliant by default. Apache Karaf leverages JAAS, exposing realms and login modules as OSGi services, and provides dedicated JAAS shell commands and MBeans. The security framework is very flexible, allowing you to define the chain of login modules that you want for authentication. By default, Apache Karaf uses a PropertiesLoginModule with the etc/users.properties file for storage. The security framework also provides support for password encryption (you just have to enable encryption in the etc/org.apache.karaf.jaas.cfg configuration file). The new Apache Karaf version (3.0.0) also provides a complete Role-Based Access Control (RBAC) system, allowing you to configure which users can run commands, call MBeans, and so on.

Apache Karaf is an enterprise-ready container and provides features dedicated to the enterprise. The following enterprise features are not installed by default (to minimize the size and footprint of the container), but a simple command allows you to extend the container with enterprise functionality:

- WebContainer allows you to deploy a Web Application Bundle (WAB) or WAR file. Apache Karaf is a complete HTTP server with JSP/servlet support, thanks to Pax Web.
- Java Naming and Directory Interface (JNDI) adds naming context support in Apache Karaf. You can bind an OSGi service to a JNDI name and look up these services using that name, thanks to Aries and Xbean naming.
- Java Transaction API (JTA) allows you to add a transaction engine (exposed as an OSGi service) in Apache Karaf, thanks to Aries JTA.
- Java Persistence API (JPA) allows you to add a persistence adapter (exposed as an OSGi service) in Apache Karaf, thanks to Aries JPA. Ready-to-use persistence engines can also be installed very easily (especially Apache OpenJPA and Hibernate).
- Java Database Connectivity (JDBC) and Java Message Service (JMS) are convenience features, allowing you to easily create JDBC DataSources or JMS ConnectionFactories and use them directly in the shell console.
- Web Console: While you can completely administer Apache Karaf using the shell commands and the JMX MBeans, you can also install the Web Console. It uses the Felix Web Console and allows you to manage Karaf with a simple browser.

Thanks to these features, Apache Karaf is a complete, rich, and enterprise-ready container. We can consider Apache Karaf as an OSGi application server.

Provisioning in Apache Karaf

In addition, Apache Karaf provides three core functionalities that can be used both internally in Apache Karaf and by external applications deployed in the container:

- OSGi bundle management
- Configuration management
- Provisioning using Karaf Features

As we learned earlier, the default artifact in OSGi is the bundle—a regular JAR file with additional OSGi metadata in the MANIFEST file. Bundles are directly managed by the OSGi framework, but for convenience, Apache Karaf wraps the usage of bundles in specific commands and MBeans.

A bundle has a specific life cycle. In particular, when you install a bundle, the OSGi framework tries to resolve all the dependencies required by your bundle to promote it to the resolved state: it checks whether other bundles provide the packages imported by your bundle. The equivalent action for OSGi services is performed when you start the bundle. This means that a bundle may require a lot of other bundles to start, and so on for the transitive bundles.

Moreover, a bundle may require configuration to work. Apache Karaf proposes a very convenient way to manage configurations: the etc folder is periodically monitored to discover new configuration files and load the corresponding configurations. Alternatively, you have dedicated shell commands and MBeans to manage configurations (and configuration files). If a bundle requires a configuration to work, you first have to create a configuration file in the etc folder (with the expected filename), or use the config:* shell commands or the ConfigMBean to create the configuration.

Considering that an OSGi application is a set of bundles, installing an OSGi application by hand can be long and painful. The deployment of an OSGi application is called provisioning, as it gathers the following:

- The installation of a set of bundles, including transitive bundles
- The installation of a set of configurations required by these bundles

OBR

OSGi Bundle Repository (OBR) can be the first option considered to solve this problem. Apache Karaf can connect to an OBR server. The OBR server stores all the metadata for all the bundles, which includes the capabilities, packages, and services provided by a bundle, and the requirements, packages, and services needed by a bundle. When you install a bundle via OBR, the OBR server checks the requirements of the installed bundle and finds the bundles that provide the capabilities matching those requirements. The OBR server can automatically install the bundles required by the first one.
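To give a feel for the bundle life cycle described above, here is a minimal sketch of a bundle activator using the standard OSGi API (the class name and messages are illustrative). The framework invokes start() when the bundle is started and stop() when it is stopped; the activator class would be declared in the manifest with the Bundle-Activator header:

    import org.osgi.framework.BundleActivator;
    import org.osgi.framework.BundleContext;

    public class LoggerActivator implements BundleActivator {

        @Override
        public void start(BundleContext context) throws Exception {
            // Called when the bundle reaches the started state;
            // typically used to register OSGi services.
            System.out.println("my_logger bundle started");
        }

        @Override
        public void stop(BundleContext context) throws Exception {
            // Called when the bundle is stopped;
            // typically used to clean up resources.
            System.out.println("my_logger bundle stopped");
        }
    }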


Understanding Cython

Packt
07 Oct 2013
8 min read
If you were to create an API for Python, you should write it using Cython to create a more type-safe Python API. Or, you could take the C types from Cython to implement the same algorithms in your Python code, and they will be faster because you're specifying the types, which avoids a lot of the type conversion otherwise required.

Consider you are implementing a fresh project in C. There are a few issues we always come across when starting fresh; for example, choosing the logging or configuration system we will use or implement. With Cython, we can reuse the Python logging system as well as the ConfigParser standard library from Python in our C code to get a head start. If this doesn't prove to be the correct solution, we can chop and change easily. We can even extend it and get Python to handle all usage. Since the Python API is very powerful, we might as well make Python do as much as it can to get us off the ground. Another question is: do we want Python to be our "driver" (the main entry function), or do we want to handle this from our C code?

Cython cdef

In the next two examples, I will demonstrate how we can reuse the Python logging and Python ConfigParser modules directly from C code. But there are a few formalities to get over first, namely the Python initialization API and the link load model for fully embedded Python applications using the shared library method. It's very simple to embed Python within a C/C++ application; you will require the following boilerplate:

    #include <Python.h>

    int main (int argc, char ** argv)
    {
      Py_SetProgramName (argv [0]);
      Py_Initialize ();

      /* Do all your stuff inside here... */

      Py_Finalize ();
      return 0;
    }

Make sure you always put the Python.h header at the very beginning of each C file, because Python defines a lot of things for system headers to turn features on and off so that everything behaves correctly on your system. Later, I will introduce some important concepts about the GIL that you should know, and the relevant Python API code you will need to use from time to time. But for now, these few calls will be enough for you to get off the ground.

Linking models

Linking models are extremely important when considering how we can extend or embed things in native applications. There are two main linking models for Cython. The first is a fully embedded Python application, where the Python runtime is linked into the final binary. This means we already have the Python runtime, whereas before we had to run the Python interpreter to call into our Cython module. The second is a Python shared object module: here we have fully modularized Python. This would be a more Pythonic approach to Cython, and if your code base is mostly Python, this is the approach you should take if you simply want a native module to call into some native code, as it lends your code to be more dynamic and reusable.

The public keyword

Moving on from linking models, we should next look at the public keyword, which allows Cython to generate a C/C++ header file with the prototypes that we can include in order to call directly into Python code from C. The main caveat, if you're going to call Python public declarations directly from C, is that if your link model is fully embedded and linked against libpython.so, you need to use the boilerplate code shown in the previous section.
Before calling any of the functions, you also need to initialize the Python module. For example, if you have a cythonfile.pyx file and compile it with public declarations such as the following:

    cdef public void cythonFunction ():
        print "inside cython function!!!"

you will not only get a cythonfile.c file, but also cythonfile.h, which declares a function called extern void initcythonfile (void). So, before calling anything to do with the Cython code, use the following:

    /* Boilerplate init Python */
    Py_SetProgramName (argv [0]);
    Py_Initialize ();

    /* Init our module into Python memory */
    initcythonfile ();
    cythonFunction ();

    /* cleanup python before exit ... */
    Py_Finalize ();

Calling initcythonfile can be considered the equivalent of the following in Python:

    import cythonfile

Just like the previous examples, this only affects you if you're generating a fully embedded Python binary.

Logging into Python

A good example of Cython's abilities, in my opinion, is reusing the Python logging module directly from C. For example, we want a few macros we can rely on, such as info (...), that can handle VA_ARGS and feel as if we are calling a simple printf method. I think that after this example, you should start to see how things might work when mixing C and Python, now that the cdef and public keywords start to bring things to life:

    import logging

    cdef public void initLogging (char * logfile):
        logging.basicConfig (filename = logfile,
                             level = logging.DEBUG,
                             format = '%(levelname)s %(asctime)s: %(message)s',
                             datefmt = '%m/%d/%Y %I:%M:%S')

    cdef public void pyinfo (char * message):
        logging.info (message)

    cdef public void pydebug (char * message):
        logging.debug (message)

    cdef public void pyerror (char * message):
        logging.error (message)

This could serve as a simple wrapper for calling directly into the Python logger, but we can make this even more awesome in our C code with C99 __VA_ARGS__ and an attribute similar to GCC's printf format attribute. This will make it look and work just like any printf-style function. We can define some headers to wrap our calls to this in C as follows:

    #ifndef __MAIN_H__
    #define __MAIN_H__

    #include <Python.h>
    #include <stdio.h>
    #include <stdarg.h>

    #define printflike __attribute__ ((format (printf, 3, 4)))

    extern void printflike cinfo (const char *, unsigned, const char *, ...);
    extern void printflike cdebug (const char *, unsigned, const char *, ...);
    extern void printflike cerror (const char *, unsigned, const char *, ...);

    #define info(...) cinfo (__FILE__, __LINE__, __VA_ARGS__)
    #define error(...) cerror (__FILE__, __LINE__, __VA_ARGS__)
    #define debug(...) cdebug (__FILE__, __LINE__, __VA_ARGS__)

    #include "logger.h" // remember to import our cython public's

    #endif //__MAIN_H__

Now we have these macros calling cinfo and the rest, and we can see the file and line number from which we call these logging functions:

    void cdebug (const char * file, unsigned line, const char * fmt, ...)
    {
      char buffer [256];
      va_list args;
      va_start (args, fmt);
      vsprintf (buffer, fmt, args);
      va_end (args);

      char buf [512];
      snprintf (buf, sizeof (buf), "%s-%i -> %s", file, line, buffer);
      pydebug (buf);
    }

On calling debug ("debug message"), we see the following output:

    Philips-MacBook:cpy-logging redbrain$ ./example log
    Philips-MacBook:cpy-logging redbrain$ cat log
    INFO 05/06/2013 12:28:24: main.c-62 -> info message
    DEBUG 05/06/2013 12:28:24: main.c-63 -> debug message
    ERROR 05/06/2013 12:28:24: main.c-64 -> error message

Also, you should note that we import and do everything we would do in Python as we would in here, so don't be afraid to make lists or classes and use these to help out. Remember, if you have a Cython module with public declarations calling into the logging module, this integrates your applications as if they were one. More importantly, you only need all of this boilerplate when you fully embed Python, not when you compile your module to a shared library.

Python ConfigParser

Another useful case is to make Python's ConfigParser accessible in some way from C. Ideally, all we really want is a function to which we pass the path to a config file, to receive a STATUS OK/FAIL message and a filled buffer of the configuration that we need:

    from ConfigParser import SafeConfigParser, NoSectionError

    cdef extern from "main.h":
        struct config:
            char * path
            int number

    cdef config myconfig

Here, we've Cythoned our struct and declared an instance on the stack for easier management:

    cdef public config * parseConfig (char * cfg):
        # initialize the global stack variable for our config...
        myconfig.path = NULL
        myconfig.number = 0

        # buffers for assigning python types into C types
        cdef char * path = NULL
        cdef number = 0

        parser = SafeConfigParser ()

        try:
            parser.readfp (open (cfg))
            pynumber = int (parser.get ("example", "number"))
            pypath = parser.get ("example", "path")
        except NoSectionError:
            print "No section named example"
            return NULL
        except IOError:
            print "no such file ", cfg
            return NULL
        finally:
            myconfig.number = pynumber
            myconfig.path = pypath
            return &myconfig

This is a fairly trivial piece of Cython code that will return NULL on error, as well as the pointer to the struct containing the configuration:

    Philips-MacBook:cpy-configparser redbrain$ ./example sample.cfg
    cfg->path = some/path/to/something
    cfg-number = 15

As you can see, we easily parsed a config file without using any C code. I always found figuring out how I was going to parse config files in C to be a nightmare. I usually ended up writing my own mini domain-specific language using Flex and Bison as a parser, as well as my own middle-end, which is just too involved.


Build a Chat Application using the Java API for WebSocket

Packt
24 Mar 2014
5 min read
Traditionally, web applications have been developed using the request/response model of the HTTP protocol. In this model, the request is always initiated by the client, and then the server returns a response back to the client. There has never been any way for the server to send data to the client independently (without having to wait for a request from the browser) until now. The WebSocket protocol allows full-duplex, two-way communication between the client (browser) and the server. Java EE 7 introduces the Java API for WebSocket, which allows us to develop WebSocket endpoints in Java. The Java API for WebSocket is a brand-new technology in the Java EE Standard.

A socket is a two-way pipe that stays alive longer than a single request. Applied to an HTML5-compliant browser, this allows for continuous communication to or from a web server without the need to load a new page (similar to AJAX).

Developing a WebSocket Server Endpoint

A WebSocket server endpoint is a Java class deployed to the application server that handles WebSocket requests. There are two ways in which we can implement a WebSocket server endpoint via the Java API for WebSocket: either by developing an endpoint programmatically, in which case we need to extend the javax.websocket.Endpoint class, or by decorating Plain Old Java Objects (POJOs) with WebSocket-specific annotations. The two approaches are very similar; therefore, we will discuss only the annotation approach in detail and briefly explain the programmatic approach later in this section. In this article, we will develop a simple web-based chat application, taking full advantage of the Java API for WebSocket.

Developing an annotated WebSocket server endpoint

The following Java class illustrates how to develop a WebSocket server endpoint by annotating a Java class:

    package net.ensode.glassfishbook.websocketchat.serverendpoint;

    import java.io.IOException;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    import javax.websocket.OnClose;
    import javax.websocket.OnMessage;
    import javax.websocket.OnOpen;
    import javax.websocket.Session;
    import javax.websocket.server.ServerEndpoint;

    @ServerEndpoint("/websocketchat")
    public class WebSocketChatEndpoint {

        private static final Logger LOG =
                Logger.getLogger(WebSocketChatEndpoint.class.getName());

        @OnOpen
        public void connectionOpened() {
            LOG.log(Level.INFO, "connection opened");
        }

        @OnMessage
        public synchronized void processMessage(Session session, String message) {
            LOG.log(Level.INFO, "received message: {0}", message);
            try {
                for (Session sess : session.getOpenSessions()) {
                    if (sess.isOpen()) {
                        sess.getBasicRemote().sendText(message);
                    }
                }
            } catch (IOException ioe) {
                LOG.log(Level.SEVERE, ioe.getMessage());
            }
        }

        @OnClose
        public void connectionClosed() {
            LOG.log(Level.INFO, "connection closed");
        }
    }

The class-level @ServerEndpoint annotation indicates that the class is a WebSocket server endpoint. The URI (Uniform Resource Identifier) of the server endpoint is the value specified within the parentheses following the annotation ("/websocketchat" in this example)—WebSocket clients will use this URI to communicate with our endpoint.

The @OnOpen annotation decorates a method that is executed whenever a WebSocket connection is opened by any of the clients. In our example, we are simply sending some output to the server log, but of course, any valid server-side Java code can be placed here.
Any method annotated with @OnMessage will be invoked whenever our server endpoint receives a message from a client. In our example, the processMessage() method is annotated with @OnMessage and takes two parameters: an instance of a class implementing the javax.websocket.Session interface, and a String parameter containing the message that was received. Since we are developing a chat application, our server endpoint simply broadcasts the received message to all connected clients.

The getOpenSessions() method of the Session interface returns a set of session objects representing all open sessions. We iterate through this set to broadcast the received message to all connected clients, by invoking the getBasicRemote() method on each session instance and then invoking the sendText() method on the resulting RemoteEndpoint.Basic implementation. Note that getOpenSessions() returns the sessions that were open at the time it was invoked; it is possible for one or more of these sessions to have closed since then. Therefore, it is recommended to invoke the isOpen() method on a Session implementation before attempting to return data back to the client, as an exception may be thrown if we attempt to access a closed session.

Finally, we need to decorate a method with the @OnClose annotation in case we need to handle the event when a client disconnects from the server endpoint. In our example, we simply log a message into the server log. There is one additional annotation that we didn't use in our example—@OnError; it decorates a method that is invoked in case there's an error while sending or receiving data to or from the client.

As we can see, developing an annotated WebSocket server endpoint is straightforward. We simply need to add a few annotations, and the application server will invoke our annotated methods as necessary.

If we wish to develop a WebSocket server endpoint programmatically, we need to write a Java class that extends javax.websocket.Endpoint. This class has onOpen(), onClose(), and onError() methods that are called at the appropriate times during the endpoint's life cycle. There is no method equivalent to the @OnMessage annotation for handling incoming messages; instead, the addMessageHandler() method needs to be invoked on the session, passing an instance of a class implementing the javax.websocket.MessageHandler interface (or one of its subinterfaces) as its sole parameter.

In general, it is easier and more straightforward to develop annotated WebSocket endpoints than their programmatic counterparts. Therefore, we recommend that you use the annotated approach whenever possible.
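For illustration, here is a minimal sketch of what the programmatic approach just described could look like (the class name is hypothetical; the broadcast logic mirrors our annotated example):

    import java.io.IOException;

    import javax.websocket.Endpoint;
    import javax.websocket.EndpointConfig;
    import javax.websocket.MessageHandler;
    import javax.websocket.Session;

    public class ProgrammaticChatEndpoint extends Endpoint {

        @Override
        public void onOpen(final Session session, EndpointConfig config) {
            // Register a handler for whole text messages; this replaces
            // the @OnMessage annotation of the annotated approach.
            session.addMessageHandler(new MessageHandler.Whole<String>() {
                @Override
                public void onMessage(String message) {
                    for (Session sess : session.getOpenSessions()) {
                        if (sess.isOpen()) {
                            try {
                                sess.getBasicRemote().sendText(message);
                            } catch (IOException ioe) {
                                // delivery to this client failed; skip it
                            }
                        }
                    }
                }
            });
        }
    }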
Build your own Application to access Twitter using Java and NetBeans: Part 1

Packt
05 Feb 2010
6 min read
Due to the fact that writing a Java app to control your Twitter account is quite a long process and requires several features, I intend to divide this article into several sections, so you can see in extreme detail all the bells and whistles involved in writing Java applications.

Downloading and installing NetBeans for your developing platform

To download NetBeans, open a web browser window and go to the NetBeans website. Then click on the Download button and select the All IDE download bundle. After downloading NetBeans, install it with the default options.

Creating your SwingAndTweet project

1. Open NetBeans and select File | New Project to open the New Project dialog. Now select Java from the Categories panel and Java Application from the Projects panel. Click on Next to continue.
2. The New Java Application dialog will show up next. Type SwingAndTweet in the Project Name field, mark the Use Dedicated Folder for Storing Libraries option, deselect the Create Main Class box (we'll deal with that later), make sure the Set as Main Project box is enabled, and click on Next to continue.
3. NetBeans will create the SwingAndTweet project and will show it under the Projects tab in the NetBeans main window. Right-click on the project's name and select JFrame Form... in the pop-up menu.
4. The New JFrame Form window will appear next. Type SwingAndTweetUI in the Class Name field, type swingandtweet in the Package field, and click on Finish to continue.
5. NetBeans will open the SwingAndTweetUI frame in the center panel of the main screen. Now you're ready to assemble your Twitter Java application!

Now let me explain a little bit about what we did in the previous exercise. First, we created a new Java application called SwingAndTweet. Then we created a Swing JFrame component and named it SwingAndTweetUI, because this is going to act as the foundation where we're going to put all the other Swing components required to interact with Twitter. Next, I'm going to show you how to download the Twitter4J API and integrate it into your SwingAndTweet Java application.

Downloading and integrating the Twitter4J API into your NetBeans environment

For us to be able to use the powerful classes and methods from the Twitter4J API, we need to tell NetBeans where to find them and integrate them into our Java applications.

1. Open a web browser window, go to http://repo1.maven.org/maven2/net/homeip/yusuke/twitter4j/ and search for the latest twitter4j-2.X.X.jar file, or download the most recent version at the time of this writing from here: http://repo1.maven.org/maven2/net/homeip/yusuke/twitter4j/2.0.9/twitter4j-2.0.9.jar.
2. Once you have downloaded it to your computer, go to NetBeans, right-click on the SwingAndTweet project and select Properties from the context menu.
3. Once at the project properties screen, select the Libraries category under the Categories panel, click on the Add JAR/Folder... button at the middle-right part of the screen to open the Add JAR/Folder dialog, navigate to the directory where you downloaded the twitter4j-2.X.X.jar file, and double-click on it to add it to your project's library path.
4. Click on OK to close the Project Properties dialog and return to the NetBeans main screen.

OK, you have integrated the Twitter4J API into your SwingAndTweet application. Now, let's see how to log into your Twitter account from our Java application...
Logging into Twitter from Java and seeing your last Tweet

In the following exercise, I'll show you how easy it is to start communicating with Twitter from a Java application, thanks to the Twitter class from the Twitter4J API. You'll also learn how to check your last tweet through your Java application. Let's see how to log into a Twitter account:

1. Go to the Palette window and locate the JLabel component under the Swing Controls section; then drag and drop it into the SwingAndTweetUI JFrame component. Now drag a Button and a Text Field, too.
2. Once you have the three controls inside the SwingAndTweetUI JFrame control, arrange them as shown below. The next step is to change their names and captions to make our application look more professional. Right-click on the jLabel1 control, select Edit from the context menu, type My Last Tweet, and hit Enter. Do the same procedure with the other two controls: erase the text in the jTextField1 control and type Login in the jButton1 control.
3. Rearrange the jLabel1 and jTextField1 controls, and drag one of the ends of jTextField1 to increase its length as much as you can.
4. And now, let's inject some life into our application! Double-click on the jButton1 control to open your application's code window. You'll be inside a Java method called jButton1ActionPerformed. This method executes every time you click on the Login button, and this is where we're going to put all the code for logging into your Twitter account.
5. Delete the // TODO add your handling code here: line and type code along the lines of the sketch shown at the end of this section inside the jButton1ActionPerformed method. Remember to replace username and password with your real Twitter username and password.
6. If you look closely at the line numbers in the editor, you'll notice there are five error icons on lines 82, 84, 85, 88, and 89. That's because we need to add some import lines at the beginning of the code, to indicate to NetBeans where to find the Twitter and JOptionPane classes and the TwitterException. Scroll up until you locate the package swingandtweet; line, then add the import lines (they appear at the top of the sketch at the end of this section). Now all the errors will disappear from your code.
7. To see your Java application in action, press F6 or select Run | Run Main Project from the NetBeans main menu. The Run Project window will pop up, asking you to select the main class for your project. The swingandtweet.SwingAndTweetUI class will already be selected, so just click on OK to continue.
8. Your SwingAndTweetUI application window will appear next, showing the three controls you created. Click on the Login button and wait for the SwingAndTweet application to validate your Twitter username and password. If they're correct, a confirmation dialog will pop up. Click on OK to return to your SwingAndTweet application. You will now see your last tweet in the textbox control.
9. If you want to be really sure it's working, go to your Twitter account and update your status through the web interface; for example, type Testing my Java app. Then return to your SwingAndTweet application and click on the Login button again to see your last tweet. The textbox control will now reflect your latest tweet.

As you can see, your SwingAndTweet Java application can now communicate with your Twitter account! Click on the X button to close the window and exit your SwingAndTweet application.
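The original listing for the jButton1ActionPerformed method was shown as a screenshot, so here is a minimal reconstruction of what it might look like with the Twitter4J 2.0.x basic-authentication API. Treat it as a sketch, not the article's verbatim code: the exact statements, the use of getUserTimeline() to fetch the latest status, and the dialog text are assumptions.

import javax.swing.JOptionPane;
import twitter4j.Status;
import twitter4j.Twitter;
import twitter4j.TwitterException;

private void jButton1ActionPerformed(java.awt.event.ActionEvent evt) {
    try {
        // Basic authentication, as used by Twitter4J 2.0.x; replace with your credentials
        Twitter twitter = new Twitter("username", "password");
        twitter.verifyCredentials();
        JOptionPane.showMessageDialog(this, "You're logged in!");
        // Show the most recent status from the authenticated user's timeline
        java.util.List<Status> statusList = twitter.getUserTimeline();
        jTextField1.setText(statusList.get(0).getText());
    } catch (TwitterException te) {
        JOptionPane.showMessageDialog(this, "Login failed: " + te.getMessage());
    }
}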
Interacting with Databases through the Java Persistence API

Packt
23 Oct 2009
17 min read
We will look into:

- Creating our first JPA entity
- Interacting with JPA entities with the entity manager
- Generating forms in JSF pages from JPA entities
- Generating JPA entities from an existing database schema
- JPA named queries and JPQL
- Entity relationships
- Generating complete JSF applications from JPA entities

Creating Our First JPA Entity

JPA entities are Java classes whose fields are persisted to a database by the JPA API. JPA entities are Plain Old Java Objects (POJOs); as such, they don't need to extend any specific parent class or implement any specific interface. A Java class is designated as a JPA entity by decorating it with the @Entity annotation.

In order to create and test our first JPA entity, we will be creating a new web application using the JavaServer Faces framework. In this example we will name our application jpaweb. As with all of our examples, we will be using the bundled GlassFish application server.

To create a new JPA entity, we need to right-click on the project and select New | Entity Class. After doing so, NetBeans presents the New Entity Class wizard. At this point, we should specify the values for the Class Name and Package fields (Customer and com.ensode.jpaweb in our example), then click on the Create Persistence Unit... button.

The Persistence Unit Name field is used to identify the persistence unit that will be generated by the wizard; it will be defined in a JPA configuration file named persistence.xml that NetBeans will automatically generate from the Create Persistence Unit wizard. The Create Persistence Unit wizard will suggest a name for our persistence unit; in most cases the default can be safely accepted.

JPA is a specification for which several implementations exist. NetBeans supports several JPA implementations, including TopLink, Hibernate, KODO, and OpenJPA. Since the bundled GlassFish application server includes TopLink as its default JPA implementation, it makes sense to take this default value for the Persistence Provider field when deploying our application to GlassFish.

Before we can interact with a database from any Java EE 5 application, a database connection pool and data source need to be created in the application server. A database connection pool contains connection information that allows us to connect to our database, such as the server name, port, and credentials. The advantage of using a connection pool instead of directly opening a JDBC connection to a database is that database connections in a connection pool are never closed; they are simply allocated to applications as they need them. This results in performance improvements, since the operations of opening and closing database connections are expensive in terms of performance.

Data sources allow us to obtain a connection from a connection pool by obtaining an instance of javax.sql.DataSource via JNDI, then invoking its getConnection() method to obtain a database connection from the pool. When dealing with JPA, we don't need to directly obtain a reference to a data source; it is all done automatically by the JPA API, but we still need to indicate the data source to use in the application's persistence unit.

NetBeans comes with a few data sources and connection pools pre-configured. We could use one of these pre-configured resources for our application; however, NetBeans also allows creating these resources "on the fly", which is what we will be doing in our example. To create a new data source we need to select the New Data Source... item from the Data Source combo box.
A data source needs to interact with a database connection pool. NetBeans comes pre-configured with a few connection pools out of the box, but just like with data sources, it allows us to create a new connection pool "on demand". In order to do this, we need to select the New Database Connection... item from the Database Connection combo box.

NetBeans includes JDBC drivers for a few Relational Database Management Systems (RDBMS) such as JavaDB, MySQL, and PostgreSQL "out of the box". JavaDB is bundled with both GlassFish and NetBeans, therefore we picked JavaDB for our example; this way we avoid having to install an external RDBMS. For RDBMS systems that are not supported out of the box, we need to obtain a JDBC driver and let NetBeans know of its location by selecting New Driver from the Name combo box. We then need to navigate to the location of a JAR file containing the JDBC driver. Consult your RDBMS documentation for details.

JavaDB is installed on our workstation, therefore the server name to use is localhost. By default, JavaDB listens on port 1527, therefore that is the port we specify in the URL. We wish to connect to a database called jpaintro, therefore we specify it as the database name. Since the jpaintro database does not exist yet, we pass the attribute create=true to JavaDB; this attribute is used to create the database if it doesn't exist yet.

Every JavaDB database contains a schema named APP, and each user by default uses a schema named after his/her own login name, so the easiest way to get going is to create a user named "APP" and select a password for this user. Clicking on the Show JDBC URL checkbox reveals the JDBC URL for the connection we are setting up.

The New Database Connection wizard warns us of potential security risks when choosing to let NetBeans remember the password for the database connection. Database passwords are scrambled (but not encrypted) and stored in an XML file under the .netbeans/[netbeans version]/config/Databases/Connections directory. If we follow common security practices, such as locking our workstation when we walk away from it, the risks of having NetBeans remember database passwords will be minimal.

Once we have created our new data source and connection pool, we can continue configuring our persistence unit. It is a good idea to leave the Use Java Transaction APIs checkbox checked. This will instruct our JPA implementation to use the Java Transaction API (JTA) to let the application server manage transactions. If we uncheck this box, we will need to write code to manage transactions manually.

Most JPA implementations allow us to define a table generation strategy. We can instruct our JPA implementation to create tables for our entities when we deploy our application, to drop the tables and then regenerate them when our application is deployed, or not to create any tables at all. NetBeans allows us to specify the table generation strategy for our application by clicking the appropriate value in the Table Generation Strategy radio button group.

When working with a new application, it is a good idea to select the Drop and Create table generation strategy. This will allow us to add, remove, and rename fields in our JPA entity at will without having to make the same changes in the database schema. When selecting this table generation strategy, tables in the database schema will be dropped and recreated, therefore any data previously persisted will be lost.
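For reference, the persistence.xml file that the wizard generates typically looks something like the following sketch. The persistence unit name jpawebPU matches the one used later in this article, but the TopLink provider class, the jdbc/jpaintroDS data source name, and the drop-and-create property shown here are plausible values written out as assumptions for illustration, not the project's exact file.

<?xml version="1.0" encoding="UTF-8"?>
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <persistence-unit name="jpawebPU" transaction-type="JTA">
    <!-- TopLink Essentials is the default JPA provider bundled with GlassFish -->
    <provider>oracle.toplink.essentials.PersistenceProvider</provider>
    <!-- The JTA data source created in the wizard; the JNDI name is hypothetical -->
    <jta-data-source>jdbc/jpaintroDS</jta-data-source>
    <properties>
      <!-- Equivalent of the "Drop and Create" table generation strategy -->
      <property name="toplink.ddl-generation" value="drop-and-create-tables"/>
    </properties>
  </persistence-unit>
</persistence>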
Once we have created our new data source, database connection, and persistence unit, we are ready to create our new JPA entity. We can do so by simply clicking on the Finish button. At this point NetBeans generates the source for our JPA entity.

JPA allows the primary key field of a JPA entity to map to any column type (VARCHAR, NUMBER). It is best practice to have a numeric surrogate primary key, that is, a primary key that serves only as an identifier and has no business meaning in the application. Selecting the default primary key type of Long will allow a wide range of values to be available for the primary keys of our entities.

package com.ensode.jpaweb;

import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Customer implements Serializable {

    private static final long serialVersionUID = 1L;
    private Long id;

    public void setId(Long id) {
        this.id = id;
    }

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    public Long getId() {
        return id;
    }

    //Other generated methods (hashCode(), equals() and
    //toString()) omitted for brevity.
}

As we can see, a JPA entity is a standard Java object. There is no need to extend any special class or implement any special interface. What differentiates a JPA entity from other Java objects are a few JPA-specific annotations.

The @Entity annotation is used to indicate that our class is a JPA entity. Any object we want to persist to a database via JPA must be annotated with this annotation.

The @Id annotation is used to indicate which field in our JPA entity is its primary key. The primary key is a unique identifier for our entity; no two entities may have the same value for their primary key field. This annotation can be placed just above the getter method for the primary key field, which is the strategy the NetBeans wizard follows. It is also correct to specify the annotation right above the field declaration.

The @Entity and @Id annotations are the bare minimum that a class needs in order to be considered a JPA entity.

JPA allows primary keys to be automatically generated. In order to take advantage of this functionality, the @GeneratedValue annotation can be used; as we can see, the NetBeans-generated JPA entity uses this annotation. It indicates the strategy to use to generate primary keys. All possible primary key generation strategies are listed in the following table:

- GenerationType.AUTO: Indicates that the persistence provider will automatically select a primary key generation strategy. Used by default if no primary key generation strategy is specified.
- GenerationType.IDENTITY: Indicates that an identity column in the database table the JPA entity maps to must be used to generate the primary key value.
- GenerationType.SEQUENCE: Indicates that a database sequence should be used to generate the entity's primary key value.
- GenerationType.TABLE: Indicates that a database table should be used to generate the entity's primary key value.

In most cases, the GenerationType.AUTO strategy works properly, therefore it is almost always used. For this reason the New Entity Class wizard uses this strategy. When using the sequence or table generation strategies, we might have to indicate the sequence or table used to generate the primary keys.
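As a hedged illustration of what that looks like, the following sketch maps the same kind of id property to a database sequence. The class name SequencedCustomer, the generator name customerGen, and the sequence name CUSTOMER_SEQ are hypothetical, not part of the generated project.

package com.ensode.jpaweb;

import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

@Entity
public class SequencedCustomer implements Serializable {

    private static final long serialVersionUID = 1L;
    private Long id;

    public void setId(Long id) {
        this.id = id;
    }

    @Id
    // The generator attribute links this field to the @SequenceGenerator below
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "customerGen")
    // sequenceName names a database sequence; CUSTOMER_SEQ is a hypothetical name
    @SequenceGenerator(name = "customerGen", sequenceName = "CUSTOMER_SEQ", allocationSize = 1)
    public Long getId() {
        return id;
    }
}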
Sequences and tables are specified by using the @SequenceGenerator and @TableGenerator annotations, respectively (the preceding sketch shows the @SequenceGenerator case). Consult the Java EE 5 JavaDoc at http://java.sun.com/javaee/5/docs/api/ for details. For further knowledge on primary key generation strategies you can refer to EJB 3 Developer Guide by Michael Sikora, another book by Packt Publishing (http://www.packtpub.com/developer-guide-for-ejb3/book).

Adding Persistent Fields to Our Entity

At this point, our JPA entity contains a single field, its primary key. Admittedly, this is not very useful; we need to add a few fields to be persisted to the database.

package com.ensode.jpaweb;

import java.io.Serializable;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Customer implements Serializable {

    private static final long serialVersionUID = 1L;
    private Long id;
    private String firstName;
    private String lastName;

    public void setId(Long id) {
        this.id = id;
    }

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    public Long getId() {
        return id;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    //Additional methods omitted for brevity
}

In this modified version of our JPA entity, we added two fields to be persisted to the database: firstName will be used to store the user's first name, and lastName will be used to store the user's last name.

JPA entities need to follow standard JavaBean coding conventions. This means that they must have a public constructor that takes no arguments (one is automatically generated by the Java compiler if we don't specify any other constructors), and all fields must be private and accessed through getter and setter methods.

Automatically Generating Getters and Setters

In NetBeans, getter and setter methods can be generated automatically. Simply declare new fields as usual, then use the "insert code" keyboard shortcut (Alt+Insert by default), select Getter and Setter from the resulting pop-up window, click on the checkbox next to the class name to select all fields, and click on the Generate button.

Before we can use JPA to persist our entity's fields into our database, we need to write some additional code.

Creating a Data Access Object (DAO)

It is a good idea to follow the DAO design pattern whenever we write code that interacts with a database. The DAO design pattern keeps all database access functionality in DAO classes. This has the benefit of creating a clear separation of concerns, leaving other layers in our application, such as the user interface logic and the business logic, free of any persistence logic.

There is no special procedure in NetBeans to create a DAO. We simply follow the standard procedure to create a new class by selecting File | New, selecting Java as the category and Java Class as the file type, then entering a name and a package for the class. In our example, we will name our class CustomerDAO and place it in the com.ensode.jpaweb package. At this point, NetBeans creates a very simple class containing only the package and class declarations.

To take complete advantage of Java EE features such as dependency injection, we need to make our DAO a JSF managed bean.
This can be accomplished by simply opening faces-config.xml, clicking on its XML tab, then right-clicking on it and selecting JavaServer Faces | Add Managed Bean. We get the Add Managed Bean dialog, where we need to enter a name, the fully qualified name, and a scope for our managed bean (which, in our case, is our DAO), then click on the Add button. This action results in our DAO being declared as a managed bean in our application's faces-config.xml configuration file:

<managed-bean>
  <managed-bean-name>CustomerDAO</managed-bean-name>
  <managed-bean-class>
    com.ensode.jpaweb.CustomerDAO
  </managed-bean-class>
  <managed-bean-scope>session</managed-bean-scope>
</managed-bean>

We could at this point start writing our JPA code manually, but with NetBeans there is no need to do so; we can simply right-click on our code and select Persistence | Use Entity Manager, and most of the work is done for us automatically. Here is how our code looks after this trivial procedure:

package com.ensode.jpaweb;

import javax.annotation.Resource;
import javax.naming.Context;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@PersistenceContext(name = "persistence/LogicalName", unitName = "jpawebPU")
public class CustomerDAO {

    @Resource
    private javax.transaction.UserTransaction utx;

    protected void persist(Object object) {
        try {
            Context ctx = (Context) new javax.naming.InitialContext().lookup("java:comp/env");
            utx.begin();
            EntityManager em = (EntityManager) ctx.lookup("persistence/LogicalName");
            em.persist(object);
            utx.commit();
        } catch (Exception e) {
            java.util.logging.Logger.getLogger(getClass().getName()).log(
                    java.util.logging.Level.SEVERE, "exception caught", e);
            throw new RuntimeException(e);
        }
    }
}

Nearly all of this code was automatically generated by NetBeans. The main thing NetBeans does here is add a method that inserts a new row into the database, effectively persisting our entity's properties.

As we can see, NetBeans automatically generates all of the necessary import statements. Additionally, our new class is decorated with the @PersistenceContext annotation. This annotation allows us to declare that our class depends on an EntityManager (we'll discuss EntityManager in more detail shortly). The value of its name attribute is a logical name we can use when doing a JNDI lookup for our EntityManager; NetBeans uses persistence/LogicalName by default.

The Java Naming and Directory Interface (JNDI) is an API we can use to obtain resources, such as database connections and JMS queues, from a directory service.

The value of the unitName attribute of the @PersistenceContext annotation refers to the name we gave our application's persistence unit.

NetBeans also creates a new instance variable of type javax.transaction.UserTransaction. This variable is needed since all JPA code must be executed in a transaction. UserTransaction is part of the Java Transaction API (JTA), which allows us to write code that is transactional in nature. Notice that the UserTransaction instance variable is decorated with the @Resource annotation. This annotation is used for dependency injection: an instance of a class of type javax.transaction.UserTransaction will be instantiated automatically at runtime, without our having to do a JNDI lookup or explicitly instantiate the class. Dependency injection is a new feature of Java EE 5 not present in previous versions of J2EE, but one that was available and made popular in the Spring framework.
With standard J2EE code, it was necessary to write boilerplate JNDI lookup code very frequently in order to obtain resources. To alleviate this situation, Java EE 5 made dependency injection part of the standard.

The next thing we see is that NetBeans added a persist method that will persist a JPA entity, automatically inserting a new row containing our entity's fields into the database. This method takes an instance of java.lang.Object as its single parameter, so it can be used to persist any JPA entity (although in our example, we will use it to persist only instances of our Customer entity).

The first thing the generated method does is obtain an instance of javax.naming.InitialContext by doing a JNDI lookup on java:comp/env. This JNDI name is the root context for all Java EE 5 components. The method then initiates a transaction by invoking utx.begin(). Notice that since the value of the utx instance variable was injected via dependency injection (by simply decorating its declaration with the @Resource annotation), there is no need to initialize this variable.

Next, the method does a JNDI lookup to obtain an instance of javax.persistence.EntityManager. This class contains a number of methods to interact with the database. Notice that the JNDI name used to obtain an EntityManager matches the value of the name attribute of the @PersistenceContext annotation. Once an instance of EntityManager is obtained from the JNDI lookup, we persist our entity's properties by simply invoking its persist() method, passing the entity as a parameter. At this point, the data in our JPA entity is inserted into the database.

In order for our database insert to take effect, we must commit our transaction, which is done by invoking utx.commit(). It is always a good idea to look for exceptions when dealing with JPA code. The generated method does this, and if an exception is caught, it is logged and a RuntimeException is thrown. Throwing a RuntimeException has the effect of rolling back our transaction automatically, while letting the invoking code know that something went wrong in our method. The UserTransaction class also has a rollback() method that we can use to roll back our transaction without having to throw a RuntimeException.

At this point, we have all the code we need to persist our entity's properties in the database. Now we need to write some additional code for the user interface part of our application. NetBeans can generate a rudimentary JSF page that will help us with this task.
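To make the flow concrete, here is a minimal sketch of how another class might invoke the generated DAO. The CustomerController class, its property names, the sample field values, and the injection via a managed-property entry in faces-config.xml are assumptions for illustration; they are not part of the generated project. Note that calling the protected persist() method works here because both classes share the com.ensode.jpaweb package.

package com.ensode.jpaweb;

public class CustomerController {

    // In JSF 1.2, this reference would typically be injected through a
    // <managed-property> entry in faces-config.xml (an assumption here)
    private CustomerDAO customerDAO;

    public void setCustomerDAO(CustomerDAO customerDAO) {
        this.customerDAO = customerDAO;
    }

    public String saveCustomer() {
        Customer customer = new Customer();
        customer.setFirstName("James");
        customer.setLastName("Gosling");
        // Delegates to the generated persist() method, which runs in a JTA transaction
        customerDAO.persist(customer);
        return "success"; // JSF navigation outcome
    }
}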
BizTalk: The ESB Management Portal

Packt
19 Aug 2013
6 min read
(For more resources related to this topic, see here.)

Registering services in UDDI

Thanks to the ESB Toolkit, we can easily populate our organization's services registry in UDDI with the services that interact with the ESB, either because the ESB exposes them or because they can be consumed through it. Before we can register services in UDDI, we must first configure the registry settings.

Registry settings

The registry settings change how the UDDI registration functionality mentioned in the preceding section behaves:

- UDDI Server: This sets the URL of the UDDI server.
- Auto Publish: When enabled, any registry request will be automatically published. If it's disabled, the requests will require administrative approval.
- Anonymous: This setting indicates whether to use anonymous access to connect to the UDDI server or to use the UDDI Publisher Service account.
- Notification Enabled: This enables or disables the delivery of notifications when any registry activity occurs on the portal.
- SMTP Server: This is the address of the SMTP server that will send notification e-mail messages.
- Notification E-Mail: This is the e-mail address to which to send endpoint update notification e-mail messages.
- E-Mail From Address: This is the address that will show up as the sender in notification messages sent.
- E-Mail Subject: This is the text to display in the subject line of notification e-mail messages.
- E-Mail Body: This is the text for the body of notification e-mail messages.
- Contact Name: This setting is the name of the UDDI administrator to notify of endpoint update requests.
- Contact E-Mail: This setting is the e-mail address of the UDDI administrator for notifications of endpoint update requests.

The following screenshot shows all of the settings mentioned in the preceding list.

In the ESB Management Portal, we can see in the top menu an entry that takes us to the Registry functionality, shown in the following screenshot. On this view, we can directly register a service in UDDI. To do this, we first have to search for the endpoint that we want to publish. These can be endpoints of services that the ESB consumes through Send ports, or endpoints of services that the ESB exposes through receive locations.

As an example, we will publish one of the services exposed by the ESB through the GlobalBank.ESB sample application that comes with the ESB Toolkit. First, we will search on the New Registry Entry page for the endpoints in the GlobalBank.ESB application, as shown in the following screenshot.

Once we get the results, we will click on the Publish link of the DynamicResolutionReqResp_SOAP endpoint, which actually exposes the /ESB.NorthAmericanServices/CustomerOrder.asmx service. We will be presented with a screen where we can fill in further details about the service registry entry, such as the service provider under which we want to publish the service (we can even create a new service provider, which will be registered in UDDI as well).

After clicking on the Publish button at the bottom of the page, we will be directed back to the New Registry Entry screen, where we can filter again and see that our new registry entry is in Pending status, as it needs to be approved by an administrator.

We can access the Manage Pending Requests module through the corresponding submenu under the top-level Registry menu. There we can see if there are any new registry entries pending approval. By using the buttons to the left of each item, we can view the details of a request, edit it, and approve or delete it.
Once we approve the request, we will receive a confirmation message on the portal telling us that it was approved. Then, we can go to the UDDI portal and look for the service provider that we just created, where we will see that our service got registered. The following screenshot shows how the service provider of the service we just published appears in the UDDI portal. In the following screenshot we can see the actual service published, with its corresponding properties.

With these simple steps, we can easily build our own services registry in UDDI based on the services our organization already has, so they can be used by the ESB or any other systems to discover services and know how to consume them.

Understanding the Audit Log

The Audit Log is a small reporting feature that is meant to provide information about the status of messages that have been resubmitted to the ESB through the resubmission module. We can access this module through the Manage Audit Log menu. We will be presented with a list of the messages that were resubmitted, whether they were resubmitted successfully or not, and we can even check the actual message that was resubmitted, as the message could have been modified before being resubmitted.

Fault Settings

On the Fault Settings page we can specify:

- Audit Options: The types of events that we want to audit:
  - Audit Save: When a message associated with a fault is saved.
  - Audit Successful Resubmit: When a message is successfully resubmitted.
  - Audit Unsuccessful Resubmit: When the resubmission of a message fails.
- Alert Queue Options: Here we can enable or disable the queuing of the notifications generated when a fault message is published to the portal.
- Alert Email Options: Here we can enable and configure the service that actually sends e-mail notifications once fault messages are published to the portal. The three most important settings in this section are:
  - Email Server: The e-mail server that will actually be used to send the e-mails.
  - Email From Address: The address that will show up as the sender in the e-mails sent.
  - Email XSLT File Absolute Path: The XSLT transformation sheet that will be used to format the e-mails. The ESB Toolkit provides one, but we could customize it or create our own sheet according to our requirements.

Summary

In this article, we discussed the additional features of the ESB Management Portal. We learned about the registry settings, which are used for configuring UDDI and setting up e-mail notifications. We also learned how to configure fault settings and how to utilize the Audit Log features.

Resources for Article:

Further resources on this subject:

- Microsoft BizTalk Server 2010 Patterns: Operating BizTalk [Article]
- Setting up a BizTalk Server Environment [Article]
- Communicating from Dynamics CRM to BizTalk Server [Article]
Developing a Web Project for JasperReports

Packt
27 May 2013
11 min read
(For more resources related to this topic, see here.)

Setting the environment

First, we need to install the required software, Oracle Enterprise Pack for Eclipse 12c, from http://www.oracle.com/technetwork/middleware/ias/downloads/wls-main-097127.html using Installers with Oracle WebLogic Server, Oracle Coherence and Oracle Enterprise Pack for Eclipse, and download Oracle Database 11g Express Edition from http://www.oracle.com/technetwork/products/express-edition/overview/index.html.

Setting the environment requires the following tasks:

- Creating database tables
- Configuring a data source in WebLogic Server 12c
- Copying the JAR files required by JasperReports to the server classpath

First, create a database table, which shall be the data source for creating the reports, with the following SQL script. If a database table has already been created, that table may be used for this article too.

CREATE TABLE OE.Catalog(CatalogId INTEGER PRIMARY KEY, Journal VARCHAR(25), Publisher VARCHAR(25), Edition VARCHAR(25), Title Varchar(45), Author Varchar(25));
INSERT INTO OE.Catalog VALUES('1', 'Oracle Magazine', 'Oracle Publishing', 'Nov-Dec 2004', 'Database Resource Manager', 'Kimberly Floss');
INSERT INTO OE.Catalog VALUES('2', 'Oracle Magazine', 'Oracle Publishing', 'Nov-Dec 2004', 'From ADF UIX to JSF', 'Jonas Jacobi');
INSERT INTO OE.Catalog VALUES('3', 'Oracle Magazine', 'Oracle Publishing', 'March-April 2005', 'Starting with Oracle ADF', 'Steve Muench');

Next, configure a data source in WebLogic Server with the JNDI name jdbc/OracleDS.

Next, we need to download some JasperReports JAR files, including dependencies. Download the JAR/ZIP files listed below and extract the zip/tar.gz files to a directory, c:/jasperreports for example.

- jasperreports-4.7.0.jar: http://sourceforge.net/projects/jasperreports/files/jasperreports/JasperReports%204.7.0/
- itext-2.1.0: http://mirrors.ibiblio.org/pub/mirrors/maven2/com/lowagie/itext/2.1.0/itext-2.1.0.jar
- commons-beanutils-1.8.3-bin.zip: http://commons.apache.org/beanutils/download_beanutils.cgi
- commons-digester-2.1.jar: http://commons.apache.org/digester/download_digester.cgi
- commons-logging-1.1.1-bin: http://commons.apache.org/logging/download_logging.cgi
- poi-bin-3.8-20120326 zip or tar.gz: http://poi.apache.org/download.html#POI-3.8

All the JasperReports libraries are open source. We shall be using the following JAR files to create a JasperReports report:

- commons-beanutils-1.8.3.jar: JavaBeans utility classes
- commons-beanutils-bean-collections-1.8.3.jar: Collections framework extension classes
- commons-beanutils-core-1.8.3.jar: JavaBeans utility core classes
- commons-digester-2.1.jar: Classes for processing XML documents
- commons-logging-1.1.1.jar: Logging classes
- iText-2.1.0.jar: PDF library
- jasperreports-4.7.0.jar: JasperReports API
- poi-3.8-20120326.jar, poi-excelant-3.8-20120326.jar, poi-ooxml-3.8-20120326.jar, poi-ooxml-schemas-3.8-20120326.jar, poi-scratchpad-3.8-20120326.jar: Apache Jakarta POI classes and dependencies
Add the JAR files required by JasperReports to the CLASSPATH variable in the user_projects\domains\base_domain\bin\startWebLogic.bat script:

set SAVE_CLASSPATH=%CLASSPATH%;C:\jasperreports\commons-beanutils-1.8.3\commons-beanutils-1.8.3.jar;C:\jasperreports\commons-beanutils-1.8.3\commons-beanutils-bean-collections-1.8.3.jar;C:\jasperreports\commons-beanutils-1.8.3\commons-beanutils-core-1.8.3.jar;C:\jasperreports\commons-digester-2.1.jar;C:\jasperreports\commons-logging-1.1.1\commons-logging-1.1.1.jar;C:\jasperreports\itext-2.1.0.jar;C:\jasperreports\jasperreports-4.7.0.jar;C:\jasperreports\poi-3.8\poi-3.8-20120326.jar;C:\jasperreports\poi-3.8\poi-scratchpad-3.8-20120326.jar;C:\jasperreports\poi-3.8\poi-ooxml-3.8-20120326.jar;C:\jasperreports\poi-3.8.jar;C:\jasperreports\poi-3.8\poi-excelant-3.8-20120326.jar;C:\jasperreports\poi-3.8\poi-ooxml-schemas-3.8-20120326.jar

Creating a Dynamic Web project in Eclipse

First, we need to create a web project for generating JasperReports reports:

1. Select File | New | Other. In the New wizard, select Web | Dynamic Web Project.
2. In the Dynamic Web Project configuration, specify a Project name (PDFExcelReports, for example) and select the Target Runtime as Oracle WebLogic Server 11g R1 (10.3.5). Click on Next.
3. Select the default Java settings, that is, the Default output folder build/classes, and then click on Next.
4. In WebModule, specify Context Root as PDFExcelReports and Content Directory as WebContent. Click on Finish. A web project for PDFExcelReports gets generated.
5. Right-click on the project node in Project Explorer and select Project Properties. In Properties, select Project Facets. The Dynamic Web Module project facet should be selected by default, as shown in the following screenshot.

Next, create a User Library for the JasperReports JAR files and dependencies:

1. Select Java Build Path in Properties. Click on Add Library.
2. In Add Library, select User Library and click on Next.
3. In User Library, click on User Libraries. In User Libraries, click on New.
4. In New User Library, specify a User library name (JasperReports) and click on OK. A new user library gets added to User Libraries.
5. Click on Add JARs to add the JAR files to the library. The following screenshot shows the JasperReports JARs that are added.

Creating the configuration file

We require a JasperReports configuration file for generating reports. JasperReports XML configuration files are based on the jasperreport.dtd DTD, with a root element of jasperReport. We shall specify the JasperReports report design in an XML configuration file, which we have called config.xml. Create an XML file config.xml in the WebContent folder by selecting XML | XML File in the New wizard.

Some of the other elements (with commonly used subelements and attributes) in a JasperReports configuration XML file are listed in the following table:

- jasperReport: Root element. Subelements: reportFont, parameter, queryString, field, variable, group, title, pageHeader, columnHeader, detail, columnFooter, pageFooter. Attributes: name, columnCount, pageWidth, pageHeight, orientation, columnWidth, columnSpacing, leftMargin, rightMargin, topMargin, bottomMargin.
- reportFont: Report-level font definitions. Attributes: name, isDefault, fontName, size, isBold, isItalic, isUnderline, isStrikeThrough, pdfFontName, pdfEncoding, isPdfEmbedded.
- parameter: Object references used in generating a report. Referenced with $P{name}. Subelements: parameterDescription, defaultValueExpression. Attributes: name, class.
- queryString: Specifies the SQL query for retrieving data from a database. No subelements or attributes.
- field: Database table columns included in the report. Referenced with $F{name}. Subelement: fieldDescription. Attributes: name, class.
- variable: A variable used in the report XML file. Referenced with $V{name}. Subelements: variableExpression, initialValueExpression. Attributes: name, class.
- title: The report title. Subelement: band.
- pageHeader: The page header. Subelement: band.
- columnHeader: Specifies the different columns in the generated report. Subelement: band.
- detail: Specifies the column values. Subelement: band.
- columnFooter: The column footer. Subelement: band.

A report section is represented with the band element. A band element includes staticText and textElement elements. A staticText element is used to add static text to a report (for example, column headers), and a textElement element is used to add dynamically generated text to a report (for example, column values retrieved from a database table). We won't be using all or even most of these elements and attributes.

Specify the page width with the pageWidth attribute in the root element jasperReport. Specify the report fonts using the reportFont element; the reportFont elements define the Arial_Normal, Arial_Bold, and Arial_Italic fonts used in the report. Specify a ReportTitle parameter using the parameter element.

The queryString of the example JasperReports configuration XML file config.xml specifies the SQL query to retrieve the data for the report:

<queryString><![CDATA[SELECT CatalogId, Journal, Publisher, Edition, Title, Author FROM OE.Catalog]]></queryString>

The PDF report has the columns CatalogId, Journal, Publisher, Edition, Title, and Author. Specify a report band for the report title; the ReportTitle parameter is invoked using the $P{ReportTitle} expression. Specify a column header using the columnHeader element, and static text with the staticText element. Specify the report detail with the detail element. A column text field is defined using the textField element, and the dynamic value of a text field is defined using the textFieldExpression element:

<textField>
  <reportElement x="0" y="0" width="100" height="20"/>
  <textFieldExpression class="java.lang.String"><![CDATA[$F{CatalogId}]]></textFieldExpression>
</textField>

Specify a page footer with the pageFooter element. Report parameters are defined using $P{}, report fields using $F{}, and report variables using $V{}. The config.xml file is listed as follows:

<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE jasperReport PUBLIC "-//JasperReports//DTD Report Design//EN" "http://jasperreports.sourceforge.net/dtds/jasperreport.dtd">
<jasperReport name="PDFReport" pageWidth="975">

The following code snippet specifies the report fonts:

  <reportFont name="Arial_Normal" isDefault="true" fontName="Arial" size="15" isBold="false" isItalic="false" isUnderline="false" isStrikeThrough="false" pdfFontName="Helvetica" pdfEncoding="Cp1252" isPdfEmbedded="false"/>
  <reportFont name="Arial_Bold" isDefault="false" fontName="Arial" size="15" isBold="true" isItalic="false" isUnderline="false" isStrikeThrough="false" pdfFontName="Helvetica-Bold" pdfEncoding="Cp1252" isPdfEmbedded="false"/>
  <reportFont name="Arial_Italic" isDefault="false" fontName="Arial" size="12" isBold="false" isItalic="true" isUnderline="false" isStrikeThrough="false" pdfFontName="Helvetica-Oblique" pdfEncoding="Cp1252" isPdfEmbedded="false"/>

The following code snippet specifies the parameter for the report title, the SQL query to generate the report with, and the report fields. The resultset from the SQL query gets bound to the fields.

  <parameter name="ReportTitle" class="java.lang.String"/>
  <queryString><![CDATA[SELECT CatalogId, Journal, Publisher, Edition, Title, Author FROM OE.Catalog]]></queryString>
  <field name="CatalogId" class="java.lang.String"/>
  <field name="Journal" class="java.lang.String"/>
  <field name="Publisher" class="java.lang.String"/>
  <field name="Edition" class="java.lang.String"/>
  <field name="Title" class="java.lang.String"/>
  <field name="Author" class="java.lang.String"/>

Add the report title to the report as follows:

  <title>
    <band height="50">
      <textField>
        <reportElement x="350" y="0" width="200" height="50"/>
        <textFieldExpression class="java.lang.String">$P{ReportTitle}</textFieldExpression>
      </textField>
    </band>
  </title>
  <pageHeader>
    <band>
    </band>
  </pageHeader>

Add the column header as follows:

  <columnHeader>
    <band height="20">
      <staticText>
        <reportElement x="0" y="0" width="100" height="20"/>
        <textElement>
          <font isUnderline="false" reportFont="Arial_Bold"/>
        </textElement>
        <text><![CDATA[CATALOG ID]]></text>
      </staticText>
      <staticText>
        <reportElement x="125" y="0" width="100" height="20"/>
        <textElement>
          <font isUnderline="false" reportFont="Arial_Bold"/>
        </textElement>
        <text><![CDATA[JOURNAL]]></text>
      </staticText>
      <staticText>
        <reportElement x="250" y="0" width="150" height="20"/>
        <textElement>
          <font isUnderline="false" reportFont="Arial_Bold"/>
        </textElement>
        <text><![CDATA[PUBLISHER]]></text>
      </staticText>
      <staticText>
        <reportElement x="425" y="0" width="100" height="20"/>
        <textElement>
          <font isUnderline="false" reportFont="Arial_Bold"/>
        </textElement>
        <text><![CDATA[EDITION]]></text>
      </staticText>
      <staticText>
        <reportElement x="550" y="0" width="200" height="20"/>
        <textElement>
          <font isUnderline="false" reportFont="Arial_Bold"/>
        </textElement>
        <text><![CDATA[TITLE]]></text>
      </staticText>
      <staticText>
        <reportElement x="775" y="0" width="200" height="20"/>
        <textElement>
          <font isUnderline="false" reportFont="Arial_Bold"/>
        </textElement>
        <text><![CDATA[AUTHOR]]></text>
      </staticText>
    </band>
  </columnHeader>

The following code snippet shows how to add the report detail, which consists of values retrieved by the SQL query from the Oracle database:

  <detail>
    <band height="20">
      <textField>
        <reportElement x="0" y="0" width="100" height="20"/>
        <textFieldExpression class="java.lang.String"><![CDATA[$F{CatalogId}]]></textFieldExpression>
      </textField>
      <textField pattern="0.00">
        <reportElement x="125" y="0" width="100" height="20"/>
        <textFieldExpression class="java.lang.String"><![CDATA[$F{Journal}]]></textFieldExpression>
      </textField>
      <textField pattern="0.00">
        <reportElement x="250" y="0" width="150" height="20"/>
        <textFieldExpression class="java.lang.String"><![CDATA[$F{Publisher}]]></textFieldExpression>
      </textField>
      <textField>
        <reportElement x="425" y="0" width="100" height="20"/>
        <textFieldExpression class="java.lang.String"><![CDATA[$F{Edition}]]></textFieldExpression>
      </textField>
      <textField pattern="0.00">
        <reportElement x="550" y="0" width="200" height="20"/>
        <textFieldExpression class="java.lang.String"><![CDATA[$F{Title}]]></textFieldExpression>
      </textField>
      <textField>
        <reportElement x="775" y="0" width="200" height="20"/>
        <textFieldExpression class="java.lang.String"><![CDATA[$F{Author}]]></textFieldExpression>
      </textField>
    </band>
  </detail>

Add the column and page footers, including the page number, as follows:

  <columnFooter>
    <band>
    </band>
  </columnFooter>
  <pageFooter>
    <band height="15">
      <staticText>
        <reportElement x="0" y="0" width="40" height="15"/>
        <textElement>
          <font isUnderline="false" reportFont="Arial_Italic"/>
        </textElement>
        <text><![CDATA[Page #]]></text>
      </staticText>
      <textField>
        <reportElement x="40" y="0" width="100" height="15"/>
        <textElement>
          <font isUnderline="false" reportFont="Arial_Italic"/>
        </textElement>
        <textFieldExpression class="java.lang.Integer"><![CDATA[$V{PAGE_NUMBER}]]></textFieldExpression>
      </textField>
    </band>
  </pageFooter>
  <summary>
    <band>
    </band>
  </summary>
</jasperReport>

We need to create a JAR file for the config.xml file and add the JAR file to the WebLogic Server domain's lib directory. Create the JAR file with the following command, run from the directory containing config.xml:

>jar cf config.jar config.xml

Add the config.jar file to the user_projects\domains\base_domain\lib directory, which is in the classpath of the server.
An Overview of the Tcl Shell

Packt
15 Feb 2011
10 min read
Tcl/Tk 8.5 Programming Cookbook

Over 100 great recipes to effectively learn Tcl/Tk 8.5:

- The quickest way to solve your problems with Tcl/Tk 8.5
- Understand the basics and fundamentals of the Tcl/Tk 8.5 programming language
- Learn graphical user interface development with the Tcl/Tk 8.5 widget set
- Get a thorough and detailed understanding of the concepts with a real-world address book application
- Each recipe is a carefully organized sequence of instructions to efficiently learn the features and capabilities of the Tcl/Tk 8.5 language

Introduction

So, you've installed Tcl, written some scripts, and now you're ready to get a deeper understanding of Tcl and all that it has to offer. So, why are we starting with the shell when it is the most basic tool in the Tcl toolbox?

When I started using Tcl, I needed to rapidly deliver a Graphical User Interface (GUI) to display video from IP-based network cameras. The solution had to run on Windows and Linux, and it could not be browser-based due to the end user's security concerns. The client needed it quickly, and our sales team had, as usual, committed to a delivery date without speaking to the developer in advance. So, with the requirement document in hand, I researched the open source tools available at the time, and Tcl/Tk was the only language that met the challenge.

The original solution quickly evolved into a full-featured IP video security system with the ability to record and display historic video as well as to attach to live video feeds from the cameras. Next, search capabilities were added to review the stored video, along with a method to navigate to specific dates and times. The final version included configuring advanced recording settings such as resolution, color levels, frame rate, and variable-speed playback. All was accomplished with Tcl.

Due to the time constraints, I was not able to get a full appreciation of the capabilities of the shell. I saw it as a basic tool to interact with the interpreter, to run commands and access the file system. When I had the time, I returned to the shell and realized just how valuable a tool it is, and how many capabilities I had failed to make use of.

When used to its fullest, the shell provides much more than an interface to the Tcl interpreter, especially in the early stages of the development process. Need to isolate and test a procedure in a program? Need a quick debugging tool? Need real-time notification of the values stored in a variable? The Tcl shell is the place to go.

Since then, I have learned countless uses for the shell that would not only have sped up the development process, but also saved me several headaches in debugging the GUI and video collection. I relied on numerous dialog boxes to pop up values, or turned to writing debugging information to error logs. While this was an excellent way to get what I needed, I could have minimized the coding overhead by simply relying on the shell to display the desired information in the early stages. While dialog windows and error logs are irreplaceable, I now add quick debugging by using the commands the shell has to offer. If something isn't proceeding as expected, I drop in a command to write to standard out and voila! I have my answer. The shell continues to provide me with a reliable method to isolate issues with a minimum investment of time.

The Tcl shell

The Tcl shell (tclsh) provides an interface to the Tcl interpreter that accepts commands from both standard input and text files.
Much like the Windows command line or a Linux terminal, the Tcl shell allows a developer to rapidly invoke a command and observe the return value or error messages in standard output. The shell differs based on the operating system in use: on Unix/Linux systems, it runs in the standard terminal console, while on a Windows system the shell is launched separately via an executable.

If invoked with no arguments, the shell interface runs interactively, accepting commands from the native command line. The input line is demarked with a percent sign (%), with the prompt located at the start position. If the shell is invoked from the command line (Windows DOS or a Unix/Linux terminal) and arguments are passed, the interpreter will accept the first as the filename of a script to be read; any additional arguments are processed as variables. The shell will run until the exit command is invoked or until it has reached the end of the text file.

When invoked with arguments, the shell sets several Tcl variables that may be accessed within your program, much like in the C family of languages. These variables are:

- argc: Contains the number of arguments passed in, with the exception of the script file name. A value of 0 is returned if no arguments were passed in.
- argv: Contains a Tcl list with elements detailing the arguments passed in. An empty string is returned if no arguments were provided.
- argv0: Contains the filename (if specified) or the name used to invoke the Tcl shell.
- tcl_interactive: Contains a 1 if tclsh is running in interactive mode, otherwise a 0.
- env: Maintained automatically as an array in Tcl; created at startup to hold the environment variables on your system.

Writing to the Tcl console

The following recipe illustrates a basic command invocation. In this example, we will use the puts command to output a "Hello World" message to the console.

Getting ready

To complete the following example, launch your Tcl shell as appropriate for your operating platform. For example, on Windows, you would launch the executable contained in the bin directory of the Tcl installation location, while on a Unix/Linux installation you would enter tclsh at the command line, provided this is the executable name for your particular system. To check the name, locate the executable in the bin directory of your installation.

How to do it…

Enter the following command:

% puts "Hello World"
Hello World

How it works…

As you can see, the puts command writes what it was passed as an argument to standard out. Although this is a basic "Hello World" recipe, you can easily see how this 'simple' command can be used for rapid tracking of the location within a procedure where a problem may have arisen. Add in variable values and some error handling, and you can rapidly isolate issues and correct them without the additional effort of creating a dialog window or writing to an error log.

Mathematical expressions

The expr command is used to evaluate mathematical expressions. This command can address everything from simple addition and subtraction to advanced computations, such as sine and cosine, eliminating the need to make system calls to perform advanced mathematical functions. The expr command evaluates the input and arguments, and returns an integer or floating-point value.

A Tcl expression consists of a combination of operators, operands, and parenthetical containers (parentheses, braces, or brackets).
Mathematical expressions

The expr command is used to evaluate mathematical expressions. This command can address everything from simple addition and subtraction to advanced computations, such as sine and cosine, eliminating the need to make system calls to perform advanced mathematical functions. The expr command evaluates the input and arguments, and returns an integer or floating-point value. A Tcl expression consists of a combination of operators, operands, and parenthetical containers (parentheses, braces, or brackets). There are no strict typing requirements, and any white space within the expression is stripped by the command automatically. Tcl supports non-numeric and string comparisons as well as Tcl-specific operators.

Tcl expr operands

Tcl operands are treated as integers where feasible. They may be specified as decimal, binary (the first two characters must be 0b), hexadecimal (the first two characters must be 0x), or octal (the first two characters must be 0o). Care should be taken when passing integers with a leading 0, for example 08, as the interpreter would evaluate 08 as an illegal octal value. If no integer format applies, the command will evaluate the operand as a floating-point numeric value. For scientific notation, the character e (or E) is inserted as appropriate. If no numeric interpretation is feasible, the value will be evaluated as a string; in this case, the value must be enclosed within double quotes or braces. Please note that not all operands are accepted by all operators.

To avoid inadvertent variable substitution, it is always best to enclose the expression within braces; within the expression, parentheses control grouping. For example:

expr {1+1*3} returns a value of 4, because multiplication binds more tightly than addition.
expr {(1+1)*3} returns a value of 6.

Operands may be presented in any of the following forms:

Numeric: Integer and floating-point values may be passed directly to the command.
Boolean: All standard Boolean values (true, false, yes, no, 0, or 1) are supported.
Tcl variable: Any referenced variable. In Tcl, a variable is referenced using the $ notation: myVariable is a named variable, whereas $myVariable is the referenced variable.
Strings (in double quotes): Strings contained within double quotes may be passed with no need to include backslash, variable, or command substitution; these are handled automatically.
Strings (in braces): Strings contained within braces will be used with no substitution.
Tcl commands: Tcl commands must be enclosed within square brackets. The command will be executed and the mathematical function is performed on the return value.
Named functions: Functions, such as sine, cosine, and so on.

Tcl supports a subset of the C programming language's math operators and treats them in the same manner and with the same precedence. If a named function (such as sine) is encountered, expr automatically makes a call to the mathfunc namespace to minimize the syntax required to obtain the value. Tcl expr operators may be specified as noted in the following table, in descending order of precedence:

- + ~ ! : Unary minus, unary plus, bitwise NOT, and logical NOT. Cannot be applied to string operands; bitwise NOT may be applied only to integers.
** : Exponentiation. Numeric operands only.
* / % : Multiply, divide, and remainder. Numeric operands only.
+ - : Add and subtract. Numeric operands only.
<< >> : Left shift and right shift. Integer operands only. A right shift always propagates the sign bit.
< > <= >= : Boolean less, greater, less than or equal to, and greater than or equal to (a value of 1 is returned if the condition is true, otherwise 0). If applied to strings, string comparison is used.
== != : Boolean equal and not equal (a value of 1 is returned if the condition is true, otherwise 0).
eq ne : Boolean string equal and string not equal (a value of 1 is returned if the condition is true, otherwise 0). Any operand provided will be interpreted as a string.
in ni : List containment and negated list containment (a value of 1 is returned if the condition is true, otherwise 0). The first operand is treated as a string value, the second as a list.
& : Bitwise AND. Integers only.
^ : Bitwise exclusive OR. Integers only.
| : Bitwise OR. Integers only.
&& : Logical AND (a value of 1 is returned if both operands are non-zero, otherwise 0). Boolean and numeric (integer and floating-point) operands only.
|| : Logical OR (a value of 0 is returned if both operands are 0, otherwise 1). Boolean and numeric (integer and floating-point) operands only.
x?y:z : If-then-else (if x evaluates to non-zero, the result is the value of y; otherwise, the value of z is returned). The x operand must have a Boolean or a numeric value.
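The following interactive transcript is a short sketch of several of these operators in action. It assumes a Tcl 8.5 or later shell, where the **, in, and ni operators are available; the variable name x is illustrative only.

% set x 4
4
% expr {$x > 3 ? "big" : "small"}   ;# if-then-else operator
big
% expr {2**10}                      ;# exponentiation
1024
% expr {"apple" eq "apples"}        ;# string equality
0
% expr {3 in {1 2 3}}               ;# list containment
1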
Manipulating Images with JavaFX

Packt
25 Aug 2010
4 min read
(For more resources on Java, see here.) One of the most celebrated features of JavaFX is its inherent support for media playback. As of version 1.2, JavaFX has the ability to seamlessly load images in different formats, play audio, and play video in several formats using its built-in components. To achieve platform independence and performance, support for media playback in JavaFX is implemented as a two-tiered strategy:

Platform-independent APIs: the JavaFX SDK comes with a media API designed to provide a uniform set of interfaces to media functionality. Part of the platform-independence offering is a portable codec (On2's VP6), which will play on all platforms where JavaFX media playback is supported.
Platform-dependent implementations: to boost media playback performance, JavaFX also has the ability to use the native media engine supported by the underlying OS. For instance, playback on the Windows platform may be rendered by the Windows DirectShow media engine (see next recipe).

This two-part article shows you how to use the supported media rendering components, including ImageView, MediaPlayer, and MediaView. These components provide high-level APIs that let developers create applications with engaging and interactive media content.

Accessing media assets

You may have seen the use of the variable __DIR__ when accessing local resources, but may not fully know its purpose and how it works. So, what does that special variable store? In this recipe, we will explore how to use the __DIR__ special variable and other means of loading resources locally or remotely.

Getting ready

The concepts presented in this recipe are used widely throughout the JavaFX application framework when pointing to resources. In general, a class that points to a local or remote resource uses a string representation of a URL where the resource is stored. This is especially true for the ImageView and MediaPlayer classes discussed in this article.

How to do it...

This recipe shows you three ways of creating a URL to point to a local or remote resource used by a JavaFX application. The full listing of the code presented here can be found in ch05/source-code/src/UrlAccess.fx.

Using the __DIR__ pseudo-variable to access assets as packaged resources:

var resImage = "{__DIR__}image.png";

Using a direct reference to a local file:

var localImage = "file:/users/home/vladimir/javafx/ch005/source-code/src/image.png";

Using a URL to access a remote file:

var remoteImage = "http://www.flickr.com/3201/2905493571_a6db13ce1b_d.jpg";

How it works...

Loading media assets in JavaFX requires the use of a well-formatted URL that points to the location of the resource. For instance, both the Image and the Media classes (covered later in this article series) require a URL string to locate and load the resource to be rendered. The URL must be an absolute path that specifies the fully-realized scheme, device, and resource location. The previous code snippets show the following three ways of accessing resources in JavaFX:

__DIR__ pseudo-variable: often, you will see JavaFX's pseudo-variable __DIR__ used when specifying the location of a resource. It is a special variable that stores the String value of the directory where the executing class that referenced __DIR__ is located. This is valuable, especially when the resource is embedded in the application's JAR file. At runtime, __DIR__ stores the location of the resource in the JAR file, making it accessible for reading as a stream.
In the previous code, for example, the expression {__DIR__}image.png expands to jar:file:/users/home/vladimir/javafx/ch005/source-code/dist/source-code.jar!/image.png.

Direct reference to local resources: when the application is deployed as a desktop application, you can specify the location of your resources using URLs that provide the absolute path to where the resources are located. In our code, we use file:/users/home/vladimir/javafx/ch005/source-code/src/image.png as the absolute, fully qualified path to the image file image.png.

Direct reference to remote resources: finally, when loading media assets, you are able to specify the path of a fully qualified URL to a remote resource using HTTP. As long as no special permissions are required, classes such as Image and Media are able to pull down the resource with no problem. In our code, we use a URL to a Flickr image, http://www.flickr.com/3201/2905493571_a6db13ce1b_d.jpg.

There's more...

Besides __DIR__, JavaFX provides the __FILE__ pseudo-variable as well. As you may well guess, __FILE__ resolves to the fully qualified path of the JavaFX script file that contains the __FILE__ reference. When your application is compiled, this resolves to the script class that contains the reference.
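As a short sketch of these URL forms in use, the following JavaFX Script fragment loads an image packaged alongside the script using __DIR__ and displays it in an ImageView; the filename image.png and the stage title are illustrative assumptions, not part of the recipe's listing.

import javafx.stage.Stage;
import javafx.scene.Scene;
import javafx.scene.image.Image;
import javafx.scene.image.ImageView;

// Display an image that sits in the same directory (or JAR) as this script
Stage {
    title: "Image Demo"
    scene: Scene {
        content: [
            ImageView {
                image: Image {
                    url: "{__DIR__}image.png"
                }
            }
        ]
    }
}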
Introducing Salesforce Chatter

Packt
21 Nov 2013
5 min read
(For more resources related to this topic, see here.)

An overview of cloud computing

Cloud computing is a subscription-based service that provides us with computing resources and networked storage space. It allows us to access our information anytime and from anywhere; the only requirement is an Internet connection. With a cloud-based setup, there is no need to maintain a server in the future. We can think of cloud computing as similar to an e-mail account. Think of accounts such as Gmail, Hotmail, and so on: we just need a web browser and an Internet connection to access our information, not a separate piece of software installed on our computer the way a text editor is. There is no need to physically move storage and information; everything is up and running at the provider's end rather than at ours. It is the same with the cloud: we choose what has to be stored on and accessed from the cloud. We also don't have to pay an employee or contractor to maintain the server, since it is based on the cloud.

While traditional technologies and computer setups require us to be physically present at the same place as our information in order to access it, the cloud removes this barrier and allows us to access information from anywhere. Cloud computing helps businesses perform better by allowing employees to work from remote locations (anywhere on the globe). It provides mobile access to information and flexibility in the working of a business organization.

Depending on our needs, we can subscribe to the following types of cloud:

Public cloud: This cloud can be accessed by any subscriber who has an Internet connection and access to the cloud storage
Private cloud: This is accessed by a limited group of people or members of an organization
Community cloud: This is a cloud that is shared between two or more organizations that have similar requirements
Hybrid cloud: This is a combination of at least two clouds, where the component clouds are a mixture of public, private, or community

Depending on our needs, we have the ability to subscribe to a specific cloud provider. Cloud providers follow a pay-as-you-go model: if our technological needs change, we can purchase more capacity and continue working on the cloud. We do not have to worry about storage configuration and the management of servers, because everything is done by the cloud provider.

An overview of salesforce.com

Salesforce.com is the leader in pay-as-you-go enterprise cloud computing. It specializes in CRM software products for sales and customer service and supplies products for building and running business apps. Salesforce has recently developed a social networking product called Chatter for its business apps. With the concept of no software or hardware required, we are up and running quickly and see immediate positive effects on our business. It is a platform for creating and deploying apps for the social enterprise. It does not require us to buy or manage servers, software, or hardware, so we can focus fully on building apps that include mobile functionality, business processes, reporting, and search. All apps run on secure servers and proven services that scale, tune, and back up data automatically.

Collaboration in the past

Collaboration always plays a key role in improving business outcomes; it is a crucial necessity in any professional business. The nature of communication has changed over time. With changes in people's individual living situations as well as advancements in technology, how one communicates with the rest of the world has been altered. A century or two ago, people communicated using smoke signals, carrier pigeons, and drum beats, or spoke to one another face to face. As the world and its technology developed, we found that we could send longer messages over long distances with ease. This has caused a decline in face-to-face interaction and a substantial growth in communication via technology.

The old reliance on face-to-face interaction affected business processes, as there was a gap in collaboration between clients, companies, and employees situated in distant places; this reduced profit, ROI, and customer satisfaction. In the past, there was no faster way to communicate, so collaboration was a time-consuming task for business, and its effect was a loss of client retention. Imagine a situation where a sales representative is close to closing a deal, but the decision maker is out of the office. In the past, there was no fast, direct way to communicate, and this lack of efficient communication sometimes impacted the business negatively, in addition to the loss of potential opportunities.

Summary

In this article, we learned about cloud computing and salesforce.com, and discussed collaboration in the new era by comparing it with collaboration in the past. We also introduced Salesforce Chatter and its effect on ROI (Return on Investment).

Resources for Article:

Further resources on this subject:

Salesforce CRM Functions [Article]
Configuration in Salesforce CRM [Article]
Django 1.2 E-commerce: Data Integration [Article]
Integrating BizTalk Server and Microsoft Dynamics CRM

Packt
20 Jul 2011
7 min read
What is Microsoft Dynamics CRM?

Customer relationship management is a critical part of virtually every business. Dynamics CRM 2011 offers a solution for the three traditional areas of CRM: sales, marketing, and customer service.

For customers interested in managing a sales team, Dynamics CRM 2011 has a strong set of features. These include organizing teams into territories, defining price lists, managing opportunities, maintaining organization structures, tracking sales pipelines, enabling mobile access, and much more. If you are using Dynamics CRM 2011 for marketing efforts, you have the ability to import data from multiple sources, plan campaigns and set up target lists, create mass communications, track responses to campaigns, share leads with the sales team, and analyze the success of a marketing program. Dynamics CRM 2011 also serves as a powerful hub for customer service scenarios. Features include rich account management, case routing and management, a built-in knowledge base, scheduling of call center resources, scripted Q&A workflows called Dialogs, contract management, and more.

Besides these three areas, Microsoft pitches Dynamics CRM as a general-purpose application platform called xRM, where the "x" stands for any sort of relationship management. Dynamics CRM has a robust underlying framework for screen design, security roles, data auditing, entity definition, workflow, and mobility, among others. Instead of building these foundational aspects into every application, we can build our data-driven applications within Dynamics CRM.

Microsoft has made a big move into the cloud with this release of Dynamics CRM 2011. For the first time in company history, a product was released online (Dynamics CRM Online) prior to the on-premises software. The hosted version of the application runs a codebase identical to the on-premises version, meaning that code built to support a local instance will work just fine in the cloud. In addition to the big play in CRM hosting, Microsoft has also baked Windows Azure integration into Dynamics CRM 2011. Specifically, we now have the ability to configure a call-out to an Azure AppFabric Service Bus endpoint. To do this, the downstream service must implement a specific WCF interface, and within CRM, the Azure AppFabric plugin is configured to call that downstream service through the Azure AppFabric Service Bus relay service. For BizTalk Server to accommodate this pattern, we would want to build a proxy service that implements the required Dynamics CRM 2011 interface and forwards requests into a BizTalk Server endpoint. This article will not demonstrate this scenario, however, as the focus will be on integrating with an on-premises instance only.

Why Integrate Dynamics CRM and BizTalk Server?

There are numerous reasons to tie these two technologies together. Recall that BizTalk Server is an enterprise integration bus that connects disparate applications. There can be a natural inclination to hoard data within a particular application, but if we embrace real-time message exchange, we can actually have a more agile enterprise. Consider a scenario where a customer's full "contact history" resides in multiple systems. The Dynamics CRM 2011 contact center may serve only a specific audience, and other systems within the company hold additional details about the company's customers. One design choice could be to bulk load that information into Dynamics CRM 2011 at a scheduled interval.
However, it may be more effective to call out to a BizTalk Server service that aggregates data across systems and returns a composite view of a customer's history with the company.

In a similar manner, think about how information is shared between systems. A public website for a company may include a registration page where visitors sign up for more information and deeper access to content. That registration event is relevant to multiple systems within the company. We could send the initial registration message to BizTalk Server and then broadcast that message to the multiple systems that want to know about that customer. A marketing application may want to respond with a personalized email welcoming that person to the website. The sales team may decide to follow up with that person if they expressed interest in purchasing products. Our Dynamics CRM 2011 customer service center could choose to automatically add the registration event so that it is ready whenever that customer calls in. In this case, BizTalk Server acts as a central router of data and invokes the exposed Dynamics CRM services to create customers and transactions.

Communicating from BizTalk Server to Dynamics CRM

The way you send requests from BizTalk Server to Dynamics CRM 2011 has changed significantly in this release. In previous versions of Dynamics CRM, a BizTalk "send" adapter was available for communicating with the platform. Dynamics CRM 2011 no longer ships with an adapter, and developers are encouraged to use the WCF endpoints exposed by the product.

Dynamics CRM has both a WCF REST and a SOAP endpoint. The REST endpoint can only be used within the CRM application itself. For instance, you can build what is called a web resource that is embedded in a Dynamics CRM page. This resource could be a Microsoft Silverlight or HTML page that looks up data from three different Dynamics CRM entities and aggregates them on the page. Such a web resource can communicate with the Dynamics CRM REST API, which is friendly to JavaScript clients. Unfortunately, you cannot use the REST endpoint from outside of the Dynamics CRM environment; however, because BizTalk cannot communicate with REST services, this has little impact on the BizTalk integration story.

The Dynamics CRM SOAP API, unlike its ASMX web service predecessor, is static and operates with a generic Entity data structure. Instead of having a dynamic WSDL that exposes typed definitions for all of the standard and custom entities in the system, the Dynamics CRM 2011 SOAP API has a set of operations (for example, Create and Retrieve) that work with a single object type. The Entity object has a property identifying which concrete object it represents (for example, Account or Contract), and a name/value pair collection that represents the columns and values in the object it represents. For instance, an Entity may have a LogicalName set to "Account" and columns for "telephone1", "emailaddress", and "websiteurl".

In essence, this means that we have two choices when interacting with Dynamics CRM 2011 from BizTalk Server. Our first option is to directly consume and invoke the untyped SOAP API. Doing this involves creating maps from a canonical schema to the type-less Entity schema. In the case of a Retrieve operation, we may also have to map the type-less Entity message back to a structured message for further processing. Below, we will walk through an example of this. The second option involves creating a typed proxy service for BizTalk Server to invoke.
Dynamics CRM has a feature-rich Software Development Kit (SDK) that allows us to create typed objects and send them to the Dynamics CRM SOAP endpoint. This proxy service then exposes a typed interface to BizTalk that operates on a strongly typed schema. An upcoming exercise demonstrates this scenario.

Which choice is best? For simple solutions, it may be fine to interact directly with the Dynamics CRM 2011 SOAP API. If you are updating a couple of fields on an entity, or retrieving a pair of data values, the messiness of the untyped schema is a fair trade for the straightforward solution. However, if you are making large-scale changes to entities, or getting back an entire entity and publishing it to the BizTalk bus for more subscribers to receive, then working strictly with a typed proxy service is the best route. We will look at both scenarios below, and you can make that choice for yourself.

Integrating Directly with the Dynamics CRM 2011 SOAP API

In the following series of steps, we will look at how to consume the native Dynamics CRM SOAP interface in BizTalk Server. We will first look at how to query Dynamics CRM to return an Entity. After that, we will see the steps for creating a new Entity in Dynamics CRM.

Querying Dynamics CRM from BizTalk Server

In this scenario, BizTalk Server requests details about a specific Dynamics CRM "contact" record and sends the result of that inquiry to another system.
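To make the type-less Entity structure concrete before that walkthrough, the following C# fragment is a minimal sketch of how a .NET proxy component might create and then retrieve a contact through the SDK's late-bound Entity class against the organization service. The service URL, credential setup, and attribute values are placeholder assumptions for an on-premises deployment, not values from this article's exercise.

using System;
using System.ServiceModel.Description;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Client;
using Microsoft.Xrm.Sdk.Query;

class CrmProxySketch
{
    static void Main()
    {
        // Placeholder endpoint -- point this at your organization service
        var serviceUri = new Uri("http://crmserver/MyOrg/XRMServices/2011/Organization.svc");
        var credentials = new ClientCredentials();
        credentials.Windows.ClientCredential =
            System.Net.CredentialCache.DefaultNetworkCredentials;

        using (var proxy = new OrganizationServiceProxy(serviceUri, null, credentials, null))
        {
            IOrganizationService service = proxy;

            // Late-bound create: a LogicalName plus name/value pairs, as described above
            var contact = new Entity("contact");
            contact["firstname"] = "Jane";
            contact["lastname"] = "Doe";
            contact["telephone1"] = "555-0100";
            Guid id = service.Create(contact);

            // Late-bound retrieve: request only the columns we need
            Entity result = service.Retrieve("contact", id,
                new ColumnSet("firstname", "telephone1"));
            Console.WriteLine(result.GetAttributeValue<string>("telephone1"));
        }
    }
}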
Prerequisites

Packt
25 Mar 2015
6 min read
In this article by Deepak Vohra, author of the book Advanced Java® EE Development with WildFly®, you will see how to create a Java EE project and its prerequisites. (For more resources related to this topic, see here.)

The objective of the EJB 3.x specification is to simplify development by improving the EJB architecture. This simplification is achieved by providing metadata annotations to replace XML configuration, by providing default configuration values, by making entity and session beans POJOs (Plain Old Java Objects), and by making component and home interfaces redundant. The EJB 2.x entity beans are replaced with EJB 3.x entities. EJB 3.0 also introduced the Java Persistence API (JPA) for object-relational mapping of Java objects. WildFly 8.x supports the EJB 3.2 and JPA 2.1 specifications from Java EE 7. The sample application is based on Java EE 6 and EJB 3.1. The configuration of EJB 3.x with Java EE 7 is also discussed, and the sample application can be used or modified to run on a Java EE 7 project. We have used a Hibernate 4.3 persistence provider. Unlike some other persistence providers, the Hibernate persistence provider supports automatic generation of relational database tables, including the joining of tables.

In this article, we will create an EJB 3.x project. This article has the following topics:

Setting up the environment
Creating a WildFly runtime
Creating a Java EE project

Setting up the environment

We need to download and install the following software:

WildFly 8.1.0.Final: Download wildfly-8.1.0.Final.zip from http://wildfly.org/downloads/.
MySQL 5.6 Database-Community Edition: Download this edition from http://dev.mysql.com/downloads/mysql/. When installing MySQL, also install Connector/J.
Eclipse IDE for Java EE Developers: Download Eclipse Luna from https://www.eclipse.org/downloads/packages/release/Luna/SR1.
JBoss Tools (Luna) 4.2.0.Final: Install this as a plug-in to Eclipse from the Eclipse Marketplace (http://tools.jboss.org/downloads/installation.html). The latest version from Eclipse Marketplace is likely to be different from 4.2.0.
Apache Maven: Download version 3.0.5 or higher from http://maven.apache.org/download.cgi.
Java 7: Download Java 7 from http://www.oracle.com/technetwork/java/javase/downloads/index.html?ssSourceSiteId=ocomcn.

Set the environment variables JAVA_HOME, JBOSS_HOME, MAVEN_HOME, and MYSQL_HOME, and add %JAVA_HOME%\bin, %MAVEN_HOME%\bin, %JBOSS_HOME%\bin, and %MYSQL_HOME%\bin to the PATH environment variable. The environment settings used are C:\wildfly-8.1.0.Final for JBOSS_HOME, C:\Program Files\MySQL\MySQL Server 5.6.21 for MYSQL_HOME, C:\maven\apache-maven-3.0.5 for MAVEN_HOME, and C:\Program Files\Java\jdk1.7.0_51 for JAVA_HOME.

Run the add-user.bat script from the %JBOSS_HOME%\bin directory to create a user for the WildFly administration console. When prompted with What type of user do you wish to add?, select a) Management User. The other option is b) Application User. A Management User is used to log in to the Administration Console, and an Application User is used to access applications. Subsequently, specify the Username and Password for the new user. When prompted with the question Is this user going to be used for one AS process to connect to another AS..?, answer no.

When installing and configuring the MySQL database, specify a password for the root user (the password mysql is used in the sample application).
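As a sketch, the environment settings quoted above can be applied for the current session from a Windows command prompt as follows; adjust each path to match your own installation locations.

rem Session-level environment variables -- paths are the ones quoted above
set JAVA_HOME=C:\Program Files\Java\jdk1.7.0_51
set JBOSS_HOME=C:\wildfly-8.1.0.Final
set MAVEN_HOME=C:\maven\apache-maven-3.0.5
set MYSQL_HOME=C:\Program Files\MySQL\MySQL Server 5.6.21
set PATH=%PATH%;%JAVA_HOME%\bin;%MAVEN_HOME%\bin;%JBOSS_HOME%\bin;%MYSQL_HOME%\bin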
Creating a WildFly runtime

As the application runs on WildFly 8.1, we need to create a runtime environment for WildFly 8.1 in Eclipse:

Select Window | Preferences in Eclipse. In Preferences, select Server | Runtime Environment. Click on the Add button to add a new runtime environment, as shown in the following screenshot:

In New Server Runtime Environment, select JBoss Community | WildFly 8.x Runtime. Click on Next:

In WildFly Application Server 8.x, which appears below New Server Runtime Environment, specify a Name for the new runtime or keep the default name, WildFly 8.x Runtime. Select the Home Directory for the WildFly 8.x server using the Browse button; this is the directory where WildFly 8.1 is installed, C:\wildfly-8.1.0.Final by default. Select the Runtime JRE as JavaSE-1.7. If the JDK location is not in the runtime list, first add it from the JRE preferences screen in Eclipse. In Configuration base directory, keep the default setting, standalone. In Configuration file, keep the default setting, standalone.xml. Click on Finish:

A new server runtime environment for WildFly 8.x Runtime gets created, as shown in the following screenshot. Click on OK:

Creating a server runtime environment for WildFly 8.x is a prerequisite for creating a Java EE project in Eclipse. In the next topic, we will create a new Java EE project for an EJB 3.x application.

Creating a Java EE project

JBoss Tools provides project templates for different types of JBoss projects. In this topic, we will create a Java EE project for an EJB 3.x application:

Select File | New | Other in the Eclipse IDE. In the New wizard, select the JBoss Central | Java EE EAR Project wizard. Click on the Next button:

The Java EE EAR Project wizard gets started. By default, a Java EE 6 project is created. A Java EE EAR Project is a Maven project. The New Project Example window lists the requirements and runs a test for them: the JBoss AS runtime is required, and some plugins (including the JBoss Maven Tools plugin) are required for a Java EE project. Select Target Runtime as WildFly 8.x Runtime, which was created in the preceding topic. Then, check the Create a blank project checkbox. Click on the Next button:

Specify Project name as jboss-ejb3, Package as org.jboss.ejb3, and tick the Use default Workspace location box. Click on the Next button:

Specify Group Id as org.jboss.ejb3, Artifact Id as jboss-ejb3, Version as 1.0.0, and Package as org.jboss.ejb3.model. Click on Finish:

A Java EE project gets created, as shown in the following Project Explorer window. The jboss-ejb3 project consists of three subprojects: jboss-ejb3-ear, jboss-ejb3-ejb, and jboss-ejb3-web. Each subproject contains a pom.xml file for Maven. The jboss-ejb3-ejb subproject contains a META-INF/persistence.xml file within the src/main/resources source folder for the JPA database persistence configuration.

Summary

In this article, we learned how to create a Java EE project and its prerequisites.

Resources for Article:

Further resources on this subject:

Common performance issues [article]
Running our first web application [article]
Various subsystem configurations [article]