
How-To Tutorials - Application Development


SOA—Service Oriented Architecture

Packt
20 Oct 2009
17 min read
What is SOA?

SOA is the acronym for Service Oriented Architecture. As it has come to be known, SOA is an architectural design pattern in which several guiding principles determine the nature of the design. Basically, SOA states that every component of a system should be a service, and the system should be composed of several loosely-coupled services. A service here means a unit of a program that serves a business process. "Loosely-coupled" here means that these services should be independent of each other, so that changing one of them should not affect any other services.

SOA is not a specific technology, nor a specific language. It is just a blueprint, or a system design approach. It is an architecture model that aims to enhance the efficiency, agility, and productivity of an enterprise system. The key concepts of SOA are services, high interoperability, and loose coupling.

Several other architectures/technologies, such as RPC, DCOM, and CORBA, have existed for a long time and attempted to address the client/server communication problems. The difference between SOA and these other approaches is that SOA tries to address the problem from the client side, not from the server side. It tries to decouple the client side from the server side, instead of bundling them, to make the client-side application much easier to develop and maintain.

This is exactly what happened when object-oriented programming (OOP) came into play 20 years ago. Prior to object-oriented programming, most designs were procedure-oriented, meaning the developer had to control the process of an application. Without OOP, in order to finish a block of work, the developer had to be aware of the sequence that the code would follow. This sequence was then hard-coded into the program, and any change to this sequence would result in a code change. With OOP, an object simply supplied certain operations; it was up to the caller of the object to decide the sequence of those operations. The caller could mash up all of the operations, and finish the job in whatever order needed. There was a paradigm shift from the object side to the caller side.

This same paradigm shift is happening today. Without SOA, every application is a bundled, tightly coupled solution. The client-side application is often compiled and deployed along with the server-side applications, making it impossible to quickly change anything on the server side. DCOM and CORBA were on the right track to ease this problem by making the server-side components reside on remote machines. The client application could directly call a method on a remote object, without knowing that this object was actually far away, just like calling a method on a local object. However, the client-side applications continue to remain tightly coupled with these remote objects, and any change to the remote object will still result in a recompiling or redeploying of the client application.

Now, with SOA, the remote objects are truly treated as remote objects. To the client applications, they are no longer objects; they are services. The client application is unaware of how the service is implemented, or of the signature that should be used when interacting with those services. The client application interacts with these services by exchanging messages. What a client application knows now is only the interfaces, or protocols, of the services, such as the format of the messages to be passed in to the service, and the format of the expected returning messages from the service.
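As a rough, in-process illustration (not taken from the article, and deliberately simplified), the Java sketch below mirrors the decoupling described above: the caller is written only against a contract, while the implementation behind it can be changed or redeployed freely. In a real SOA the contract would be a message format or service description rather than a Java interface, and the names used here (LoanService, getBalance) are purely hypothetical.

// Hypothetical service contract: all the caller ever sees.
public interface LoanService {
    double getBalance(String loanNumber);
}

// One possible implementation; it can change without affecting
// any caller that was written against the LoanService contract.
class SimpleLoanService implements LoanService {
    public double getBalance(String loanNumber) {
        // In a real service this would be looked up in a data store.
        return 1250.00;
    }
}

// The caller depends only on the contract, never on the implementation.
class CustomerServiceApp {
    private final LoanService service;

    CustomerServiceApp(LoanService service) {
        this.service = service;
    }

    void printBalance(String loanNumber) {
        System.out.println("Balance for " + loanNumber + ": " + service.getBalance(loanNumber));
    }
}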
Historically, there have been many other architectural design approaches, technologies, and methodologies to integrate existing applications. EAI (Enterprise Application Integration) is just one of them. Often, organizations have many different applications, such as order management systems, accounts receivable systems, and customer relationship management systems. Each application has been designed and developed by different people using different tools and technologies at different times, and to serve different purposes. However, between these applications, there are no standard common ways to communicate. EAI is the process of linking these applications and others in order to realize financial and operational competitive advantages.

It may seem that SOA is just an extension of EAI. The similarity is that they are both designed to connect different pieces of applications in order to build an enterprise-level system for business. But fundamentally, they are quite different. EAI attempts to connect legacy applications without modifying any of the applications, while SOA is a fresh approach to solving the same problem.

Why SOA?

So why do we need SOA now? The answer is in one word—agility. Business requirements change frequently, as they always have. The IT department has to respond more quickly and cost-effectively to those changes. With a traditional architecture, all components are bundled together with each other. Thus, even a small change to one component will require a large number of other components to be recompiled and redeployed. Quality assurance (QA) effort is also huge for any code change. The processes of gathering requirements, designing, development, QA, and deployment are too long for businesses to wait for, and become actual bottlenecks.

To complicate matters further, some business processes are no longer static. Requirements change on an ad-hoc basis, and a business needs to be able to dynamically define its own processes whenever it wants. A business needs a system that is agile enough for its day-to-day work. This is very hard, if not impossible, with existing traditional infrastructure and systems.

This is where SOA comes into play. SOA's basic unit is a service. These services are building blocks that business users can use to define their own processes. Services are designed and implemented so that they can serve different purposes or processes, and not just specific ones. No matter what new processes a business needs to build or what existing processes a business needs to modify, the business users should always be able to use existing service blocks, in order to compete with others according to current market conditions. Also, if necessary, some new service blocks can be used.

These services are also designed and implemented so that they are loosely coupled, and independent of one another. A change to one service does not affect any other service. Also, the deployment of a new service does not affect any existing service. This greatly eases release management and makes agility possible.

For example, a GetBalance service can be designed to retrieve the balance for a loan. When a borrower calls in to query the status of a specific loan, this GetBalance service may be called by the application that is used by the customer service representatives. When a borrower makes a payment online, this service can also be called to get the balance of the loan, so that the borrower will know the balance of his or her loan after the payment.
Yet in the payment posting process, this service can still be used to calculate the accrued interest for a loan, by multiplying the balance with the interest rate. Even further, a new process can be created by business users to utilize this service if a loan balance needs to be retrieved. The GetBalance service is developed and deployed independently from all of the above processes. Actually, the service exists without even knowing who the client will be or even how many clients there will be. All of the client applications communicate with this service through its interface, and its interface will remain stable once it is in production. If we have to change the implementation of this service, for example by fixing a bug, or changing an algorithm inside a method of the service, all of the client applications can still work without any change.

When combined with the more mature Business Process Management (BPM) technology, SOA plays an even more important role in an organization's efforts to achieve agility. Business users can create and maintain processes within BPM, and through SOA they can plug a service into any of the processes. The front-end BPM application is loosely coupled to the back-end SOA system. This combination of BPM and SOA will give an organization much greater flexibility in order to achieve agility.

How do we implement SOA?

Now that we've established why SOA is needed by the business, the question becomes—how do we implement SOA?

To implement SOA in an organization, three key elements have to be evaluated—people, process, and technology. Firstly, the people in the organization must be ready to adopt SOA. Secondly, the organization must know the processes that the SOA approach will include, including the definition, scope, and priority. Finally, the organization should choose the right technology to implement it. Note that people and processes take precedence over technology in an SOA implementation, but they are out of the scope of this article. In this article, we will assume people and processes are all ready for an organization to adopt SOA.

Technically, there are many SOA approaches. To a certain degree, traditional technologies such as RPC, DCOM, and CORBA, or more modern technologies such as IBM WebSphere MQ, Java RMI, and .NET Remoting, could all be categorized as service-oriented, and can be used to implement SOA for an organization. However, all of these technologies have limitations, such as language or platform specifications, complexity of implementation, or the ability to support binary transports only. The most important shortcoming of these approaches is that the server-side applications are tightly coupled with the client-side applications, which is against the SOA principle. Today, with the emergence of web service technologies, SOA has become a reality. Thanks to the dramatic increase in network bandwidth, and given the maturity of web service standards such as WS-Security and WS-AtomicTransaction, an SOA back-end can now be implemented as a real system.

SOA from different users' perspectives

However, as we said earlier, SOA is not a technology, but only a style of architecture, or an approach to building software products. Different people view SOA in different ways. In fact, many companies now have their own definitions for SOA. Many companies claim they can offer an SOA solution, while they are really just trying to sell their products. The key point here is—SOA is not a solution. SOA alone can't solve any problem.
It has to be implemented with a specific approach to become a real solution. You can't buy an SOA solution. You may be able to buy some kinds of products to help you realize your own SOA, but this SOA should be customized to your specific environment, for your specific needs. Even within the same organization, different players will think about SOA in quite different ways. What follows are just some examples of how different players in an organization judge the success of an SOA initiative using different criteria. [Gartner, Twelve Common SOA Mistakes and How to Avoid Them, Publication Date: 26 October 2007, ID Number: G00152446]

To a programmer, SOA is a form of distributed computing in which the building blocks (services) may come from other applications or be offered to them. SOA increases the scope of a programmer's product and adds to his or her resources, while also closely resembling familiar modular software design principles.

To a software architect, SOA translates to the disappearance of fences between applications. Architects turn to the design of business functions rather than to self-contained and isolated applications. The software architect becomes interested in collaboration with a business analyst to get a clear picture of the business functionality and scope of the application. SOA turns software architects into integration architects and business experts.

For Chief Information Officers (CIOs), SOA is an investment in the future. Expensive in the short term, its long-term promises are lower costs and greater flexibility in meeting new business requirements. Re-use is the primary benefit anticipated as a means to reduce the cost and time of new application development.

For business analysts, SOA is the bridge between them and the IT organization. It carries the promise that IT designers will understand them better, because the services in SOA reflect the business functions in business process models.

For CEOs, SOA is expected to help IT become more responsive to business needs and facilitate competitive business change.

Complexities in SOA implementation

Although SOA will make it possible for business parties to achieve agility, SOA itself is technically not simple to implement. In some cases, it even makes software development more complex than ever, because with SOA you are building for unknown problems. On one hand, you have to make sure that the SOA blocks you are building are useful blocks. On the other, you need a framework within which you can assemble those blocks to perform business activities.

The technology issues associated with SOA are more challenging than vendors would like users to believe. Web services technology has turned SOA into an affordable proposition for most large organizations by providing a universally-accepted, standard foundation. However, web services play a technology role only for the SOA backplane, which is the software infrastructure that enables SOA-related interoperability and integration. The following figure shows the technical complexity of SOA. It has been taken from Gartner, Twelve Common SOA Mistakes and How to Avoid Them, Publication Date: 26 October 2007, ID Number: G00152446. As Gartner says, users must understand the complex world of middleware, and use point-to-point web service connections only for small-scale, experimental SOA projects. If the number of services deployed grows to more than 20 or 30, then use a middleware-based intermediary—the SOA backplane.
The SOA backplane could be an Enterprise Service Bus (ESB), a Message-Oriented Middleware (MOM), or an Object Request Broker (ORB). However, in this article, we will not cover it. We will build only point-to-point services using WCF.

Web services

There are many approaches to realizing SOA, but the most popular and practical one is using web services.

What is a web service? A web service is a software system designed to support interoperable machine-to-machine interaction over a network. A web service is typically hosted on a remote machine (provider), and called by a client application (consumer) over a network. After the provider of a web service publishes the service, the client can discover it and invoke it. The communications between a web service and a client application use XML messages. A web service is hosted within a web server and HTTP is used as the transport protocol between the server and the client applications. The following diagram shows the interaction of web services:

Web services were invented to solve the interoperability problem between applications. In the early 90s, along with the LAN/WAN/Internet development, it became a big problem to integrate different applications. An application might have been developed using C++ or Java, and run on a Unix box, a Windows PC, or even a mainframe computer. There was no easy way for it to communicate with other applications. It was the development of XML that made it possible to share data between applications across hardware boundaries and networks, or even over the Internet.

For example, a Windows application might need to display the price of a particular stock. With a web service, this application can make a request to a URL, passing an XML string such as <QuoteRequest><GetPrice Symbol='XYZ'/></QuoteRequest>. The requested URL is actually the Internet address of a web service, which, upon receiving the above quote request, gives a response, <QuoteResponse><QuotePrice Symbol='XYZ'>51.22</QuotePrice></QuoteResponse>. The Windows application then uses an XML parser to interpret the response package, and display the price on the screen.

The reason it is called a web service is that it is designed to be hosted in a web server, such as Microsoft Internet Information Server, and called over the Internet, typically via the HTTP or HTTPS protocols. This is to ensure that a web service can be called by any application, using any programming language, and under any operating system, as long as there is an active Internet connection, and of course, an open HTTP/HTTPS port, which is true for almost every computer on the Internet.

Each web service has a unique URL, and contains various methods. When calling a web service, you have to specify which method you want to call, and pass the required parameters to the web service method. Each web service method will also give a response package to tell the caller the execution results.

Besides new applications being developed specifically as web services, legacy applications can also be wrapped up and exposed as web services. So, an IBM mainframe accounting system might be able to provide external customers with a link to check the balance of an account.

Web service WSDL

In order to be called by other applications, each web service has to supply a description of itself, so that other applications will know how to call it. This description is provided in a language called WSDL. WSDL stands for Web Services Description Language.
It is an XML format that defines and describes the functionalities of the web service, including the method names, parameter names and types, and returning data types of the web service. For a Microsoft ASMX web service, you can get the WSDL by adding ?WSDL to the end of the web service URL, say http://localhost/MyService/MyService.asmx?WSDL.

Web service proxy

A client application calls a web service through a proxy. A web service proxy is a stub class between a web service and a client. It is normally auto-generated by a tool, such as the Visual Studio IDE, according to the WSDL of the web service. It can be re-used by any client application. The proxy contains stub methods mimicking all of the methods of the web service, so that a client application can call each method of the web service through these stub methods. It also contains other necessary information required by the client to call the web service, such as custom exceptions, custom data and class types, and so on. The address of the web service can be embedded within the proxy class, or it can be placed inside a configuration file.

A proxy class is always for a specific language. For each web service, there could be a proxy class for Java clients, a proxy class for C# clients, and yet another proxy class for COBOL clients. To call a web service from a client application, the proper proxy class first has to be added to the client project. Then, with an optional configuration file, the address of the web service can be defined. Within the client application, a web service object can be instantiated, and its methods can be called just as for any other normal method.

SOAP

There are many standards for web services. SOAP is one of them. SOAP was originally an acronym for Simple Object Access Protocol, and was designed by Microsoft. As this protocol became popular with the spread of web services, and its original meaning was misleading, the original acronym was dropped with version 1.2 of the standard. It is now merely a protocol, maintained by the W3C. SOAP, now, is a protocol for exchanging XML-based messages over computer networks. It is widely used by web services and has become their de-facto protocol. With SOAP, the client application can send a request in XML format to a server application, and the server application will send back a response in XML format. The transport for SOAP is normally HTTP/HTTPS, and the wide acceptance of HTTP is one of the reasons why SOAP is widely accepted today.
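Returning to the stock-quote example above, the following sketch (not from the article, which targets ASMX/WCF tooling) shows how a Java client might pull the price out of the QuoteResponse message with the standard DOM parser; in practice the XML string would arrive over HTTP from the service rather than being hard-coded.

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.xml.sax.InputSource;

public class QuoteClient {
    public static void main(String[] args) throws Exception {
        // In a real client this XML would come back over HTTP from the service.
        String response =
            "<QuoteResponse><QuotePrice Symbol='XYZ'>51.22</QuotePrice></QuoteResponse>";

        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(new InputSource(new StringReader(response)));

        // The client only knows the agreed message format, not the service internals.
        Element price = (Element) doc.getElementsByTagName("QuotePrice").item(0);
        System.out.println(price.getAttribute("Symbol") + " = " + price.getTextContent());
    }
}

Running it prints "XYZ = 51.22"; the client needs to know nothing beyond the agreed message format.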


Configuring JBoss Application Server 5

Packt
05 Jan 2010
7 min read
JBoss Web Server currently uses the Apache Tomcat 6.0 release, and it ships as a service archive (SAR) application in the deploy folder. The location of the embedded web server has changed at almost every new release of JBoss. The following table can be a useful reference if you are using different versions of JBoss:

JBoss release    Location of Tomcat
5.0.0 GA         deploy/jbossweb.sar
4.2.2 GA         deploy/jboss-web.deployer
4.0.5 GA         deploy/jbossweb-tomcat55.sar
3.2.X            deploy/jbossweb-tomcat50.sar

The main configuration file is server.xml which, by default, has the following minimal configuration:

<Server>
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <Listener className="org.apache.catalina.core.JasperListener" />
  <Service name="jboss.web">
    <Connector protocol="HTTP/1.1" port="8080" address="${jboss.bind.address}"
               connectionTimeout="20000" redirectPort="8443" />
    <Connector protocol="AJP/1.3" port="8009" address="${jboss.bind.address}"
               redirectPort="8443" />
    <Engine name="jboss.web" defaultHost="localhost">
      <Realm className="org.jboss.web.tomcat.security.JBossWebRealm"
             certificatePrincipal="org.jboss.security.auth.certs.SubjectDNMapping"
             allRolesMode="authOnly" />
      <Host name="localhost">
        <Valve className="org.jboss.web.tomcat.service.jca.CachedConnectionValve"
               cachedConnectionManagerObjectName="jboss.jca:service=CachedConnectionManager"
               transactionManagerObjectName="jboss:service=TransactionManager" />
      </Host>
    </Engine>
  </Service>
</Server>

Following is a short description of the key elements of the configuration:

Server: The Server is Tomcat itself, that is, an instance of the web application server, and is a top-level component.
Service: A Service groups one or more Connectors with the Engine that processes their requests.
Connector: It's the gateway to the Tomcat Engine. It ensures that requests are received from clients and are assigned to the Engine.
Engine: The Engine handles all requests. It examines the HTTP headers to determine the virtual host or context to which requests should be passed.
Host: One virtual host. Each virtual host is differentiated by a fully qualified hostname.
Valve: A component that will be inserted into the request processing pipeline for the associated Catalina container. Each Valve has distinct processing capabilities.
Realm: It contains a set of users and roles.

As you can see, all the elements are organized in a hierarchical structure where the Server element acts as the top-level container. The lowest elements in the configuration are Valve and Realm, which can be nested into Engine or Host elements to provide unique processing capabilities and role management.

Customizing connectors

Most of the time, when you want to customize your web container, you will have to change some properties of the connector.

<Connector protocol="HTTP/1.1" port="8080" address="${jboss.bind.address}"
           connectionTimeout="20000" redirectPort="8443" />

A complete list of the connector properties can be found on the Jakarta Tomcat site (http://tomcat.apache.org/). Here, we'll discuss the most useful connector properties:

port: The TCP port number on which this connector will create a server socket and await incoming connections. Your operating system will allow only one server application to listen to a particular port number on a particular IP address.
acceptCount: The maximum queue length for incoming connection requests, when all possible request processing threads are in use. Any requests received when the queue is full will be refused. The default value is 10.

connectionTimeout: The number of milliseconds the connector will wait, after accepting a connection, for the request URI line to be presented. The default value is 60000 (that is, 60 seconds).

address: For servers with more than one IP address, this attribute specifies which address will be used for listening on the specified port. By default, this port will be used on all IP addresses associated with the server.

enableLookups: Set to true if you want to perform DNS lookups in order to return the actual hostname of the remote client, and to false in order to skip the DNS lookup and return the IP address in string form instead (thereby improving performance). By default, DNS lookups are enabled.

maxHttpHeaderSize: The maximum size of the request and response HTTP header, specified in bytes. If not specified, this attribute is set to 4096 (4 KB).

maxPostSize: The maximum size in bytes of the POST, which will be handled by the container FORM URL parameter parsing. The limit can be disabled by setting this attribute to a value less than or equal to zero. If not specified, this attribute is set to 2097152 (2 megabytes).

maxThreads: The maximum number of request processing threads to be created by this connector, which therefore determines the maximum number of simultaneous requests that can be handled. If not specified, this attribute is set to 200.

The new Apache Portable Runtime connector

Apache Portable Runtime (APR) is a core Apache 2.x library designed to provide superior scalability, performance, and better integration with native server technologies. The mission of the Apache Portable Runtime (APR) project is to create and maintain software libraries that provide a predictable and consistent interface to underlying platform-specific implementations. The primary goal is to provide an API to which software developers may code and be assured of predictable, if not identical, behaviour regardless of the platform on which their software is built, relieving them of the need to code special-case conditions to work around or take advantage of platform-specific deficiencies or features.

The high-level performance of the new APR connector is made possible by the introduction of socket pollers for persistent connections (keepalive). This increases the scalability of the server, and by using sendfile system calls, static content is delivered faster and with lower CPU utilization.

Once you have set up the APR connector, you are allowed to use the following additional properties in your connector:

keepAliveTimeout: The number of milliseconds the APR connector will wait for another HTTP request before closing the connection. If not set, this attribute will use the default value set for the connectionTimeout attribute.

pollTime: The duration of a poll call in microseconds; by default it is 2000 (2 ms). If you decrease this value, the connector will issue more poll calls, thus reducing latency of the connections. Be aware that this will put slightly more load on the CPU as well.

pollerSize: The number of sockets that the poller responsible for keepalive connections can hold at a given time. The default value is 768, corresponding to 768 keepalive connections.

useSendfile: Enables using kernel sendfile for sending certain static files. The default value is true.
sendfileSize: The number of sockets that the poller thread dispatches for sending static files asynchronously. The default value is 1024.

If you want to consult the full documentation of APR, you can visit http://apr.apache.org/.

Installing the APR connector

In order to install the APR connector, you need to add some native libraries to your JBoss server. The native libraries can be found at http://www.jboss.org/jbossweb/downloads/jboss-native/. Download the version that is appropriate for your OS. Once you are ready, you simply need to unzip the content of the archive into your JBOSS_HOME directory. As an example, Unix users (such as HP users) would need to perform the following steps:

cd jboss-5.0.0.GA
tar xvfz jboss-native-2.0.6-hpux-parisc2-ssl.tar.gz

Now, restart JBoss and, from the console, verify that the connector is bound to Http11AprProtocol.

A word of caution! At the time of writing, the APR library still has some open issues that prevent it from loading correctly on some platforms, particularly 32-bit Windows. Please consult the JBoss Issue Tracker (https://jira.jboss.org/jira/secure/IssueNavigator.jspa?) to verify that there are no open issues for your platform.
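As a quick supplementary sanity check (this is not part of the original text), a few lines of Java can confirm that the embedded web server is answering on the HTTP connector port configured in server.xml; the host, port, and header names below are assumptions to adjust for your own installation.

import java.net.HttpURLConnection;
import java.net.URL;

public class ConnectorCheck {
    public static void main(String[] args) throws Exception {
        // Assumes JBoss is running locally with the default HTTP connector on port 8080.
        URL url = new URL("http://localhost:8080/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        System.out.println("HTTP status: " + conn.getResponseCode());
        // When present, the X-Powered-By/Server headers identify the embedded web server.
        System.out.println("X-Powered-By: " + conn.getHeaderField("X-Powered-By"));
        conn.disconnect();
    }
}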


Java in Oracle Database

Packt
07 Jul 2010
5 min read
"The views expressed in this article are the author's own and do not necessarily reflect the views of Oracle." (For more resources on Oracle, see here.) Introduction This article is better understood by people who have some familiarity with Oracle database, SQL, PL/SQL, and of course Java (including JDBC). Beginners can also understand the article to some extent, because it does not contain many specifics/details. The article can be useful to software developers, designers and architects working with Java. Oracle database provides a Java runtime in its database server process. Because of this, it is possible not only to store Java sources and Java classes in an Oracle database, but also to run the Java classes within the database server. Such Java classes will be 'executed' by the Java Virtual Machine embedded in the database server. The Java platform provided is J2SE-compliant, and in addition to the JVM, it includes all the Java system classes. So, conceptually, whatever Java code that can be run using the JREs (like Sun's JRE) on the operating system, can be run within the Oracle database too. Java stored procedure The key unit of the Java support inside the Oracle database is the 'Java Stored Procedure' (that may be referred to as JSP, as long as it is not confused with JavaServer Pages). A Java stored procedure is an executable unit stored inside the Oracle database, and whose implementation is in Java. It is similar to PL/SQL stored procedures and functions. Creation Let us see an example of how to create a simple Java stored procedure. We will create a Java stored procedure that adds two given numbers and returns the sum. The first step is to create a Java class that looks like the following: public class Math{ public static int add(int x, int y) { return x + y; }} This is a very simple Java class that just contains one static method that returns the sum of two given numbers. Let us put this code in a file called Math.java, and compile it (say, by doing 'javac Math.java') to get Math.class file. The next step is to 'load' Math.class into the Oracle database. That is, we have to put the class file located in some directory into the database, so that the class file gets stored in the database. There are a few ways to do this, and one of them is to use the command-line tool called loadjava provided by Oracle, as follows: loadjava -v -u scott/tiger Math.class Generally, in Oracle database, things are always stored in some 'schema' (also known as 'user'). Java classes are no exception. So, while loading a Java class file into the database, we need to specify the schema where the Java class should be stored. Here, we have given 'scott' (along with the password). There are a lot of other things that can be done using loadjava, but we will not go into them here. Next, we have to create a 'PL/SQL wrapper' as follows: SQL> connect scott/tigerConnected.SQL>SQL> create or replace function addition(a IN number, b IN number) return number 2 as language java name 'Math.add(int, int) return int'; 3 /Function created.SQL> We have created the PL/SQL wrapper called 'addition', for the Java method Math.add(). The syntax is same as the one used to create a PL/SQL function/procedure, but here we have specified that the implementation of the function is in the Java method Math.add(). And that's it. We've created a Java stored procedure! Basically, what we have done is, implemented our requirement in Java, and then exposed the Java implementation via PL/SQL. 
Using JDeveloper, an IDE from Oracle, all these steps (creating the Java source, compiling it, loading it into the database, and creating the PL/SQL wrapper) can be done easily from within the IDE.

One thing to remember is that we can create Java stored procedures for Java static methods only, not for instance methods. This is not a big disadvantage, and in fact makes sense, because even the main() method, which is the entry point for a Java program, is also 'static'. Here, since Math.add() is the entry point, it has to be 'static'. So, we can write as many static methods in our Java code as needed and make them entry points by creating the PL/SQL wrappers for them.

Invocation

We can call the Java stored procedure we have just created, just like any PL/SQL procedure/function is called, either from SQL or PL/SQL:

SQL> select addition(10, 20) from dual;

ADDITION(10,20)
---------------
             30

SQL>
SQL> declare
  2    s number;
  3  begin
  4    s := addition(10, 20);
  5    dbms_output.put_line('SUM = ' || s);
  6  end;
  7  /
SUM = 30

PL/SQL procedure successfully completed.

SQL>

Here, the 'select' query, as well as the PL/SQL block, invoked the PL/SQL function addition(), which in turn invoked the underlying Java method Math.add(). A main feature of the Java stored procedure is that the caller (like the 'select' query above) has no idea that the procedure is indeed implemented in Java. Thus, stored procedures implemented in PL/SQL and Java can be called alike, without needing to know the language in which the underlying implementation is written. So, in general, whatever Java code we have can be seamlessly integrated into the PL/SQL code via the PL/SQL wrappers. In other words, we now have more than one language option to implement a stored procedure: PL/SQL and Java. If we have any project where stored procedures are to be implemented, then Java is a good option, because today it is relatively easy to find a Java programmer.
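The Math example is deliberately self-contained. As a further sketch (not from the article), a static method running inside the Oracle JVM can also query data through the server-side JDBC driver, reached with the special URL jdbc:default:connection:. The LOANS table and its columns here are hypothetical, and a PL/SQL wrapper for such a method would be created the same way as shown above, using fully qualified Java type names (for example, java.math.BigDecimal) in the call specification.

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LoanUtil
{
    // Hypothetical example: returns the balance of a loan by querying a
    // LOANS table through the server-side JDBC connection available
    // inside the Oracle JVM.
    public static BigDecimal getBalance(int loanId) throws SQLException
    {
        // Inside the database, this is the caller's own session connection.
        Connection conn = DriverManager.getConnection("jdbc:default:connection:");
        PreparedStatement ps = conn.prepareStatement(
            "SELECT balance FROM loans WHERE loan_id = ?");
        try
        {
            ps.setInt(1, loanId);
            ResultSet rs = ps.executeQuery();
            return rs.next() ? rs.getBigDecimal(1) : null;
        }
        finally
        {
            ps.close(); // do not close conn: it is the session's connection
        }
    }
}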


Getting Started with JavaFX

Packt
05 Oct 2010
11 min read
JavaFX 1.2 Application Development Cookbook

Over 60 recipes to create rich Internet applications with many exciting features:
Easily develop feature-rich internet applications to interact with the user using various built-in components of JavaFX
Make your application visually appealing by using various JavaFX classes—ListView, Slider, ProgressBar—to display your content and enhance its look with the help of CSS styling
Enhance the look and feel of your application by embedding multimedia components such as images, audio, and video
Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

(For more resources on JavaFX, see here.)

Using javafxc to compile JavaFX code

While it certainly makes it easier to build JavaFX with the support of an IDE (see the NetBeans and Eclipse recipes), it is not a requirement. In some situations, having direct access to the SDK tools is preferred (automated builds, for instance). This recipe explores the build tools that are shipped with the JavaFX SDK and provides steps to show you how to manually compile your applications.

Getting ready

To use the SDK tools, you will need to download and install the JavaFX SDK. See the recipe Installing the JavaFX SDK for instructions on how to do it.

How to do it...

Open your favorite text/code editor and type the following code. The full code is available from ch01/source-code/src/hello/HelloJavaFX.fx.

package hello;

import javafx.stage.Stage;
import javafx.scene.Scene;
import javafx.scene.text.Text;
import javafx.scene.text.Font;

Stage {
    title: "Hello JavaFX"
    width: 250
    height: 80
    scene: Scene {
        content: [
            Text {
                font: Font { size: 16 }
                x: 10
                y: 30
                content: "Hello World!"
            }
        ]
    }
}

Save the file at location hello/HelloJavaFX.fx. To compile the file, invoke the JavaFX compiler from the command line, one directory up from where the file is stored (for this example, it would be executed from the src directory):

javafxc hello/HelloJavaFX.fx

If your compilation command works properly, you will not get any messages back from the compiler. You will, however, see the file HelloJavaFX.class created by the compiler in the hello directory. If, however, you get a "file not found" error during compilation, ensure that you have properly specified the path to the HelloJavaFX.fx file.

How it works...

The javafxc compiler works in similar ways to your regular Java compiler. It parses and compiles the JavaFX script into Java byte code with the .class extension. javafxc accepts numerous command-line arguments to control how and what sources get compiled, as shown in the following command:

javafxc [options] [sourcefiles] [@argfiles]

where options are your command-line options, followed by one or more source files, which can be followed by a list of argument files. Below are some of the more commonly used javafxc arguments:

classpath (-cp): The classpath option specifies the locations (separated by a path separator character) where the compiler can find class files and/or library jar files that are required for building the application.

javafxc -cp .:lib/mylibrary.jar MyClass.fx

sourcepath: In more complicated project structures, you can use this option to specify one or more locations where the compiler should search for source files and satisfy source dependencies.

javafxc -cp . -sourcepath .:src:src1:src2 MyClass.fx

-d: With this option, you can set the target directory where compiled class files are to be stored.
The compiler will create the package structure of the class under this directory and place the compiled JavaFX classes accordingly.

javafxc -cp . -d build MyClass.fx

The @argfiles option lets you specify a file which can contain javafxc command-line arguments. When the compiler is invoked and an @argfile is found, it uses the content of the file as an argument for javafxc. This can help shorten tediously long arguments into short, succinct commands. Assume file cmdargs has the following content:

-d build
-cp .:lib/api1.jar:lib/api2.jar:lib/api3.jar
-sourcepath core/src:components/src:tools/src

Then you can invoke javafxc as:

$> javafxc @cmdargs

See also

Installing the JavaFX SDK

Creating and using JavaFX classes

JavaFX is an object-oriented scripting language. As such, object types, represented as classes, are part of the basic constructs of the language. This section shows how to declare, initialize, and use JavaFX classes.

Getting ready

If you have used other scripting languages such as ActionScript, JavaScript, Python, or PHP, the concepts presented in this section should be familiar. If you have no idea what a class is or what it should be, just remember this: a class is code that represents a logical entity (tree, person, organization, and so on) that you can manipulate programmatically or while using your application. A class usually exposes properties and operations to access the state or behavior of the class.

How to do it...

Let's assume we are building an application for a dealership. You may have a class called Vehicle to represent cars and other types of vehicles processed in the application. The next code example creates the Vehicle class. Refer to ch01/source-code/src/javafx/Vehicle.fx for the full listing of the code presented here.

Open your favorite text editor (or fire up your favorite IDE). Type the following class declaration:

class Vehicle {
    var make;
    var model;
    var color;
    var year;

    function drive () : Void {
        println("You are driving a "
            "{year} {color} {make} {model}!")
    }
}

Once your class is properly declared, it is now ready to be used. To use the class, add the following (highlighted) code to the file:

class Vehicle {...}

var vehicle = Vehicle {
    year: 2010
    color: "Grey"
    make: "Mini"
    model: "Cooper"
};
vehicle.drive();

Save the file as Vehicle.fx. Now, from the command line, compile it with:

$> javafxc Vehicle.fx

If you are using an IDE, you can simply right-click on the file to run it. When the code executes, you should see:

$> You are driving a 2010 Grey Mini Cooper!

How it works...

The previous snippet shows how to declare a class in JavaFX. Albeit a simple class, it shows the basic structure of a JavaFX class. It has properties represented by variable declarations:

var make;
var model;
var color;
var year;

and it has a function:

function drive () : Void {
    println("You are driving a "
        "{year} {color} {make} {model}!")
}

which can update the properties and/or modify the behavior (for details on JavaFX functions, see the recipe Creating and Using JavaFX functions). In this example, when the function is invoked on a vehicle object, it causes the object to display information about the vehicle on the console prompt.

Object literal initialization

Another aspect of JavaFX class usage is object declaration. JavaFX supports object literal declaration to initialize a new instance of the class.
This format lets developers declaratively create a new instance of a class using the class's literal representation and pass property literal values directly into the initialization block for the object's named public properties.

var vehicle = Vehicle {
    year: 2010
    color: "Grey"
    make: "Mini"
    model: "Cooper"
};

The previous snippet declares the variable vehicle and assigns to it a new instance of the Vehicle class with year = 2010, color = Grey, make = Mini, and model = Cooper. The values that are passed in the literal block overwrite the default values of the named public properties.

There's more...

The JavaFX class definition mechanism does not support a constructor as in languages such as Java and C#. However, to allow developers to hook into the life cycle of the object's instance creation phase, JavaFX exposes a specialized code block called init{} to let developers provide custom code which is executed during object initialization.

Initialization block

Code in the init block is executed as one of the final steps of object creation, after properties declared in the object literal are initialized. Developers can use this facility to initialize values and resources that the new object will need. To illustrate how this works, the previous code snippet has been modified with an init block. You can get the full listing of the code at ch01/source-code/src/javafx/Vehicle2.fx.

class Vehicle {
    ...
    init {
        color = "Black";
    }
    function drive () : Void {
        println("You are driving a "
            "{year} {color} {make} {model}!");
    }
}

var vehicle = Vehicle {
    year: 2010
    make: "Mini"
    model: "Cooper"
};
vehicle.drive();

Notice that the object literal declaration of object vehicle no longer includes the color declaration. Nevertheless, the value of property color will be initialized to Black in the init{} code block during the object's initialization. When you run the application, it should display:

You are driving a 2010 Black Mini Cooper!

See also

Declaring and using variables in JavaFX

Creating and using JavaFX functions

Creating and using variables in JavaFX

JavaFX is a statically type-safe and type-strict scripting language. Therefore, variables (and anything which can be assigned to a variable, including functions and expressions) in JavaFX must be associated with a type, which indicates the expected behavior and representation of the variable. This section explores how to create, initialize, and update JavaFX variables.

Getting ready

Before we look at creating and using variables, it is beneficial to have an understanding of what is meant by data type and to be familiar with some common data types such as String, Integer, Float, and Boolean. If you have written code in other scripting languages such as ActionScript, Python, and Ruby, you will find the concepts in this recipe easy to understand.

How to do it...

JavaFX provides two ways of declaring variables: the def and the var keywords.

def X_STEP = 50;
println (X_STEP);
X_STEP++; // causes error

var x : Number;
x = 100;
...
x = x + X_LOC;

How it works...

In JavaFX, there are two ways of declaring a variable:

def: The def keyword is used to declare and assign constant values. Once a variable is declared with the def keyword and assigned a value, it is not allowed to be reassigned a new value.

var: The var keyword declares variables which are able to be updated at any point after their declaration.

There's more...

All variables must have an associated type. The type can be declared explicitly or be automatically coerced by the compiler.
Unlike Java (similar to ActionScript and Scala), the type of the variable follows the variable's name, separated by a colon.

var location:String;

Explicit type declaration

The following code specifies the type (class) that the variable will receive at runtime:

var location:String;
location = "New York";

The compiler also supports a short-hand notation that combines declaration and initialization.

var location:String = "New York";

Implicit coercion

In this format, the type is left out of the declaration. The compiler automatically converts the variable to the proper type based on the assignment.

var location;
location = "New York";

Variable location will automatically receive a type of String during compilation because the first assignment is a string literal. Or, the short-hand version:

var location = "New York";

JavaFX types

Similar to other languages, JavaFX supports a complete set of primitive types, as listed:

:String—this type represents a collection of characters contained within quotes (double or single, see the following). Unlike Java, the default value for String is empty ("").

"The quick brown fox jumps over the lazy dog" or 'The quick brown fox jumps over the lazy dog'

:Number—this is a numeric type that represents all numbers with decimal points. It is backed by the 64-bit double precision floating point Java type. The default value of Number is 0.0.

0.0
1234
100.0
1.24e12

:Integer—this is a numeric type that represents all integral numbers. It is backed by the 32-bit integer Java type. The default value of an Integer is 0.

-44
70
0xFF

:Boolean—as the name implies, this type represents the binary value of either true or false.

:Duration—this type represents a unit of time. You will encounter its use heavily in animation and other instances where temporal values are needed. The supported units include ms, s, m, and h for millisecond, second, minute, and hour respectively.

12ms
4s
12h
0.5m

:Void—this type indicates that an expression or a function returns no value. The literal representation of Void is null.

Variable scope

Variables can have three distinct scopes, which implicitly indicates the access level of the variable when it is being used.

Script level

Script variables are defined at any point within the JavaFX script file outside of any code block (including class definitions). When a script-level variable is declared, by default it is globally visible within the script and is not accessible from outside the script (without additional access modifiers).

Instance level

A variable that is defined at the top level of a class is referred to as an instance variable. An instance variable is visible within the class by the class members and can be accessed by creating an instance of the class.

Local level

The least visible scope is for local variables. They are declared within code blocks such as functions. They are visible only to members within the block.


Python Image Manipulation

Packt
12 Aug 2010
5 min read
(For more resources on Python, see here.)

So let's get on with it!

Installation prerequisites

Before we jump in to the main topic, it is necessary to install the following packages.

Python

In this article, we will use Python Version 2.6, or to be more specific, Version 2.6.4. It can be downloaded from the following location: http://python.org/download/releases/

Windows platform

For Windows, just download and install the platform-specific binary distribution of Python 2.6.4.

Other platforms

For other platforms, such as Linux, Python is probably already installed on your machine. If the installed version is not 2.6, build and install it from the source distribution. If you are using a package manager on a Linux system, search for Python 2.6. It is likely that you will find the Python distribution there. Then, for instance, Ubuntu users can install Python from the command prompt as:

$ sudo apt-get install python2.6

Note that for this, you must have administrative permission on the machine on which you are installing Python.

Python Imaging Library (PIL)

We will learn image-processing techniques by making extensive use of the Python Imaging Library (PIL) throughout this article. PIL is an open source library. You can download it from http://www.pythonware.com/products/pil/. Install PIL Version 1.1.6 or later.

Windows platform

For Windows users, installation is straightforward—use the binary distribution PIL 1.1.6 for Python 2.6.

Other platforms

For other platforms, install PIL 1.1.6 from the source. Carefully review the README file in the source distribution for the platform-specific instructions. The libraries listed in the following table are required to be installed before installing PIL from the source. For some platforms like Linux, the libraries provided in the OS should work fine. However, if those do not work, install a pre-built "libraryName-devel" version of the library. For example, for JPEG support, the name will contain "jpeg-devel-", and something similar for the others. This is generally applicable to rpm-based distributions. For Linux flavors like Ubuntu, you can use the following command in a shell window:

$ sudo apt-get install python-imaging

However, you should make sure that this installs Version 1.1.6 or later. Check the PIL documentation for further platform-specific instructions. For Mac OS X, see if you can use fink to install these libraries. See http://www.finkproject.org/ for more details. You can also check the website http://pythonmac.org or the Darwin ports website http://darwinports.com/ to see if a binary package installer is available. If such a pre-built version is not available for any library, install it from the source. The PIL prerequisites for installing PIL from source are listed in the following table:

libjpeg (JPEG support)
URL: http://www.ijg.org/files
Version: 7 or 6a or 6b
Installation options: (a) Pre-built version, for example jpeg-devel-7; check if you can do sudo apt-get install libjpeg (works on some flavors of Linux). (b) Source tarball, for example jpegsrc.v7.tar.gz.

zlib (PNG support)
URL: http://www.gzip.org/zlib/
Version: 1.2.3 or later
Installation options: (a) Pre-built version, for example zlib-devel-1.2.3.. (b) Install from the source.

freetype2 (OpenType/TrueType support)
URL: http://www.freetype.org
Version: 2.1.3 or later
Installation options: (a) Pre-built version, for example freetype2-devel-2.1.3.. (b) Install from the source.

PyQt4

This package provides Python bindings for Qt libraries. We will use PyQt4 to generate the GUI for the image-processing application that we will develop later in this article.
The GPL version is available at: http://www.riverbankcomputing.co.uk/software/pyqt/download.

Windows platform

Download and install the binary distribution pertaining to Python 2.6. For example, the executable file's name could be 'PyQt-Py2.6-gpl-4.6.2-2.exe'. Other than Python, it includes everything needed for GUI development using PyQt.

Other platforms

Before building PyQt, you must install the SIP Python binding generator. For further details, refer to the SIP homepage: http://www.riverbankcomputing.com/software/sip/. After installing SIP, download and install PyQt 4.6.2 or later, from the source tarball. For Linux/Unix source, the filename will start with PyQt-x11-gpl-.. and for Mac OS X, PyQt-mac-gpl-... Linux users should also check if a PyQt4 distribution is already available through the package manager.

Summary of installation prerequisites

Python
Download location: http://python.org/download/releases/
Version: 2.6.4 (or any 2.6.x)
Windows platform: Install using binary distribution.
Linux/Unix/OS X platforms: (a) Install from binary; also install additional developer packages (for example, with python-devel in the package name in the rpm systems), OR (b) build and install from the source tarball. (c) Mac users can also check websites such as http://darwinports.com/ or http://pythonmac.org/.

PIL
Download location: http://www.pythonware.com/products/pil/
Version: 1.1.6 or later
Windows platform: Install PIL 1.1.6 (binary) for Python 2.6.
Linux/Unix/OS X platforms: (a) Install prerequisites if needed; refer to Table #1 and the README file in the PIL source distribution. (b) Install PIL from source. (c) Mac users can also check websites like http://darwinports.com/ or http://pythonmac.org/.

PyQt4
Download location: http://www.riverbankcomputing.co.uk/software/pyqt/download
Version: 4.6.2 or later
Windows platform: Install using binary pertaining to Python 2.6.
Linux/Unix/OS X platforms: (a) First install SIP 4.9 or later. (b) Then install PyQt4.


Troux Enterprise Architecture: Managing the EA function

Packt
25 Aug 2010
9 min read
(For more resources on Troux, see here.)

Targeted charter

Organizations need a mission statement and charter. What should the mission and charter be for EA? The answer to this question depends on how the CIO views the function and where the function resides on the maturity model. The CIO could believe that EA should be focused on setting standards and identifying cost reduction opportunities. Conversely, the CIO could believe the function should focus on the evaluation of emerging technologies and innovation. These two extremes are polar opposites. Each would require a different staffing model and different success criteria.

The leader of EA must understand how the CIO views the function, as well as what the culture of the business will accept. Are IT and the business familiar with top-down direction, or does the company normally follow a consensus style of management? Is there a market leadership mentality, or is the company a fast follower regarding technical innovation? To run a successful EA operation, the head of Enterprise Architecture needs to understand these parameters and factor them into the overall direction of the department. The following diagram illustrates finding the correct position between the two extremes of being focused on standards or innovation.

Using standards to enforce policies on a culture that normally works through consensus will not work very well. Also, why focus resources on developing a business strategy or evaluating emerging technology if the company is totally focused on the next quarter's financial results? Sometimes, with the appropriate support from the CIO and other upper management, EA can become the change agent to encourage long-term planning. If a company has been too focused on tactics, EA can be the only department in IT that has the time and resources available to evaluate emerging solutions. The leader of the architecture function must understand the overall context in which the department resides. This understanding will help to develop the best structure for the department and hire people with the correct skill set.

Let us look at the organization structure of the EA function. How large should the department be, where should the department report, and what does the organization structure look like? In most cases, there are also other areas within IT that perform what might be considered EA department responsibilities. How should the structure account for "domain architects" or "application architects" who do not report to the head of Enterprise Architecture? As usual, the answer to these questions is "it depends". The architecture department can be sized appropriately with an understanding of the overall role Enterprise Architecture plays within the broader scope of IT.

If EA also runs the project management office (PMO) for IT, then the department is likely to be as large as fifty or more resources. In the case where the PMO resides outside of architecture, the architecture staffing level is normally between fifteen and thirty people. To be effective in a large enterprise (five hundred or more applications development personnel), the EA department should be no smaller than about fifteen people. The following diagram provides a sample organization chart that assumes a balance is required between being focused on technical governance and IT strategy. The sample organization chart shows the balance between resources applied to tactical work and strategic work. The left side of the chart shows the teams focused on governance.
Responsibilities include managing the ARB and maintaining standards and the architecture website. An architecture website is critical to maintaining awareness of the standards and best practices developed by the EA department.

The sample organizational model assumes that a team of Solution Architects is centralized. These are experienced resources who help project teams with major initiatives that span the enterprise. These resources act like internal consultants and, therefore, must possess a broad spectrum of skills. Depending on the overall philosophy of the CIO, the Domain Architects may also be centralized. These are people with a high degree of experience within specific major technical domains. The domains match to the overall architectural framework of the enterprise and include platforms, software (including middleware), network, data, and security. These resources could also be decentralized into various applications development or engineering groups within IT. If Domain Architects are decentralized, at least two resources are needed within EA to ensure that each area is coordinated with the others across technical disciplines.

If EA is responsible for evaluation of emerging technologies, then a team is needed to focus on execution of proof-of-architecture projects and productivity tool evaluations. A service can be created to manage various contracts and relationships with outside consulting agencies. These are typically companies focused on providing research, tracking IT advancements, and, in some cases, monitoring technology evolution within the company's industry.

There are leaders (management) in each functional area within the architecture organization. As the resources under each area are limited, a good practice is to assume the leadership positions are also working positions. Depending on the overall culture of the company, the leadership positions could be Director- or Manager-level positions. In either case, these leaders must work with senior leaders across IT, the business, and outside vendors. For this reason, to be effective, they must be people with senior titles granted the authority to make important recommendations and decisions on a daily basis.

In most companies, there is considerable debate about whether standards are set by the respective domain areas or by the EA department. The leader of EA, working with the CIO or CTO, must be flexible and able to adapt to the culture. If there is a need to centralize, then the architecture team must take steps to ensure there is buy-in for standards and ensure that governance processes are followed. This is done by building partnerships with the business and IT areas that control the allocation of funds to important projects. If the culture believes in decentralized standards management, then the head of architecture must ensure that there is one, and only one, official place where standards are documented and managed. The ARB, in this case, becomes the place where various opinions and viewpoints are worked out. However, it must be clear that the ARB is a function of Enterprise Architecture, and those that do not follow the collaborative review processes will not be able to move forward without obtaining a management consensus.

Staffing the function

Staffing the EA function is a challenge. To be effective, the group must have people who are respected for their technical knowledge and are able to communicate well using consensus and collaboration techniques.
Finding people with the right combination of skills is difficult. Enterprise Architects may require higher salaries as compared to other staff within IT. Winning the battle with the human resources department about salaries and reporting levels within the corporate hierarchy is possible through the use of industry benchmarks. Requesting that jobs be evaluated against similar roles in the same industry will help make the point about what type of people are needed within the architecture department.

People working in the EA department are different, and here's why. In baseball, professional scouts rate prospects according to a scale on five different dimensions: hitting, hitting for power, running speed, throwing strength, and fielding. Players that score high on all five are called "five tool players." In evaluating resources for EA, there are also five major dimensions to consider: program management, software architecture, data architecture, network architecture, and platform architecture. As the following figure shows, an experience scale can be established for each dimension, yielding a complete picture of a candidate. People with the highest level of attainment across all five dimensions would be "five tool players".

To be the most flexible in meeting the needs of the business and IT, the head of EA should strive for a good mix of resources covering the five dimensions. Resources who have achieved level 4 or level 5 across all of these would be the best candidates for the Solution Architect positions. These resources can do almost anything technical and are valuable across a wide array of enterprise-wide projects and initiatives. Resources who have mastered a particular dimension, such as data architecture or network architecture, are the best candidates for the Domain Architect positions. Software architecture is a broad dimension that includes software design, industry best practices, and middleware. Included within this area would be resources skilled in application development using various programming languages and design styles like object-oriented programming and SOA.

As already seen, the Business Architect role spans all IT domains. The best candidates for Business Architecture need not be proficient in the five disciplines of IT architecture, but they will do a better job if they have a good awareness of what IT Architects do. Business Architects may be centralized and report into the EA function, or they may be decentralized across IT or even reside within business units. They are typically people with deep knowledge of business functions, business processes, and applications. Business Architects must be good communicators and have strong analytical abilities. They should be able to work without a great deal of supervision, be good at planning work, and be trusted to deliver results on schedule.

Following are some job descriptions for these resources. They are provided as samples because each company will have its own unique set.

Vice President/Director of Enterprise Architecture

The Vice President/Director of Enterprise Architecture would normally have more than 10 or 15 years of experience, depending on the circumstances of the organization. He or she would have experience with, and probably has mastered, all five of the key architecture skill set dimensions. The best resource is one with superior communication skills who is able to effect change across large and diverse organizations.
The resource will also have experience within the industry in which the company competes. Leadership qualities are the most important aspect of this role, but having a technical background is also important. This person must be able to translate complex ideas, technology, and programs into language upper management can relate to. This person is a key influencer on technical decisions that affect the business on a long-term basis.

Part 1: Deploying Multiple Applications with Capistrano from a Single Project

Rodrigo Rosenfeld
01 Jul 2014
9 min read
Capistrano is a deployment tool written in Ruby that is able to deploy projects using any language or framework, through a set of recipes, which are also written in Ruby. Capistrano expects an application to have a single repository, and it is able to run arbitrary commands on the server through a non-interactive SSH session.

Capistrano was designed assuming that an application is completely described by a single repository with all code belonging to it. For example, your web application is written with Ruby on Rails and simply serving that application would be enough. But what if you decide to use a separate application for managing your users, in a separate language and framework? Or maybe some issue tracker application? You could set up a proxy server to deliver each request to the right application based upon the request path, for example. But the problem remains: how do you use Capistrano to manage more complex scenarios like this if it supports a single repository?

The typical approach is to integrate Capistrano into each of the component applications and then switch between those projects before deploying each component. Not only is this a lot of work when deploying all of these components, it may also lead to duplicated settings. For example, if your main application and the user management application both use the same database for a given environment, you'd have to duplicate this setting in each of the components.

For the Market Tracker product, used by LexisNexis clients (which we develop at e-Core for Matterhorn Transactions Inc.), we were looking for a better way to manage many component applications across lots of environments and servers. We wanted to manage all of them from a single repository, instead of adding Capistrano integration to each of our components' repositories and having to worry about keeping the recipes in sync between each of the maintained repository branches.

Motivation

The Market Tracker application we maintain consists of three different applications: the main one, another to export search results to Excel files, and an administrative interface to manage users and other entities. We host the application on three servers: two for the real thing and another back-up server. The first two are identical and allow us to have redundancy and zero-downtime deployments, except for a few cases where we change our database schema in ways that are incompatible with previous versions.

To add to the complexity of deploying our three composing applications to each of those servers, we also need to deploy them multiple times for different environments like production, certification, staging, and experimental. All of them run on the same server, on separate ports, and against separate databases, Solr, and Redis instances.

This is already complex enough to manage when you integrate Capistrano into each of your projects, but it gets worse. Sometimes you find bugs in production and have to release quick fixes, but you can't deploy the version in the master branch that has several other changes. At other times you find bugs in your Capistrano recipes themselves and fix them on the master. Or maybe you are changing your deploy settings rather than the application's code. When you have to deploy to production, depending on how your Capistrano recipes work, you may have to change to the production branch, backport any changes to the Capistrano recipes from the master, and finally deploy the latest fixes.
This happens if your recipe uses any project files as a template and they moved to another place in the master branch, for example.

We decided to try another approach, similar to what we do with our database migrations. Instead of integrating the database migrations into the main application (the default on Rails, Django, Grails, and similar web frameworks), we prefer to handle them as a separate project. In our case we use the active_record_migrations gem, which brings standalone support for ActiveRecord migrations (the same migrations that are bundled with Rails apps by default). Our database is shared between the administrative interface project and the main web application, and we feel it's better to be able to manage our database schema independently from the projects using the database. We add the migrations project to the other applications as a submodule so that we know what database schema is expected to work for a particular commit of the application, but that's all.

We wanted to apply the same principles to our Capistrano recipes. We wanted to manage all of our applications on different servers and environments from a single project containing the Capistrano recipes. We also wanted to store the common settings in a single place to avoid code duplication, which makes it hard to add new environments or update existing ones.

Grouping all applications' Capistrano recipes in a single project

It seems we were not the first to want all Capistrano recipes for all of our applications in a single project. We first tried a project called caphub. It worked fine initially, and its inheritance model would allow us to avoid our code duplication. Well, not entirely. The problem is that we needed some kind of multiple inheritance or mixins. We have some settings, like the token private key, that are unique across environments, like Certification and Production. But we also have other settings that are common within a server. For example, the database host name will be the same for all applications and environments inside our collocation facility, but it will be different in our backup server at Amazon EC2. CapHub didn't help us get rid of the duplication in such cases, but it certainly helped us find a simple solution to get what we wanted. Let's explore how Capistrano 3 allows us to easily manage such complex scenarios, which are more common than you might think.

Capistrano stages

Since Capistrano 3, multistage support is built in (there was a multistage extension for Capistrano 2). That means you can write cap stage_name task_name, for example cap production deploy. By default, cap install will generate two stages: production and staging. You can generate as many as you want, for example:

cap install STAGES=production,cert,staging,experimental,integrator

But how do we deploy each of those stages to our multiple servers, since the settings for each stage may be different across the servers? Also, how can we manage separate applications? Even though those settings are called "stages" by Capistrano, you can use them as you want. For example, suppose our servers are named m1, m2, and ec2 and the applications are named web, exporter, and admin. We can create settings like m1_staging_web, ec2_production_admin, and so on. This will result in lots of files (specifically 45 = 5 x 3 x 3 to support five environments, three applications, and three servers), but that's not a big deal if you consider that the settings files can be really small, as the examples later in this article will demonstrate by using mixins.
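Even before mixins come into play, here is a rough sketch of what one of those small stage files could look like; the host name, user, and paths are made up for illustration and will differ in any real project:

# config/deploy/m1_staging_web.rb -- hypothetical stage for the web app, staging environment, server m1
server 'm1.example.com', user: 'deploy', roles: %w{app web}
set :application, 'web'
set :deploy_to,   '/var/www/web_staging'
set :branch,      'staging'

With a file like this in place, that particular combination is deployed with cap m1_staging_web deploy.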
Usually people will start with staging and production only, and then gradually add other environments. Also, they usually start with one or two servers and keep growing as they feel the need. So supporting 45 combinations is not such a pain, since you don't write all of them at once. On the other hand, if you have enough resources to have a separate server for each of your environments, Capistrano will allow you to add multiple "server" declarations and assign roles to them, which can be quite useful if you're running a cluster of servers. In our case, to avoid downtime, we don't upgrade all servers in our cluster at once. We also don't have the budget to host 45 virtual machines, or even 15. So the small effort of generating 45 small settings files is paid back by the savings in hosting expenses.

Using mixins

My next post will create an example deployment project from scratch, providing detail for everything that has been discussed in this post. But first, let me introduce the concept of what we call a mixin in our project.

Capistrano 3 is simply a wrapper on top of Rake. Rake is a build tool written in Ruby, similar to "make". It has targets, and targets have prerequisites. This fits nicely with the way Capistrano works, where some deployment tasks will depend on other tasks. Instead of a Rakefile (Rake's Makefile), Capistrano uses a Capfile, but other than that it works almost the same way. The Domain Specific Language (DSL) in a Capfile is enhanced as you include Capistrano extensions to the Rake DSL. Here's a sample Capfile, generated by cap install when you install Capistrano:

# Load DSL and Setup Up Stages
require 'capistrano/setup'

# Includes default deployment tasks
require 'capistrano/deploy'

# Includes tasks from other gems included in your Gemfile
#
# For documentation on these, see for example:
#
# https://github.com/capistrano/rvm
# https://github.com/capistrano/rbenv
# https://github.com/capistrano/chruby
# https://github.com/capistrano/bundler
# https://github.com/capistrano/rails
#
# require 'capistrano/rvm'
# require 'capistrano/rbenv'
# require 'capistrano/chruby'
# require 'capistrano/bundler'
# require 'capistrano/rails/assets'
# require 'capistrano/rails/migrations'

# Loads custom tasks from `lib/capistrano/tasks' if you have any defined.
Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }

Just like a Rakefile, a Capfile is valid Ruby code, which you can easily extend using regular Ruby code. So, to support a mixin DSL, we simply need to extend the DSL, like this:

def mixin(path)
  load File.join('config', 'mixins', path + '.rb')
end

Pretty simple, right? We prefer to add this to a separate file, like lib/mixin.rb, and add this to the Capfile:

$:.unshift File.dirname(__FILE__)
require 'lib/mixin'

After that, calling mixin 'environments/staging' should load settings that are common to the staging environment from a file called config/mixins/environments/staging.rb in the root of the Capistrano-enabled project. This is the base for setting up the deployment project that we will create in the next post.
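As a small, hypothetical illustration of how this fits together (the file names and values below are invented), a couple of mixin files and a stage file composed from them could look like this:

# config/mixins/environments/staging.rb -- settings shared by every staging deployment
set :rails_env, 'staging'
set :branch,    'staging'

# config/mixins/servers/m1.rb -- settings shared by everything hosted on the m1 server
set :db_host, 'db.m1.internal'

# config/deploy/m1_staging_web.rb -- a stage built from the mixins above
mixin 'servers/m1'
mixin 'environments/staging'
server 'm1.example.com', user: 'deploy', roles: %w{app web}
set :deploy_to, '/var/www/web_staging'

Because the mixin helper simply loads plain Ruby files, any shared value only has to be changed in one place.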
He is the author of some gems including active_record_migrations, rails-web-console, the JS specs runner oojspec, sequel-devise, and the Linux X11 utility ktrayshortcut. Rodrigo was hired by e-Core (Porto Alegre - RS, Brazil) to work from home, building and maintaining software for Matterhorn Transactions Inc. with a team of great developers. Matterhorn's main product, the Market Tracker, is used by LexisNexis clients.


Integrating Spring Framework with Hibernate ORM Framework: Part 2

Packt
29 Dec 2009
5 min read
Configuring Hibernate in a Spring context Spring provides the LocalSessionFactoryBean class as a factory for a SessionFactory object. The LocalSessionFactoryBean object is configured as a bean inside the IoC container, with either a local JDBC DataSource or a shared DataSource from JNDI. The local JDBC DataSource can be configured in turn as an object of org.apache.commons.dbcp.BasicDataSource in the Spring context: <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"> <property name="driverClassName"> <value>org.hsqldb.jdbcDriver</value> </property> <property name="url"> <value>jdbc:hsqldb:hsql://localhost/hiberdb</value> </property> <property name="username"> <value>sa</value> </property> <property name="password"> <value></value> </property></bean> In this case, the org.apache.commons.dbcp.BasicDataSource (the Jakarta Commons Database Connection Pool) must be in the application classpath. Similarly, a shared DataSource can be configured as an object of org.springframework.jndi.JndiObjectFactoryBean. This is the recommended way, which is used when the connection pool is managed by the application server. Here is the way to configure it: <bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean"> <property name="jndiName"> <value>java:comp/env/jdbc/HiberDB</value> </property></bean> When the DataSource is configured, you can configure the LocalSessionFactoryBean instance upon the configured DataSource as follows: <bean id="sessionFactory"class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="dataSource"> <ref bean="dataSource"/> </property> ...</bean> Alternatively, you may set up the SessionFactory object as a server-side resource object in the Spring context. This object is linked in as a JNDI resource in the JEE environment to be shared with multiple applications. In this case, you need to use JndiObjectFactoryBean instead of LocalSessionFactoryBean: <bean id="sessionFactory" class="org.springframework.jndi.JndiObjectFactoryBean"> <property name="jndiName"> <value>java:comp/env/jdbc/hiberDBSessionFactory</value> </property></bean> JndiObjectFactoryBean is another factory bean for looking up any JNDI resource. When you use JndiObjectFactoryBean to obtain a preconfigured SessionFactory object, the SessionFactory object should already be registered as a JNDI resource. For this purpose, you may run a server-specific class which creates a SessionFactory object and registers it as a JNDI resource. LocalSessionFactoryBean uses three properties: datasource, mappingResources, and hibernateProperties. These properties are as follows: datasource refers to a JDBC DataSource object that is already defined as another bean inside the container. mappingResources specifies the Hibernate mapping files located in the application classpath. hibernateProperties determines the Hibernate configuration settings. 
We have the sessionFactory object configured as follows: <bean id="sessionFactory"class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="dataSource"> <ref bean="dataSource"/> </property> <property name="mappingResources"> <list> <value>com/packtpub/springhibernate/ch13/Student.hbm.xml</value> <value>com/packtpub/springhibernate/ch13/Teacher.hbm.xml</value> <value>com/packtpub/springhibernate/ch13/Course.hbm.xml</value> </list> </property> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">org.hibernate.dialect.HSQLDialect </prop> <prop key="hibernate.show_sql">true</prop> <prop key="hibernate.max_fetch_depth">2</prop> </props> </property></bean> The mappingResources property loads mapping definitions in the classpath. You may use mappingJarLocations, or mappingDirectoryLocations to load them from a JAR file, or from any directory of the file system, respectively. It is still possible to configure Hibernate with hibernate.cfg.xml, instead of configuring Hibernate as just shown. To do so, configure sessionFactory with the configLocation property, as follows: <bean id="sessionFactory"class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"> <property name="dataSource"> <ref bean="dataSource"/> </property> <property name="configLocation"> <value>/conf/hibernate.cfg.xml</value> </property></bean> Note that hibernate.cfg.xml specifies the Hibernate mapping definitions in addition to the other Hibernate properties. When the SessionFactory object is configured, you can configure DAO implementations as beans in the Spring context. These DAO beans are the objects which are looked up from the Spring IoC container and consumed by the business layer. Here is an example of DAO configuration: <bean id="studentDao" class="com.packtpub.springhibernate.ch13.HibernateStudentDao"> <property name="sessionFactory"> <ref local="sessionFactory"/> </property></bean> This is the DAO configuration for a DAO class that extends HibernateDaoSupport, or directly uses a SessionFactory property. When the DAO class has a HibernateTemplate property, configure the DAO instance as follows: <bean id="studentDao" class="com.packtpub.springhibernate.ch13.HibernateStudentDao"> <property name="hibernateTemplate"> <bean class="org.springframework.orm.hibernate3.HibernateTemplate"> <constructor-arg> <ref local="sessionFactory"/> </constructor-arg> </bean> </property></bean> According to the preceding declaration, the HibernateStudentDao class has a hibernateTemplate property that is configured via the IoC container, to be initialized through constructor injection and a SessionFactory instance as a constructor argument. Now, any client of the DAO implementation can look up the Spring context to obtain the DAO instance. The following code shows a simple class that creates a Spring application context, and then looks up the DAO object from the Spring IoC container: package com.packtpub.springhibernate.ch13; public class DaoClient { public static void main(String[] args) { ApplicationContext ctx = new ClassPathXmlApplicationContext("com/packtpub/springhibernate/ch13/applicationContext.xml"); StudentDao stdDao = (StudentDao)ctx.getBean("studentDao"); Student std = new Student(); //set std properties //save std stdDao.saveStudent(std); }}
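For reference, here is a minimal sketch of what the HibernateStudentDao class configured above might look like when it extends Spring's HibernateDaoSupport. The method set is an assumption based on the StudentDao interface used by the DaoClient example (only saveStudent() appears there), so adjust it to match your own interface:

package com.packtpub.springhibernate.ch13;

import org.springframework.orm.hibernate3.support.HibernateDaoSupport;

public class HibernateStudentDao extends HibernateDaoSupport implements StudentDao {

    // Persists a new or detached Student using the template built
    // from the injected SessionFactory
    public void saveStudent(Student std) {
        getHibernateTemplate().saveOrUpdate(std);
    }

    // Loads a Student by its identifier, returning null if none exists
    public Student getStudent(long id) {
        return (Student) getHibernateTemplate().get(Student.class, id);
    }
}

Because HibernateDaoSupport exposes setSessionFactory(), the sessionFactory property injection shown in the bean definition above is all the wiring this class needs.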


EJB 3 Security

Packt
23 Oct 2009
15 min read
Authentication and authorization in Java EE Container Security There are two aspects covered by Java EE container security: authentication and authorization. Authentication is the process of verifying that users are who they claim to be. Typically this is performed by the user providing credentials such as a password. Authorization, or access control, is the process of restricting operations to specific users or categories of users. The EJB specification provides two kinds of authorization: declarative and programmatic, as we shall see later in the article. The Java EE security model introduces a few concepts common to both authentication and authorization. A principal is an entity that we wish to authenticate. The format of a principal is application-specific but an example is a username. A role is a logical grouping of principals. For example, we can have administrator, manager, and employee roles. The scope over which a common security policy applies is known as a security domain, or realm. Authentication For authentication, every Java EE compliant application server provides the Java Authentication and Authorization Service (JAAS) API. JAAS supports any underlying security system. So we have a common API regardless of whether authentication is username/password verification against a database, iris or fingerprint recognition for example. The JAAS API is fairly low level and most application servers provide authentication mechanisms at a higher level of abstraction. These authentication mechanisms are application-server specific however. We will not cover JAAS any further here, but look at authentication as provided by the GlassFish application server. GlassFish Authentication There are three actors we need to define on the GlassFish application server for authentication purposes: users, groups, and realms. A user is an entity that we wish to authenticate. A user is synonymous with a principal. A group is a logical grouping of users and is not the same as a role. A group's scope is global to the application server. A role is a logical grouping of users whose scope is limited to a specific application. Of course for some applications we may decide that roles are identical to groups. For other applications we need some mechanism for mapping the roles onto groups. We shall see how this is done later. A realm, as we have seen, is the scope over which a common security policy applies. GlassFish provides three kinds of realms: file, certificate, and admin-realm. The file realm stores user, group, and realm credentials in a file named keyfile. This file is stored within the application server file system. A file realm is used by web clients using http or EJB application clients. The certificate realm stores a digital certificate and is used for authenticating web clients using https. The admin-realm is similar to the file realm and is used for storing administrator credentials. GlassFish comes pre-configured with a default file realm named file. We can add, edit, and delete users, groups, and realms using the GlassFish administrator console. We can also use the create-file-user option of the asadmin command line utility. 
To add a user named scott to a group named bankemployee, in the file realm, we would use the command: <target name="create-file-user"> <exec executable="${glassfish.home}/bin/asadmin" failonerror="true" vmlauncher="false"> <arg line="create-file-user --user admin --passwordfile userpassword --groups bankemployee scott"/> </exec> </target> --user specifies the GlassFish administrator username, admin in our example. --passwordfile specifies the name of the file containing password entries. In our example this file is userpassword. Users, other than GlassFish administrators, are identified by AS_ADMIN_USERPASSWORD. In our example the content of the userpassword file is: AS_ADMIN_USERPASSWORD=xyz This indicates that the user's password is xyz. --groups specifies the groups associated with this user (there may be more than one group). In our example there is just one group, named bankemployee. Multiple groups are colon delineated. For example if the user belongs to both the bankemployee and bankcustomer groups, we would specify: --groups bankemployee:bankcustomer The final entry is the operand which specifies the name of the user to be created. In our example this is scott. There is a corresponding asadmin delete-file-user option to remove a user from the file realm. Mapping Roles to Groups The Java EE specification specifies that there must be a mechanism for mapping local application specific roles to global roles on the application server. Local roles are used by an EJB for authorization purposes. The actual mapping mechanism is application server specific. As we have seen in the case of GlassFish, the global application server roles are called groups. In GlassFish, local roles are referred to simply as roles. Suppose we want to map an employee role to the bankemployee group. We would need to create a GlassFish specific deployment descriptor, sun-ejb-jar.xml, with the following element: <security-role-mapping> <role-name>employee</role-name> <group-name>bankemployee</group-name> </security-role-mapping> We also need to access the configuration-security screen in the administrator console. We then disable the Default Principal To Role Mapping flag. If the flag is enabled then the default is to map a group onto a role with the same name. So the bankemployee group will be mapped to the bankemployee role. We can leave the default values for the other properties on the configuration-security screen. Many of these features are for advanced use where third party security products can be plugged in or security properties customized. Consequently we will give only a brief description of these properties here. Security Manager: This refers to the JVM security manager which performs code-based security checks. If the security manager is disabled GlassFish will have better performance. However, even if the security manager is disabled, GlassFish still enforces standard Java EE authentication/authorization. Audit Logging: If this is enabled, GlassFish will provide an audit trail of all authentication and authorization decisions through audit modules. Audit modules provide information on incoming requests, outgoing responses and whether authorization was granted or denied. Audit logging applies for web-tier and ejb-tier authentication and authorization. A default audit module is provided but custom audit modules can also be created. Default Realm: This is the default realm used for authentication. Applications use this realm unless they specify a different realm in their deployment descriptor. 
The default value is file. Other possible values are admin-realm and certificate. We discussed GlassFish realms in the previous section. Default Principal: This is the user name used by GlassFish at run time if no principal is provided. Normally this is not required so the property can be left blank. Default Principal Password: This is the password of the default principal. JACC: This is the class name of a JACC (Java Authorization Contract for Containers) provider. This enables the GlassFish administrator to set up third-party plug in modules conforming to the JACC standard to perform authorization. Audit Modules: If we have created custom modules to perform audit logging, we would select from this list. Mapped Principal Class: This is only applicable when Default Principal to Role Mapping is enabled. The mapped principal class is used to customize the java.security.Principal implementation class used in the default principal to role mapping. If no value is entered, the com.sun.enterprise.deployment.Group implementation of java.security.Principal is used. Authenticating an EJB Application Client Suppose we want to invoke an EJB, BankServiceBean, from an application client. We also want the application client container to authenticate the client. There are a number of steps we first need to take which are application server specific. We will assume that all roles will have the same name as the corresponding application server groups. In the case of GlassFish we need to use the administrator console and enable Default Principal To Role Mapping. Next we need to define a group named bankemployee with one or more associated users. An EJB application client needs to use IOR (Interoperable Object Reference) authentication. The IOR protocol was originally created for CORBA (Common Object Request Broker Architecture) but all Java EE compliant containers support IOR. An EJB deployed on one Java EE compliant vendor may be invoked by a client deployed on another Java EE compliant vendor. Security interoperability between these vendors is achieved using the IOR protocol. In our case the client and target EJB both happen to be deployed on the same vendor, but we still use IOR for propagating security details from the application client container to the EJB container. IORs are configured in vendor specific XML files rather than the standard ejb-jar.xml file. In the case of GlassFish, this is done within the <ior-security-config> element within the sun-ejb-jar.xml deployment descriptor file. We also need to specify the invoked EJB, BankServiceBean, in the deployment descriptor. An example of the sun-ejb-jar.xml deployment descriptor is shown below: <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE sun-ejb-jar PUBLIC "-//Sun Microsystems, Inc.//DTD       Application Server 9.0 EJB 3.0//EN"       "http://www.sun.com/software/appserver/dtds/sun-ejb-jar_3_0-0.dtd"> <sun-ejb-jar>   <enterprise-beans>     <ejb>       <ejb-name>BankServiceBean</ejb-name>         <ior-security-config>           <as-context>              <auth-method>USERNAME_PASSWORD</auth-method>              <realm>default</realm>              <required>true</required>           </as-context>         </ior-security-config>     </ejb>   </enterprise-beans> </sun-ejb-jar> The as in <as-context> stands for the IOR authentication service. This specifies authentication mechanism details. The <auth-method> element specifies the authentication method. This is set to USERNAME_PASSWORD which is the only value for an application client. 
The <realm> element specifies the realm in which the client is authenticated. The <required> element specifies whether the above authentication method is required to be used for client authentication. When creating the corresponding EJB JAR file, the sun-ejb-jar.xml file should be included in the META-INF directory, as follows: <target name="package-ejb" depends="compile">     <jar jarfile="${build.dir}/BankService.jar">         <fileset dir="${build.dir}">              <include name="ejb30/session/**" />                           <include name="ejb30/entity/**" />               </fileset>               <metainf dir="${config.dir}">             <include name="persistence.xml" />                          <include name="sun-ejb-jar.xml" />         </metainf>     </jar> </target> As soon as we run the application client, GlassFish will prompt with a username and password form, as follows: If we reply with the username scott and password xyz the program will run. If we run the application with an invalid username or password we will get the following error message: javax.ejb.EJBException: nested exception is: java.rmi.AccessException: CORBA NO_PERMISSION 9998 ..... EJB Authorization Authorization, or access control, is the process of restricting operations to specific roles. In contrast with authentication, EJB authorization is completely application server independent. The EJB specification provides two kinds of authorization: declarative and programmatic. With declarative authorization all security checks are performed by the container. An EJB's security requirements are declared using annotations or deployment descriptors. With programmatic authorization security checks are hard-coded in the EJBs code using API calls. However, even with programmatic authorization the container is still responsible for authentication and for assigning roles to principals. Declarative Authorization As an example, consider the BankServiceBean stateless session bean with methods findCustomer(), addCustomer() and updateCustomer(): package ejb30.session; import javax.ejb.Stateless; import javax.persistence.EntityManager; import ejb30.entity.Customer; import javax.persistence.PersistenceContext; import javax.annotation.security.RolesAllowed; import javax.annotation.security.PermitAll; import java.util.*; @Stateless @RolesAllowed("bankemployee") public class BankServiceBean implements BankService { @PersistenceContext(unitName="BankService") private EntityManager em; private Customer cust; @PermitAll public Customer findCustomer(int custId) { return ((Customer) em.find(Customer.class, custId)); } public void addCustomer(int custId, String firstName, String lastName) { cust = new Customer(); cust.setId(custId); cust.setFirstName(firstName); cust.setLastName(lastName); em.persist(cust); } public void updateCustomer(Customer cust) { Customer mergedCust = em.merge(cust); } } We have prefixed the bean class with the annotation: @RolesAllowed("bankemployee") This specifies the roles allowed to access any of the bean's method. So only users belonging to the bankemployee role may access the addCustomer() and updateCustomer() methods. More than one role can be specified by means of a brace delineated list, as follows: @RolesAllowed({"bankemployee", "bankcustomer"}) We can also prefix a method with @RolesAllowed, in which case the method annotation will override the class annotation. The @PermitAll annotation allows unrestricted access to a method, overriding any class level @RolesAllowed annotation. 
As with EJB 3 in general, we can use deployment descriptors as alternatives to the @RolesAllowed and @PermitAll annotations. Denying Authorization Suppose we want to deny all users access to the BankServiceBean.updateCustomer() method. We can do this using the @DenyAll annotation: @DenyAll public void updateCustomer(Customer cust) { Customer mergedCust = em.merge(cust); } Of course if you have access to source code you could simply delete the method in question rather than using @DenyAll. However suppose you do not have access to the source code and have received the EJB from a third party. If you in turn do not want your clients accessing a given method then you would need to use the <exclude-list> element in the ejb-jar.xml deployment descriptor: <?xml version="1.0" encoding="UTF-8"?> <ejb-jar version="3.0"                         xsi_schemaLocation="http://java.sun.com/xml/ns/javaee             http://java.sun.com/xml/ns/javaee/ejb-jar_3_0.xsd"> <enterprise-beans> <session> <ejb-name>BankServiceBean</ejb-name> </session> </enterprise-beans> <assembly-descriptor> <exclude-list><method> <ejb-name>BankServiceBean</ejb-name> <method-name>updateCustomer</method-name></method></exclude-list> </assembly-descriptor> </ejb-jar> EJB Security Propagation Suppose a client with an associated role invokes, for example, EJB A. If EJB A then invokes, for example, EJB B then by default the client's role is propagated to EJB B. However, you can specify with the @RunAs annotation that all methods of an EJB execute under a specific role. For example, suppose the addCustomer() method in the BankServiceBean EJB invokes the addAuditMessage() method of the AuditServiceBean EJB: @Stateless @RolesAllowed("bankemployee") public class BankServiceBean implements BankService { private @EJB AuditService audit; ....      public void addCustomer(int custId, String firstName,                                                          String lastName) {              cust = new Customer();              cust.setId(custId);              cust.setFirstName(firstName);              cust.setLastName(lastName);              em.persist(cust);              audit.addAuditMessage(1, "customer add attempt");      }      ... } Note that only a client with an associated role of bankemployee can invoke addCustomer(). If we prefix the AuditServiceBean class declaration with @RunAs("bankauditor") then the container will run any method in AuditServiceBean as the bankauditor role, regardless of the role which invokes the method. Note that the @RunAs annotation is applied only at the class level, @RunAs cannot be applied at the method level. @Stateless @RunAs("bankauditor") public class AuditServiceBean implements AuditService { @PersistenceContext(unitName="BankService") private EntityManager em; @TransactionAttribute( TransactionAttributeType.REQUIRES_NEW) public void addAuditMessage (int auditId, String message) { Audit audit = new Audit(); audit.setId(auditId); audit.setMessage(message); em.persist(audit); } } Programmatic Authorization With programmatic authorization the bean rather than the container controls authorization. The javax.ejb.SessionContext object provides two methods which support programmatic authorization: getCallerPrincipal() and isCallerInRole(). The getCallerPrincipal() method returns a java.security.Principal object. This object represents the caller, or principal, invoking the EJB. We can then use the Principal.getName() method to obtain the name of the principal. 
We have done this in the addAccount() method of the BankServiceBean as follows: Principal cp = ctx.getCallerPrincipal(); System.out.println("getname:" + cp.getName()); The isCallerInRole() method checks whether the principal belongs to a given role. For example, the code fragment below checks if the principal belongs to the bankcustomer role. If the principal does not belong to the bankcustomer role, we only persist the account if the balance is less than 99. if (ctx.isCallerInRole("bankcustomer")) {     em.persist(ac); } else if (balance < 99) {            em.persist(ac);   } When using the isCallerInRole() method, we need to declare all the security role names used in the EJB code using the class level @DeclareRoles annotation: @DeclareRoles({"bankemployee", "bankcustomer"}) The code below shows the BankServiceBean EJB with all the programmatic authorization code described in this section: package ejb30.session; import javax.ejb.Stateless; import javax.persistence.EntityManager; import ejb30.entity.Account; import javax.persistence.PersistenceContext; import javax.annotation.security.RolesAllowed; import java.security.Principal; import javax.annotation.Resource; import javax.ejb.SessionContext; import javax.annotation.security.DeclareRoles; import java.util.*; @Stateless @DeclareRoles({"bankemployee", "bankcustomer"}) public class BankServiceBean implements BankService { @PersistenceContext(unitName="BankService") private EntityManager em; private Account ac; @Resource SessionContext ctx; @RolesAllowed({"bankemployee", "bankcustomer"}) public void addAccount(int accountId, double balance, String accountType) { ac = new Account(); ac.setId(accountId); ac.setBalance(balance); ac.setAccountType(accountType); Principal cp = ctx.getCallerPrincipal(); System.out.println("getname:" + cp.getName()); if (ctx.isCallerInRole("bankcustomer")) { em.persist(ac); } else if (balance < 99) { em.persist(ac); } } ..... } Where we have a choice declarative authorization is preferable to programmatic authorization. Declarative authorization avoids having to mix business code with security management code. We can change a bean's security policy by simply changing an annotation or deployment descriptor instead of modifying the logic of a business method. However, some security rules, such as the example above of only persisting an account within a balance limit, can only be handled by programmatic authorization. Declarative security is based only on the principal and the method being invoked, whereas programmatic security can take state into consideration. Because an EJB is typically invoked from the web-tier by a servlet, JSP page or JSF component, we will briefly mention Java EE web container security. The web-tier and EJB tier share the same security model. So the web-tier security model is based on the same concepts of principals, roles and realms.
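For completeness, the deployment descriptor alternative to the @RolesAllowed and @PermitAll annotations mentioned earlier could look roughly like the following ejb-jar.xml fragment; the role and method names simply mirror the annotation examples above, so treat it as an illustrative sketch rather than a drop-in file:

<assembly-descriptor>
  <security-role>
    <role-name>bankemployee</role-name>
  </security-role>
  <method-permission>
    <role-name>bankemployee</role-name>
    <method>
      <ejb-name>BankServiceBean</ejb-name>
      <method-name>addCustomer</method-name>
    </method>
  </method-permission>
  <method-permission>
    <unchecked/>
    <method>
      <ejb-name>BankServiceBean</ejb-name>
      <method-name>findCustomer</method-name>
    </method>
  </method-permission>
</assembly-descriptor>

Here the <unchecked/> element plays the same role as @PermitAll, while each <method-permission> entry with a <role-name> corresponds to @RolesAllowed on that method.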


Getting Ready with CoffeeScript

Packt
02 Apr 2015
20 min read
In this article by Mike Hatfield, author of the book, CoffeeScript Application Development Cookbook, we will see that JavaScript, though very successful, can be a difficult language to work with. JavaScript was designed by Brendan Eich in a mere 10 days in 1995 while working at Netscape. As a result, some might claim that JavaScript is not as well rounded as some other languages, a point well illustrated by Douglas Crockford in his book titled JavaScript: The Good Parts, O'Reilly Media. These pitfalls found in the JavaScript language led Jeremy Ashkenas to create CoffeeScript, a language that attempts to expose the good parts of JavaScript in a simple way. CoffeeScript compiles into JavaScript and helps us avoid the bad parts of JavaScript. (For more resources related to this topic, see here.) There are many reasons to use CoffeeScript as your development language of choice. Some of these reasons include: CoffeeScript helps protect us from the bad parts of JavaScript by creating function closures that isolate our code from the global namespace by reducing the curly braces and semicolon clutter and by helping tame JavaScript's notorious this keyword CoffeeScript helps us be more productive by providing features such as list comprehensions, classes with inheritance, and many others Properly written CoffeeScript also helps us write code that is more readable and can be more easily maintained As Jeremy Ashkenas says: "CoffeeScript is just JavaScript." We can use CoffeeScript when working with the large ecosystem of JavaScript libraries and frameworks on all aspects of our applications, including those listed in the following table: Part Some options User interfaces UI frameworks including jQuery, Backbone.js, AngularJS, and Kendo UI Databases Node.js drivers to access SQLite, Redis, MongoDB, and CouchDB Internal/external services Node.js with Node Package Manager (NPM) packages to create internal services and interfacing with external services Testing Unit and end-to-end testing with Jasmine, Qunit, integration testing with Zombie, and mocking with Persona Hosting Easy API and application hosting with Heroku and Windows Azure Tooling Create scripts to automate routine tasks and using Grunt Configuring your environment and tools One significant aspect to being a productive CoffeeScript developer is having a proper development environment. This environment typically consists of the following: Node.js and the NPM CoffeeScript Code editor Debugger In this recipe, we will look at installing and configuring the base components and tools necessary to develop CoffeeScript applications. Getting ready In this section, we will install the software necessary to develop applications with CoffeeScript. One of the appealing aspects of developing applications using CoffeeScript is that it is well supported on Mac, Windows, and Linux machines. To get started, you need only a PC and an Internet connection. How to do it... CoffeeScript runs on top of Node.js—the event-driven, non-blocking I/O platform built on Chrome's JavaScript runtime. If you do not have Node.js installed, you can download an installation package for your Mac OS X, Linux, and Windows machines from the start page of the Node.js website (http://nodejs.org/). To begin, install Node.js using an official prebuilt installer; it will also install the NPM. Next, we will use NPM to install CoffeeScript. 
Open a terminal or command window and enter the following command: npm install -g coffee-script This will install the necessary files needed to work with CoffeeScript, including the coffee command that provides an interactive Read Evaluate Print Loop (REPL)—a command to execute CoffeeScript files and a compiler to generate JavaScript. It is important to use the -g option when installing CoffeeScript, as this installs the CoffeeScript package as a global NPM module. This will add the necessary commands to our path. On some Windows machines, you might need to add the NPM binary directory to your path. You can do this by editing the environment variables and appending ;%APPDATA%npm to the end of the system's PATH variable. Configuring Sublime Text What you use to edit code can be a very personal choice, as you, like countless others, might use the tools dictated by your team or manager. Fortunately, most popular editing tools either support CoffeeScript out of the box or can be easily extended by installing add-ons, packages, or extensions. In this recipe, we will look at adding CoffeeScript support to Sublime Text and Visual Studio. Getting ready This section assumes that you have Sublime Text or Visual Studio installed. Sublime Text is a very popular text editor that is geared to working with code and projects. You can download a fully functional evaluation version from http://www.sublimetext.com. If you find it useful and decide to continue to use it, you will be encouraged to purchase a license, but there is currently no enforced time limit. How to do it... Sublime Text does not support CoffeeScript out of the box. Thankfully, a package manager exists for Sublime Text; this package manager provides access to hundreds of extension packages, including ones that provide helpful and productive tools to work with CoffeeScript. Sublime Text does not come with this package manager, but it can be easily added by following the instructions on the Package Control website at https://sublime.wbond.net/installation. With Package Control installed, you can easily install the CoffeeScript packages that are available using the Package Control option under the Preferences menu. Select the Install Package option. You can also access this command by pressing Ctrl + Shift + P, and in the command list that appears, start typing install. This will help you find the Install Package command quickly. To install the CoffeeScript package, open the Install Package window and enter CoffeeScript. This will display the CoffeeScript-related packages. We will use the Better CoffeeScript package: As you can see, the CoffeeScript package includes syntax highlighting, commands, shortcuts, snippets, and compilation. How it works... In this section, we will explain the different keyboard shortcuts and code snippets available with the Better CoffeeScript package for Sublime. Commands You can run the desired command by entering the command into the Sublime command pallet or by pressing the related keyboard shortcut. Remember to press Ctrl + Shift + P to display the command pallet window. Some useful CoffeeScript commands include the following: Command Keyboard shortcut Description Coffee: Check Syntax Alt + Shift + S This checks the syntax of the file you are editing or the currently selected code. The result will display in the status bar at the bottom. Coffee: Compile File Alt + Shift + C This compiles the file being edited into JavaScript. 
Coffee: Run Script Alt + Shift + R This executes the selected code and displays a buffer of the output. The keyboard shortcuts are associated with the file type. If you are editing a new CoffeeScript file that has not been saved yet, you can specify the file type by choosing CoffeeScript in the list of file types in the bottom-left corner of the screen. Snippets Snippets allow you to use short tokens that are recognized by Sublime Text. When you enter the code and press the Tab key, Sublime Text will automatically expand the snippet into the full form. Some useful CoffeeScript code snippets include the following: Token Expands to log[Tab] console.log cla class Name constructor: (arguments) ->    # ... forin for i in array # ... if if condition # ... ifel if condition # ... else # ... swi switch object when value    # ... try try # ... catch e # ... The snippets are associated with the file type. If you are editing a new CoffeeScript file that has not been saved yet, you can specify the file type by selecting CoffeeScript in the list of file types in the bottom-left corner of the screen. Configuring Visual Studio In this recipe, we will demonstrate how to add CoffeeScript support to Visual Studio. Getting ready If you are on the Windows platform, you can use Microsoft's Visual Studio software. You can download Microsoft's free Express edition (Express 2013 for Web) from http://www.microsoft.com/express. How to do it... If you are a Visual Studio user, Version 2010 and above can work quite effectively with CoffeeScript through the use of Visual Studio extensions. If you are doing any form of web development with Visual Studio, the Web Essentials extension is a must-have. To install Web Essentials, perform the following steps: Launch Visual Studio. Click on the Tools menu and select the Extensions and Updates menu option. This will display the Extensions and Updates window (shown in the next screenshot). Select Online in the tree on the left-hand side to display the most popular downloads. Select Web Essentials 2012 from the list of available packages and then click on the Download button. This will download the package and install it automatically. Once the installation is finished, restart Visual Studio by clicking on the Restart Now button. You will likely find Web Essentials 2012 ranked highly in the list of Most Popular packages. If you do not see it, you can search for Web Essentials using the Search box in the top-right corner of the window. Once installed, the Web Essentials package provides many web development productivity features, including CSS helpers, tools to work with Less CSS, enhancements to work with JavaScript, and, of course, a set of CoffeeScript helpers. To add a new CoffeeScript file to your project, you can navigate to File | New Item or press Ctrl + Shift + A. This will display the Add New Item dialog, as seen in the following screenshot. Under the Web templates, you will see a new CoffeeScript File option. Select this option and give it a filename, as shown here: When we have our CoffeeScript file open, Web Essentials will display the file in a split-screen editor. We can edit our code in the left-hand pane, while Web Essentials displays a live preview of the JavaScript code that will be generated for us. The Web Essentials CoffeeScript compiler will create two JavaScript files each time we save our CoffeeScript file: a basic JavaScript file and a minified version. 
For example, if we save a CoffeeScript file named employee.coffee, the compiler will create employee.js and employee.min.js files. Though I have only described two editors to work with CoffeeScript files, there are CoffeeScript packages and plugins for most popular text editors, including Emacs, Vim, TextMate, and WebMatrix. A quick dive into CoffeeScript In this recipe, we will take a quick look at the CoffeeScript language and command line. How to do it... CoffeeScript is a highly expressive programming language that does away with much of the ceremony required by JavaScript. It uses whitespace to define blocks of code and provides shortcuts for many of the programming constructs found in JavaScript. For example, we can declare variables and functions without the var keyword: firstName = 'Mike' We can define functions using the following syntax: multiply = (a, b) -> a * b Here, we defined a function named multiply. It takes two arguments, a and b. Inside the function, we multiplied the two values. Note that there is no return statement. CoffeeScript will always return the value of the last expression that is evaluated inside a function. The preceding function is equivalent to the following JavaScript snippet: var multiply = function(a, b) { return a * b; }; It's worth noting that the CoffeeScript code is only 28 characters long, whereas the JavaScript code is 50 characters long; that's 44 percent less code. We can call our multiply function in the following way: result = multiply 4, 7 In CoffeeScript, using parenthesis is optional when calling a function with parameters, as you can see in our function call. However, note that parenthesis are required when executing a function without parameters, as shown in the following example: displayGreeting = -> console.log 'Hello, world!' displayGreeting() In this example, we must call the displayGreeting() function with parenthesis. You might also wish to use parenthesis to make your code more readable. Just because they are optional, it doesn't mean you should sacrifice the readability of your code to save a couple of keystrokes. For example, in the following code, we used parenthesis even though they are not required: $('div.menu-item').removeClass 'selected' Like functions, we can define JavaScript literal objects without the need for curly braces, as seen in the following employee object: employee = firstName: 'Mike' lastName: 'Hatfield' salesYtd: 13204.65 Notice that in our object definition, we also did not need to use a comma to separate our properties. CoffeeScript supports the common if conditional as well as an unless conditional inspired by the Ruby language. Like Ruby, CoffeeScript also provides English keywords for logical operations such as is, isnt, or, and and. The following example demonstrates the use of these keywords: isEven = (value) -> if value % 2 is 0    'is' else    'is not'   console.log '3 ' + isEven(3) + ' even' In the preceding code, we have an if statement to determine whether a value is even or not. If the value is even, the remainder of value % 2 will be 0. We used the is keyword to make this determination. JavaScript has a nasty behavior when determining equality between two values. In other languages, the double equal sign is used, such as value == 0. In JavaScript, the double equal operator will use type coercion when making this determination. This means that 0 == '0'; in fact, 0 == '' is also true. CoffeeScript avoids this using JavaScript's triple equals (===) operator. 
This evaluation compares value and type, such that 0 === '0' will be false. We can use if and unless as expression modifiers as well. They allow us to tack if and unless onto the end of a statement to make simple one-liners. For example, we can do something like the following:

console.log 'Value is even' if value % 2 is 0

Alternatively, we can have something like this:

console.log 'Value is odd' unless value % 2 is 0

We can also use the if...then combination for a one-liner if statement, as shown in the following code:

if value % 2 is 0 then console.log 'Value is even'

CoffeeScript has a switch control statement that performs certain actions based on a list of possible values. The following lines of code show a simple switch statement with four branching conditions:

switch task
  when 1
    console.log 'Case 1'
  when 2
    console.log 'Case 2'
  when 3, 4, 5
    console.log 'Case 3, 4, 5'
  else
    console.log 'Default case'

In this sample, if the value of task is 1, Case 1 will be displayed. If the value of task is 3, 4, or 5, then Case 3, 4, 5 is displayed. If there are no matching values, we can use an optional else condition to handle any exceptions. If your switch statements have short operations, you can turn them into one-liners, as shown in the following code:

switch value
  when 1 then console.log 'Case 1'
  when 2 then console.log 'Case 2'
  when 3, 4, 5 then console.log 'Case 3, 4, 5'
  else console.log 'Default case'

CoffeeScript provides a number of syntactic shortcuts to help us be more productive while writing more expressive code. Some people have claimed that this can sometimes make our applications more difficult to read, which will, in turn, make our code less maintainable. The key to highly readable and maintainable code is to use a consistent style when coding. I recommend that you follow the guidance provided by Polar in their CoffeeScript style guide at http://github.com/polarmobile/coffeescript-style-guide.

There's more...
With CoffeeScript installed, you can use the coffee command-line utility to execute CoffeeScript files, compile CoffeeScript files into JavaScript, or run an interactive CoffeeScript command shell. In this section, we will look at the various options available when using the CoffeeScript command-line utility. We can see a list of available commands by executing the following command in a command or terminal window:

coffee --help

This will produce the following output:

As you can see, the coffee command-line utility provides a number of options. Of these, the most common ones include the following:

Option Argument Example Description
None None coffee This launches the REPL interactive shell.
None Filename coffee sample.coffee This command will execute the CoffeeScript file.
-c, --compile Filename coffee -c sample.coffee This command will compile the CoffeeScript file into a JavaScript file with the same base name, sample.js, as in our example.
-i, --interactive None coffee -i This command will also launch the REPL interactive shell.
-m, --map Filename coffee -m sample.coffee This command generates a source map with the same base name, sample.js.map, as in our example.
-p, --print Filename coffee -p sample.coffee This command will display the compiled output or compile errors in the terminal window.
-v, --version None coffee -v This command will display the current version of CoffeeScript.
-w, --watch Filename coffee -w -c sample.coffee This command will watch for file changes, and with each change, the requested action will be performed.
In our example, our sample.coffee file will be compiled each time we save it.

The CoffeeScript REPL
As we have seen, CoffeeScript has an interactive shell that allows us to execute CoffeeScript commands. In this section, we will learn how to use the REPL shell. The REPL shell can be an excellent way to get familiar with CoffeeScript. To launch the CoffeeScript REPL, open a command window and execute the coffee command. This will start the interactive shell and display the following prompt:

In the coffee> prompt, we can assign values to variables, create functions, and evaluate results. When we enter an expression and press the return key, it is immediately evaluated and the value is displayed. For example, if we enter the expression x = 4 and press return, we would see what is shown in the following screenshot:

This did two things. First, it created a new variable named x and assigned the value of 4 to it. Second, it displayed the result of the command. Next, enter timesSeven = (value) -> value * 7 and press return:

You can see that the result of this line was the creation of a new function named timesSeven(). We can call our new function now:

By default, the REPL shell will evaluate each expression when you press the return key. What if we want to create a function or expression that spans multiple lines? We can enter the REPL multiline mode by pressing Ctrl + V. This will change our coffee> prompt to a ------> prompt. This allows us to enter an expression that spans multiple lines, such as the following function:

When we are finished with our multiline expression, press Ctrl + V again to have the expression evaluated. We can then call our new function:

The CoffeeScript REPL offers some handy helpers such as expression history and tab completion. Pressing the up arrow key on your keyboard will cycle through the expressions we previously entered. Using the Tab key will autocomplete our function or variable name. For example, with the isEvenOrOdd() function, we can enter isEven and press Tab to have the REPL complete the function name for us.

Debugging CoffeeScript using source maps
If you have spent any time in the JavaScript community, you would have, no doubt, seen some discussions or rants regarding the weak debugging story for CoffeeScript. In fact, this is often a top argument some give for not using CoffeeScript at all. In this recipe, we will examine how to debug our CoffeeScript application using source maps.

Getting ready
The problem in debugging CoffeeScript stems from the fact that CoffeeScript compiles into JavaScript, which is what the browser executes. If an error arises, the line that caused the error sometimes cannot be traced back to the CoffeeScript source file very easily. Also, the error message is sometimes confusing, making troubleshooting that much more difficult. Recent developments in the web development community have helped improve the debugging experience for CoffeeScript by making use of a concept known as a source map. In this section, we will demonstrate how to generate and use source maps to help make our CoffeeScript debugging easier. To use source maps, you need only a base installation of CoffeeScript.

How to do it...
You can generate a source map for your CoffeeScript code using the -m option on the coffee command:

coffee -m -c employee.coffee

How it works...
Running this command creates a JavaScript file called employee.js and a source map called employee.js.map. Source maps provide information used by browsers such as Google Chrome to map a line in the compiled JavaScript code back to its origin in the CoffeeScript file. Source maps allow you to place breakpoints in your CoffeeScript file, analyze variables, and execute functions in your CoffeeScript module. If you look at the last line of the generated employee.js file, you will see the reference to the source map:

//# sourceMappingURL=employee.js.map

Google Chrome uses this JavaScript comment to load the source map. The following screenshot demonstrates an active breakpoint and console in Google Chrome:

Debugging CoffeeScript using Node Inspector
Source maps and Chrome's developer tools can help troubleshoot our CoffeeScript that is destined for the Web. In this recipe, we will demonstrate how to debug CoffeeScript that is designed to run on the server.

Getting ready
Begin by installing the Node Inspector NPM module with the following command:

npm install -g node-inspector

How to do it...
To use Node Inspector, we will use the coffee command to compile the CoffeeScript code we wish to debug and generate the source map. In our example, we will use the following simple source code in a file named counting.coffee:

for i in [1..10]
  if i % 2 is 0
    console.log "#{i} is even!"
  else
    console.log "#{i} is odd!"

To use Node Inspector, we will compile our file and use the source map parameter with the following command:

coffee -c -m counting.coffee

Next, we will launch Node Inspector with the following command:

node-debug counting.js

How it works...
When we run Node Inspector, it does two things. First, it launches the Node debugger. This is a debugging service that allows us to step through code, hit breakpoints, and evaluate variables. This is a built-in service that comes with Node. Second, it launches an HTTP handler and opens a browser that allows us to use Chrome's built-in debugging tools to set breakpoints, step over and into code, and evaluate variables. Node Inspector works well with source maps. This allows us to see our native CoffeeScript code and is an effective tool to debug server-side code. The following screenshot displays our Chrome window with an active breakpoint. In the local variables tool window on the right-hand side, you can see that the current value of i is 2:

The highlighted line in the preceding screenshot depicts the log message.

Summary
This article introduced CoffeeScript and laid the foundation to use CoffeeScript to develop all aspects of modern cloud-based applications.

Resources for Article:
Further resources on this subject:
Writing Your First Lines of CoffeeScript [article]
Why CoffeeScript? [article]
ASP.Net Site Performance: Improving JavaScript Loading [article]

An Introduction to JSF: Part 1

Packt
30 Dec 2009
6 min read
While the main focus of this article is learning how to use JSF UI components, and not to cover the JSF framework in complete detail, a basic understanding of fundamental JSF concepts is required before we can proceed. Therefore, by way of introduction, let's look at a few of the building blocks of JSF applications: the Model-View-Controller architecture, the JSF request processing lifecycle, managed beans, EL expressions, UI components, converters, validators, and internationalization (I18N). The Model-View-Controller architecture Like many other web frameworks, JSF is based on the Model-View-Controller (MVC) architecture. The MVC pattern promotes the idea of “separation of concerns”, or the decoupling of the presentation, business, and data access tiers of an application. The Model in MVC represents “state” in the application. This includes the state of user interface components (for example: the selection state of a radio button group, the enabled state of a button, and so on) as well as the application’s data (the customers, products, invoices, orders, and so on). In a JSF application, the Model is typically implemented using Plain Old Java Objects (POJOs) based on the JavaBeans API. These classes are also described as the “domain model” of the application, and act as Data Transfer Objects (DTOs) to transport data between the various tiers of the application. JSF enables direct data binding between user interface components and domain model objects using the Expression Language (EL), greatly simplifying data transfer between the View and the Model in a Java web application. The View in MVC represents the user interface of the application. The View is responsible for rendering data to the user, and for providing user interface components such as labels, text fields, buttons, radios, and checkboxes that support user interaction. As users interact with components in the user interface, events are fired by these components and delivered to Controller objects by the MVC framework. In this respect, JSF has much in common with a desktop GUI toolkit such as Swing or AWT. We can think of JSF as a GUI toolkit for building web applications. JSF components are organized in the user interface declaratively using UI component tags in a JSF view (typically a JSP or Facelets page). The Controller in MVC represents an object that responds to user interface events and to query or modify the Model. When a JSF page is displayed in the browser, the UI components declared in the markup are rendered as HTML controls. The JSF markup supports the JSF Expression Language (EL), a scripting language that enables UI components to bind to managed beans for data transfer and event handling. We use value expressions such as #{backingBean.name} to connect UI components to managed bean properties for data binding, and we use method expressions such as #{backingBean.sayHello} to register an event handler (a managed bean method with a specific signature) on a UI component. In a JSF application, the entity classes in our domain model act as the Model in MVC terms, a JSF page provides the View, and managed beans act as Controller objects. The JSF EL provides the scripting language necessary to tie the Model, View, and Controller concepts together. There is an important variation of the Controller concept that we should discuss before moving forward. 
Like the Struts framework, JSF implements what is known as the “Front Controller” pattern, where a single class behaves like the primary request handler or event dispatcher for the entire system. In the Struts framework, the ActionServlet performs the role of the Front Controller, handling all incoming requests and delegating request processing to application-defined Action classes. In JSF, the FacesServlet implements the Front Controller pattern, receiving all incoming HTTP requests and processing them in a sophisticated chain of events known as the JSF request processing lifecycle. The JSF Request Processing Lifecycle In order to understand the interplay between JSF components, converters, validators, and managed beans, let’s take a moment to discuss the JSF request processing lifecycle. The JSF lifecycle includes six phases: Restore/create view – The UI component tree for the current view is restored from a previous request, or it is constructed for the first time. Apply request values – The incoming form parameter values are stored in server-side UI component objects. Conversion/Validation – The form data is converted from text to the expected Java data types and validated accordingly (for example: required fields, length and range checks, valid dates, and so on). Update model values – If conversion and validation was successful, the data is now stored in our application’s domain model. Invoke application – Any event handler methods in our managed beans that were registered with UI components in the view are executed. Render response – The current view is re-rendered in the browser, or another view is displayed instead (depending on the navigation rules for our application). To summarize the JSF request handling process, the FacesServlet (the Front Controller) first handles an incoming request sent by the browser for a particular JSF page by attempting to restore or create for the first time the server-side UI component tree representing the logical structure of the current View (Phase 1). Incoming form data sent by the browser is stored in the components such as text fields, radio buttons, checkboxes, and so on, in the UI component tree (Phase 2). The data is then converted from Strings to other Java types and is validated using both standard and custom converters and validators (Phase 3). Once the data is converted and validated successfully, it is stored in the application’s Model by calling the setter methods of any managed beans associated with the View (Phase 4). After the data is stored in the Model, the action method (if any) associated with the UI component that submitted the form is called, along with any other event listener methods that were registered with components in the form (Phase 5). At this point, the application’s logic is invoked and the request may be handled in an application-defined way. Once the Invoke Application phase is complete, the JSF application sends a response back to the web browser, possibly displaying the same view or perhaps another view entirely (Phase 6). The renderers associated with the UI components in the view are invoked and the logical structure of the view is transformed into a particular presentation format or markup language. Most commonly, JSF views are rendered as HTML using the framework’s default RenderKit, but JSF does not require pages to be rendered only in HTML. 
In fact, JSF was designed to be a presentation technology neutral framework, meaning that views can be rendered according to the capabilities of different client devices. For example, we can render our pages in HTML for web browsers and in WML for PDAs and wireless devices.
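To make the managed bean concept more concrete, here is a minimal, illustrative Java sketch of the kind of POJO that EL expressions such as #{backingBean.name} and #{backingBean.sayHello} could bind to. The class name, property, and action method are hypothetical examples invented for this illustration, not taken from the article, and the bean would still need to be registered as a managed bean, for instance in faces-config.xml (or with annotations in JSF 2.0):

public class BackingBean {

    // Bound by the value expression #{backingBean.name}
    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    // Invoked by the method expression #{backingBean.sayHello};
    // the returned outcome is matched against the navigation rules
    public String sayHello() {
        System.out.println("Hello, " + name + "!");
        return "success";
    }
}

During the Update Model Values phase, JSF calls setName() with the converted and validated input; during Invoke Application, it calls sayHello() and uses the returned outcome to decide which view to render next.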

Introduction to cloud computing with Microsoft Azure

Packt
13 Jan 2011
6 min read
What is an enterprise application? Before we hop into the cloud, let's talk about who this book is for. Who are "enterprise developers"? In the United States, over half of the economy is small businesses, usually privately owned, with a couple dozen of employees and revenues up to the millions of dollars. The applications that run these businesses have lower requirements because of smaller data volumes and a low number of application users. A single server may host several applications. Many of the business needs for these companies can be met with off-the-shelf software requiring little to no modification. The minority of the United States economy is made up of huge publicly owned corporations—think Microsoft, Apple, McDonald's, Coca-Cola, Best Buy, and so on. These companies have thousands of employees and revenues in the billions of dollars. Because these companies are publicly owned, they are subject to tight regulatory scrutiny. The applications utilized by these companies must faithfully keep track of an immense amount of data to be utilized by hundreds or thousands of users, and must comply with all matters of regulations. The infrastructure for a single application may involve dozens of servers. A team of consultants is often retained to install and maintain the critical systems of a business, and there is often an ecosystem of internal applications built around the enterprise systems that are just as critical. These are the applications we consider to be "enterprise applications", and the people who develop and extend them are "enterprise developers". The high availability of cloud platforms makes them attractive for hosting these critical applications, and there are many options available to the enterprise developer. What is cloud computing? At its most basic, cloud computing is moving applications accessible from our internal network onto an internet (cloud)-accessible space. We're essentially renting virtual machines in someone else's data center, with the capabilities for immediate scale-out, failover, and data synchronization. In the past, having an Internet-accessible application meant we were building a website with a hosted database. Cloud computing changes that paradigm—our application could be a website, or it could be a client installed on a local PC accessing a common data store from anywhere in the world. The data store could be internal to our network or itself hosted in the cloud. The following diagram outlines three ways in which cloud computing can be utilized for an application. In option 1, both data and application have been hosted in the cloud, the second option is to host our application in the cloud and our data locally, and the third option is to host our data in the cloud and our application locally. The expense (or cost) model is also very different. In our local network, we have to buy the hardware and software licenses, install and configure the servers, and finally we have to maintain them. All this counts in addition to building and maintaining the application! In cloud computing, the host usually handles all the installation, configuration, and maintenance of the servers, allowing us to focus mostly on the application. The direct costs of running our application in the cloud are only for each machine-hour of use and storage utilization. The individual pieces of cloud computing have all been around for some time. Shared mainframes and supercomputers have for a long time billed the end users based on that user's resource consumption. 
Space for websites can be rented on a monthly basis. Providers offer specialized application hosting and, relatively recently, leased virtual machines have also become available. If there is anything revolutionary about cloud computing, then it is its ability to combine all the best features of these different components into a single affordable service offering.

Some benefits of cloud computing
Cloud computing sounds great so far, right? So, what are some of the tangible benefits of cloud computing? Does cloud computing merit all the attention? Let's have a look at some of the advantages:

Low up-front cost: At the top of the benefits list is probably the low up-front cost. With cloud computing, someone else is buying and installing the servers, switches, and firewalls, among other things. In addition to the hardware, software licenses and assurance plans are also expensive on the enterprise level, even with a purchasing agreement. In most cloud services, including Microsoft's Azure platform, we do not need to purchase separate licenses for operating systems or databases. In Azure, the costs include licenses for the Windows Azure OS and SQL Azure. As a corollary, someone else is responsible for the maintenance and upkeep of the servers: no more tape backups that must be rotated and sent to off-site storage, no extensive strategies and lost weekends bringing servers up to the current release level, and no more counting the minutes until the early morning delivery of a hot-swap fan to replace the one that burned out the previous afternoon.

Easier disaster recovery and storage management: With synchronized storage across multiple data centers, located in different regions in the same country or even in different countries, disaster recovery planning becomes significantly easier. If capacity needs to be increased, it can be done quite easily by logging into a control panel and turning on an additional VM. It would be a rare instance indeed when our provider doesn't sell us additional capacity. When the need for capacity passes, we can simply turn off the VMs we no longer need and pay only for the uptime and storage utilization.

Simplified migration: Migration from a test to a production environment is greatly simplified. In Windows Azure, we can test an updated version of our application in a local sandbox environment. When we're ready to go live, we deploy our application to a staged environment in the cloud and, with a few mouse clicks in the control panel, we turn off the live virtual machine and activate the staging environment as the live machine; we barely miss a beat! The migration can be performed well in advance of the cut-over, so daytime migrations and midnight cut-overs can become routine. Should something go wrong, the environments can be easily reversed and the issues analyzed the following day.

Familiar environment: Finally, the environment we're working on is very familiar. In Azure's case, the environment can include the capabilities of IIS and .NET (or Java or PHP and Apache), with Windows and SQL Server or MySQL. One of the great features of Windows is that it can be configured in so many ways, and to an extent, Azure can also be configured in many ways, supporting a rich and familiar application environment.

Build your own Application to access Twitter using Java and NetBeans: Part 2

Packt
19 Feb 2010
17 min read
In this tutorial, we’ll develop the simple Java application further to add some more functions. Now that we can connect to our Twitter account via the Twitter4J API, it would be nice to use a login dialog instead of hard-coding our Twitter username and password in the Java application. But before we start to build our enhanced SwingAndTweet application, let me show you how it will look like once we finish all the exercises in this part of the tutorial: And now, let the show begin… Creating a Login dialog for our SwingAndTweet application Open your NetBeans IDE along with your SwingAndTweet project, and make sure you’re in the Design View. Go to the Palette window and locate the Dialog component under the Swing Windows section; then drag and drop it anywhere inside the SwingAndTweetUI JFrame component: A JDialog will be added automatically to your SwingAndTweet application, and it will show up in the Component Inspector tab located at the lower-left part of the screen, under Other Components: Right-click on the jDialog1 component in the Inspector tab and select Change Variable Name… from the pop-up menu. The Rename dialog will show up next. Type twitterLogin in the New Name field and press Enter to change the dialog’s name. Now you can start adding text fields, labels and buttons to your twitterLogin dialog. Double-click on the twitterLogin component under the Inspector tab. The twitterLogin dialog will show up empty in the Design Editor window. Use the Palette window to add two JLabels, one JTextField, one JPasswordField and two JButtons to the twitterLogin dialog. Arrange these controls as shown below: Now let’s change the names of the JTextField control, the JPasswordField control and the two JButton controls, so we can easily identify them within your SwingAndTweet application’s code. Right-click on the first text field (jLabel2), select Change Variable Name… from the pop-up menu and replace the text field’s name with txtUsername. Do the same with the other fields; use txtPassword for the JPasswordField control, btnLogin for the Login button and btnExit for the Exit button. And now the last touch. Right-click anywhere inside the twitterLogin dialog, being careful not to right-click inside any of the controls, and select the Properties option from the pop-up menu. The twitterLogin [JDialog] – Properties dialog will appear next. Locate the title property, double-click on the null value and type Twitter Login to replace it. Next, scroll down the properties list until you find the modal property; click on its checkbox to enable it and then click on Close to save your changes. Basically, in the previous exercise we added all the Swing controls you’re going to need to type your username and password, so you can connect to your Twitter account. The twitterLogin dialog is going to take care of the login process for your SwingAndTweet application. We replaced the default names for the JTextField, the JPasswordField and the two JButton controls because it will be easier to identify them during the coding process of the application. On step 8 we used the Properties window of the twitterLogin dialog to change the title property and give your dialog a decent title. We also enabled the modal property on step 9, so you can’t just close the dialog and jump right to the SwingAndTweetUI main window; you’ll have to enter a valid Twitter username and password combination for that. Invoking the Login dialog Ok, now we have a good-looking dialog called twitterLogin. 
The next step is to invoke it before our main SwingAndTweet JFrame component shows up, so we need to insert some code inside the SwingAndTweetUI() constructor method. Click on the Source button of the Editor window to change to the Source View: Now locate the SwingAndTweetUI() constructor, and type the following lines right after the initComponents(); line: int loginWidth = twitterLogin.getPreferredSize().width; int loginHeight = twitterLogin.getPreferredSize().height; twitterLogin.setBounds(0,0,loginWidth,loginHeight); twitterLogin.setVisible(true); The code inside the SwingAndTweetUI() constructor method shall now look like this: To see your new twitterLogin dialog in action, press F6 or select Run | Run Project to run your SwingAndTweetUI application. The twitterLogin dialog will pop right up. You’ll be able to type in your username and password, but since we haven’t added any functionality yet, the buttons won’t do anything right now. Click on the Close (X) button to close the dialog window and the SwingAndTweetUI main window will appear next. Click on its Close (X) button to exit your Twitter Java application. Now let’s take a look at the code we added to your twitterLogin dialog. On the first line, int loginWidth = twitterLogin.getPreferredSize().width; we declare an integer variable named loginWidth, and assign to it the preferred width of the twitterLogin dialog. The getPreferredSize method retrieves the value of the preferredSize property from the twitterLogin dialog through the .width field. On the second line, int loginHeight = twitterLogin.getPreferredSize().height; we declare another integer variable named loginHeight, and assign to it the preferred height of the twitterLogin dialog. This time, the getPreferredSize() method retrieves the value of the preferredWidth property from the twitterLogin dialog through the .height field. On the next line, twitterLogin.setBounds(0,0,loginWidth,loginHeight); we use the setBounds method to set the x,y coordinates where the dialog should appear on the screen, along with its corresponding width and height. The first two parameters are for the x,y coordinates; in this case, x=0 and y=0, which means the twitterLogin dialog will show up at the upper-left part of the screen. The last two parameters receive the value of the loginWidth and loginHeight variables to establish the twitterLogin dialog’s width and height, respectively. The last line, twitterLogin.setVisible(true); makes the twitterLogin dialog appear on the screen. And since the modal property of this dialog is enabled, once it shows up on the screen it won’t let you do anything else with your SwingAndTweet1 application until you close it up or enter a valid Twitter username and password, as we’ll see in the next exercise. Adding functionality to the twitterLogin dialog Now your twitterLogin dialog is ready to roll! Basically, it won’t let you go to the SwingAndTweet main window until you’ve entered a valid Twitter username and password. And for doing that, we’re going to use the same login code from Build your own Application to access Twitter using Java and NetBeans: Part 1 of this article series. Go to the end of your SwingAndTweetUI application source code and locate the // Variables declaration – do not modify line. Below this line, you’ll see all the variables used in your application: the btnExit and btnLogin buttons, the text fields from your twitterLogin dialog and your SwingAndTweetUI main window, etc. 
Add the following line just below the // End of variables declaration line:     Twitter twitter; Now click on the Design button to change to the Design View: You’ll see the twitterLogin dialog again –in case you don’t, double-click on the twitterLogin component under the Inspector tab. Now double-click on the Login button to go back to the Code View. The btnLoginActionPerformed method will show up next. Add the following code inside this method: try { twitter = new Twitter(txtUsername.getText(), String.valueOf(txtPassword.getPassword())); twitter.verifyCredentials(); JOptionPane.showMessageDialog(null, "You're logged in!"); twitterLogin.dispose(); } catch (TwitterException e) { JOptionPane.showMessageDialog (null, "Login failed"); } Make sure you write each line on its own, to avoid errors. The btnLoginActionPerformed method shall look like this: Now you’re ready to test your twitterLogin dialog. Press F6 to run your application. The Twitter Login dialog will show up next. Type your Twitter username and password, and then click on the OK button. If the username and password are correct, the You’re logged in! dialog will show up and you’ll be able to go to the SwingAndTweetUI main window. If they’re not correct, the Login failed dialog will appear instead and, after you click on the OK button, you’ll return to the twitterLogin dialog until you type a correct Twitter username and password combination. To exit your SwingAndTweetUI application, click on the Close(X) button of the twitterLogin dialog and then on the Close(X) button of the SwingAndTweetUI window. You’ll be taken back to the NetBeans IDE. Click on the Design button to go back to the Design View, and double-click on the Exit button to open the btnExitActionPerformed method. Type System.exit(0); inside the btnExitActionPerformed method, as shown below: Now go back to the Design View again, right-click anywhere inside the twitterLogin dialog (just be careful not to right-click over any of the dialog’s controls) and select the Events | Window | windowClosing option from the pop-up menu: NetBeans will change to Code View automatically and you’ll be inside the twitterLoginWindowClosing method. Type System.exit(0); inside this method, as shown below: Now run your application to test the new functionality in your loginTwitter dialog. You’ll be able to exit the SwingAndTweet application when clicking on the Exit or Close(X) buttons, and you’ll be able to go to your application’s main window if you type a correct Twitter username and password combination. You can close your SwingAndTweet application now. And now, let’s examine what we just accomplished. First you added the Twitter twitter; line to your application code. With this line we’re declaring a Twitter object named twitter, and it will be available throughout all the application code. On step 4, you added some lines of code to the btnLoginActionPerformed method; this code will be executed every time you click on the Login button from the twitterLogin dialog. All the code is enclosed in a try block, so that if an error occurs during the login process, a TwitterException will be thrown and the code inside the catch block will execute. The first line inside the try block is twitter = new Twitter(txtUsername.getText(),String.valueOf(txtPassword.getPassword())); This code creates the twitter object that we’re going to use throughout the application. It uses the text value you entered in the txtUsername and txtPassword fields to log into your Twitter account. 
The next line, twitter.verifyCredentials(); checks to see if the username and password provided to the twitter object are correct; if that’s true, a message dialog box shows up in the screen with the You’re logged in! message and the rest of the code executes once you click on the OK button of this message dialog; otherwise, the code in the catch block executes and a message dialog shows up in the screen with the Login failed message, and the twitterLogin dialog keeps waiting for you to type a correct username and password combination. The next line in the sequence, JOptionPane.showMessageDialog(null, "You're logged in!"); shows the message dialog that we talked about before, and the last line inside the try block, twitterLogin.dispose(); makes the twitterLogin dialog disappear from the screen once you’ve logged into your Twitter account successfully. The only line of code inside the catch block is JOptionPane.showMessageDialog (null, "Login failed"); This line executes when there’s an error in the Twitter login process; it shows the Login failed message in the screen and waits for you to press the OK button. On step 9 we added one line of code to the btnExitActionPerformed method: System.exit(0); This line closes your SwingAndTweet application whenever you click on the Exit button. Finally, on steps 10-12 we added another System.exit(0); line to the twitterLoginWindowClosing method, to close your SwingAndTweet application whenever you click on the Close(X) button of the twitterLogin dialog. Showing your Twitter timeline right after logging in Now let’s see some real Twitter action! The following exercise will show you how to show your most recent tweets inside a text area. Click on the Design button to go to the Design View; then double-click on the [JFrame] component under the Inspector tab to show the SwingAndTweetUI dialog in the Design View window: The SwingAndTweet frame will show the three controls we created during Build your own Application to access Twitter using Java and NetBeans: Part 1. Replace the My Last Tweet text in the JLabel control with the What’s happening text. Then right-click on the JTextField control and select the Change Variable Name… option from the pop-up menu, to change its name from jTextField1 to txtUpdateStatus. Now do the same with the JButton control. Right-click on it and select the Change Variable Name… option from the pop-up menu to change its name from jButton1 to btnUpdateStatus. Right-click on the same button again, but this time select the Edit Text option from the pop-up menu and replace the Login text with Update. Rearrange the three controls as per the following screenshot (you’ll need to make the SwingAndTweet container wider): Now drag a JTextArea control from the Palette window and drop it inside the SwingAndTweetUI container. Resize the text area so it fills up the rest of the container, as shown below: Double-click on the Update button to open the btnUpdateStatusActionPerformed method. The first thing you’ll notice is that it’s not empty; this is because this used to be the old Login button, remember? 
Now just replace all the code inside this method, as shown below: private void btnUpdateStatusActionPerformed(java.awt.event.ActionEvent evt) { try { if (txtUpdateStatus.getText().isEmpty()) JOptionPane.showMessageDialog (null, "You must write something!"); else { twitter.updateStatus(txtUpdateStatus.getText()); jTextArea1.setText(null); java.util.List<Status> statusList = twitter.getUserTimeline(); for (int i=0; i<statusList.size(); i++) { jTextArea1.append(String.valueOf(statusList.get(i).getText())+"n"); jTextArea1.append("-----------------------------n"); } } } catch (TwitterException e) { JOptionPane.showMessageDialog (null, "A Twitter Error ocurred!"); } txtUpdateStatus.setText(""); jTextArea1.updateUI(); The next step is to modify your btnLoginActionPerformed method; we need to add several lines of code to show your Twitter timeline. The complete method is shown below (the lines you need to add are shown in bold): private void btnLoginActionPerformed(java.awt.event.ActionEvent evt) { try { twitter = new Twitter(txtUsername.getText(), String.valueOf(txtPassword.getPassword())); twitter.verifyCredentials(); // JOptionPane.showMessageDialog(null, "You're logged in!"); java.util.List<Status> statusList = twitter.getUserTimeline(); for (int i=0; i<statusList.size(); i++) { jTextArea1.append(String.valueOf(statusList.get(i).getText())+"n"); jTextArea1.append("-----------------------------n"); } twitterLogin.dispose(); } catch (TwitterException e) { JOptionPane.showMessageDialog (null, "Login failed"); } jTextArea1.updateUI(); } Once you have added all the necessary code in each button’s actionPerformed method, press F6 to run the SwingAndTweet application and check if all things work as intended. If you type a message in the txtUpdateStatus text field and then click on the Update button, the timeline information inside the JTextArea1 control will change to reflect your new Twitter status: You can close your SwingAndTweet application now. That was cool, right? Now you have a much better-looking Twitter client! And you can update your status, too! Let’s examine the code we added in this last exercise… private void btnUpdateStatusActionPerformed(java.awt.event.ActionEvent evt) { try { if (txtUpdateStatus.getText().isEmpty()) JOptionPane.showMessageDialog (null, "You must write something!"); else { twitter.updateStatus(txtUpdateStatus.getText()); jTextArea1.setText(null); java.util.List<Status> statusList = twitter.getUserTimeline(); for (int i=0; i<statusList.size(); i++) { jTextArea1.append(String.valueOf(statusList.get(i).getText())+"n"); jTextArea1.append("-----------------------------n"); } } } catch (TwitterException e) { JOptionPane.showMessageDialog (null, "A Twitter Error ocurred!"); } txtUpdateStatus.setText(""); jTextArea1.updateUI(); On step 7 we added some code to the btnUpdateStatusActionPerformed method. This code will execute whenever you click on the Update button to update your Twitter status. First, let’s look at the code inside the try block. The first two lines, if (txtUpdateStatus.getText().isEmpty()) JOptionPane.showMessageDialog (null, "You must write something!"); are the first part of a simple if-else statement that checks to see if you’ve written something inside the txtUpdateStatus text field; if it’s empty, a message dialog will show the You must write something! message on the screen, and then it will wait for you to click on the OK button. 
If the txtUpdateStatus text field is not empty, the code inside the else block will execute instead of showing up the message dialog. The next part of the code is the else block. The first line inside this block, twitter.updateStatus(txtUpdateStatus.getText()); updates your twitter status with the text you wrote in the txtUpdateStatus text field; if an error occurs at this point, a TwitterException is thrown and the program execution will jump to the catch block. If your Twitter status was updated correctly, the next line to execute is jTextArea1.setText(null); This line erases all the information inside the jTextArea1 control. And the next line, java.util.List<Status> statusList = twitter.getUserTimeline(); gets the 20 most recent tweets from your timeline and assigns them to the statusList variable. The next line is the beginning of a for statement: for (int i=0; i<statusList.size(); i++) { Basically, what this for block does is iterate through all the 20 most recent tweets in your timeline, one at a time, executing the two statements inside this block on each iteration: jTextArea1.append(String.valueOf(statusList.get(i).getText())+"n"); jTextArea1.append("-----------------------------n"); Although the getUserTimeline() function retrieves the 20 most recent tweets, we need to use the statusList.size() statement as the loop continuation condition inside the for block to get the real number of tweets obtained, because they can be less than 20, and we can’t iterate through something that maybe doesn’t exist, right? The first line appends the text of each individual tweet to the jTextArea1 control, along with a new-line character ("n") so each tweet is shown in one individual line, and the second line appends the "-----------------------------n" text as a separator between each individual tweet, along with a new-line character. The final result is a list of the 20 most recent tweets inside the jTextArea1 control. The only line of code inside the catch block displays the A Twitter Error occurred! message in case something goes wrong when trying to update your Twitter status. The next line of code right after the catch block is txtUpdateStatus.setText(""); This line just clears the content inside the txtUpdateStatus control, so you don’t accidentally insert the same message two times in a row. And finally, the last line of code in the btnUpdateStatusActionPerformed method is jTextArea1.updateUI(); This line updates the jTextArea1 control, so you can see the list of your 20 most recent tweets after updating your status. private void btnLoginActionPerformed(java.awt.event.ActionEvent evt) { try { twitter = new Twitter(txtUsername.getText(), String.valueOf(txtPassword.getPassword())); twitter.verifyCredentials(); // JOptionPane.showMessageDialog(null, "You're logged in!"); java.util.List<Status> statusList = twitter.getUserTimeline(); for (int i=0; i<statusList.size(); i++) { jTextArea1.append(String.valueOf(statusList.get(i).getText())+"n"); jTextArea1.append("-----------------------------n"); } twitterLogin.dispose(); } catch (TwitterException e) { JOptionPane.showMessageDialog (null, "Login failed"); } jTextArea1.updateUI(); And now let’s have a look at the code we added inside the btnLoginActionPerformed method. 
The first thing you’ll notice is that we’ve added the '//' characters to the // JOptionPane.showMessageDialog(null, "You're logged in!"); line; this means it’s commented out and it won’t be executed, because it’s safe to go directly to the SwingAndTweet main window right after typing your Twitter username and password. The next lines are identical to the ones inside the btnUpdateStatusActionPerformed method we saw before; the first line retrieves your 20 most recent tweets, and the for block displays the list of tweets inside the jTextArea1 control.  And the last line of code, jTextArea1.updateUI(); updates the jTextArea1 control so you can see the most recent information regarding your latest tweets. Summary Well, now your SwingAndTweet application looks better, don’t you think so? In this article, we enhanced the SwingAndTweet application which we build in the first part of the tutorials series. In short, we: Created a twitterLogin dialog to take care of the login process Added functionality to show your 20 most recent tweets right after logging in Added the functionality to update your Twitter status

Part 2: Deploying Multiple Applications with Capistrano from a Single Project

Rodrigo Rosenfeld
01 Jul 2014
8 min read
In part 1, we covered Capistrano and why you would use it. We also covered mixins, which provide the base for what we will do in this post, which is to deploy a sample project using Capistrano. For this project, suppose our user interface is a combination of two applications,app1 and app2. They should be deployed to servers do and ec2. And we'll provide two environments,production and cert. Make sure Ruby and Bundler are installed before you start. First, we create a new directory for our project, and add a Gemfile to it with capistrano as a dependency. Then we will create the Capistrano directory structure: mkdircapsample cd capsample bundle init echo "gem 'capistrano'" >>Gemfile bundle bundle exec cap install STAGES="do_prod_app1,do_prod_app2,do_cert_app1,do_cert_app2,ec2_prod_app1,ec2_prod_app2,ec2_cert_app1,ec2_cert_app2" This will create nine files under config/deploy, one for each server/environment/application group. This is just to demonstrate the idea. We'll completely override their entire content later on. It will also create a Capfile file that works in a similar way to a regular Rakefile. With Rake, you can get a list of the available tasks with rake -T. With Capistrano you can get the same using: bundle exec cap -T Behind the scenes, cap is a binary distributed with the capistrano gem that will run Rake with Capfile set as the Rakefile and supporting a few other options like --roles.Now create a new file,lib/mixin.rb, with the content mentioned in the Using mixins section in part 1. Then add this to the top of the Capfile: $: . unshiftFile.dirname(__FILE__) require'lib/mixin' Each of the files under config/deploy will look very similar to each other. For instance, ec2_prod_app1 would look like this: mixin 'servers/ec2' mixin'environments/production' mixin'applications/app1' Then config/mixins/servers/ec2.rb would look like this: server 'ec2.mydomain.com', roles: [:main] set :database_host, 'ec2-db.mydomain.com' This file contains definitions that are valid (or default) for the whole server, no matter what environment or application we're deploying. In this example the database host is shared for all applications and environments hosted on our ec2 server. Something to note here is that we're adding a single role named main to our server. If we specified all roles, like [:web, :db, :assets, :puma], then they would be shared with all recipes relying on this server mixin. So, a better approach would be to add them on the application's recipe, if required. For instance, you might want to add something like set :server_name, 'ec2.mydomain.com' to your server definitions. Then you can dynamically set the role in the application's recipe by calling role :db, [fetch(:server_name)] and so on for all required roles. However, this is usually not necessary for third-party recipes as they let you decide which role the recipe should act on. For example, if you want to deploy your application with Puma you can write set :puma_role, :main. Before we discuss a full example for the application recipe, let's look at what config/mixins/environments/production.rb might look like: set :branch, 'production' set :encoding_key, '098f6bcd4621d373cade4e832627b4f6' set :database_name, 'app_production' set :app1_port, 3000 set :app2_port, 3001 set :redis_port, 6379 set :solr_port, 8080 In this example, we're assuming that the ports for app1 and app2 , Redis and Solr will be the same for production in all servers, as well as the database name. 
Finally, the recipes themselves, which tell Capistrano how to set up an application, will be defined byconfig/mixins/applications/app1.rb. Here's an example for a simple Rails application: Rake :: Task['load:defaults'].invoke Rake::Task['load:defaults'].clear require'capistrano/rails' require'capistrano/puma' Rake::Task['load:defaults'].reenable Rake::Task['load:defaults'].invoke set :application, 'app1' set :repo_url, 'git@example.com:me/app1.git' set :rails_env, 'production' set :assets_roles, :main set :migration_role, :main set :puma_role, :main set :puma_bind, "tcp://0.0.0.0:#{fetch :app1_port}" namespace :railsdo desc'Generate settings file' task :generate_settingsdo on roles(:all) do template ="config/templates/database.yml.erb" dbconfig=StringIO.new(ERB.new(File.read template).result binding) upload! dbconfig, release_path.join('config', 'database.yml') end end end before 'deploy:migrate', 'rails:generate_settings' # Create directories expected by Puma default settings: before 'puma:restart', 'create_log_and_tmp'do on roles(:all) do within shared_pathdo execute :mkdir, '-p', 'log', 'tmp/pids' end end end Make sure you remove the lines that set application and repo_url on the config/deploy.rb file generated bycap install. Also, if you're deploying a Rails application using this recipe you should also add the capistrano-rails andcapistrano3-puma gems to your Gemfile and run bundle again. In case you're running rbenv or rvmto install Ruby in the server, make sure you include either capistrano-rbenv or capistrano-rvm gems and require them on the recipe. You may also need to provide more information in this case. For rbenv you'd need to tell it which version to use with set :rbenv_ruby, '2.1.2' for example. Sometimes you'll find out that some settings are valid for all applications under all environments in all servers. The most important one to notice is the location for our applications as they must not conflict with each other. Another setting that could be shared across all combinations could be the private key used to connect to all servers. For such cases, you should add those settings directly to config/deploy.rb: set :deploy_to, -> { "/home/vagrant/apps/#{fetch :environment}/#{fetch :application}" } set :ssh_options, { keys: %w(~/.vagrant.d/insecure_private_key) } I strongly recommend connecting to your servers with a regular account rather than root. For our applications we use userbenv to manage our Ruby versions, so we're able to deploy them as regular users as long as our applications listen to high port numbers. We'd then setup our proxy server (nginx in our case) to forward the requests on port 80 and 443 to each application's port accordingly to the requested domains and paths. This is set up by some Chef recipes. Those recipes run as root in our servers. To connect using another user, just pass it in the server declaration. To connect to vagrant@192.168.33.10, this is how you'd set it up: server '192.168.33.10', user: 'vagrant', roles: [:main] set :ssh_options, { keys: %w(~/.vagrant.d/insecure_private_key) } Finally, we create a config/database.yml that's suited for our environment on demand, before running the migrations task. 
Here's what the template config/templates/database.ymlcould look like: production: adapter: postgresql encoding: unicode pool: 30 database: <%= fetch :database_name %> host: <%= fetch :database_host %> I've omitted the settings for app2 , but in case it was another Rails application, we could extract the common logic between them to another common_rails mixin. Also notice that because we're not requiring capistrano/rails and capistrano/puma in the Capfile, their default values won't be set as Capistrano has already invoked the load:defaults task before our mixins are loaded. That's why we clear that task, require the recipes, and then re-enable and re-run the task so that the default for those recipes have the opportunity to load. Another approach is to require those recipes directly in the Capfile. But unless the recipes are carefully crafted to only run their commands for very specific roles, it's likely that you can get unexpected behavior if you deploy an application with Rails, another one with Grails, and yet another with NodeJS. If any of them has commands that run for all roles, or if the role names between them conflict somehow you'd be in trouble. So, unless you have total control and understanding about all your third-party recipes, I'd recommend that you use the approach outlined in the examples above. Conclusion All the techniques presented here are used to manage our real complex scenario at e-Core, where we support multiple applications in lots of environments that are replicated in three servers. We found that this allowed us to quickly add new environments or servers as needed to recreate our application in no time. Also, I'd like to thank Juan Ibiapina, who worked with me on all these recipes to ensure our deployment procedures are fully automated—almost. We still manage our databases and documents manually because we prefer to. About the author Rodrigo Rosenfeld Rosas lives in Vitória-ES, Brazil, with his lovely wife and daughter. He graduated in Electrical Engineering with a Master’s degree in Robotics and Real-time Systems. For the past five years Rodrigo has focused on building and maintaining single page web applications. He is the author of some gems includingactive_record_migrations,rails-web-console, the JS specs runner oojspec, sequel-devise, and the Linux X11 utility ktrayshortcut. Rodrigo was hired by e-Core (Porto Alegre-RS, Brazil) to work from home, building and maintaining software for Matterhorn Transactions Inc. with a team of great developers. Matterhorn's main product, the Market Tracker, is used by LexisNexis clients .

Introduction to Legacy Modernization in Oracle

Packt
16 Oct 2009
13 min read
IT organizations are under increasing demand to increase the ability of the business to innovate while controlling and often reducing costs. Legacy modernization is a real opportunity for these goals to be achieved. To attain these goals, the organization needs to take full advantage of emerging advances in platform and software innovations, while leveraging the investment that has been made in the business processes within the legacy environment.To make good choices for a specific roadmap to modernization, the decision makers should work to have a good understanding of what these modernization options are, and how to get there. Overview of the Modernization Options There are five primary approaches to legacy modernization: Re-architecting to a new environment SOA integration and enablement Replatforming through re-hosting and automated migration Replacement with COTS solutions Data Modernization Other organizations may have different nomenclature for what they call each type of modernization, but any of these options can generally fit into one of these five categories. Each of the options can be carried out in concert with the others, or as a standalone effort. They are not mutually exclusive endeavors. Further, in a large modernization project, multiple approaches are often used for parts of the larger modernization initiative. The right mix of approaches is determined by the business needs driving the modernization, organization's risk tolerance and time constraints, the nature of the source environment and legacy applications. Where the applications no longer meet business needs and require significant changes, re-architecture might be the best way forward. On the other hand, for very large applications that mostly meet the business needs, SOA enablement or re-platforming might be lower risk options. You will notice that the first thing we talk about in this section—the Legacy Understanding phase—isn't listed as one of the modernization options. It is mentioned at this stage because it is a critical step that is done as a precursor to any option your organization chooses. Legacy Understanding Once we have identified our business drivers and the first steps in this process, we must understand what we have before we go ahead and modernize it. Legacy environments are very complex and quite often have little or no current documentation. This introduces a concept of analysis and discovery that is valuable for any modernization technique. Application Portfolio Analysis (APA) In order to make use of any modernization approach, the first step an organization must take is to carry out an APA of the current applications and their environment. This process has many names. You may hear terms such as Legacy Understanding, Application Re-learn, or Portfolio Understanding. All these activities provide a clear view of the current state of the computing environment. This process equips the organization with the information that it needs to identify the best areas for modernization. For example, this process can reveal process flows, data flows, how screens interact with transactions and programs, program complexity and maintainability metrics and can even generate pseudocode to re-document candidate business rules. Additionally, the physical repositories that are created as a result of the analysis can be used in the next stages of modernization, be it in SOA enablement, re-architecture, or re-platforming. 
Efforts are currently underway at the Object Management Group (OMG) to create a standard method for exchanging this data between applications. The following screenshot shows the Legacy Portfolio Analysis:

APA Macroanalysis

The first form of APA is a very high-level, abstract view of the application environment. This level of analysis looks at the application in the context of the overall IT organization, and systems information is collected at a very high level. The key here is to understand which applications exist, how they interact, and what the identified value of the desired function is. With this type of analysis, organizations can manage overall modernization strategies and identify key applications that are good candidates for SOA integration, re-architecture, or re-platforming versus replacement with Commercial Off-the-Shelf (COTS) applications. Data structures, program code, and technical characteristics are not analyzed here.

The following macro-level process flow diagram was automatically generated by the Relativity Technologies Modernization Workbench tool. Using this, the user can automatically get a view of the screen flows within a COBOL application. It is used to help identify candidate areas for modernization, areas of complexity, transfer of knowledge, or legacy system documentation. The key thing about these types of reports is that they are dynamic and automatically generated.

The previous flow diagram illustrates some interesting points about the system that can be understood quickly by the analyst. Remember, this type of diagram is generated automatically, and can provide instant insight into the system with no prior knowledge. For example, we now have some basic information such as:

- MENSAT1.MENMAP1 is the main driver and is most likely a menu program.
- There are four called programs.
- Two programs have database interfaces.

This is a simplistic view, but if you imagine hundreds of programs in a visual perspective, we can quickly identify clusters of complexity, define potential subsystems, and do much more, all from an automated tool with visual navigation and powerful cross-referencing capabilities. This type of tool can also help to re-document existing legacy assets.

APA Microanalysis

The second type of portfolio analysis is APA microanalysis, which examines applications at the program level. This level of analysis can be used to understand things like program logic or candidate business rules for enablement or business rule transformation. The process will also reveal things such as code complexity, data exchange schemas, and specific interactions within a screen flow. These are all critical when considering SOA integration, re-architecture, or a re-platforming project.

The following are more models generated from the Relativity Technologies Modernization Workbench tool. The first is a COBOL transaction taken from a COBOL process. We are able to take a low-level view of a business rule slice taken from a COBOL program and understand how this process flows. The particulars of this flow map diagram are not important; rather, the point is that the model can be automatically generated and is dynamic, based on the current state of the code. The second model shows how a COBOL program interacts with a screen conversation. In this example, we are able to look at specific paragraphs within a particular program, identify specific CICS transactions, and understand which paragraphs (or subroutines) are interacting with the database.

The models can be used to further refine our drive for a more re-architected system, helping us to identify business rules and populate a rules engine. This is just another example of a COBOL program that interacts with screens (shown in gray) and of the paragraphs that execute CICS transactions (shown in white). With these color-coded boxes, we can quickly identify paragraphs, screens, databases, and CICS transactions.
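To give a feel for the kind of raw dependency information such tools harvest before rendering diagrams like the ones above, the following is a minimal, illustrative sketch (in Java) of a scan over COBOL sources for CALL statements and embedded CICS and SQL commands. This is not how the Relativity workbench works internally; the directory name, file extension, and regular expressions are simplifying assumptions, and a real analyzer would parse the full COBOL grammar and resolve copybooks.

import java.nio.file.*;
import java.util.*;
import java.util.regex.*;

/**
 * Minimal sketch of the kind of static scan an APA tool performs:
 * it walks COBOL sources and records called programs and
 * CICS/SQL interactions so a call/interaction graph can be drawn.
 */
public class CobolInventoryScan {

    // Static calls such as: CALL 'PAYCALC' USING ...
    private static final Pattern CALL = Pattern.compile("\\bCALL\\s+'([A-Z0-9-]+)'");
    // Embedded CICS commands such as: EXEC CICS SEND MAP('MENMAP1') ...
    private static final Pattern CICS = Pattern.compile("\\bEXEC\\s+CICS\\s+(\\w+)");
    // Embedded SQL, a hint that the program touches the database
    private static final Pattern SQL  = Pattern.compile("\\bEXEC\\s+SQL\\b");

    public static void main(String[] args) throws Exception {
        // The source directory is an assumption for the example
        Path sourceDir = Paths.get(args.length > 0 ? args[0] : "cobol-src");

        try (var files = Files.walk(sourceDir)) {
            files.filter(p -> p.toString().toLowerCase().endsWith(".cbl"))
                 .forEach(CobolInventoryScan::scan);
        }
    }

    private static void scan(Path program) {
        Set<String> calls = new TreeSet<>();
        Set<String> cics  = new TreeSet<>();
        boolean usesDb = false;

        try {
            for (String line : Files.readAllLines(program)) {
                Matcher m = CALL.matcher(line);
                while (m.find()) calls.add(m.group(1));
                m = CICS.matcher(line);
                while (m.find()) cics.add(m.group(1));
                if (SQL.matcher(line).find()) usesDb = true;
            }
        } catch (Exception e) {
            System.err.println("Could not read " + program + ": " + e.getMessage());
            return;
        }

        // A real tool would persist this into a repository; here we just print it
        System.out.printf("%s -> calls %s, CICS %s, DB access: %s%n",
                program.getFileName(), calls, cics, usesDb ? "yes" : "no");
    }
}

Even a crude inventory like this, persisted into a repository, is enough to draw a first-cut call-and-interaction graph and to spot the clusters of complexity mentioned above.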
Application Portfolio Management (APM)

APA is only a part of an IT approach known as Application Portfolio Management. While APA is critical for any modernization project, APM provides guideposts on how to combine the APA results, the business assessment of the applications' strategic value and future needs, and IT infrastructure directions to come up with a long-term application portfolio strategy and the related technology targets to support it.

It is often said that you cannot modernize that which you do not know. With APM, you can effectively manage change within an organization, understand the impact of change, and also manage its compliance. APM is a constant process, be it part of a modernization project or of an organization's portfolio management and change control strategy. All applications are in a constant state of change, and during any modernization, things are always in a state of flux: legacy code is changed, new development is done (often in parallel), and data schemas are changed. When looking into APM tool offerings, consider products that can capture these kinds of changes and provide an active repository, rather than a static view. Ideally, these tools should adhere to emerging technical standards, like those being pioneered by the OMG.

Re-Architecting

Re-architecting is based on the concept that all legacy applications contain invaluable business logic and data relevant to the business, and that these assets should be leveraged in the new system rather than thrown out to rebuild from scratch. Since the modern IT environment elevates much of this logic above the code using declarative models supported by BPM tools, ESBs, business rules engines, and data integration and access solutions, some of the original technical code can be replaced by these middleware tools to achieve greater agility. The following screenshot shows an example of a system after re-architecture.

The previous example shows what a system would look like, from a higher level, after re-architecture. We can see that this isn't a simple one-to-one transformation of one code base to another. It is also much more than remediation and refactoring of the legacy code to standard Java code. It is a system that fully leverages technologies suited for the required task, for example, Identity Management for security, business rules for the core business, and BPEL for process flow. Thus, re-architecting focuses on recovering and reassembling the business-relevant processes from a legacy application, while eliminating the technology-specific code. Here, we want to capture the value of the business process independently of the legacy code base, and move it into a different paradigm. Re-architecting is typically used to handle modernizations that involve changes in architecture, such as the introduction of object orientation and process-driven services.
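As a concrete, if simplified, illustration of what capturing the business process independently of the legacy code can look like, here is a hypothetical business rule harvested from a COBOL paragraph and re-expressed as a plain, stateless Java component. The rule, class name, and threshold values are invented for the example; the point is only that the logic is now free of screen handling, CICS plumbing, and data-access code, so it can be called from a rules engine, a service, or a BPEL process.

import java.math.BigDecimal;

/**
 * Illustrative re-architected business rule.
 * The original logic would have been buried in a COBOL paragraph,
 * entangled with screen handling and CICS plumbing; here it is isolated
 * as a stateless, technology-neutral component.
 */
public class LoyaltyDiscountRule {

    // Thresholds are hypothetical values recovered during rule harvesting
    private static final BigDecimal GOLD_THRESHOLD = new BigDecimal("10000");
    private static final BigDecimal GOLD_DISCOUNT  = new BigDecimal("0.10");
    private static final BigDecimal BASE_DISCOUNT  = new BigDecimal("0.02");

    /** Input data identified during APA: the customer's yearly purchase total. */
    public BigDecimal discountFor(BigDecimal yearlyPurchases) {
        if (yearlyPurchases == null || yearlyPurchases.signum() < 0) {
            throw new IllegalArgumentException("yearlyPurchases must be non-negative");
        }
        // The branching below is the harvested rule, and nothing else
        return yearlyPurchases.compareTo(GOLD_THRESHOLD) >= 0
                ? GOLD_DISCOUNT
                : BASE_DISCOUNT;
    }
}

In a real project, such a component would be traced back to the legacy paragraph it was harvested from, so the rule repository keeps the link between the old code and the new component.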
The advantage that re-architecting has over greenfield development is that re-architecting recognizes that there is information in the application code and the surrounding artifacts (for example, DDLs, COPYBOOKS, and user training manuals) that is useful as a source for the re-architecting process, such as application process interaction, data models, and workflow. Re-architecting will usually go outside the source code of the legacy application to incorporate concepts like workflow and new functionality that were never part of the legacy application. However, it also recognizes that this legacy application contains key business rules and processes that need to be harvested and brought forward.

Some of the important considerations for maximizing re-use by extracting business rules from legacy applications as part of a re-architecture project include:

- Eliminate dead code and environmental specifics, and resolve mutually exclusive logic.
- Identify key input/output data (parameters, screen input, DB and file records, and so on).
- Keep in mind that many rules live outside of the code (for example, a screen flow described in a training manual).
- Populate a data dictionary specific to the application/industry context.
- Identify and tag rules based on transaction types and key data, policy parameters, and key results (output data).
- Isolate rules into a tracking repository (a minimal sketch of such a repository entry appears at the end of this section).
- Combine automation and human review to track relationships, eliminate redundancies, classify and consolidate, and add annotations.

A parallel method of extracting knowledge from legacy applications uses modeling techniques, often based on UML. This method attempts to mine UML artifacts from the application code and related materials, and then create full-fledged models representing the complete application. Key considerations for mining models include:

- A convenient code representation helps to quickly filter out technical details.
- Allow user-selected artifacts to be quickly represented as UML entities.
- Allow the user to add relationships and annotate the objects to assemble a more complete UML model.
- Use external information, if possible, to refine use cases (screen flows) and activity diagrams; remember that some actors, flows, and so on may not appear in the code.
- Export to an XML-based standard notation to facilitate refinement and forward re-engineering through UML-based tools.

Because modernization with this method leverages the years of investment in the legacy code base, it is much less costly and less risky than starting a new application from ground zero. However, since it does involve change, it still carries risk. As a result, a number of other modernization options have been developed that involve less risk. The next set of modernization options provides a different set of benefits with respect to a fully re-architected SOA environment. The important thing is that these other techniques allow an organization to break the process of reaching the optimal modernization target into a series of phases that lower the overall risk of modernization. In the following figure, we can see that re-architecture takes a monolithic legacy system and applies technology and process to deliver a highly adaptable modern architecture.
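As promised in the list above, here is a minimal, purely illustrative sketch of what a single entry in a business-rule tracking repository might capture. The field names and example values are assumptions; commercial mining tools keep far richer metadata, but the essentials are traceability back to the legacy source, tagging by transaction type and key data, and room for classification and analyst annotation.

import java.util.List;

/** Illustrative record for one harvested business rule in a tracking repository. */
public record HarvestedRule(
        String ruleId,            // e.g. "BR-0042" (hypothetical identifier)
        String sourceProgram,     // COBOL program the rule was harvested from
        String sourceParagraph,   // paragraph or section containing the logic
        String transactionType,   // tag by transaction type, e.g. "ACCTINQ"
        List<String> keyInputs,   // key input data items identified during analysis
        List<String> keyOutputs,  // key results the rule produces
        String classification,    // e.g. "pricing", "eligibility", "validation"
        String annotation,        // analyst notes added during human review
        boolean redundant) {      // flagged when consolidation finds a duplicate

    /** Example entry with invented values. */
    public static HarvestedRule example() {
        return new HarvestedRule("BR-0042", "MENSAT1", "2100-CHECK-LIMIT",
                "ACCTINQ", List.of("ACCT-NO", "YEARLY-TOTAL"), List.of("DISCOUNT-PCT"),
                "pricing", "Mirrors the discount rule sketched earlier", false);
    }
}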
SOA Integration and Enablement

Since SOA integration is the least invasive approach to legacy application modernization, this technique allows legacy components to be used as part of an SOA infrastructure very quickly and with little risk. Further, it is often the first step in the larger modernization process. In this method, the source code remains mostly unchanged (we will talk more about that later) and the application is wrapped using SOA components, thus creating services that can be exposed and registered to an SOA management facility on a new platform, but are implemented via the existing legacy code (a simplified wrapper sketch follows the considerations listed below). The exposed services can then be re-used and combined with the results of other, more invasive modernization techniques such as re-architecting.

Using SOA integration, an organization can begin to make use of SOA concepts, including the orchestration of services into business processes, while leaving the legacy application intact. Of course, the appropriate interfaces into the legacy application must exist, and the code behind these interfaces must perform useful functions in a manner that can be packaged as services. An SOA readiness assessment involves analysis of service granularity, exception handling, and transaction integrity and reliability requirements; considerations of response time, message sizes, and scalability; issues of end-to-end messaging security; and requirements for services orchestration and SLA management. Following an assessment, any issues discovered need to be rectified before exposing components as services, and appropriate run-time and lifecycle governance policies need to be created and implemented.

It is important to note that there are three tiers where integration can be done: data, screen, and code. Each of these tiers, based upon the state and structure of the code, can be extended with this technique. As mentioned before, this is often the first step in modernization. In this example, we can see that the legacy systems still stay on the legacy platform. Here, we isolate and expose this information as a business service using legacy adapters.

The following lists the important considerations in SOA integration and enablement projects, grouped by area:

Criteria for identifying well-defined services
- Represent a core enterprise function re-usable by many client applications
- Present a coarse-grained interface
- Single interaction vs. multi-screen flows
- UI, business logic, and data access layers
- Exception handling: returning results without branching to another screen

Discovering "services" beyond screen flows
- Conversational vs. sync/async calls
- COMMAREA transactions (re-factored to use a reasonable message size)

Security policies and their enforcement
- RACF vs. LDAP-based or SSO mechanisms
- End-to-end messaging security and authentication, authorization, and auditing

Services integration and orchestration
- Wrapping and proxying via a middle-tier gateway vs. mainframe-based services
- Who is responsible for input validation?
- Orchestrating "composite" mainframe services
- Supporting bidirectional integration

Quality of Service (QoS) requirements
- Response time, throughput, and scalability
- End-to-end monitoring and SLA management
- Transaction integrity and global transaction coordination
- End-to-end monitoring and tracing

Services lifecycle governance
- Ownership of service interfaces and the change control process
- Service discovery (repository, tools)
- Orchestration, extension, and BPM integration
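As referenced above, the following is a minimal sketch of what a code-tier wrapper might look like: a web service facade that exposes an existing CICS transaction as a single, coarse-grained operation, assuming the JAX-WS API (javax.jws) is on the classpath. The LegacyAdapter interface, the ACCTINQ transaction id, and the service and operation names are hypothetical; in practice, the adapter side would be provided by a connector product (for example, a JCA-based CICS adapter), and the service would be registered with the SOA management facility.

import javax.jws.WebMethod;
import javax.jws.WebService;

/**
 * Hypothetical adapter interface; a real implementation would delegate to a
 * CICS connector and marshal the COMMAREA record.
 */
interface LegacyAdapter {
    byte[] invoke(String transactionId, byte[] commarea);
}

/**
 * Coarse-grained service facade over an existing CICS transaction.
 * The legacy program is untouched; only this wrapper is new code.
 */
@WebService(name = "CustomerAccountService")
public class CustomerAccountService {

    private final LegacyAdapter adapter;

    public CustomerAccountService(LegacyAdapter adapter) {
        this.adapter = adapter;
    }

    /**
     * Exposes a single business-level operation rather than the
     * screen-by-screen conversation the legacy 3270 interface required.
     */
    @WebMethod
    public String getAccountBalance(String accountNumber) {
        // Build the fixed-format COMMAREA the legacy program expects
        byte[] request = String.format("%-10s", accountNumber).getBytes();

        // 'ACCTINQ' is an illustrative CICS transaction id
        byte[] response = adapter.invoke("ACCTINQ", request);

        // Translate the fixed-format reply into a service-friendly result
        return new String(response).trim();
    }
}

Note how the wrapper reflects several of the criteria listed above: it presents a coarse-grained, single-interaction operation, keeps exception handling and message-size concerns on the new platform, and leaves the legacy program untouched.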