
How-To Tutorials - Application Development

Creating a Lazarus Component

Packt
22 Apr 2013
14 min read
Creating a new component package

We are going to create a custom logging component and add it to the Misc tab of the component palette. To do this, we first need to create a new package and add our component to that package, along with any other required resources, such as an icon for the component.

To create a new package, do the following:

1. Select Package from the main menu.
2. Select New Package... from the submenu.
3. In the Save dialog, create a new directory called MyComponents.
4. Select the MyComponents directory.
5. Enter MyComponents as the filename and press the Save button.

Now you have a new package that is ready to have components added to it. Follow these steps:

1. On the Package dialog window, click on the add (+) button.
2. Select the New Component tab.
3. Select TComponent as Ancestor Type.
4. Set New class name to TMessageLog.
5. Set Palette Page to Misc.
6. Leave all the other settings as they are.
7. Click on the Create New Component button.

You should see messagelog.pas listed under the Files node in the Package dialog window. Let's open this file and see what the auto-generated code contains. Double-click on the file, or choose Open file from the More menu in the Package dialog.

Do not name your component the same as the package. Doing so will cause problems when you compile the package later, because the compile procedure automatically creates a .pas file for the package and would overwrite your component's .pas file.

The code in the Source Editor window is given as follows:

unit TMessageLog;

{$mode objfpc}{$H+}

interface

uses
  Classes, SysUtils, LResources, Forms, Controls, Graphics, Dialogs,
  StdCtrls;

type
  TMessageLog = class(TComponent)
  private
    { Private declarations }
  protected
    { Protected declarations }
  public
    { Public declarations }
  published
    { Published declarations }
  end;

procedure Register;

implementation

procedure Register;
begin
  RegisterComponents('Misc', [TMessageLog]);
end;

end.

What should stand out in the auto-generated code is the global procedure RegisterComponents, which is contained in the Classes unit. The procedure registers the component (or components, if you create more than one in the unit) to the component palette page that is passed as its first parameter.

Since everything is in order, we can now compile the package and install the component:

1. Click the Compile button on the toolbar.
2. Once the compile procedure has completed, select Install, which is located in the menu under the Use button.
3. You will be presented with a dialog telling you that Lazarus needs to be rebuilt. Click on the Yes button.

The Lazarus rebuilding process will take some time. When it is complete, Lazarus needs to be restarted; if this does not happen automatically, restart it yourself. On restarting Lazarus, select the Misc tab on the component palette. You should see the new component as the last component on the tab.

You have now successfully created and installed a new component, and you can create a new application and add the component to a Lazarus form. The component in its current state does not perform any action, so let us now look at adding properties and events that will be accessible in the Object Inspector window at design time.
Adding properties

Properties of a component that you would like to be visible in the Object Inspector window must be declared as published. Properties are attributes that determine an object's status and behavior. A property is a name that is mapped to read and write methods or that accesses a field directly; when you read or write a property, you are either accessing a field or calling a method of the object.

For example, let us add a FileName property to TMessageLog, which holds the name of the file that messages will be written to. The field of the object that will store this value will be named fFileName.

To the TMessageLog private declaration section, add:

  fFileName: String;

To the TMessageLog published declaration section, add:

  property FileName: String read fFileName write fFileName;

With these changes, when the package is compiled and installed, the FileName property will be visible in the Object Inspector window whenever a TMessageLog component is added to a form in a project. You can do this now if you would like to verify it.

Adding events

Any interaction that a user has with a component, such as clicking it, generates an event. Events are also generated by the system, in response to a method call or a change in a component's property, or when a change in one component affects another; for example, setting the focus on one component causes the component currently in focus to lose it, which triggers an event.

Event handlers are methods of the form containing the component; this technique is referred to as delegation. You will notice that when you double-click on a component's event in the Object Inspector, a new procedure is created on the form. Events are properties, and methods are assigned to event properties just as values are assigned to ordinary properties. Because events are properties and use delegation, multiple events can share the same event handler.

The simplest way to create an event is to define it with the type TNotifyEvent. For example, if we want to add an OnChange event to TMessageLog, we could add the following code:

...
private
  FOnChange: TNotifyEvent;
...
published
  property OnChange: TNotifyEvent read FOnChange write FOnChange;
...
end;

When you double-click on the OnChange event in the Object Inspector, the following method stub is created in the form containing the TMessageLog component:

procedure TForm.MessageLogChange(Sender: TObject);
begin
end;

Some events, such as OnChange or OnFocus, are fired when the value of a component's property changes or when another event fires. Traditionally, in such cases, the event is fired from a method whose name is the event name with its On prefix replaced by Do. So, in the case of our OnChange event, it would be called from the DoChange method (which is in turn called by some other method).

Let us assume that, when a filename is set for the TMessageLog component, the procedure SetFileName is called, and that it calls DoChange. The code would look as follows:

procedure SetFileName(name: string);
begin
  fFileName := name;
  //fire the event
  DoChange;
end;

procedure DoChange;
begin
  if Assigned(FOnChange) then
    FOnChange(Self);
end;

The DoChange procedure checks whether anything has been assigned to the FOnChange field; if it is assigned, it executes whatever is assigned to it. This means that when you double-click on the OnChange event in the Object Inspector, the method name you enter is assigned to FOnChange, and that is the method called by DoChange.
Events with more parameters

You probably noticed that the OnChange event has only one parameter, Sender, of the type TObject. Most of the time this is adequate, but there may be times when we want to pass other parameters into an event. In those cases, TNotifyEvent is not an adequate type, and we need to define a new type. The new type must be a method pointer type, which is similar to a procedural type but has the keywords of object at the end of the declaration.

In the case of TMessageLog, we may need to perform some action before or after a message is written to the file. To do this, we declare two method pointer types, TBeforeWriteMsgEvent and TAfterWriteMsgEvent, both of which will be triggered in another method named WriteMessage. The modified code looks as follows:

type
  TBeforeWriteMsgEvent = procedure(var Msg: String; var OKToWrite: Boolean) of Object;
  TAfterWriteMsgEvent = procedure(Msg: String) of Object;

  TMessageLog = class(TComponent)
  ...
  public
    function WriteMessage(Msg: String): Boolean;
  ...
  published
    property OnBeforeWriteMsg: TBeforeWriteMsgEvent read fBeforeWriteMsg
      write fBeforeWriteMsg;
    property OnAfterWriteMsg: TAfterWriteMsgEvent read fAfterWriteMsg
      write fAfterWriteMsg;
  end;

implementation

function TMessageLog.WriteMessage(Msg: String): Boolean;
var
  OKToWrite: Boolean;
begin
  Result := FALSE;
  OKToWrite := TRUE;
  if Assigned(fBeforeWriteMsg) then
    fBeforeWriteMsg(Msg, OKToWrite);
  if OKToWrite then
  begin
    try
      AssignFile(fLogFile, fFileName);
      if FileExists(fFileName) then
        Append(fLogFile)
      else
        ReWrite(fLogFile);
      WriteLn(fLogFile, DateTimeToStr(Now()) + ' - ' + Msg);
      if Assigned(fAfterWriteMsg) then
        fAfterWriteMsg(Msg);
      Result := TRUE;
      CloseFile(fLogFile);
    except
      MessageDlg('Cannot write to log file, ' + fFileName + '!',
        mtError, [mbOK], 0);
      CloseFile(fLogFile);
    end; // try...except
  end; // if
end; // WriteMessage

Examining the WriteMessage function, we see that before the Msg parameter is written to the file, the fBeforeWriteMsg field is checked; if a handler is assigned to it, that handler is called with the parameters Msg and OKToWrite. The method pointer type TBeforeWriteMsgEvent declares both of these parameters as var parameters, so any changes the handler makes to them are passed back to the WriteMessage function. If the Msg parameter is successfully written to the file, the fAfterWriteMsg field is checked and, if a handler is assigned, it is called; the file is then closed and the function's result is set to True. If the Msg value cannot be written to the file, an error dialog is shown, the file is closed, and the function's result remains False.

With the changes that we have made to the TMessageLog unit, we now have a functional component. You can save the changes, recompile and reinstall the package, and try out the new component by creating a small application that uses TMessageLog.

Property editors

Property editors are custom dialogs for editing special properties of a component. The standard property types, such as strings, images, or enumerated types, have default property editors, but special property types may require you to write custom property editors. Custom property editors must descend from the class TPropertyEditor or one of its descendant classes. Property editors must be registered in the Register procedure using the function RegisterPropertyEditor from the unit PropEdits.
An example of a property editor class declaration is given as follows:

TPropertyEditor = class
public
  function AutoFill: Boolean; Virtual;
  procedure Edit; Virtual; // double-clicking the property value to activate
  procedure ShowValue; Virtual; // control-clicking the property value to activate
  function GetAttributes: TPropertyAttributes; Virtual;
  function GetEditLimit: Integer; Virtual;
  function GetName: ShortString; Virtual;
  function GetHint(HintType: TPropEditHint; x, y: integer): String; Virtual;
  function GetDefaultValue: AnsiString; Virtual;
  function SubPropertiesNeedsUpdate: Boolean; Virtual;
  function IsDefaultValue: Boolean; Virtual;
  function IsNotDefaultValue: Boolean; Virtual;
  procedure GetProperties(Proc: TGetPropEditProc); Virtual;
  procedure GetValues(Proc: TGetStrProc); Virtual;
  procedure SetValue(const NewValue: AnsiString); Virtual;
  procedure UpdateSubProperties; Virtual;
end;

A class used as a property of a component is a good example of a property that needs a custom property editor. Because a class has many fields with different formats, the Object Inspector cannot make these fields available for editing, as it can for properties of standard types, unless a property editor is created for the class property. For such properties, Lazarus shows the property name in parentheses, followed by a button with an ellipsis (...) that activates the property editor. This functionality is handled by the standard property editor TClassPropertyEditor, which can be inherited from to create a custom property editor, as given in the following code:

TClassPropertyEditor = class(TPropertyEditor)
public
  constructor Create(Hook: TPropertyEditorHook; APropCount: Integer); Override;
  function GetAttributes: TPropertyAttributes; Override;
  procedure GetProperties(Proc: TGetPropEditProc); Override;
  function GetValue: AnsiString; Override;
  property SubPropsTypeFilter: TTypeKinds Read FSubPropsTypeFilter
    Write SetSubPropsTypeFilter Default tkAny;
end;

Using the preceding class as a base class, all you need to do to complete a property editor is add a dialog in the Edit method, as follows:

TMyPropertyEditor = class(TClassPropertyEditor)
public
  procedure Edit; Override;
  function GetAttributes: TPropertyAttributes; Override;
end;

procedure TMyPropertyEditor.Edit;
var
  MyDialog: TCommonDialog;
begin
  MyDialog := TCommonDialog.Create(NIL);
  try
    ...
    //Here you can set attributes of the dialog
    MyDialog.Options := MyDialog.Options + [fdShowHelp];
    ...
  finally
    MyDialog.Free;
  end;
end;

Component editors

Component editors control the behavior of a component when it is double-clicked or right-clicked in the form designer. Classes that define a component editor must descend from TComponentEditor or one of its descendant classes, and should be registered in the Register procedure using the function RegisterComponentEditor. Most of the methods of TComponentEditor are inherited from its ancestor, TBaseComponentEditor, and if you are going to write a component editor, you need to be aware of this class and its methods.
The declaration of TBaseComponentEditor is as follows:

TBaseComponentEditor = class
protected
public
  constructor Create(AComponent: TComponent;
    ADesigner: TComponentEditorDesigner); Virtual;
  procedure Edit; Virtual; Abstract;
  procedure ExecuteVerb(Index: Integer); Virtual; Abstract;
  function GetVerb(Index: Integer): String; Virtual; Abstract;
  function GetVerbCount: Integer; Virtual; Abstract;
  procedure PrepareItem(Index: Integer; const AnItem: TMenuItem); Virtual; Abstract;
  procedure Copy; Virtual; Abstract;
  function IsInInlined: Boolean; Virtual; Abstract;
  function GetComponent: TComponent; Virtual; Abstract;
  function GetDesigner: TComponentEditorDesigner; Virtual; Abstract;
  function GetHook(out Hook: TPropertyEditorHook): Boolean; Virtual; Abstract;
  procedure Modified; Virtual; Abstract;
end;

Let us look at some of the more important methods of the class. The Edit method is called when a component is double-clicked in the form designer. GetVerbCount and GetVerb are called to build the context menu that is invoked by right-clicking on the component; a verb is a menu item, GetVerb returns the name of a menu item, and GetVerbCount returns the total number of items on the context menu. The PrepareItem method is called for each menu item after the menu is created, and it allows the menu item to be customized, for example by adding a submenu or hiding the item by setting its visibility to False. ExecuteVerb executes the menu item. The Copy method is called when the component is copied to the clipboard.

A good example of a component editor is the TCheckListBox component editor. It descends from TComponentEditor, so not all of the methods of TBaseComponentEditor need to be implemented; TComponentEditor provides an empty implementation for most methods and sets defaults for others, and only the methods the component editor needs are overridden. An example of the component editor code is given as follows:

TCheckListBoxComponentEditor = class(TComponentEditor)
protected
  procedure DoShowEditor;
public
  procedure ExecuteVerb(Index: Integer); override;
  function GetVerb(Index: Integer): String; override;
  function GetVerbCount: Integer; override;
end;

procedure TCheckGroupComponentEditor.DoShowEditor;
var
  Dlg: TCheckGroupEditorDlg;
begin
  Dlg := TCheckGroupEditorDlg.Create(NIL);
  try
    // .. shortened
    Dlg.ShowModal;
    // .. shortened
  finally
    Dlg.Free;
  end;
end;

procedure TCheckGroupComponentEditor.ExecuteVerb(Index: Integer);
begin
  case Index of
    0: DoShowEditor;
  end;
end;

function TCheckGroupComponentEditor.GetVerb(Index: Integer): String;
begin
  Result := 'CheckBox Editor...';
end;

function TCheckGroupComponentEditor.GetVerbCount: Integer;
begin
  Result := 1;
end;

Summary

In this article, we learned how to create a new Lazarus package and, using the New Package dialog window, how to add a new component to it: our own custom component, TMessageLog. We also learned how to compile and install a new component into the IDE, which requires Lazarus to rebuild itself. Moreover, we discussed component properties. Then we became acquainted with events, which are triggered by any interaction a user has with a component, such as clicking it, or by the system, for example when a change in one component of a form affects another component. We saw that events are properties, and that they are handled through a technique called delegation.
We discovered that the simplest way to create an event is to define it with the TNotifyEvent type, and that if you need to send more parameters to an event than the single Sender parameter provided by TNotifyEvent, you need to declare your own method pointer type. We learned that property editors are custom dialogs for editing special properties of a component that are not of a standard type, such as string or integer, and that they must descend from TPropertyEditor. Then we discussed component editors, which control the behavior of a component when it is right-clicked or double-clicked in the form designer, and saw that a component editor must descend from TComponentEditor or one of its descendant classes. Finally, we looked at an example of a component editor for the TCheckListBox component.


Configuring JDBC in Oracle JDeveloper

Packt
21 Oct 2009
14 min read
Introduction

Unlike the Eclipse IDE, which requires a plugin, JDeveloper has built-in provision for establishing a JDBC connection with a database. JDeveloper is the only Java IDE with an embedded application server, the Oracle Containers for J2EE (OC4J), so a database-backed web application may run in JDeveloper without requiring a third-party application server. However, JDeveloper also supports third-party application servers; starting with JDeveloper 11, application developers may point the IDE to the application server instance (or OC4J instance), including a third-party application server, that they want to use for testing during development.

JDeveloper provides connection pooling for the efficient use of database connections. A database connection may be used in an ADF BC application or in a Java EE application. A database connection in JDeveloper may be configured in the Connections Navigator; such a connection is available as a DataSource registered with a JNDI naming service. The database connection in JDeveloper is a reusable named connection that developers configure once and then use in as many of their projects as they want. Depending on the nature of the project and the database connection, the connection is configured in the bc4j.xcfg file or as a Java EE data source.

Here, it is necessary to distinguish between a data source and a DataSource. A data source is a source of data; for example, an RDBMS database is a data source. A DataSource is an interface that represents a factory for JDBC Connection objects. JDeveloper uses the term data source to refer to a factory for connections, and we will do the same; in the javax.sql package this factory is represented by the DataSource interface. A DataSource object may be created from a data source registered with the JNDI (Java Naming and Directory Interface) naming service using a JNDI lookup, and a JDBC Connection object may be obtained from a DataSource object using the getConnection method. As an alternative to configuring a connection in the Connections Navigator, a data source may also be specified directly in the data source configuration file, data-sources.xml.

In this article we will discuss the procedure to configure a JDBC connection and a JDBC data source in the JDeveloper 10g IDE. We will use the MySQL 5.0 database server and the MySQL Connector/J 5.1 JDBC driver, which supports the JDBC 4.0 specification. In this article you will learn the following:

- Creating a database connection in the JDeveloper Connections Navigator.
- Configuring the data source and connection pool associated with the connection configured in the Connections Navigator.
- The common JDBC connection errors.

Before we create a JDBC connection and a data source, we will discuss connection pooling and DataSource.

Connection Pooling and DataSource

The javax.sql package provides the API for server-side database access. The main interfaces in the javax.sql package are DataSource, ConnectionPoolDataSource, and PooledConnection. The DataSource interface represents a factory for connections to a database and is the preferred method of obtaining a JDBC connection. An object that implements the DataSource interface is typically registered with a Java Naming and Directory API-based naming service. The DataSource interface implementation is driver-vendor specific.
The DataSource interface has three types of implementations:

- Basic implementation: There is a 1:1 correspondence between a client's Connection object and the connection with the database; for every Connection object, there is a connection with the database. With the basic implementation, the overhead of opening, initiating, and closing a connection is incurred for each client session.
- Connection pooling implementation: A pool of Connection objects is available, from which connections are assigned to the different client sessions. A connection pooling manager implements the connection pooling. When a client session does not require a connection, the connection is returned to the connection pool and becomes available to other clients. Thus, the overheads of opening, initiating, and closing connections are reduced.
- Distributed transaction implementation: Produces a Connection object that is mostly used for distributed transactions and is always connection pooled. A transaction manager implements the distributed transactions.

An advantage of using a data source is that code accessing a data source does not have to be modified when an application is migrated to a different application server; only the data source properties need to be modified. A JDBC driver that is accessed through a DataSource does not register itself with a DriverManager. A DataSource object is created using a JNDI lookup, and subsequently a Connection object is created from the DataSource object. For example, if a data source has the JNDI name jdbc/OracleDS, first create an InitialContext object, then create a DataSource object using the InitialContext lookup method, and finally create a Connection object from the DataSource object using the getConnection() method:

InitialContext ctx = new InitialContext();
DataSource ds = (DataSource) ctx.lookup("jdbc/OracleDS");
Connection conn = ds.getConnection();

The JNDI naming service, which we used to create a DataSource object, is provided by J2EE application servers such as the Oracle Application Server Containers for J2EE (OC4J) embedded in the JDeveloper IDE.

A connection in a pool of connections is represented by the PooledConnection interface, not the Connection interface. The connection pool manager, typically the application server, maintains a pool of PooledConnection objects. When an application requests a connection using the DataSource.getConnection() method, as we did with the jdbc/OracleDS data source example, the connection pool manager returns a Connection object, which is actually a handle to an object that implements the PooledConnection interface.

A ConnectionPoolDataSource object, which is typically registered with a JNDI naming service, represents a collection of PooledConnection objects. The JDBC driver provides an implementation of ConnectionPoolDataSource, which is used by the application server to build and manage a connection pool. When an application requests a connection, if a suitable PooledConnection object is available in the connection pool, the connection pool manager returns a handle to it as a Connection object. If a suitable PooledConnection object is not available, the connection pool manager invokes the getPooledConnection() method of the ConnectionPoolDataSource to create a new PooledConnection object.
For example, if connectionPoolDataSource is a ConnectionPoolDataSource object, a new PooledConnection is created as follows:

PooledConnection pooledConnection = connectionPoolDataSource.getPooledConnection();

The application does not have to invoke the getPooledConnection() method itself, though; the connection pool manager invokes getPooledConnection(), and the JDBC driver implementing ConnectionPoolDataSource creates a new PooledConnection and returns a handle to it. The connection pool manager then returns a Connection object, which is a handle to a PooledConnection object, to the application requesting a connection. When an application closes a Connection object using the close() method, as follows, the connection does not actually get closed:

conn.close();

Instead, the connection handle is deactivated; the connection pool manager performs the deactivation. When an application closes a Connection object with the close() method, any client info properties that were set using the setClientInfo method are cleared.

The connection pool manager registers itself with a PooledConnection object using the addConnectionEventListener() method. When a connection is closed, the connection pool manager is notified; it deactivates the handle to the PooledConnection object and returns the PooledConnection object to the connection pool to be used by another application. The connection pool manager is also notified if a connection has an error. A PooledConnection object is not closed until the connection pool is being reinitialized, the server is shut down, or the connection becomes unusable.

In addition to connections being pooled, PreparedStatement objects are also pooled by default if the database supports statement pooling. Whether a database supports statement pooling can be discovered using the supportsStatementPooling() method of the DatabaseMetaData interface. PreparedStatement pooling is also managed by the connection pool manager. To be notified of PreparedStatement events, such as a PreparedStatement being closed or becoming unusable, a connection pool manager registers itself with a PooledConnection object using the addStatementEventListener() method and deregisters itself using the removeStatementEventListener() method. The addStatementEventListener and removeStatementEventListener methods are new in the PooledConnection interface in JDBC 4.0.

Pooling of Statement objects is another new feature in JDBC 4.0. The Statement interface has two new methods for statement pooling: isPoolable() and setPoolable(). The isPoolable method checks whether a Statement object is poolable, and the setPoolable method marks the Statement object as poolable. When an application closes a PreparedStatement object using the close() method, the PreparedStatement object is not actually closed; it is returned to the pool of PreparedStatements. When the connection pool manager closes a PooledConnection object by invoking its close() method, all the associated statements are also closed. Pooling of PreparedStatements provides significant optimization, but if a large number of statements are left open, it may not be an optimal use of resources.
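As a minimal sketch of these statement pooling methods in use, the following hypothetical helper checks whether the driver reports statement pooling support and marks a PreparedStatement as poolable. The class name, method name, and query text are arbitrary, and the DataSource is assumed to have been obtained through a JNDI lookup as shown earlier:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

public class StatementPoolingCheck {

    // Returns true if the driver reports statement pooling support and the
    // PreparedStatement accepted the poolable hint.
    public boolean isStatementPoolingUsed(DataSource ds) throws SQLException {
        Connection conn = ds.getConnection();
        try {
            DatabaseMetaData metaData = conn.getMetaData();
            boolean poolingSupported = metaData.supportsStatementPooling();
            PreparedStatement pstmt =
                    conn.prepareStatement("SELECT 1"); // illustrative query only
            pstmt.setPoolable(poolingSupported); // hint that the statement should be pooled
            boolean poolable = pstmt.isPoolable();
            pstmt.close(); // returned to the statement pool when pooling is in effect
            return poolingSupported && poolable;
        } finally {
            conn.close(); // deactivates the handle; the PooledConnection returns to the pool
        }
    }
}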
Thus, the following procedure is followed to obtain a connection in an application server using a data source:

1. Create a data source with a JNDI name binding to the JNDI naming service.
2. Create an InitialContext object and look up the JNDI name of the data source using the lookup method to create a DataSource object. If the JDBC driver implements the DataSource as a connection pool, a connection pool becomes available.
3. Request a connection from the connection pool. The connection pool manager checks whether a suitable PooledConnection object is available. If one is available, the connection pool manager returns a handle to it as a Connection object to the application requesting a connection.
4. If a PooledConnection object is not available, the connection pool manager invokes the getPooledConnection() method of the ConnectionPoolDataSource, which is implemented by the JDBC driver. The JDBC driver creates a PooledConnection object and returns a handle to it, and the connection pool manager returns that handle to the application as a Connection object.
5. When the application closes the connection, the connection pool manager deactivates the handle to the PooledConnection object and returns the PooledConnection object to the connection pool.

ConnectionPoolDataSource provides some configuration properties for configuring a connection pool. These properties are not set by the JDBC client; they are implemented or augmented by the connection pool and can be set in a data source configuration. Therefore, it is not for the application itself to change these settings, but for the administrator of the pool, who sometimes also happens to be the developer, to do so. The connection pool properties supported by ConnectionPoolDataSource are described in the following list:

- maxStatements (int): The maximum number of statements the pool should keep open. 0 (zero) indicates that statement caching is not enabled.
- initialPoolSize (int): The initial number of connections the pool should have at the time of creation.
- minPoolSize (int): The minimum number of connections in the pool. 0 (zero) indicates that connections are created as required.
- maxPoolSize (int): The maximum number of connections in the connection pool. 0 indicates that there is no maximum limit.
- maxIdleTime (int): The maximum duration (in seconds) a connection can be kept open without being used before it is closed. 0 (zero) indicates that there is no limit.
- propertyCycle (int): The interval, in seconds, the pool should wait before enforcing the current policy defined by the connection pool properties.

Setting the Environment

Before getting started, we have to install the JDeveloper 10.1.3 IDE and the MySQL 5.0 database. Download JDeveloper from http://www.oracle.com/technology/software/products/jdev/index.html. Download MySQL Connector/J 5.1, the MySQL JDBC driver that supports the JDBC 4.0 specification. To install JDeveloper, extract the JDeveloper ZIP file to a directory. Log in to the MySQL database and set the database to test. Create a database table, Catalog, which we will use in a web application.
The SQL script to create the database table is listed below:

CREATE TABLE Catalog(CatalogId VARCHAR(25) PRIMARY KEY, Journal VARCHAR(25),
  Publisher VARCHAR(25), Edition VARCHAR(25), Title VARCHAR(45), Author VARCHAR(25));
INSERT INTO Catalog VALUES('catalog1', 'Oracle Magazine', 'Oracle Publishing',
  'Nov-Dec 2004', 'Database Resource Manager', 'Kimberly Floss');
INSERT INTO Catalog VALUES('catalog2', 'Oracle Magazine', 'Oracle Publishing',
  'Nov-Dec 2004', 'From ADF UIX to JSF', 'Jonas Jacobi');

MySQL does not support ROWID, for which support has been added in JDBC 4.0.

Having installed the JDeveloper IDE, next we will configure a JDBC connection in the Connections Navigator:

1. Select the Connections tab, right-click on the Database node, and select New Database Connection. Click on Next in the Create Database Connection Wizard.
2. In the Create Database Connection Type window, specify a Connection Name (MySQLConnection, for example), set Connection Type to Third Party JDBC Driver (because MySQL is a third-party database for Oracle JDeveloper), and click on Next. If a connection is to be configured with an Oracle database, select Oracle (JDBC) as the Connection Type instead.
3. In the Authentication window, specify Username as root (a password is not required for the root user by default) and click on Next.
4. In the Connection window, we will specify the connection parameters, such as the driver name and connection URL. Click on New to specify a Driver Class.
5. In the Register JDBC Driver window, specify Driver Class as com.mysql.jdbc.Driver and click on Browse to select a library for the driver class.
6. In the Select Library window, click on New to create a new library for the MySQL Connector/J 5.1 JAR file.
7. In the Create Library window, specify Library Name as MySQL and click on Add Entry to add a JAR file entry for the MySQL library. In the Select Path Entry window, select mysql-connector-java-5.1.3-rc-bin.jar and click on Select.
8. In the Create Library window, after a Class Path entry gets added to the MySQL library, click on OK. In the Select Library window, select the MySQL library and click on OK.
9. In the Register JDBC Driver window, the MySQL library is now specified in the Library field and mysql-connector-java-5.1.3-rc-bin.jar in the Classpath field. Click on OK. The Driver Class, Library, and Classpath fields are now filled in the Connection window.
10. Specify URL as jdbc:mysql://localhost:3306/test and click on Next.
11. In the Test window, click on Test Connection to test the connection that we have configured. A connection is established and a success message is output in the Status text area. Click on Finish.

A connection configuration, MySQLConnection, gets added to the Connections Navigator, and the connection parameters are displayed in the structure view. To modify any of the connection settings, double-click on the connection node; the Edit Database Connection window is displayed, in which the connection Username, Password, Driver Class, and URL may be modified.

A database connection configured in the Connections Navigator has a JNDI name binding in the JNDI naming service provided by OC4J. Using the JNDI name binding, a DataSource object may be created in a J2EE application.
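For example, a web component deployed to the embedded OC4J instance could obtain a connection through this binding and query the Catalog table we created earlier. The following sketch assumes the connection was registered under the JNDI name jdbc/MySQLConnectionDS, which follows JDeveloper's usual naming pattern for a connection named MySQLConnection; the actual binding should be verified in the generated data-sources.xml file, and the class name is our own:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class CatalogClient {

    public void listCatalog() throws Exception {
        InitialContext ctx = new InitialContext();
        // Assumed JNDI name; check data-sources.xml for the actual binding.
        DataSource ds = (DataSource) ctx.lookup("jdbc/MySQLConnectionDS");
        Connection conn = ds.getConnection();
        try {
            Statement stmt = conn.createStatement();
            ResultSet rs = stmt.executeQuery(
                    "SELECT CatalogId, Journal, Title FROM Catalog");
            while (rs.next()) {
                System.out.println(rs.getString("CatalogId") + " - "
                        + rs.getString("Journal") + " - "
                        + rs.getString("Title"));
            }
            rs.close();
            stmt.close();
        } finally {
            conn.close(); // returns the pooled connection to the OC4J pool
        }
    }
}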
To view or modify the configuration settings of the JDBC connection, select Tools | Embedded OC4J Server Preferences in JDeveloper. In the window displayed, select the Global | Data Sources node and, to update the data-sources.xml file with the connection defined in the Connections Navigator, click on the Refresh Now button. Checkboxes may be selected to create data-source elements where they are not defined and to update existing data-source elements. The connection pool and data source associated with the connection configured in the Connections Navigator are listed. Select the jdev-connection-pool-MySQLConnection node to list the connection pool properties as Property Set A and Property Set B. The tuning properties of the JDBC connection pool may be set in the Connection Pool window.


How to Apply Themes to Sails Applications, Part 1

Luis Lobo
29 Sep 2016
8 min read
The Sails Framework is a popular MVC framework designed for building practical, production-ready Node.js apps. Themes customize the look and feel of your app, but Sails does not come with a configuration or setting for handling themes by itself. This two-part post shows one of the ways you can set up theming for your Sails application, making use of some of Sails' capabilities.

You may have an application that needs to handle theming for different reasons, like custom branding, licensing, or dynamic theme configuration. You can adjust the theming of your application based on external factors, like patterns in the domain of the site being browsed. Imagine you have an application that handles deliveries and that you customize per client: your app renders the default theme when browsed as http://www.smartdelivery.com, but when a customer, let's say "Burrito", accesses it through their own subdomain, http://burrito.smartdelivery.com, it renders that customer's theme.

In this series we use Less as our language to define our CSS; Sails handles Less right out of the box, and the default Less file is located in /assets/styles/importer.less. We will also use Bootstrap as our base CSS framework, importing its Less file into our importer.less file. The technique shown here consists of having a base CSS and a theme CSS that varies according to the host name.

Step 1 - Adding Bootstrap to Sails

We use Bower to add Bootstrap to our project. First, install it by issuing the following command:

npm install bower --save-dev

Then, initialize the Bower configuration file:

node_modules/bower/bin/bower init

This command allows us to configure our bower.json file. Answer the questions asked by Bower:

? name sails-themed-application
? description Sails Themed Application
? main file app.js
? keywords
? authors lobo
? license MIT
? homepage
? set currently installed components as dependencies? Yes
? add commonly ignored files to ignore list? Yes
? would you like to mark this package as private which prevents it from being accidentally published to the registry? No

{
  name: 'sails-themed-application',
  description: 'Sails Themed Application',
  main: 'app.js',
  authors: [
    'lobo'
  ],
  license: 'MIT',
  homepage: '',
  ignore: [
    '**/.*',
    'node_modules',
    'bower_components',
    'assets/vendor',
    'test',
    'tests'
  ]
}

This generates a bower.json file in the root of your project. Now we need to tell Bower to install everything in a specific directory. Create a file named .bowerrc and put this configuration into it:

{"directory" : "assets/vendor"}

Finally, install Bootstrap:

node_modules/bower/bin/bower install bootstrap --save --production

This action creates a folder named vendor in assets, with bootstrap inside of it.
Since Bootstrap uses JQuery, you also have a jquery folder:

├── api
│   ├── controllers
│   ├── models
│   ├── policies
│   ├── responses
│   └── services
├── assets
│   ├── images
│   ├── js
│   │   └── dependencies
│   ├── styles
│   ├── templates
│   ├── themes
│   └── vendor
│       ├── bootstrap
│       │   ├── dist
│       │   │   ├── css
│       │   │   ├── fonts
│       │   │   └── js
│       │   ├── fonts
│       │   ├── grunt
│       │   ├── js
│       │   ├── less
│       │   │   └── mixins
│       │   └── nuget
│       └── jquery
│           ├── dist
│           ├── external
│           │   └── sizzle
│           │       └── dist
│           └── src
│               ├── ajax
│               │   └── var
│               ├── attributes
│               ├── core
│               │   └── var
│               ├── css
│               │   └── var
│               ├── data
│               │   └── var
│               ├── effects
│               ├── event
│               ├── exports
│               ├── manipulation
│               │   └── var
│               ├── queue
│               ├── traversing
│               │   └── var
│               └── var
├── config
│   ├── env
│   └── locales
├── tasks
│   ├── config
│   └── register
└── views

We now need to add Bootstrap to our importer. Edit /assets/styles/importer.less and add this instruction at the end of it:

@import "../vendor/bootstrap/less/bootstrap.less";

Now you need to tell Sails where to import the Bootstrap and JQuery JavaScript files from. Edit /tasks/pipeline.js and add the following code after it loads the sails.io.js file:

// Load sails.io before everything else
'js/dependencies/sails.io.js',

// <ADD THESE LINES>
// JQuery JS
'vendor/jquery/dist/jquery.min.js',
// Bootstrap JS
'vendor/bootstrap/dist/js/bootstrap.min.js',
// </ADD THESE LINES>

Now you have to edit your views layout and pages to use the Bootstrap style. In this series I created an application from scratch, so I have the default views and layouts. In your layout, insert the following line after the <!--STYLES END--> comment:

<link rel="stylesheet" href="/themes/<%= typeof theme == 'undefined' ? 'default' : theme %>.css">

This loads a second CSS file, which defaults to /themes/default.css, into your views. As a sample, here are the /views/layout.ejs and /views/homepage.ejs files I changed (the text under the headings is random text).

/views/layout.ejs:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <meta http-equiv="X-UA-Compatible" content="IE=edge">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <!-- The above 3 meta tags *must* come first in the head; any other head content must come *after* these tags -->
  <title><%= typeof title == 'undefined' ? 'Sails Themed Application' : title %></title>
  <!--STYLES-->
  <link rel="stylesheet" href="/styles/importer.css">
  <!--STYLES END-->
  <!-- THIS IS WHERE THE THEME CSS IS LOADED -->
  <link rel="stylesheet" href="/themes/<%= typeof theme == 'undefined' ? 'default' : theme %>.css">
</head>
<body>
  <%- body %>
  <!--TEMPLATES-->
  <!--TEMPLATES END-->
  <!--SCRIPTS-->
  <script src="/js/dependencies/sails.io.js"></script>
  <script src="/vendor/jquery/dist/jquery.min.js"></script>
  <script src="/vendor/bootstrap/dist/js/bootstrap.min.js"></script>
  <!--SCRIPTS END-->
</body>
</html>

Notice the lines after the <!--STYLES END--> tag.
/views/homepage.ejs:

<nav class="navbar navbar-inverse navbar-fixed-top">
  <div class="container">
    <div class="navbar-header">
      <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar" aria-expanded="false" aria-controls="navbar">
        <span class="sr-only">Toggle navigation</span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
      </button>
      <a class="navbar-brand" href="#">Project name</a>
    </div>
    <div id="navbar" class="navbar-collapse collapse">
      <form class="navbar-form navbar-right">
        <div class="form-group">
          <input type="text" placeholder="Email" class="form-control">
        </div>
        <div class="form-group">
          <input type="password" placeholder="Password" class="form-control">
        </div>
        <button type="submit" class="btn btn-success">Sign in</button>
      </form>
    </div><!--/.navbar-collapse -->
  </div>
</nav>

<!-- Main jumbotron for a primary marketing message or call to action -->
<div class="jumbotron">
  <div class="container">
    <h1>Hello, world!</h1>
    <p>This is a template for a simple marketing or informational website. It includes a large callout called a jumbotron and three supporting pieces of content. Use it as a starting point to create something more unique.</p>
    <p><a class="btn btn-primary btn-lg" href="#" role="button">Learn more &raquo;</a></p>
  </div>
</div>

<div class="container">
  <!-- Example row of columns -->
  <div class="row">
    <div class="col-md-4">
      <h2>Heading</h2>
      <p>Donec id elit non mi porta gravida at eget metus. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus. Etiam porta sem malesuada magna mollis euismod. Donec sed odio dui.</p>
      <p><a class="btn btn-default" href="#" role="button">View details &raquo;</a></p>
    </div>
    <div class="col-md-4">
      <h2>Heading</h2>
      <p>Donec id elit non mi porta gravida at eget metus. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus. Etiam porta sem malesuada magna mollis euismod. Donec sed odio dui.</p>
      <p><a class="btn btn-default" href="#" role="button">View details &raquo;</a></p>
    </div>
    <div class="col-md-4">
      <h2>Heading</h2>
      <p>Donec sed odio dui. Cras justo odio, dapibus ac facilisis in, egestas eget quam. Vestibulum id ligula porta felis euismod semper. Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus.</p>
      <p><a class="btn btn-default" href="#" role="button">View details &raquo;</a></p>
    </div>
  </div>
  <hr>
  <footer>
    <p>&copy; 2015 Company, Inc.</p>
  </footer>
</div><!-- /container -->

You can now lift Sails and see your Bootstrapped Sails application. Now that we have our Bootstrapped Sails app set up, in Part 2 we will compile our theme's CSS and the necessary Less files, and we will set up the theme Sails hook to complete our application.

About the author

Luis Lobo Borobia is the CTO at FictionCity.NET, a mentor and advisor, independent software engineer and consultant, and conference speaker. He has a background as a software analyst and designer, creating, designing, and implementing software products and solutions, frameworks, and platforms for several kinds of industries. In the last few years he has focused on research and development for the Internet of Things, using the latest bleeding-edge software and hardware technologies available.


Contexts and Dependency Injection in NetBeans

Packt
06 Feb 2015
18 min read
In this article by David R. Heffelfinger, the author of Java EE 7 Development with NetBeans 8, we will introduce Contexts and Dependency Injection (CDI) and several aspects of it. CDI can be used to simplify integrating the different layers of a Java EE application. For example, CDI allows us to use a session bean as a managed bean, so that we can take advantage of EJB features, such as transactions, directly in our managed beans. In this article, we will cover the following topics:

- Introduction to CDI
- Qualifiers
- Stereotypes
- Interceptor binding types
- Custom scopes

Introduction to CDI

JavaServer Faces (JSF) web applications employing CDI are very similar to JSF applications without CDI; the main difference is that instead of using JSF managed beans for our model and controllers, we use CDI named beans. What makes CDI applications easier to develop and maintain are the excellent dependency injection capabilities of the CDI API. Just as with other JSF applications, CDI applications use facelets as their view technology. The following example illustrates typical markup for a JSF page using CDI:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://xmlns.jcp.org/jsf/html">
    <h:head>
        <title>Create New Customer</title>
    </h:head>
    <h:body>
        <h:form>
            <h3>Create New Customer</h3>
            <h:panelGrid columns="3">
                <h:outputLabel for="firstName" value="First Name"/>
                <h:inputText id="firstName" value="#{customer.firstName}"/>
                <h:message for="firstName"/>

                <h:outputLabel for="middleName" value="Middle Name"/>
                <h:inputText id="middleName"
                    value="#{customer.middleName}"/>
                <h:message for="middleName"/>

                <h:outputLabel for="lastName" value="Last Name"/>
                <h:inputText id="lastName" value="#{customer.lastName}"/>
                <h:message for="lastName"/>

                <h:outputLabel for="email" value="Email Address"/>
                <h:inputText id="email" value="#{customer.email}"/>
                <h:message for="email"/>
                <h:panelGroup/>
                <h:commandButton value="Submit"
                    action="#{customerController.navigateToConfirmation}"/>
            </h:panelGrid>
        </h:form>
    </h:body>
</html>

As we can see, the preceding markup doesn't look any different from the markup used for a JSF application that does not use CDI. In our page markup, we have JSF components that use Unified Expression Language expressions to bind themselves to CDI named bean properties and methods.
Let's take a look at the customer bean first:

package com.ensode.cdiintro.model;

import java.io.Serializable;
import javax.enterprise.context.RequestScoped;
import javax.inject.Named;

@Named
@RequestScoped
public class Customer implements Serializable {

    private String firstName;
    private String middleName;
    private String lastName;
    private String email;

    public Customer() {
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getMiddleName() {
        return middleName;
    }

    public void setMiddleName(String middleName) {
        this.middleName = middleName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public String getEmail() {
        return email;
    }

    public void setEmail(String email) {
        this.email = email;
    }
}

The @Named annotation marks this class as a CDI named bean. By default, the bean's name will be the class name with its first character switched to lowercase (in our example, the name of the bean is "customer", since the class name is Customer). We can override this behavior if we wish by passing the desired name to the value attribute of the @Named annotation, as follows:

@Named(value="customerBean")

A CDI named bean's methods and properties are accessible via facelets, just like regular JSF managed beans. Also like JSF managed beans, CDI named beans can have one of several scopes, as listed below. The preceding named bean has a scope of request, as denoted by the @RequestScoped annotation.

- Request (@RequestScoped): Request scoped beans are shared through the duration of a single request. A single request could refer to an HTTP request, an invocation of a method in an EJB, a web service invocation, or sending a JMS message to a message-driven bean.
- Session (@SessionScoped): Session scoped beans are shared across all requests in an HTTP session. Each user of an application gets their own instance of a session scoped bean.
- Application (@ApplicationScoped): Application scoped beans live through the whole application lifetime. Beans in this scope are shared across user sessions.
- Conversation (@ConversationScoped): The conversation scope can span multiple requests, and is typically shorter than the session scope.
- Dependent (@Dependent): Dependent scoped beans are not shared. Any time a dependent scoped bean is injected, a new instance is created.

As we can see, CDI has equivalent scopes to all the JSF scopes, and it adds two more. The first CDI-specific scope is the conversation scope, which allows us to have a scope that spans multiple requests but is shorter than the session scope. The second CDI-specific scope is the dependent scope, which is a pseudo scope: a CDI bean in the dependent scope is a dependent object of another object; beans in this scope are instantiated when the object they belong to is instantiated, and they are destroyed when the object they belong to is destroyed.
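Since the conversation scope has no JSF equivalent, a quick sketch may help illustrate it. The following hypothetical bean (the class and method names are our own and are not part of the example application) injects the Conversation object, promotes the conversation from transient to long-running at the start of a multi-page interaction, and ends it when the interaction completes:

package com.ensode.cdiintro.model; // hypothetical placement; any package will do

import java.io.Serializable;
import javax.enterprise.context.Conversation;
import javax.enterprise.context.ConversationScoped;
import javax.inject.Inject;
import javax.inject.Named;

@Named
@ConversationScoped
public class CustomerWizard implements Serializable {

    @Inject
    private Conversation conversation;

    // Called when the multi-page interaction starts; promotes the
    // conversation from transient to long-running so this bean instance
    // survives across several requests.
    public void begin() {
        if (conversation.isTransient()) {
            conversation.begin();
        }
    }

    // Called when the interaction is complete; the bean is destroyed
    // once the conversation ends.
    public void end() {
        if (!conversation.isTransient()) {
            conversation.end();
        }
    }
}

While the conversation is long-running, the same bean instance is available across requests; once end() is called, the conversation (and the bean) goes away.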
Our application has two CDI named beans. We already discussed the customer bean; the other CDI named bean in our application is the controller bean:

package com.ensode.cdiintro.controller;

import com.ensode.cdiintro.model.Customer;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.inject.Named;

@Named
@RequestScoped
public class CustomerController {

    @Inject
    private Customer customer;

    public Customer getCustomer() {
        return customer;
    }

    public void setCustomer(Customer customer) {
        this.customer = customer;
    }

    public String navigateToConfirmation() {
        //In a real application we would
        //save customer data to the database here.

        return "confirmation";
    }
}

In the preceding class, an instance of the Customer class is injected at runtime; this is accomplished via the @Inject annotation, which allows us to easily use dependency injection in CDI applications. Since the Customer class is annotated with the @RequestScoped annotation, a new instance of Customer is injected for every request.

The navigateToConfirmation() method in the preceding class is invoked when the user clicks on the Submit button on the page. It works just like an equivalent method in a JSF managed bean would: it returns a string, and the application navigates to an appropriate page based on the value of that string. As with JSF, by default the target page's name is the return value of this method with an .xhtml extension. For example, if no exceptions are thrown in the navigateToConfirmation() method, the user is directed to a page named confirmation.xhtml:

<?xml version='1.0' encoding='UTF-8' ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://xmlns.jcp.org/jsf/html">
    <h:head>
        <title>Success</title>
    </h:head>
    <h:body>
        New Customer created successfully.
        <h:panelGrid columns="2" border="1" cellspacing="0">
            <h:outputLabel for="firstName" value="First Name"/>
            <h:outputText id="firstName" value="#{customer.firstName}"/>

            <h:outputLabel for="middleName" value="Middle Name"/>
            <h:outputText id="middleName"
                value="#{customer.middleName}"/>

            <h:outputLabel for="lastName" value="Last Name"/>
            <h:outputText id="lastName" value="#{customer.lastName}"/>

            <h:outputLabel for="email" value="Email Address"/>
            <h:outputText id="email" value="#{customer.email}"/>

        </h:panelGrid>
    </h:body>
</html>

Again, there is nothing special we need to do to access the named bean's properties from the preceding markup; it works just as if the bean were a JSF managed bean.

As we can see, CDI applications work just like JSF applications. However, CDI applications have several advantages over plain JSF; for example, as we mentioned previously, CDI beans have additional scopes not found in JSF. Additionally, using CDI allows us to decouple our Java code from the JSF API. Also, as we mentioned previously, CDI allows us to use session beans as named beans.
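For instance, a stateless session bean can be exposed directly to our facelets pages simply by adding the @Named annotation to it. The following minimal sketch is our own illustration rather than part of the example project; the class and method names are arbitrary, but the annotations are the standard ones:

package com.ensode.cdiintro.service; // hypothetical package

import javax.ejb.Stateless;
import javax.inject.Named;

@Named
@Stateless
public class CustomerService {

    // Business method; as a session bean method it runs with EJB services
    // such as container-managed transactions.
    public void saveCustomer() {
        // persistence logic would go here
    }
}

Because the class is also a named bean, the method can be referenced from a facelet with an expression such as #{customerService.saveCustomer}, combining EJB services with the simplicity of CDI named beans.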
Qualifiers

In some instances, the type of bean we wish to inject into our code may be an interface or a Java superclass, but we may be interested in injecting a subclass or a class implementing the interface. For cases like this, CDI provides qualifiers we can use to indicate the specific type we wish to inject into our code.

A CDI qualifier is an annotation that must itself be decorated with the @Qualifier annotation. This annotation can then be used to decorate the specific subclass or interface implementation. In this section, we will develop a Premium qualifier for our customer bean; premium customers could get perks that are not available to regular customers, for example, discounts.

Creating a CDI qualifier with NetBeans is very easy; all we need to do is go to File | New File, select the Contexts and Dependency Injection category, and select the Qualifier Type file type. In the next step of the wizard, we need to enter a name and a package for our qualifier. After these two simple steps, NetBeans generates the code for our qualifier:

package com.ensode.cdiintro.qualifier;

import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.ElementType.FIELD;
import static java.lang.annotation.ElementType.PARAMETER;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import javax.inject.Qualifier;

@Qualifier
@Retention(RUNTIME)
@Target({METHOD, FIELD, PARAMETER, TYPE})
public @interface Premium {
}

Qualifiers are standard Java annotations. Typically, they have a retention of runtime and can target methods, fields, parameters, or types. The only difference between a qualifier and a standard annotation is that qualifiers are decorated with the @Qualifier annotation.

Once we have our qualifier in place, we need to use it to decorate the specific subclass or interface implementation, as shown in the following code:

package com.ensode.cdiintro.model;

import com.ensode.cdiintro.qualifier.Premium;
import javax.enterprise.context.RequestScoped;
import javax.inject.Named;

@Named
@RequestScoped
@Premium
public class PremiumCustomer extends Customer {

    private Integer discountCode;

    public Integer getDiscountCode() {
        return discountCode;
    }

    public void setDiscountCode(Integer discountCode) {
        this.discountCode = discountCode;
    }
}

Once we have decorated the specific instance we need to qualify, we can use our qualifier in the client code to specify the exact type of dependency we need:

package com.ensode.cdiintro.controller;

import com.ensode.cdiintro.model.Customer;
import com.ensode.cdiintro.model.PremiumCustomer;
import com.ensode.cdiintro.qualifier.Premium;

import java.util.logging.Level;
import java.util.logging.Logger;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.inject.Named;

@Named
@RequestScoped
public class PremiumCustomerController {

    private static final Logger logger = Logger.getLogger(
            PremiumCustomerController.class.getName());

    @Inject
    @Premium
    private Customer customer;

    public String saveCustomer() {

        PremiumCustomer premiumCustomer =
                (PremiumCustomer) customer;

        logger.log(Level.INFO, "Saving the following information \n"
                + "{0} {1}, discount code = {2}",
                new Object[]{premiumCustomer.getFirstName(),
                    premiumCustomer.getLastName(),
                    premiumCustomer.getDiscountCode()});

        //If this was a real application, we would have code to save
        //customer data to the database here.

        return "premium_customer_confirmation";
    }
}
return "premium_customer_confirmation";    } } Since we used our @Premium qualifier to decorate the customer field, an instance of the PremiumCustomer class is injected into that field. This is because this class is also decorated with the @Premium qualifier. As far as our JSF pages go, we simply access our named bean as usual using its name, as shown in the following code; <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html      >    <h:head>        <title>Create New Premium Customer</title>    </h:head>    <h:body>        <h:form>            <h3>Create New Premium Customer</h3>            <h:panelGrid columns="3">                <h:outputLabel for="firstName" value="First Name"/>                 <h:inputText id="firstName"                    value="#{premiumCustomer.firstName}"/>                <h:message for="firstName"/>                  <h:outputLabel for="middleName" value="Middle Name"/>                <h:inputText id="middleName"                     value="#{premiumCustomer.middleName}"/>                <h:message for="middleName"/>                  <h:outputLabel for="lastName" value="Last Name"/>                <h:inputText id="lastName"                    value="#{premiumCustomer.lastName}"/>                <h:message for="lastName"/>                  <h:outputLabel for="email" value="Email Address"/>                <h:inputText id="email"                    value="#{premiumCustomer.email}"/>                <h:message for="email"/>                  <h:outputLabel for="discountCode" value="Discount Code"/>                <h:inputText id="discountCode"                    value="#{premiumCustomer.discountCode}"/>                <h:message for="discountCode"/>                   <h:panelGroup/>                <h:commandButton value="Submit"                      action="#{premiumCustomerController.saveCustomer}"/>            </h:panelGrid>        </h:form>    </h:body> </html> In this example, we are using the default name for our bean, which is the class name with the first letter switched to lowercase. Now, we are ready to test our application: After submitting the page, we can see the confirmation page. Stereotypes A CDI stereotype allows us to create new annotations that bundle up several CDI annotations. For example, if we need to create several CDI named beans with a scope of session, we would have to use two annotations in each of these beans, namely @Named and @SessionScoped. Instead of having to add two annotations to each of our beans, we could create a stereotype and annotate our beans with it. To create a CDI stereotype in NetBeans, we simply need to create a new file by selecting the Contexts and Dependency Injection category and the Stereotype file type. Then, we need to enter a name and package for our new stereotype. 
At this point, NetBeans generates the following code: package com.ensode.cdiintro.stereotype;   import static java.lang.annotation.ElementType.TYPE; import static java.lang.annotation.ElementType.FIELD; import static java.lang.annotation.ElementType.METHOD; import static java.lang.annotation.RetentionPolicy.RUNTIME; import java.lang.annotation.Retention; import java.lang.annotation.Target; import javax.enterprise.inject.Stereotype;   @Stereotype @Retention(RUNTIME) @Target({METHOD, FIELD, TYPE}) public @interface NamedSessionScoped { } Now, we simply need to add the CDI annotations that we want the classes annotated with our stereotype to use. In our case, we want them to be named beans and have a scope of session; therefore, we add the @Named and @SessionScoped annotations as shown in the following code: package com.ensode.cdiintro.stereotype;   import static java.lang.annotation.ElementType.TYPE; import static java.lang.annotation.ElementType.FIELD; import static java.lang.annotation.ElementType.METHOD; import static java.lang.annotation.RetentionPolicy.RUNTIME; import java.lang.annotation.Retention; import java.lang.annotation.Target; import javax.enterprise.context.SessionScoped; import javax.enterprise.inject.Stereotype; import javax.inject.Named;   @Named @SessionScoped @Stereotype @Retention(RUNTIME) @Target({METHOD, FIELD, TYPE}) public @interface NamedSessionScoped { } Now we can use our stereotype in our own code: package com.ensode.cdiintro.beans;   import com.ensode.cdiintro.stereotype.NamedSessionScoped; import java.io.Serializable;   @NamedSessionScoped public class StereotypeClient implements Serializable {      private String property1;    private String property2;      public String getProperty1() {        return property1;    }      public void setProperty1(String property1) {        this.property1 = property1;    }      public String getProperty2() {        return property2;    }      public void setProperty2(String property2) {        this.property2 = property2;    } } We annotated the StereotypeClient class with our NamedSessionScoped stereotype, which is equivalent to using the @Named and @SessionScoped annotations. Interceptor binding types One of the advantages of EJBs is that they allow us to easily perform aspect-oriented programming (AOP) via interceptors. CDI allows us to write interceptor binding types; this lets us bind interceptors to beans and the beans do not have to depend on the interceptor directly. Interceptor binding types are annotations that are themselves annotated with @InterceptorBinding. Creating an interceptor binding type in NetBeans involves creating a new file, selecting the Contexts and Dependency Injection category, and selecting the Interceptor Binding Type file type. Then, we need to enter a class name and select or enter a package for our new interceptor binding type. At this point, NetBeans generates the code for our interceptor binding type: package com.ensode.cdiintro.interceptorbinding;   import static java.lang.annotation.ElementType.TYPE; import static java.lang.annotation.ElementType.METHOD; import static java.lang.annotation.RetentionPolicy.RUNTIME; import java.lang.annotation.Inherited; import java.lang.annotation.Retention; import java.lang.annotation.Target; import javax.interceptor.InterceptorBinding;   @Inherited @InterceptorBinding @Retention(RUNTIME) @Target({METHOD, TYPE}) public @interface LoggingInterceptorBinding { } The generated code is fully functional; we don't need to add anything to it. 
In order to use our interceptor binding type, we need to write an interceptor and annotate it with our interceptor binding type, as shown in the following code:

package com.ensode.cdiintro.interceptor;

import com.ensode.cdiintro.interceptorbinding.LoggingInterceptorBinding;
import java.io.Serializable;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InvocationContext;

@LoggingInterceptorBinding
@Interceptor
public class LoggingInterceptor implements Serializable {

    private static final Logger logger = Logger.getLogger(
            LoggingInterceptor.class.getName());

    @AroundInvoke
    public Object logMethodCall(InvocationContext invocationContext)
            throws Exception {

        logger.log(Level.INFO, new StringBuilder("entering ").append(
                invocationContext.getMethod().getName()).append(
                " method").toString());

        Object retVal = invocationContext.proceed();

        logger.log(Level.INFO, new StringBuilder("leaving ").append(
                invocationContext.getMethod().getName()).append(
                " method").toString());

        return retVal;
    }
}

As we can see, other than being annotated with our interceptor binding type, the preceding class is a standard interceptor similar to the ones we use with EJB session beans. In order for our interceptor binding type to work properly, we need to add a CDI configuration file (beans.xml) to our project. Then, we need to register our interceptor in beans.xml as follows:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                           http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       bean-discovery-mode="all">
    <interceptors>
        <class>
            com.ensode.cdiintro.interceptor.LoggingInterceptor
        </class>
    </interceptors>
</beans>

To register our interceptor, we need to set bean-discovery-mode to all in the generated beans.xml and add the <interceptors> tag, with one or more nested <class> tags containing the fully qualified names of our interceptors. The final step before we can use our interceptor binding type is to annotate the class to be intercepted with our interceptor binding type:

package com.ensode.cdiintro.controller;

import com.ensode.cdiintro.interceptorbinding.LoggingInterceptorBinding;
import com.ensode.cdiintro.model.Customer;
import com.ensode.cdiintro.model.PremiumCustomer;
import com.ensode.cdiintro.qualifier.Premium;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.inject.Named;

@LoggingInterceptorBinding
@Named
@RequestScoped
public class PremiumCustomerController {

    private static final Logger logger = Logger.getLogger(
            PremiumCustomerController.class.getName());

    @Inject
    @Premium
    private Customer customer;

    public String saveCustomer() {

        PremiumCustomer premiumCustomer = (PremiumCustomer) customer;

        logger.log(Level.INFO, "Saving the following information \n"
                + "{0} {1}, discount code = {2}",
                new Object[]{premiumCustomer.getFirstName(),
                    premiumCustomer.getLastName(),
                    premiumCustomer.getDiscountCode()});

        //If this was a real application, we would have code to save
        //customer data to the database here.

        return "premium_customer_confirmation";
    }
}

Now, we are ready to use our interceptor.
After executing the preceding code and examining the GlassFish log, we can see our interceptor binding type in action. The lines entering saveCustomer method and leaving saveCustomer method were added to the log by our interceptor, which was indirectly invoked by our interceptor binding type. Custom scopes In addition to providing several prebuilt scopes, CDI allows us to define our own custom scopes. This functionality is primarily meant for developers building frameworks on top of CDI, not for application developers. Nevertheless, NetBeans provides a wizard for us to create our own CDI custom scopes. To create a new CDI custom scope, we need to go to File | New File, select the Contexts and Dependency Injection category, and select the Scope Type file type. Then, we need to enter a package and a name for our custom scope. After clicking on Finish, our new custom scope is created, as shown in the following code: package com.ensode.cdiintro.scopes;   import static java.lang.annotation.ElementType.TYPE; import static java.lang.annotation.ElementType.FIELD; import static java.lang.annotation.ElementType.METHOD; import static java.lang.annotation.RetentionPolicy.RUNTIME; import java.lang.annotation.Inherited; import java.lang.annotation.Retention; import java.lang.annotation.Target; import javax.inject.Scope;   @Inherited @Scope // or @javax.enterprise.context.NormalScope @Retention(RUNTIME) @Target({METHOD, FIELD, TYPE}) public @interface CustomScope { } To actually use our scope in our CDI applications, we would need to create a custom context which, as mentioned previously, is primarily a concern for framework developers and not for Java EE application developers. Therefore, it is beyond the scope of this article. Interested readers can refer to JBoss Weld CDI for Java Platform, Ken Finnigan, Packt Publishing. (JBoss Weld is a popular CDI implementation and it is included with GlassFish.) Summary In this article, we covered NetBeans support for CDI, a new Java EE API introduced in Java EE 6. We provided an introduction to CDI and explained additional functionality that the CDI API provides over standard JSF. We also covered how to disambiguate CDI injected beans via CDI Qualifiers. Additionally, we covered how to group together CDI annotations via CDI stereotypes. We also, we saw how CDI can help us with AOP via interceptor binding types. Finally, we covered how NetBeans can help us create custom CDI scopes. Resources for Article: Further resources on this subject: Java EE 7 Performance Tuning and Optimization [article] Java EE 7 Developer Handbook [article] Java EE 7 with GlassFish 4 Application Server [article]

Introduction to Odoo

Packt
04 Sep 2015
12 min read
 In this article by Greg Moss, author of Working with Odoo, he explains that Odoo is a very feature-filled business application framework with literally hundreds of applications and modules available. We have done our best to cover the most essential features of the Odoo applications that you are most likely to use in your business. Setting up an Odoo system is no easy task. Many companies get into trouble believing that they can just install the software and throw in some data. Inevitably, the scope of the project grows and what was supposed to be a simple system ends up being a confusing mess. Fortunately, Odoo's modular design will allow you to take a systematic approach to implementing Odoo for your business. (For more resources related to this topic, see here.) What is an ERP system? An Enterprise Resource Planning (ERP) system is essentially a suite of business applications that are integrated together to assist a company in collecting, managing, and reporting information throughout core business processes. These business applications, typically called modules, can often be independently installed and configured based on the specific needs of the business. As the needs of the business change and grow, additional modules can be incorporated into an existing ERP system to better handle the new business requirements. This modular design of most ERP systems gives companies great flexibility in how they implement the system. In the past, ERP systems were primarily utilized in manufacturing operations. Over the years, the scope of ERP systems have grown to encompass a wide range of business-related functions. Recently, ERP systems have started to include more sophisticated communication and social networking features. Common ERP modules The core applications of an ERP system typically include: Sales Orders Purchase Orders Accounting and Finance Manufacturing Resource Planning (MRP) Customer Relationship Management (CRM) Human Resources (HR) Let's take a brief look at each of these modules and how they address specific business needs. Selling products to your customer Sales Orders, commonly abbreviated as SO, are documents that a business generates when they sell products and services to a customer. In an ERP system, the Sales Order module will usually allow management of customers and products to optimize efficiency for data entry of the sales order. Many sales orders begin as customer quotes. Quotes allow a salesperson to collect order information that may change as the customer makes decisions on what they want in their final order. Once a customer has decided exactly what they wish to purchase, the quote is turned into a sales order and is confirmed for processing. Depending on the requirements of the business, there are a variety of methods to determine when a customer is invoiced or billed for the order. This preceding screenshot shows a sample sales order in Odoo. Purchasing products from suppliers Purchase Orders, often known as PO, are documents that a business generates when they purchase products from a vendor. The Purchase Order module in an ERP system will typically include management of vendors (also called suppliers) as well as management of the products that the vendor carries. Much like sales order quotes, a purchase order system will allow a purchasing department to create draft purchase orders before they are finalized into a specific purchasing request. 
Often, a business will configure the Sales Order and Purchase Order modules to work together to streamline business operations. When a valid sales order is entered, most ERP systems will allow you to configure the system so that a purchase order can be automatically generated if the required products are not in stock to fulfill the sales order. ERP systems will allow you to set minimum quantities on-hand or order limits that will automatically generate purchase orders when inventory falls below a predetermined level. When properly configured, a purchase order system can save a significant amount of time in purchasing operations and assist in preventing supply shortages. Managing your accounts and financing in Odoo Accounting and finance modules integrate with an ERP system to organize and report business transactions. In many ERP systems, the accounting and finance module is known as GL for General Ledger. All accounting and finance modules are built around a structure known as the chart of accounts. The chart of accounts organizes groups of transactions into categories such as assets, liabilities, income, and expenses. ERP systems provide a lot of flexibility in defining the structure of your chart of accounts to meet the specific requirements for your business. Accounting transactions are grouped by date into periods (typically by month) for reporting purposes. These reports are most often known as financial statements. Common financial statements include balance sheets, income statements, cash flow statements, and statements of owner's equity. Handling your manufacturing operations The Manufacturing Resource Planning (MRP) module manages all the various business operations that go into the manufacturing of products. The fundamental transaction of an MRP module is a manufacturing order, which is also known as a production order in some ERP systems. A manufacturing order describes the raw products or subcomponents, steps, and routings required to produce a finished product. The raw products or subcomponents required to produce the finished product are typically broken down into a detailed list called a bill of materials or BOM. A BOM describes the exact quantities required of each component and are often used to define the raw material costs that go into manufacturing the final products for a company. Often an MRP module will incorporate several submodules that are necessary to define all the required operations. Warehouse management is used to define locations and sublocations to store materials and products as they move through the various manufacturing operations. For example, you may receive raw materials in one warehouse location, assemble those raw materials into subcomponents and store them in another location, then ultimately manufacture the end products and store them in a final location before delivering them to the customer. Managing customer relations in Odoo In today's business environment, quality customer service is essential to being competitive in most markets. A Customer Relationship Management (CRM) module assists a business in better handling the interactions they may have with each customer. Most CRM systems also incorporate a presales component that will manage opportunities, leads, and various marketing campaigns. Typically, a CRM system is utilized the most by the sales and marketing departments within a company. For this reason, CRM systems are often considered to be sales force automation tools or SFA tools. 
Sales personnel can set up appointments, schedule call backs, and employ tools to manage their communication. More modern CRM systems have started to incorporate social networking features to assist sales personnel in utilizing these newly emerging technologies. Configuring human resource applications in Odoo Human Resource modules, commonly known as HR, manage the workforce- or employee-related information in a business. Some of the processes ordinarily covered by HR systems are payroll, time and attendance, benefits administration, recruitment, and knowledge management. Increased regulations and complexities in payroll and benefits have led to HR modules becoming a major component of most ERP systems. Modern HR modules typically include employee kiosk functions to allow employees to self-administer many tasks such as putting in a leave request or checking on their available vacation time. Finding additional modules for your business requirements In addition to core ERP modules, Odoo has many more official and community-developed modules available. At the time of this article's publication, the Odoo application repository had 1,348 modules listed for version 7! Many of these modules provide small enhancements to improve usability like adding payment type to a sales order. Other modules offer e-commerce integration or complete application solutions, such as managing a school or hospital. Here is a short list of the more common modules you may wish to include in an Odoo installation: Point of Sale Project Management Analytic Accounting Document Management System Outlook Plug-in Country-Specific Accounting Templates OpenOffice Report Designer You will be introduced to various Odoo modules that extend the functionality of the base Odoo system. You can find a complete list of Odoo modules at http://apps.Odoo.com/. This preceding screenshot shows the module selection page in Odoo. Getting quickly into Odoo Do you want to jump in right now and get a look at Odoo 7 without any complex installations? Well, you are lucky! You can access an online installation of Odoo, where you can get a peek at many of the core modules right from your web browser. The installation is shared publicly, so you will not want to use this for any sensitive information. It is ideal, however, to get a quick overview of the software and to get an idea for how the interface functions. You can access a trial version of Odoo at https://www.Odoo.com/start. Odoo – an open source ERP solution Odoo is a collection of business applications that are available under an open source license. For this reason, Odoo can be used without paying license fees and can be customized to suit the specific needs of a business. There are many advantages to open source software solutions. We will discuss some of these advantages shortly. Free your company from expensive software license fees One of the primary downsides of most ERP systems is they often involve expensive license fees. Increasingly, companies must pay these license fees on an annual basis just to receive bug fixes and product updates. Because ERP systems can require companies to devote great amounts of time and money for setup, data conversion, integration, and training, it can be very expensive, often prohibitively so, to change ERP systems. For this reason, many companies feel trapped as their current ERP vendors increase license fees. Choosing open source software solutions, frees a company from the real possibility that a vendor will increase license fees in the years ahead. 
Modify the software to meet your business needs With proprietary ERP solutions, you are often forced to accept the software solution the vendor provides chiefly "as is". While you may have customization options and can sometimes pay the company to make specific changes, you rarely have the freedom to make changes directly to the source code yourself. The advantages to having the source code available to enterprise companies can be very significant. In a highly competitive market, being able to develop solutions that improve business processes and give your company the flexibility to meet future demands can make all the difference. Collaborative development Open source software does not rely on a group of developers who work secretly to write proprietary code. Instead, developers from all around the world work together transparently to develop modules, prepare bug fixes, and increase software usability. In the case of Odoo, the entire source code is available on Launchpad.net. Here, developers submit their code changes through a structure called branches. Changes can be peer reviewed, and once the changes are approved, they are incorporated into the final source code product. Odoo – AGPL open source license The term open source covers a wide range of open source licenses that have their own specific rights and limitations. Odoo and all of its modules are released under the Affero General Public License (AGPL) version 3. One key feature of this license is that any custom-developed module running under Odoo must be released with the source code. This stipulation protects the Odoo community as a whole from developers who may have a desire to hide their code from everyone else. This may have changed or has been appended recently with an alternative license. You can find the full AGPL license at http://www.gnu.org/licenses/agpl-3.0.html. A real-world case study using Odoo The goal is to do more than just walk through the various screens and reports of Odoo. Instead, we want to give you a solid understanding of how you would implement Odoo to solve real-world business problems. For this reason, this article will present a real-life case study in which Odoo was actually utilized to improve specific business processes. Silkworm, Inc. – a mid-sized screen printing company Silkworm, Inc. is a highly respected mid-sized silkscreen printer in the Midwest that manufactures and sells a variety of custom apparel products. They have been kind enough to allow us to include some basic aspects of their business processes as a set of real-world examples implementing Odoo into a manufacturing operation. Using Odoo, we will set up the company records (or system) from scratch and begin by walking through their most basic sales order process, selling T-shirts. From there, we will move on to manufacturing operations, where custom art designs are developed and then screen printed onto raw materials for shipment to customers. We will come back to this real-world example so that you can see how Odoo can be used to solve real-world business solutions. Although Silkworm is actively implementing Odoo, Silkworm, Inc. does not directly endorse or recommend Odoo for any specific business solution. Every company must do their own research to determine whether Odoo is a good fit for their operation. Summary In this article, we have learned about the ERP system and common ERP modules. An introduction about Odoo and features of it. 
Resources for Article: Further resources on this subject: Getting Started with Odoo Development[article] Machine Learning in IPython with scikit-learn [article] Making Goods with Manufacturing Resource Planning [article]

Apache Camel: Transformation

Packt
20 Dec 2013
23 min read
(For more resources related to this topic, see here.) The latest version of the example code for this article can be found at http://github.com/CamelCookbook/camel-cookbook-examples. You can also download the example code files for all Packt books you have purchased from your account at https://www.packtpub.com. If you purchased this book elsewhere, you can visit https://www.packtpub.com/books/content/support and register to have the files e-mailed directly to you. In this article we will explore a number of ways in which Camel performs message content transformation: Let us first look at some important concepts regarding transformation of messages in Camel: Using the transform statement. This allows you to reference Camel Expression Language code within the route to do message transformations. Calling a templating component, such as Camel's XSLT or Velocity template style components. This will typically reference an external template resource that is used in transforming your message. Calling a Java method (for example, beanref), defined by you, within a Camel route to perform the transformation. This is a special case processor that can invoke any referenced Java object method. Camel's Type Converter capability that can automatically cast data from one type to another transparently within your Camel route. This capability is extensible, so you can add your own Type Converters. Camel's Data Format capability that allows us to use built-in, or add our own, higher order message format converters. A Camel Data Format goes beyond simple data type converters, which handle simple data type translations such as String to int, or File to String. Data Formats are used to translate between a low-level representation (XML) and a high-level one (Java objects). Other examples include encrypting/decrypting data, and compressing/decompressing data. For more, see http://camel.apache.org/data-format.html. A number of Camel architectural concepts are used throughout this article. Full details can be found at the Apache Camel website at http://camel.apache.org. The code for this article is contained within the camel-cookbook-transformation module of the examples. Transforming using a Simple Expression When you want to transform a message in a relatively straightforward way, you use Camel's transform statement along with one of the Expression Languages provided by the framework. For example, Camel's Simple Expression Language provides you with a quick, inline mechanism for straightforward transformations. This recipe will show you how to use Camel's Simple Expression Language to transform the message body. Getting ready The Java code for this recipe is located in the org.camelcookbook.transformation.simple package. Spring XML files are located under src/main/resources/META-INF/spring and are prefixed with simple. How to do it... In a Camel route, use a transform DSL statement containing the Expression Language code to do your transformation. In the XML DSL, this is written as follows: <route> <from uri="direct:start"/> <transform> <simple>Hello ${body}</simple> </transform> </route> In the Java DSL, the same route is expressed as: from("direct:start") .transform(simple("Hello ${body}")); In this example, the message transformation prefixes the incoming message with the phrase Hello using the Simple Expression Language. The processing step after the transform statement will see the transformed message content in the body of the exchange. How it works... 
Camel's Simple Expression Language is quite good at manipulating the String content through its access to all aspects of the message being processed, through its rich String and logical operators. The result of your Simple Expression becomes the new message body after the transform step. This includes predicates such as using Simple's logical operators, to evaluate a true or false condition; the results of that Boolean operation become the new message body containing a String: "true" or "false". The advantage of using a distinct transform step within a route, as opposed to embedding it within a processor, is that the logic is clearly visible to other programmers. Ensure that the expression embedded within your route is kept simple so as to not distract the next developer from the overall purpose of the integration. It is best to move more complex (or just lengthy) transformation logic into its own subroute, and invoke it using direct: or seda:. There's more... The transform statement will work with any Expression Language available in Camel, so if you need more powerful message processing capabilities you can leverage scripting languages such as Groovy or JavaScript (among many others) as well. The Transforming inline with XQuery recipe will show you how to use the XQuery Expression Language to do transformations on XML messages. See also Message Translator: http://camel.apache.org/message-translator.html Camel Expression capabilities: http://camel.apache.org/expression.html Camel Simple Expression Language: http://camel.apache.org/simple.html Languages supported by Camel: http://camel.apache.org/languages.html The Transforming inline with XQuery recipe   Transforming inline with XQuery Camel supports the use of Camel's XQuery Expression Language along with the transform statement as a quick and easy way to transform an XML message within a route. This recipe will show you how to use an XQuery Expression to do in-route XML transformation. Getting ready The Java code for this recipe is located in the org.camelcookbook.transformation.xquery package. Spring XML files are located under src/main/resources/META-INF/spring and prefixed with xquery. To use the XQuery Expression Language, you need to add a dependency element for the camel-saxon library, which provides the implementation for the XQuery Expression Language. Add the following to the dependencies section of your Maven POM: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-saxon</artifactId> <version>${camel-version}</version> </dependency>   How to do it... In the Camel route, specify a transform statement followed by the XQuery Expression Language code to do your transformation. In the XML DSL, this is written as: <route> <from uri="direct:start"/> <transform> <xquery> <books>{ for $x in /bookstore/book where $x/price>30 order by $x/title return $x/title }</books> </xquery> </transform> </route> When using the XML DSL, remember to XML encode the XQuery embedded XML elements. Therefore, < becomes &lt; and > becomes &gt;. In the Java DSL, the same route is expressed as: from("direct:start") .transform(xquery("<books>{ for $x in /bookstore/book " + "where $x/price>30 order by $x/title " + "return $x/title }</books>")); Feed the following input XML message through the transformation: <bookstore> <book category="COOKING"> <title lang="en">Everyday Italian</title> <author>Giada De Laurentiis</author> <year>2005</year> <price>30.00</price> </book> <book category="CHILDREN"> <title lang="en">Harry Potter</title> <author>J K. 
Rowling</author> <year>2005</year> <price>29.99</price> </book> <book category="PROGRAMMING"> <title lang="en">Apache Camel Developer's Cookbook</title> <author>Scott Cranton</author> <author>Jakub Korab</author> <year>2013</year> <price>49.99</price> </book> <book category="WEB"> <title lang="en">Learning XML</title> <author>Erik T. Ray</author> <year>2003</year> <price>39.95</price> </book> </bookstore> The resulting message will be: <books> <title lang="en">Apache Camel Developer's Cookbook</title> <title lang="en">Learning XML</title> </books> The processing step after the transform statement will see the transformed message content in the body of the exchange. How it works... Camel's XQuery Expression Language is a good way to inline XML transformation code within your route. The result of the XQuery Expression becomes the new message body after the transform step. All of the message's body, headers, and properties are made available to the XQuery Processor, so you can reference them directly within your XQuery statement. This provides you with a powerful mechanism for transforming XML messages. If you are more comfortable with XSLT, take a look at the Transforming with XSLT recipe. In-lining the transformation within your integration route can sometimes be an advantage as you can clearly see what is being changed. However, when the transformation expression becomes so complex that it starts to overwhelm the integration route, you may want to consider moving the transformation expression outside of the route. See the Transforming using a Simple Expression recipe for another inline transformation example, and see the Transforming with XSLT recipe for an example of externalizing your transformation. You can fetch the XQuery Expression from an external file using Camel's resource reference syntax. To reference an XQuery file on the classpath you can specify: <transform> <xquery>resource:classpath:/path/to/myxquery.xml</xquery> </transform> This is equivalent to using XQuery as an endpoint: <to uri="xquery:classpath:/path/to/myxquery.xml"/>   There's more... The XQuery Expression Language allows you to pass in headers associated with the message. These will show up as XQuery variables that can be referenced within your XQuery statements. Consider, from the previous example, to allow the value of the books that are filtered to be passed in with the message body, that is, parameterize the XQuery, you can modify the XQuery statement as follows: <transform> <xquery> declare variable $in.headers.myParamValue as xs:integer external; <books value='{$in.headers.myParamValue}'&gt;{ for $x in /bookstore/book where $x/price>$in.headers.myParamValue order by $x/title return $x/title }&lt;/books&gt; </xquery> </transform> Message headers will be associated with an XQuery variable called in.headers.<name of header>. To use this in your XQuery, you need to explicitly declare an external variable of the same name and XML Schema (xs:) type as the value of the message header. The transform statement will work with any Expression Language enabled within Camel, so if you need more powerful message processing capabilities you can leverage scripting languages such as Groovy or JavaScript (among many others) as well. The Transforming using a Simple Expression recipe will show you how to use the Simple Expression Language to do transformations on String messages. 
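As a rough end-to-end sketch (not taken from the book's example project), the following Java snippet shows how the parameterized XQuery described above could be exercised with a ProducerTemplate. The endpoint name direct:start, the threshold value of 30, and the placeholder bookstore document are assumptions made purely for illustration:

import org.apache.camel.CamelContext;
import org.apache.camel.ProducerTemplate;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;
import static org.apache.camel.builder.saxon.XQueryBuilder.xquery;

public class XQueryHeaderExample {
    public static void main(String[] args) throws Exception {
        CamelContext camelContext = new DefaultCamelContext();
        camelContext.addRoutes(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                // Same idea as the recipe's route: the message header is exposed
                // to the XQuery as the external variable in.headers.myParamValue
                from("direct:start")
                    .transform(xquery(
                        "declare variable $in.headers.myParamValue as xs:integer external; "
                      + "<books>{ for $x in /bookstore/book "
                      + "where $x/price>$in.headers.myParamValue "
                      + "order by $x/title return $x/title }</books>"));
            }
        });
        camelContext.start();

        ProducerTemplate template = camelContext.createProducerTemplate();
        // Placeholder for the sample bookstore document shown earlier
        String bookstoreXml = "<bookstore>...</bookstore>";
        // The header name must match the external variable declared in the XQuery
        String result = template.requestBodyAndHeader(
                "direct:start", bookstoreXml, "myParamValue", 30, String.class);
        System.out.println(result);

        camelContext.stop();
    }
}

Because the header is called myParamValue, it is bound to the external variable $in.headers.myParamValue declared at the top of the XQuery expression, so the filtering threshold can be changed per message without touching the route.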
See also Message Translator: http://camel.apache.org/message-translator.html Camel Expression capabilities: http://camel.apache.org/expression.html Camel XQuery Expression Language: http://camel.apache.org/xquery.html XQuery language: http://www.w3.org/XML/Query/ Languages supported by Camel: http://camel.apache.org/languages.html The Transforming with XSLT recipe The Transforming using a Simple Expression recipe   Transforming with XSLT When you want to transform an XML message using XSLT, use Camel's XSLT Component. This is similar to the Transforming inline with XQuery recipe except that there is no XSLT Expression Language, so it can only be used as an endpoint. This recipe will show you how to transform a message using an external XSLT resource. Getting ready The Java code for this recipe is located in the org.camelcookbook.transformation.xslt package. Spring XML files are located under src/main/resources/META-INF/spring and prefixed with xslt. How to do it... In a Camel route, add the xslt processor step into the route at the point where you want the XSLT transformation to occur. The XSLT file must be referenced as an external resource, and depending on where the file is located, prefixed with either classpath:(default if not using a prefix), file:, or http:. In the XML DSL, this is written as: <route> <from uri="direct:start"/> <to uri="xslt:book.xslt"/> </route> In the Java DSL, the same route is expressed as: from("direct:start") .to("xslt:book.xslt"); The next processing step in the route will see the transformed message content in the body of the exchange. How it works... The following example shows how the preceding steps will process an XML file. Consider the following input XML message: <bookstore> <book category="COOKING"> <title lang="en">Everyday Italian</title> <author>Giada De Laurentiis</author> <year>2005</year> <price>30.00</price> </book> <book category="CHILDREN"> <title lang="en">Harry Potter</title> <author>J K. Rowling</author> <year>2005</year> <price>29.99</price> </book> <book category="PROGRAMMING"> <title lang="en">Apache Camel Developer's Cookbook</title> <author>Scott Cranton</author> <author>Jakub Korab</author> <year>2013</year> <price>49.99</price> </book> <book category="WEB"> <title lang="en">Learning XML</title> <author>Erik T. Ray</author> <year>2003</year> <price>39.95</price> </book> </bookstore> Process this with the following XSLT contained in books.xslt: <?xml version="1.0" encoding="UTF-8"?> <xsl:stylesheet version="1.0" > <xsl:output omit-xml-declaration="yes"/> <xsl:template match="/"> <books> <xsl:apply-templates select="/bookstore/book/title[../price>30]"> <xsl:sort select="."/> </xsl:apply-templates> </books> </xsl:template> <xsl:template match="node()|@*"> <xsl:copy> <xsl:apply-templates select="node()|@*"/> </xsl:copy> </xsl:template> </xsl:stylesheet> The result will appear as follows: <books> <title lang="en">Apache Camel Developer's Cookbook</title> <title lang="en">Learning XML</title> </books> The Camel XSLT Processor internally runs the message body through a registered Java XML transformer using the XSLT file referenced by the endpoint. 
This processor uses Camel's Type Converter capabilities to convert the input message body type to one of the supported XML source models in the following order of priority: StAXSource (off by default; this can be enabled by setting allowStAX=true on the endpoint URI) SAXSource StreamSource DOMSource Camel's Type Converter can convert from most input types (String, File, byte[], and so on) to one of the XML source types for most XML content loaded through other Camel endpoints with no extra work on your part. The output data type for the message is, by default, a String, and is configurable using the output parameter on the xslt endpoint URI. There's more... The XSLT Processor passes in headers, properties, and parameters associated with the message. These will show up as XSLT parameters that can be referenced within your XSLT statements. You can pass in the names of the books as parameters to the XSLT template; to do so, modify the previous XLST as follows: <xsl:param name="myParamValue"/> <xsl:template match="/"> <books> <xsl:attribute name="value"> <xsl:value-of select="$myParamValue"/> </xsl:attribute> <xsl:apply-templates select="/bookstore/book/title[../price>$myParamValue]"> <xsl:sort select="."/> </xsl:apply-templates> </books> </xsl:template> The Exchange instance will be associated with a parameter called exchange; the IN message with a parameter called in; and the message headers, properties, and parameters will be associated XSLT parameters with the same name. To use these in your XSLT, you need to explicitly declare a parameter of the same name in your XSLT file. In the previous example, it is possible to use either a message header or exchange property called myParamValue. See also Message Translator: http://camel.apache.org/message-translator.html Camel XSLT Component: http://camel.apache.org/xslt Camel Type Converter: http://camel.apache.org/type-converter.html XSL working group: http://www.w3.org/Style/XSL/ The Transforming inline with XQuery recipe   Transforming from Java to XML with JAXB Camel's JAXB Component is one of a number of components that can be used to convert your XML data back and forth from Java objects. It provides a Camel Data Format that allows you to use JAXB annotated Java classes, and then marshal (Java to XML) or unmarshal (XML to Java) your data. JAXB is a Java standard for translating between XML data and Java that is used by creating annotated Java classes that bind, or map, to your XML data schema. The framework takes care of the rest. This recipe will show you how to use the JAXB Camel Data Format to convert back and forth from Java to XML. Getting ready The Java code for this recipe is located in the org.camelcookbook.transformation.jaxb package. The Spring XML files are located under src/main/resources/META-INF/spring and prefixed with jaxb. To use Camel's JAXB Component, you need to add a dependency element for the camel-jaxb library, which provides the implementation for the JAXB Data Format. Add the following to the dependencies section of your Maven POM: <dependency> lt;groupId>org.apache.camel</groupId> <artifactId>camel-jaxb</artifactId> <version>${camel-version}</version> lt;/dependency>   How to do it... The main steps for converting between Java and XML are as follows: Given a JAXB annotated model, reference that model within a named Camel Data Format. Use that named Data Format within your Camel route using the marshal and unmarshal DSL statements. Create an annotated Java model using standard JAXB annotations. 
There are a number of external tools that can automate this creation from existing XML or XSD (XML Schema) files: @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "", propOrder = { "title", "author", "year", "price" } ) @XmlRootElement(name = "book") public class Book { @XmlElement(required = true) protected Book.Title title; @XmlElement(required = true) protected List<String> author; protected int year; protected double price; // getters and setters } Instantiate a JAXB Data Format within your Camel route that refers to the Java package(s) containing your JAXB annotated classes. In the XML DSL, this is written as: <camelContext > <dataFormats> <jaxb id="myJaxb" contextPath="org.camelcookbook .transformation.myschema"/> </dataFormats> <!-- route definitions here --> </camelContext> In the Java DSL, the Data Format is defined as: public class JaxbRouteBuilder extends RouteBuilder { @Override public void configure() throws Exception { DataFormat myJaxb= new JaxbDataFormat( "org.camelcookbook.transformation.myschema"); // route definitions here } } Reference the Data Format within your route, choosing marshal(Java to XML) or unmarshal(XML to Java) as appropriate. In the XML DSL, this routing logic is written as: <route> <from uri="direct:unmarshal"/> <unmarshal ref="myJaxb"/> </route> In the Java DSL, this is expressed as: from("direct:unmarshal").unmarshal(myJaxb);   How it works... Using Camel JAXB to translate your XML data back and forth to Java makes it much easier for the Java processors defined later on in your route to do custom message processing. This is useful when the built-in XML translators (for example, XSLT or XQuery) are not enough, or you just want to call existing Java code. Camel JAXB eliminates the boilerplate code from your integration flows by providing a wrapper around the standard JAXB mechanisms for instantiating the Java binding for the XML data. There's more... Camel JAXB works just fine with existing JAXB tooling like the maven-jaxb2-plugin plugin, which can automatically create JAXB-annotated Java classes from an XML Schema (XSD). See also Camel JAXB: http://camel.apache.org/jaxb.html Available Data Formats: http://camel.apache.org/data-format.html JAXB Specification: http://jcp.org/en/jsr/detail?id=222   Transforming from Java to JSON Camel's JSON Component is used when you need to convert your JSON data back and forth from Java. It provides a Camel Data Format that, without any requirement for an annotated Java class, allows you to marshal (Java to JSON) or unmarshal (JSON to Java) your data. There is only one step to using Camel JSON to marshal and unmarshal XML data. Within your Camel route, insert the marshal(Java to JSON), or unmarshal(JSON to Java) statement, and configure it to use the JSON Data Format. This recipe will show you how to use the camel-xstream library to convert from Java to JSON, and back. Getting ready The Java code for this recipe is located in the org.camelcookbook.transformation.json package. The Spring XML files are located under src/main/resources/META-INF/spring and prefixed with json. To use Camel's JSON Component, you need to add a dependency element for the camel-xstream library, which provides an implementation for the JSON Data Format using the XStream library. Add the following to the dependencies section of your Maven POM: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-xstream</artifactId> <version>${camel-version}</version> </dependency>   How to do it... 
Reference the Data Format within your route, choosing the marshal (Java to JSON), or unmarshal (JSON to Java) statement, as appropriate: In the XML DSL, this is written as follows: <route> <from uri="direct:marshal"/> <marshal> <json/> </marshal> <to uri="mock:marshalResult"/> </route> In the Java DSL, this same route is expressed as: from("direct:marshal") .marshal().json() .to("mock:marshalResult");   How it works... Using Camel JSON simplifies translating your data between JSON and Java. This is convenient when you are dealing with REST endpoints and need Java processors in Camel to do custom message processing later on in the route. Camel JSON provides a wrapper around the JSON libraries for instantiating the Java binding for the JSON data, eliminating more boilerplate code from your integration flows. There's more... Camel JSON works with the XStream library by default, and can be configured to use other JSON libraries, such as Jackson or GSon. These other libraries provide additional features, more customization, and more flexibility that can be leveraged by Camel. To use them, include their respective Camel components, for example, camel-jackson, and specify the library within the json element: <dataFormats> <json id="myJson" library="Jackson"/> </dataFormats>   See also Camel JSON: http://camel.apache.org/json.html Available Data Formats: http://camel.apache.org/data-format.html   Transforming from XML to JSON Camel provides an XML JSON Component that converts your data back and forth between XML and JSON in a single step, without an intermediate Java object representation. It provides a Camel Data Format that allows you to marshal (XML to JSON), or unmarshal (JSON to XML) your data. This recipe will show you how to use the XML JSON Component to convert from XML to JSON, and back. Getting ready Java code for this recipe is located in the org.camelcookbook.transformation.xmljson package. Spring XML files are located under src/main/resources/META-INF/spring and prefixed with xmljson. To use Camel's XML JSON Component, you need to add a dependency element for the camel-xmljson library, which provides an implementation for the XML JSON Data Format. Add the following to the dependencies section of your Maven POM: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-xmljson</artifactId> <version>${camel-version}</version> </dependency>   How to do it... Reference the xmljson Data Format within your route, choosing the marshal(XML to JSON), or unmarshal(JSON to XML) statement, as appropriate: In the XML DSL, this is written as follows: <route> <from uri="direct:marshal"/> <marshal> <xmljson/> </marshal> <to uri="mock:marshalResult"/> </route> In the Java DSL, this same route is expressed as: from("direct:marshal") .marshal().xmljson() .to("mock:marshalResult");   How it works... Using the Camel XML JSON Component simplifies translating your data between XML and JSON, making it convenient to use when you are dealing with REST endpoints. The XML JSON Data Format wraps around the Json-lib library, which provides the core translation capabilities, eliminating more boilerplate code from your integration flows. There's more... You may need to configure XML JSON if you want to fine-tune the output of your transformation. 
For example, consider the following JSON: [{"@category":"PROGRAMMING","title":{"@lang":"en","#text": "Apache Camel Developer's Cookbook"},"author":[ "Scott Cranton","Jakub Korab"],"year":"2013","price":"49.99"}] This will be converted as follows, by default, which may not be exactly what you want (notice the <a> and <e> elements): <?xml version="1.0" encoding="UTF-8"?> <a> <e category="PROGRAMMING"> <author> <e>Scott Cranton</e> <e>Jakub Korab</e> </author> <price>49.99</price> <title lang="en">Apache Camel Developer's Cookbook</title> <year>2013</year> </e> </a> To configure XML JSON to use <bookstore> as the root element instead of <a>, use <book> for the individual elements instead of <e>, and expand the multiple author values to use a sequence of <author> elements, you would need to tune the configuration of the Data Format before referencing it in your route. In the XML DSL, the definition of the Data Format and the route that uses it is written as follows: <dataFormats> <xmljson id="myXmlJson" rootName="bookstore" elementName="book" expandableProperties="author author"/> </dataFormats> <route> <from uri="direct:unmarshalBookstore"/> <unmarshal ref="myXmlJson"/> <to uri="mock:unmarshalResult"/> </route> In the Java DSL, the same thing is expressed as: XmlJsonDataFormat xmlJsonFormat = new XmlJsonDataFormat(); xmlJsonFormat.setRootName("bookstore"); xmlJsonFormat.setElementName("book"); xmlJsonFormat.setExpandableProperties( Arrays.asList("author", "author")); from("direct:unmarshalBookstore") .unmarshal(xmlJsonFormat) .to("mock:unmarshalBookstoreResult"); This will result in the previous JSON being unmarshalled as follows: <?xml version="1.0" encoding="UTF-8"?> <bookstore> <book category="PROGRAMMING"> <author>Scott Cranton</author> <author>Jakub Korab</author><price>49.99</price> <title lang="en">Apache Camel Developer's Cookbook</title> <year>2013</year> </book> </bookstore>   See also Camel XML JSON: http://camel.apache.org/xmljson.html Available Data Formats: http://camel.apache.org/data-format.html Json-lib: http://json-lib.sourceforge.net   Summary We saw how Apache Camel is flexible in allowing us to transform and convert messages in various formats. This makes Apache Camel an ideal choice for integrating different systems together. Resources for Article: Further resources on this subject: Drools Integration Modules: Spring Framework and Apache Camel [Article] Installing Apache Karaf [Article] Working with AMQP [Article]

Welcome to the Spring Framework

Packt
30 Apr 2015
17 min read
In this article by Ravi Kant Soni, author of the book Learning Spring Application Development, you will be closely acquainted with the Spring Framework. Spring is an open source framework created by Rod Johnson to address the complexity of enterprise application development. Spring is now a long time de facto standard for Java enterprise software development. The framework was designed with developer productivity in mind and this makes it easier to work with the existing Java and JEE APIs. Using Spring, we can develop standalone applications, desktop applications, two tier applications, web applications, distributed applications, enterprise applications, and so on. (For more resources related to this topic, see here.) Features of the Spring Framework Lightweight: Spring is described as a lightweight framework when it comes to size and transparency. Lightweight frameworks reduce complexity in application code and also avoid unnecessary complexity in their own functioning. Non intrusive: Non intrusive means that your domain logic code has no dependencies on the framework itself. Spring is designed to be non intrusive. Container: Spring's container is a lightweight container, which contains and manages the life cycle and configuration of application objects. Inversion of control (IoC): Inversion of Control is an architectural pattern. This describes the Dependency Injection that needs to be performed by external entities instead of creating dependencies by the component itself. Aspect-oriented programming (AOP): Aspect-oriented programming refers to the programming paradigm that isolates supporting functions from the main program's business logic. It allows developers to build the core functionality of a system without making it aware of the secondary requirements of this system. JDBC exception handling: The JDBC abstraction layer of the Spring Framework offers a exceptional hierarchy that simplifies the error handling strategy. Spring MVC Framework: Spring comes with an MVC web application framework to build robust and maintainable web applications. Spring Security: Spring Security offers a declarative security mechanism for Spring-based applications, which is a critical aspect of many applications. ApplicationContext ApplicationContext is defined by the org.springframework.context.ApplicationContext interface. BeanFactory provides a basic functionality, while ApplicationContext provides advance features to our spring applications, which make them enterprise-level applications. Create ApplicationContext by using the ClassPathXmlApplicationContext framework API. This API loads the beans configuration file and it takes care of creating and initializing all the beans mentioned in the configuration file: import org.springframework.context.ApplicationContext; import org.springframework.context.support.ClassPathXmlApplicationContext;   public class MainApp {   public static void main(String[] args) {      ApplicationContext context =    new ClassPathXmlApplicationContext("beans.xml");      HelloWorld helloWorld =    (HelloWorld) context.getBean("helloworld");      helloWorld.getMessage(); } } Autowiring modes There are five modes of autowiring that can be used to instruct Spring Container to use autowiring for Dependency Injection. You use the autowire attribute of the <bean/> element to specify the autowire mode for a bean definition. 
The following table explains the different modes of autowire:

no: By default, Spring bean autowiring is turned off, meaning no autowiring is performed. You should use the explicit bean reference called ref for wiring purposes.

byName: This autowires by the property name. If a bean property has the same name as another bean, autowire it. The setter method is used for this type of autowiring to inject the dependency.

byType: The data type is used for this type of autowiring. If the data type of a bean property is compatible with the data type of another bean, autowire it. Only one bean should be configured for this type in the configuration file; otherwise, a fatal exception will be thrown.

constructor: This is similar to byType autowiring, but here a constructor is used to inject dependencies.

autodetect: Spring first tries to autowire by constructor; if this does not work, it then tries to autowire byType. This option is deprecated.

Stereotype annotations

Generally, @Component, a parent stereotype annotation, can define all beans. The following table explains the different stereotype annotations:

@Component (Type): This is a generic stereotype annotation for any Spring-managed component.

@Service (Type): This stereotypes a component as a service and is used when defining a class that handles the business logic.

@Controller (Type): This stereotypes a component as a Spring MVC controller. It is used when defining a controller class, which belongs to the presentation layer and is available only in Spring MVC.

@Repository (Type): This stereotypes a component as a repository and is used when defining a class that handles the data access logic and provides translation of exceptions raised at the persistence layer.

Annotation-based container configuration

For a Spring IoC container to recognize annotations, the following definition must be added to the configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<beans xsi:schemaLocation="http://www.springframework.org/schema/beans
   http://www.springframework.org/schema/beans/spring-beans.xsd
   http://www.springframework.org/schema/context
   http://www.springframework.org/schema/context/spring-context-3.2.xsd">

  <context:annotation-config />

</beans>

Aspect-oriented programming (AOP) support in Spring

AOP is used in Spring to provide declarative enterprise services, especially as a replacement for EJB declarative services. Application objects do what they're supposed to do—perform business logic—and nothing more. They are not responsible for (or even aware of) other system concerns, such as logging, security, auditing, locking, and event handling. AOP is a methodology of applying middleware services, such as security services, transaction management services, and so on, to a Spring application.

Declaring an aspect

An aspect can be declared by annotating a POJO class with the @Aspect annotation, which requires importing org.aspectj.lang.annotation.Aspect. The following code snippet represents the aspect declaration in the @AspectJ form:

import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

@Aspect
@Component("myAspect")
public class AspectModule {
  // ...
}

JDBC with the Spring Framework

The DriverManagerDataSource class is used to configure the DataSource for the application, which is defined in the Spring.xml configuration file.
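As a minimal illustration of that last point, the same DataSource can also be configured programmatically; the driver class, URL, and credentials below are placeholder values for the sketch, not settings taken from this article:

import org.springframework.jdbc.datasource.DriverManagerDataSource;

public class DataSourceConfig {

    public static DriverManagerDataSource createDataSource() {
        // DriverManagerDataSource is a simple, non-pooling DataSource implementation
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("com.mysql.jdbc.Driver");  // placeholder JDBC driver
        dataSource.setUrl("jdbc:mysql://localhost:3306/testdb"); // placeholder connection URL
        dataSource.setUsername("dbuser");                        // placeholder credentials
        dataSource.setPassword("dbpassword");
        return dataSource;
    }
}

A bean configured this way (or through the equivalent XML definition) can then be injected into any data access component.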
The central class of Spring JDBC's abstraction framework is the JdbcTemplate class, which includes the most common logic for using the JDBC API to access data (such as handling the creation of connections, creation of statements, execution of statements, and release of resources). The JdbcTemplate class resides in the org.springframework.jdbc.core package.

JdbcTemplate can be used to execute different types of SQL statements. DML is an abbreviation of data manipulation language and is used to retrieve, modify, insert, update, and delete data in a database. Examples of DML are SELECT, INSERT, or UPDATE statements. DDL is an abbreviation of data definition language and is used to create or modify the structure of database objects in a database. Examples of DDL are CREATE, ALTER, and DROP statements.

The JDBC batch operation in Spring

The JDBC batch operation allows you to submit multiple SQL statements to the DataSource for processing at once. Submitting multiple SQL statements together, instead of one at a time, improves performance:

JDBC with batch processing

Hibernate with the Spring Framework

Data persistence is the ability of an object to save its state so that it can regain the same state later. Hibernate is one of the ORM libraries available to the open source community. Hibernate offers the Java developer features such as a POJO-based approach and support for relationship definitions. The object query language used by Hibernate is called Hibernate Query Language (HQL). HQL is an SQL-like textual query language working at a class level or a field level. Let's start learning the architecture of Hibernate.

Hibernate annotations are a powerful way to provide the metadata for the object and relational table mapping. Hibernate provides an implementation of the Java Persistence API, so we can use JPA annotations with model beans, and Hibernate will take care of configuring them to be used in CRUD operations. The following table explains the JPA annotations:

@Entity: The javax.persistence.Entity annotation is used to mark a class as an entity bean that can be persisted by Hibernate, as Hibernate provides the JPA implementation.

@Table: The javax.persistence.Table annotation is used to define table mapping and unique constraints for various columns. The @Table annotation provides four attributes, which allow you to override the name of the table, its catalogue, and its schema. This annotation also allows you to enforce unique constraints on columns in the table. For now, we will just use the table name as Employee.

@Id: Each entity bean will have a primary key, which you annotate on the class with the @Id annotation. The javax.persistence.Id annotation is used to define the primary key for the table. By default, the @Id annotation will automatically determine the most appropriate primary key generation strategy to be used.

@GeneratedValue: javax.persistence.GeneratedValue is used to define the field that will be autogenerated. It takes two parameters, that is, strategy and generator. The GenerationType.IDENTITY strategy is used so that the generated id value is mapped to the bean and can be retrieved in the Java program.

@Column: javax.persistence.Column is used to map the field with the table column. We can also specify the length, nullable, and uniqueness for the bean properties.

Object-relational mapping (ORM, O/RM, and O/R mapping)

ORM stands for Object-relational Mapping. ORM is the process of persisting objects in a relational database such as an RDBMS.
ORM bridges the gap between object and relational schemas, allowing an object-oriented application to persist objects directly without needing to convert objects to and from a relational format:

Hibernate Query Language (HQL)

Hibernate Query Language (HQL) is an object-oriented query language that works on persistent objects and their properties instead of operating on tables and columns. To use HQL, we need a Query object. The Query interface is an object-oriented representation of an HQL query. The Query interface provides many methods; let's take a look at a few of them:

public int executeUpdate(): This is used to execute an update or delete query.

public List list(): This returns the result of the relation as a list.

public Query setFirstResult(int rowno): This specifies the row number from which records will be retrieved.

public Query setMaxResults(int rowno): This specifies the number of records to be retrieved from the relation (table).

public Query setParameter(int position, Object value): This sets the value of a JDBC-style query parameter.

public Query setParameter(String name, Object value): This sets the value of a named query parameter.

The Spring Web MVC Framework

The Spring Framework supports web application development by providing comprehensive and intensive support. The Spring MVC framework is a robust, flexible, and well-designed framework used to develop web applications. It is designed in such a way that development of a web application is highly configurable with respect to Model, View, and Controller. In the MVC design pattern, the Model represents the data of a web application, the View represents the UI, that is, user interface components, such as checkboxes, textboxes, and so on, that are used to display web pages, and the Controller processes the user request.

The Spring MVC framework supports the integration of other frameworks, such as Struts and WebWork, in a Spring application. This framework also helps in integrating other view technologies, such as Java Server Pages (JSP), Velocity, Tiles, and FreeMarker, in a Spring application. The Spring MVC Framework is designed around a DispatcherServlet. The DispatcherServlet dispatches HTTP requests to handlers, which implement a very simple controller interface. The Spring MVC Framework provides the following web support features:

Powerful configuration of framework and application classes: The Spring MVC Framework provides a powerful and straightforward configuration of framework and application classes (such as JavaBeans).

Easier testing: Most of the Spring classes are designed as JavaBeans, which enable you to inject test data using the setter methods of these JavaBeans classes. The Spring MVC framework also provides classes to handle Hyper Text Transfer Protocol (HTTP) requests (HttpServletRequest), which makes unit testing of the web application much simpler.

Separation of roles: Each component of the Spring MVC Framework performs a different role during request handling. A request is handled by components such as the controller, validator, model object, view resolver, and the HandlerMapping interface. The whole task is distributed across these components, which provides a clear separation of roles.

No duplication of code: In the Spring MVC Framework, we can use the existing business code in any component of the Spring MVC application. Therefore, no duplication of code arises in a Spring MVC application.

Specific validation and binding: Validation errors are displayed when any mismatched data is entered in a form.
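To make the controller role described above concrete, here is a minimal sketch of an annotated controller; the class name, request path, and view name are invented for illustration and do not come from this article:

import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class GreetingController {

    // Handles requests for /greeting and returns a logical view name
    @RequestMapping("/greeting")
    public String greeting(Model model) {
        model.addAttribute("message", "Welcome to Spring MVC"); // model data for the view
        return "greeting"; // resolved by a view resolver to an actual JSP or template
    }
}

The next section shows how the DispatcherServlet and a view resolver turn a returned view name like this into an actual page.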
DispatcherServlet in Spring MVC

The DispatcherServlet of the Spring MVC Framework is an implementation of the front controller pattern and is a Java Servlet component for Spring MVC applications. DispatcherServlet is a front controller class that receives all incoming HTTP client requests for the Spring MVC application. DispatcherServlet is also responsible for initializing the framework components that will be used to process the request at various stages. The following code snippet declares the DispatcherServlet in the web.xml deployment descriptor:

<servlet>
  <servlet-name>SpringDispatcher</servlet-name>
  <servlet-class>
    org.springframework.web.servlet.DispatcherServlet
  </servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
  <servlet-name>SpringDispatcher</servlet-name>
  <url-pattern>/</url-pattern>
</servlet-mapping>

In the preceding code snippet, the user-defined name of the DispatcherServlet is SpringDispatcher, which is enclosed in the <servlet-name> element. When our newly declared SpringDispatcher servlet is loaded in a web application, it loads an application context from an XML file. DispatcherServlet will try to load the application context from a file named SpringDispatcher-servlet.xml, which will be located in the application's WEB-INF directory:

<beans xsi:schemaLocation="
  http://www.springframework.org/schema/beans
  http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
  http://www.springframework.org/schema/context
  http://www.springframework.org/schema/context/spring-context-3.0.xsd
  http://www.springframework.org/schema/mvc
  http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd">

  <mvc:annotation-driven />

  <context:component-scan base-package="org.packt.Spring.chapter7.springmvc" />

  <bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="prefix" value="/WEB-INF/views/" />
    <property name="suffix" value=".jsp" />
  </bean>

</beans>

Spring Security

The Spring Security framework is the de facto standard for securing Spring-based applications. The Spring Security framework provides security services for enterprise Java software applications by handling authentication and authorization. The Spring Security framework handles authentication and authorization at the web request level and the method invocation level. The two major operations provided by Spring Security are as follows:

Authentication: Authentication is the process of assuring that a user is the one who he/she claims to be. It's a combination of identification and verification. The identification process can be performed in a number of different ways, that is, with a username and password that can be stored in a database, LDAP, or CAS (a single sign-on protocol), and so on. Spring Security provides a password encoder interface to make sure that the user's password is hashed.

Authorization: Authorization provides access control to an authenticated user. It's the process of assurance that the authenticated user is allowed to access only those resources that he/she is authorized to use. Let's take a look at an example of an HR payroll application, where some parts of the application are accessible only to HR, while other parts are accessible to all employees. The access rights given to users of the system will determine the access rules. In a web-based application, this is often done with URL-based security, implemented using filters that play a primary role in securing the Spring web application.
Sometimes, URL-based security is not enough in a web application, because URLs can be manipulated and can contain relative paths. So, Spring Security also provides method-level security. An authorized user will only be able to invoke those methods that he or she has been granted access to.

Securing a web application's URL access

HttpServletRequest is the starting point of a Java web application. To configure web security, you are required to set up a filter that provides various security features. In order to enable Spring Security, add the filter and its mapping in the web.xml file:

<!-- Spring Security -->
<filter>
  <filter-name>springSecurityFilterChain</filter-name>
  <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
</filter>

<filter-mapping>
  <filter-name>springSecurityFilterChain</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

Logging in to a web application

There are multiple ways supported by Spring Security for users to log in to a web application:

HTTP basic authentication: This is supported by Spring Security by processing the basic credentials presented in the header of the HTTP request. It's generally used with stateless clients, which pass their credentials on each request.

Form-based login service: Spring Security supports the form-based login service by providing a default login form page for users to log in to the web application.

Logout service: Spring Security supports logout services that allow users to log out of the application.

Anonymous login: This service is provided by Spring Security to grant authority to an anonymous user, just like a normal user.

Remember-me support: This is also supported by Spring Security and remembers the identity of a user across multiple browser sessions.

Encrypting passwords

Spring Security supports some hashing algorithms, such as MD5 (Md5PasswordEncoder), SHA (ShaPasswordEncoder), and BCrypt (BCryptPasswordEncoder), for password encryption. To enable the password encoder, use the <password-encoder/> element and set the hash attribute, as shown in the following code snippet:

<authentication-manager>
  <authentication-provider>
    <password-encoder hash="md5" />
    <jdbc-user-service data-source-ref="dataSource"
    . . .
  </authentication-provider>
</authentication-manager>

Mail support in the Spring Framework

The Spring Framework provides a simplified API and plugin for full e-mail support, which minimizes the effect of the underlying e-mailing system specifications. Spring's e-mail support provides an abstract, easy, and implementation-independent API to send e-mails. The Spring Framework provides an API to simplify the use of the JavaMail API. Its classes handle the initialization, cleanup operations, and exceptions. The packages for the JavaMail API provided by the Spring Framework are listed as follows:

org.springframework.mail: This defines the basic set of classes and interfaces to send e-mails.

org.springframework.mail.javamail: This defines JavaMail API-specific classes and interfaces to send e-mails.

Spring's Java Messaging Service (JMS)

Java Message Service is a Java message-oriented middleware (MOM) API responsible for sending messages between two or more clients. JMS is a part of the Java Enterprise Edition. JMS works as a broker, similar to a postman who acts as a middleman between the message sender and the receiver. A message is nothing but bytes of data or information exchanged between two parties. Depending on the specification in use, a message can be described in various ways.
However, it is ultimately just an entity of communication. A message can be used to transfer a piece of information from one application to another, and the two applications may or may not run on the same platform.

The JMS application

Let's look at a sample JMS application, as shown in the following diagram:

We have a Sender and a Receiver. The Sender is responsible for sending a message and the Receiver is responsible for receiving a message. We need a broker, or MOM, between the Sender and the Receiver, which takes the sender's message and passes it across the network to the receiver. Message-oriented middleware (MOM) is basically an MQ application such as ActiveMQ or IBM MQ, which are two different message providers. The sender and receiver are loosely coupled: the sender can be a .NET or mainframe-based application, and the receiver can be a Java or Spring-based application that sends messages back to the sender as well. This is a two-way, loosely coupled communication.

Summary

This article covered the architecture of the Spring Framework and how to set up the key components of a Spring application development environment.

Resources for Article:

Further resources on this subject:

Creating an Extension in Yii 2 [article]

Serving and processing forms [article]

Time Travelling with Spring [article]

article-image-containerizing-web-application-docker-part-1
Darwin Corn
10 Jun 2016
4 min read
Save for later

Containerizing a Web Application with Docker Part 1

Congratulations, you’ve written a web application! Now what? Part one of this post deals with steps to take after development, more specifically the creation of a Docker image that contains the application. In part two, I’ll lay out deploying that image to the Google Cloud Platform as well as some further reading that'll help you descend into the rabbit hole that is DevOps. For demonstration purposes, let’s say that you’re me and you want to share your adventures in TrapRap and Death Metal (not simultaneously, thankfully!) with the world. I’ve written a simple Ember frontend for this purpose, and through the course of this post, I will explain to you how I go about containerizing it. Of course, the beauty of this procedure is that it will work with any frontend application, and you are certainly welcome to Bring Your Own Code. Everything I use is publicly available on GitHub, however, and you’re certainly welcome to work through this post with the material presented as well.

So, I’ve got this web app. You can get it here, or you can run:

$ git clone https://github.com/ndarwincorn/docker-demo.git

Do this for wherever it is you keep your source code. You’ll need ember-cli and some familiarity with Ember to customize it yourself, or you can just cut to the chase and build the Docker image, which is what I’m going to do in this post. I’m using Docker 1.10, but there’s no reason this wouldn’t work on a Mac running Docker Toolbox (or even Boot2Docker, but don’t quote me on that) or a less bleeding-edge Linux distro. Since installing Docker is well documented, I won’t get into that here and will continue with the assumption that you have a working, up-to-date Docker installed on your machine, and that the Docker daemon is running. If you’re working with your own app, feel free to skip below to my explanation of the process and then come back here when you’ve got a Dockerfile in the root of your application.

In the root of the application, run the following (make sure you don’t have any locally-installed web servers listening on port 80 already):

# docker build -t docker-demo .
# docker run -d -p 80:80 --name demo docker-demo

Once the command finishes by printing a container ID, launch a web browser and navigate to http://localhost. Hey! Now you can listen to my music served from an LXC container running on your very own computer. How did we accomplish this? Let’s take it piece-by-piece (here’s where to start reading again if you’ve approached this article with your own app):

I created a simple Dockerfile using the official Nginx image because I have a deep-seated mistrust of Canonical and don’t want to use the Dockerfile here. Here’s what it looks like in my project:

docker-demo/Dockerfile

FROM nginx
COPY dist /usr/share/nginx/html

Running the docker build command reads the Dockerfile and uses it to build a Docker image based on the nginx image. During the image build, it copies the contents of the dist folder in my project to /usr/share/nginx/html in the container, the directory that the image's nginx configuration serves from. The -t flag tells Docker to ‘tag’ (name) the image we’ve just created as ‘docker-demo’.

The docker run command takes that image and builds a container from it. The -d flag is short for ‘detach’: it runs the nginx command built into the image and leaves the container running. The -p flag maps a port on the host to a port in the container, and --name names the container for later reference.
The command should return a container ID that can be used to manipulate the container later. In part two, I’ll show you how to push the image we created to the Google Cloud Platform and then launch it as a container in a specially-purposed VM on their Compute Engine.

About the author

Darwin Corn is a Systems Analyst for the Consumer Direct Care Network. He is a mid-level professional with diverse experience in the Information Technology world.

article-image-managing-it-portfolio-using-troux-enterprise-architecture
Packt
12 Aug 2010
16 min read
Save for later

Managing the IT Portfolio using Troux Enterprise Architecture

(For more resources on Troux, see here.) Managing the IT Portfolio using Troux Enterprise Architecture Almost every company today is totally dependent on IT for day-to-day operations. Large companies literally spend billions on IT-related personnel, software, equipment, and facilities. However, do business leaders really know what they get in return for these investments? Upper management knows that a successful business model depends on information technology. Whether the company is focused on delivery of services or development of products, management depends on its IT team to deliver solutions that meet or exceed customer expectations. However, even though companies continue to invest heavily in various technologies, for most companies, knowing the return-on-investment in technology is difficult or impossible. When upper management asks where the revenues are for the huge investments in software, servers, networks, and databases, few IT professionals are able to answer. There are questions that are almost impossible to answer without guessing, such as: Which IT projects in the portfolio of projects will actually generate revenue? What are we getting for spending millions on vendor software? When will our data center run out of capacity? This article will explore how IT professionals can be prepared when management asks the difficult questions. By being prepared, IT professionals can turn conversations with management about IT costs into discussions about the value IT provides. Using consolidated information about the majority of the IT portfolio, IT professionals can work with business leaders to select revenue-generating projects, decrease IT expenses, and develop realistic IT plans. The following sections will describe what IT professionals can do to be ready with accurate information in response to the most challenging questions business leaders might ask. Management repositories IT has done a fine job of delivering solutions for years. However, pressure to deliver business projects quickly has created a mentality in most IT organizations of "just put it in and we will go back and do the clean up later." This has led to a layering effect where older "legacy" technology remains in place, while new technology is adopted. With this complex mix of legacy solutions and emerging technology, business leaders have a hard time understanding how everything fits together and what value is provided from IT investments. Gone are the days when the Chief Information Officer (CIO) could say "just trust me" when business people asked questions about IT spending. In addition, new requirements for corporate compliance combined with the expanding use of web-based solutions makes managing technology more difficult than ever. With the advent of Software-as-a-Service (SaaS) or cloud computing, the technical footprint, or ecosystem, of IT has extended beyond the enterprise itself. Virtualization of platforms and service-orientation adds to the mind-numbing mix of technologies available to IT. However, there are many systems available to help companies manage their technological portfolio. Unfortunately, multiple teams within the business and within IT see the problem of managing the IT portfolio differently. In many companies, there is no centralized effort to gather and store IT portfolio information. Teams with a need for IT asset information tend to purchase or build a repository specific to their area of responsibility. 
Some examples of these include: Business goals repository Change management database Configuration management database Business process management database Fixed assets database Metadata repository Project portfolio management database Service catalog Service registry While each of these repositories provides valuable information about IT portfolios, they are each optimized to meet a specific set of requirements. The following table shows the main types of information stored in each of these repositories along with a brief statement about its functional purpose: Repository Main content Main purpose Business goals Goal statements and assignments Documents business goals and who is responsible Change management database Change request tickets, application owners Captures change requests and who can authorize change Configuration management database Identifies actual hardware and software in use across the enterprise Supports Information Technology Infrastructure Library (ITIL) processes Business process management database Business processes, information flows, and process owners Used to develop applications and document business processes Fixed assets database Asset identifiers for hardware and software, asset life, purchase cost, and depreciation amounts Documents cost and depreciable life of IT assets Metadata repository Data about the company databases and files Documents the names, definitions, data types, and locations of the company data Project portfolio management database Project names, classifications, assignments, business value and scope Used to manage IT workload and assess value of IT projects to the business Service catalog Defines hardware and compatible software available for project use Used to manage hardware and software implementations assigned to the IT department Service registry Names and details of reusable software services Used to manage, control, and report on reusable software It is easy to see that while each of these repositories serves a specific purpose, none supports an overarching view across the others. For example, one might ask: How many SQL Server databases do we have installed and what hardware do they run on? To answer this question, IT managers would have to extract data from the metadata repository and combine it with data from the Configuration Management Database (CMDB). The question could be extended: How much will it cost in early expense write-offs if we retire the SQL Server DB servers into a new virtual grid of servers? To answer this question, IT managers need to determine not only how many servers host SQL Server, but how old they are, what they cost at purchase time, and how much depreciation is left on them. Now the query must span at least three systems (CMDB, fixed assets, and metadata repository). The accuracy of the answer will also depend on the relative validity of the data in each repository. There could be overlapping data in some, and outright errors in others. Changing the conversation When upper management asks difficult questions, they are usually interested in cost, risk management, or IT agility. Not knowing a great deal about IT, they are curious about why they need to spend millions on technology and what they get for their investments. The conversation ends up being primarily about cost and how to reduce expenses. This is not a good position to be in if you are running a support function like Enterprise Architecture. How can you explain IT investments in a way that management can understand? 
If you are not prepared with facts, management has no choice but to assume that costs are out of control and they can be reduced, usually by dramatic amounts. As a good corporate citizen, it is your job to help reduce costs. Like everyone in management, getting the most out of the company's assets is your responsibility. However, as we in IT know, it's just as important to be ready for changes in technology and to be on top of technology trends. As technology leaders, it is our job to help the company stay current through investments that may pay off in the future rather than show an immediate return. The following diagram shows various management functions and technologies that are used to manage the business of IT: The dimensions of these tools and processes span systems that run the business to change the business and from the ones using operational information to using strategic information. Various technologies that support data about IT assets are shown. These include: Business process analytics and management information Service-oriented architecture governance Asset-liability management Information technology systems management Financial management information Project portfolio and management information The key to changing the conversation about IT is having the ability to bring the information of these disciplines into a single view. The single view provides the ability to actually discuss IT in a strategic way. Gathering data and reporting on the actual metrics of IT, in a way business leaders can understand, supports strategic planning. The strategic planning process combined with fact-based metrics establishes credibility with upper management and promotes improved decision making on a daily basis. Troux Technologies Solving the IT-business communication problem has been difficult until recently. Troux Technologies (www.troux.com) developed a new open-architected repository and software solution, called the Troux Transformation Platform, to help IT manage the vast array of technology deployed within the company. Troux customers use the suite of applications and advanced integration platform within the product architecture to deliver bottom-line results. By locating where IT expenses are redundant, or out-of-step with business strategy, Troux customers experience significant cost savings. When used properly, the platform also supports improved IT efficiency, quicker response to business requirements, and IT risk reduction. In today's globally-connected markets, where shocks and innovations happen at an unprecedented rate, antiquated approaches to Strategic IT Planning and Enterprise Architecture have become a major obstruction. The inability of IT to plan effectively has driven business leaders to seek solutions available outside the enterprise. Using SaaS or Application Service Providers (ASPs) to meet urgent business objectives can be an effective means to meet short-term goals. However, to be complete, even these solutions usually require integration with internal systems. IT finds itself dealing with unspecified service-level requirements, developing integration architectures, and cleaning up after poorly planned activities by business leaders who don't understand what capabilities exist within the software running inside the company. A global leader in Strategic IT Planning and Enterprise Architecture software, Troux has created an Enterprise Architecture repository that IT can use to put itself at the center of strategic planning. 
Troux has been successful in implementing its repository at a number of companies. A partial list of Troux's customers can be found on the website. There are other enterprise-level repository vendors on the market. However, leading analysts, such as The Gartner Group and Forrester Research, have published recent studies ranking Troux as a leader in the IT strategy planning tools space. Troux Transformation Platform Troux's sophisticated integration and collaboration capabilities support multiple business initiatives such as handling mergers, aligning business and IT plans, and consolidating IT assets. The business-driven platform provides new levels of visibility into the complex web of IT resources, programs, and business strategy so the business can see instantly where IT spending and programs are redundant or out-of-step with business strategy. The business suite of applications helps IT to plan and execute faster with data assimilated from various trusted sources within the company. The platform provides information necessary to relevant stakeholders such as Business Analysts, Enterprise Architects, The Program Management Office, Solutions Architects, and executives within the business and IT. The transformation platform is not only designed to address today's urgent cost-restructuring agendas, but it also introduces an ongoing IT management discipline, allowing EA and business users to drive strategic growth initiatives. The integration platform provides visibility and control to: Uncover and fix business/IT disconnects: This shows how IT directly supports business strategies and capabilities, and ensures that mismatched spending can be eliminated. Troux Alignment helps IT think like a CFO and demonstrate control and business purpose for the billions that are spent on IT assets, by ensuring that all stakeholders have valid and relevant IT information. Identify and eliminate redundant IT spending: This uncovers the many untapped opportunities with Troux Optimization to free up needless spend, and apply it either to the bottom line or to support new business initiatives. Speed business response and simplify IT: This speeds the creation and deployment of a set of standard, reusable building blocks that are proven to work in agile business cycles. Troux Standards enables the use of IT standards in real time, thereby streamlining the process of IT governance. Accelerate business transformation for government agencies: This helps federal agencies create an actionable Enterprise Architecture and comply with constantly changing mandates. Troux eaGov automatically identifies opportunities to reduce costs to business and IT risks, while fostering effective initiative planning and execution within or across agencies. Support EA methodology: Companies adopting The Open Group Architecture Framework (TOGAF™) can use the Troux for TOGAF solution to streamline their efforts. Unlock the full potential of IT portfolio investment: Unifies Strategic IT Planning, EA, and portfolio project management through a common IT governance process. The Troux CA Clarity Connection enables the first bi-directional integration in the market between CA Clarity Project Portfolio Management (PPM) and the Troux EA repository for enhanced IT investment portfolio planning, analysis, and control. Understand your deployed IT assets: Using the out-of-the-box connection to HP's Universal Configuration Management Database (uCMDB), link software and hardware with the applications they support. 
All of these capabilities are enabled through an open-architected platform that provides uncomplicated data integration tools. The platform provides Architecture-modeling capabilities for IT Architects, an extensible database schema (or meta-model), and integration interfaces that are simple to automate and bring online with minimal programming efforts. Enterprise Architecture repository The Troux Transformation Platform acts as the consolidation point across all the various IT management databases and even some management systems outside the control of IT. By collecting data from across various areas, new insights are possible, leading to reductions in operating costs and improvements in service levels to the business. While it is possible to combine these using other products on the market or even develop a home-grown EA repository, Troux has created a very easy-to-use API for data collection purposes. In addition, Troux provides a database meta-model for the repository that is extensible. Meta-model extensibility makes the product adaptable to the other management systems across the company. Troux also supports a configurable user interface allowing for a customized view into the repository. This capability makes the catalog appear as if it were a part of the other control systems already in place at the company. Additionally, Troux provides an optional set of applications that support a variety of roles, out of the box, with no meta-model extensions or user interface configurations required. These include: Troux Standards: This application supports the IT technology standards and lifecycle governance process usually conducted by the Enterprise Architecture department. Troux Optimization: This application supports the Application portfolio lifecycle management process conducted by the Enterprise Program Management Office (EPMO) and/or Enterprise Architecture. Troux Alignment: This application supports the business and IT assets and application-planning processes conducted by IT Engineering, Corporate Finance, and Enterprise Architecture. Even these three applications that are available out-of-the-box from Troux can be customized by extending their underlying meta-models and customizing the user interface. The EA repository provides output that is viewable online. Standard reports are provided or custom reports can be developed as per the specific needs of the user community. Departments within or even outside of IT can use the customized views, standard reports, and custom reports to perform analyses. For example, the Enterprise Program Management Office (EPMO) can produce reports that link projects with business goals. The EPMO can review the project portfolio of the company to identify projects that do not support company goals. Decisions can be made about these projects, thereby stopping them, slowing them down, or completing them faster. Resources can be moved from the stopped or completed low-value projects to the higher-value projects, leading to increased revenue or reduced costs for the company. In a similar fashion, the Internal Audit department can check on the level of compliance to company IT standards or use the list of applications stored within the catalog to determine the best audit schedule to follow. Less time can be spent auditing applications with minimal impact on company operations or on applications and projects targeted as low value. 
Application development can use data from the catalog to understand the current capabilities of the existing applications of the company. As staff changes or "off-shore" resources are applied to projects, knowing what existing systems do in advance of a new project can save many hours of work. Information can be extracted from the EA repository directly into requirements documentation, which is always the starting point for new applications, as well as maintenance projects on existing applications. One study performed at a major financial services company showed that over 40% of project development time was spent in the upfront work of documenting and explaining current application capabilities to business sponsors of projects. By supplying development teams with lists of application capabilities early in the project life cycle, time to gather and document requirements can be reduced significantly. Of course, one of the biggest benefactors of the repository is the EA group. In most companies, EA's main charter is to be the steward of information about applications, databases, hardware, software, and network architecture. EA can perform analyses using the data from the repository leading to recommendations for changes by middle and upper management. In addition, EA is responsible for collecting, setting, and managing the IT standards for the company. The repository supports a single source for IT standards, whether they are internal or external standards. The standards portion of the repository can be used as the centerpiece of IT governance. The function of the Architecture Review Board (ARB) is fully supported by Troux Standards. Capacity Planning and IT Engineering functions will also gain substantially through the use of an EA repository. The useful life of IT assets can be analyzed to create a master plan for technical refresh or reengineering efforts. The annual spend on IT expenses can be reduced dramatically through increased levels of virtualization of IT assets, consolidation of platforms, and even consolidation of whole data centers. IT Engineering can review what is currently running across the company and recommend changes to reduce software maintenance costs, eliminate underutilized hardware, and consolidate federated databases. Lastly, IT Operations can benefit from a consolidated view into the technical footprint running at any point in time. Even when system availability service levels call for near-real-time error correction, it may take hours for IT Operations personnel to diagnose problems. They tend not to have a full understanding of what applications run on what servers, which firewalls support which networks, and which databases support which applications. Problem determination time can be reduced by providing accurate technical architecture information to those focused on keeping systems running and meeting business service-level requirements. Summary This article identified the problem IT has with understanding what technologies it has under management. While many solutions are in place in many companies to gain a better view into the IT portfolio, none are designed to show the impact of IT assets in the aggregate. Without the capabilities provided by an EA repository, IT management has a difficult time answering tough questions asked by business leaders. Troux Technologies offers a solution to this problem using the Troux Transformation Platform. 
The platform acts as a master metadata repository and becomes the focus of many efforts that IT may run to reduce significant costs and improve business service levels. Further resources on this subject: Troux Enterprise Architecture: Managing the EA function [article]

article-image-introduction-logging-tomcat-7
Packt
21 Mar 2012
9 min read
Save for later

Introduction to Logging in Tomcat 7

(For more resources on Apache, see here.) JULI Previous versions of Tomcat (till 5.x) use Apache common logging services for generating logs. A major disadvantage with this logging mechanism is that it can handle only single JVM configuration and makes it difficult to configure separate logging for each class loader for independent application. In order to resolve this issue, Tomcat developers have introduced a separate API for Tomcat 6 version, that comes with the capability of capturing each class loader activity in the Tomcat logs. It is based on java.util.logging framework. By default, Tomcat 7 uses its own Java logging API to implement logging services. This is also called as JULI. This API can be found in TOMCAT_HOME/bin of the Tomcat 7 directory structures (tomcat-juli.jar). The following screenshot shows the directory structure of the bin directory where tomcat-juli.jar is placed. JULI also provides the feature for custom logging for each web application, and it also supports private per-application logging configurations. With the enhanced feature of separate class loader logging, it also helps in detecting memory issues while unloading the classes at runtime. For more information on JULI and the class loading issue, please refer to http://tomcat.apache.org/tomcat-7.0-doc/logging.html and http://tomcat.apache.org/tomcat-7.0-doc/class-loader-howto.html respectively. Loggers, appenders, and layouts There are some important components of logging which we use at the time of implementing the logging mechanism for applications. Each term has its individual importance in tracking the events of the application. Let's discuss each term individually to find out their usage: Loggers:It can be defined as the logical name for the log file. This logical name is written in the application code. We can configure an independent logger for each application. Appenders: The process of generation of logs are handled by appenders. There are many types of appenders, such as FileAppender, ConsoleAppender, SocketAppender, and so on, which are available in log4j. The following are some examples of appenders for log4j: log4j.appender.CATALINA=org.apache.log4j.DailyRollingFileAppender log4j.appender.CATALINA.File=${catalina.base}/logs/catalina.out log4j.appender.CATALINA.Append=true log4j.appender.CATALINA.Encoding=UTF-8 The previous four lines of appenders define the DailyRollingFileAppender in log4j, where the filename is catalina.out . These logs will have UTF-8 encoding enabled. If log4j.appender.CATALINA.append=false, then logs will not get updated in the log files. # Roll-over the log once per day log4j.appender.CATALINA.DatePattern='.'dd-MM-yyyy'.log' log4j.appender.CATALINA.layout = org.apache.log4j.PatternLayout log4j.appender.CATALINA.layout.ConversionPattern = %d [%t] %-5p %c- %m%n The previous three lines of code show the roll-over of log once per day. Layout: It is defined as the format of logs displayed in the log file. The appender uses layout to format the log files (also called as patterns). The highlighted code shows the pattern for access logs: <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="%h %l %u %t &quot;%r&quot; %s %b" resolveHosts="false"/> Loggers, appenders, and layouts together help the developer to capture the log message for the application event. Types of logging in Tomcat 7 We can enable logging in Tomcat 7 in different ways based on the requirement. 
There are a total of five types of logging that we can configure in Tomcat, such as application, server, console, and so on. The following figure shows the different types of logging for Tomcat 7. These methods are used in combination with each other based on environment needs. For example, if you have issues where Tomcat services are not displayed, then console logs are very helpful to identify the issue, as we can verify the real-time boot sequence. Let's discuss each logging method briefly. Application log These logs are used to capture the application event while running the application transaction. These logs are very useful in order to identify the application level issues. For example, suppose your application performance is slow on a particular transition, then the details of that transition can only be traced in application log. The biggest advantage of application logs is we can configure separate log levels and log files for each application, making it very easy for the administrators to troubleshoot the application. Log4j is used in 90 percent of the cases for application log generation. Server log Server logs are identical to console logs. The only advantage of server logs is that they can be retrieved anytime but console logs are not available after we log out from the console. Console log This log gives you the complete information of Tomcat 7 startup and loader sequence. The log file is named as catalina.out and is found in TOMCAT_HOME/logs. This log file is very useful in checking the application deployment and server startup testing for any environment. This log is configured in the Tomcat file catalina.sh, which can be found in TOMCAT_HOME/bin. The previous screenshot shows the definition for Tomcat logging. By default, the console logs are configured as INFO mode. There are different levels of logging in Tomcat such as WARNING, INFORMATION, CONFIG, and FINE. The previous screenshot shows the Tomcat log file location, after the start of Tomcat services. The previous screenshot shows the output of the catalina.out file, where Tomcat services are started in 1903 ms. Access log Access logs are customized logs, which give information about the following: Who has accessed the application What components of the application are accessed Source IP and so on These logs play a vital role in traffic analysis of many applications to analyze the bandwidth requirement and also helps in troubleshooting the application under heavy load. These logs are configured in server.xml in TOMCAT_HOME/conf. The following screenshot shows the definition of access logs. You can customize them according to the environment and your auditing requirement. Let's discuss the pattern format of the access logs and understand how we can customize the logging format: <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="%h %l %u %t &quot;%r&quot; %s %b" resolveHosts="false"/> Class Name: This parameter defines the class name used for generation of logs. By default, Apache Tomcat 7 uses the org.apache.catalina.valves.AccessLogValve class for access logs. Directory: This parameter defines the directory location for the log file. All the log files are generated in the log directory—TOMCAT_HOME/logs—but we can customize the log location based on our environment setup and then update the directory path in the definition of access logs. 
Prefix: This parameter defines the prefix of the access log filename; by default, access log files are generated with the name localhost_access_log.yy-mm-dd.txt.

Suffix: This parameter defines the file extension of the log file. Currently it is in .txt format.

Pattern: This parameter defines the format of the log file. The pattern is a combination of values defined by the administrator, for example, %h = remote host address. The following screenshot shows the default log format for Tomcat 7. Access logs show the remote host address, date/time of the request, method used for the response, URI mapping, and HTTP status code. If you have installed a web traffic analysis tool for the application, you may have to change the access logs to a different format.

Host manager

These logs record the activity performed using Tomcat Manager, such as the various tasks performed, the status of applications, the deployment of applications, and the lifecycle of Tomcat. These configurations are done in logging.properties, which can be found in TOMCAT_HOME/conf. The previous screenshot shows the definition of the host, manager, and host-manager details. If you look at the definitions, each defines the log location, log level, and prefix of the filename. In logging.properties, we are defining file handlers and appenders using JULI. The log file for the manager looks similar to the following:

28 Jun, 2011 3:36:23 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:37:13 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:37:42 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: undeploy: Undeploying web application at '/sample'
28 Jun, 2011 3:37:43 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:42:59 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:43:01 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'
28 Jun, 2011 3:53:44 AM org.apache.catalina.core.ApplicationContext log
INFO: HTMLManager: list: Listing contexts for virtual host 'localhost'

Types of log levels in Tomcat 7

There are seven levels defined for Tomcat logging services (JULI). They can be set based on the application requirement. The following figure shows the sequence of log levels for JULI. Every log level in JULI has its own functionality. The following table shows the functionality of each log level in JULI:

SEVERE (highest): Captures exceptions and errors

WARNING: Warning messages

INFO: Informational messages related to server activity

CONFIG: Configuration messages

FINE: Detailed activity of server transactions (similar to debug)

FINER: More detailed logs than FINE

FINEST (least): Entire flow of events (similar to trace)

For example, let's take an appender from logging.properties and find out the log level used; the first log appender for localhost is using FINE as the log level, as shown in the following code snippet:

localhost.org.apache.juli.FileHandler.level = FINE
localhost.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
localhost.org.apache.juli.FileHandler.prefix = localhost.

The following code shows the default file handler configuration for logging in Tomcat 7 using JULI.
The properties are mentioned and log levels are highlighted: ############################################################ # Facility specific properties. # Provides extra control for each logger. ############################################################ org.apache.catalina.core.ContainerBase.[Catalina].[localhost].level = INFO org.apache.catalina.core.ContainerBase.[Catalina].[localhost].handlers = 2localhost.org.apache.juli.FileHandler org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager] .level = INFO org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager] .handlers = 3manager.org.apache.juli.FileHandler org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host- manager].level = INFO org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host- manager].handlers = 4host-manager.org.apache.juli.FileHandler
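Application code writes to these handlers through the standard java.util.logging API, so no Tomcat-specific classes are required. The following is a minimal illustrative sketch; the class name and messages are invented for the example and are not taken from this article:

import java.util.logging.Level;
import java.util.logging.Logger;

public class OrderService {

    // The logger name determines which JULI handler and level apply to these messages
    private static final Logger LOG = Logger.getLogger(OrderService.class.getName());

    public void placeOrder(String orderId) {
        LOG.info("Placing order " + orderId);             // written at the INFO level
        LOG.fine("Detailed trace for order " + orderId);  // written only if FINE or lower is enabled
        try {
            // ... business logic ...
        } catch (RuntimeException e) {
            LOG.log(Level.SEVERE, "Order " + orderId + " failed", e);
        }
    }
}

Whether the FINE message actually reaches a log file depends on the level configured for the handler that picks up this logger, as with the FINE appender shown earlier.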
article-image-running-your-applications-aws-part-2
Cheryl Adams
19 Aug 2016
6 min read
Save for later

Running Your Applications with AWS - Part 2

An active account with AWS means you are on your way to building in the cloud. Before you start building, you need to tackle Billing and Cost Management, under Account. It is likely that you are starting with the Free Tier, so it is important to know that you still have the option of paying for additional services. Also, if you decide to continue with AWS, you should get familiar with this page. This is not your average bill or invoice page—it is much more than that. The Billing & Cost Management Dashboard is a bird's-eye view of all of your account activity. Once you start accumulating pay-as-you-go services, this page will give you a quick review of your monthly spending based on services. Part of managing your cloud services includes billing, so it is a good idea to become familiar with this from the start. Amazon also gives you the option of setting up cost-based alerts for your system, which is essential if you want to be alerted to any excessive cost related to your cloud services. Budgets allow you to receive e-mailed notifications or alerts if spending exceeds the budget that you have created. If you want to dig in even deeper, try turning on the Cost Explorer for an analysis of your spending.

The Billing and Cost Management section of your account is much more than just invoices. It is AWS's complete cost management system for your cloud. Being familiar with all aspects of the cost management system will help you to monitor your cloud services and, hopefully, avoid any expenses that may exceed your budget.

In our previous discussion, we considered all AWS services. Let's take another look at the details of the services.

Amazon Web Services

Based on this illustration, you can see that the build options are grouped under headings such as Compute, Storage & Content Delivery, and Databases. Each of these objects or services lists a step-by-step routine that is easy to follow. Within the AWS site, there are numerous tutorials with detailed build instructions. If you are still exploring in the free tier, AWS also has an active online community of users who try to answer most questions.

Let's look at the build process for Amazon's EC2 virtual server. The first thing that you will notice is that Amazon provides 22 different Amazon Machine Images (AMIs) to choose from (at the time this post was written). At the top of the screen is a Step process that will guide you through the build. It should be noted that some of the images available are not defined as a part of the free-tier plan. The remaining images that do fit into the plan should fit almost any project need. For this walkthrough, let's select SUSE Linux (free tier eligible). It is important to note that just because the image itself is free, that does not mean all the options available within that image are free. Notice on this screen that Amazon has pre-selected the only free-tier option available for this image.

From this screen you are given two options: Review and Launch, or Next: Configure Instance Details. Let's try Review and Launch to see what occurs. Notice that our Step process advanced to Step 7. Amazon gives you a soft warning regarding the state of the build and potential risk. If you are okay with these risks, you can proceed and launch your server. It is important to note that the Amazon build process is user driven. It will allow you to build a server with these potential risks in your cloud. It is recommended that you carefully consider each screen before proceeding.
In this instance, select Previous, not Cancel, to return to Step 3. Selecting Cancel will stop the build process and return you to the AWS main services page. Until you actually launch your server, nothing is built or saved. There are information bubbles for each line in Step 3: Configure Instance Details. Review the content of each bubble, make any changes if needed, and then proceed to the next step. Select the storage size, and then select Next: Tag Instance. Enter your values and continue, or select Learn More for further information. Then select the Next: Configure Security Group button.
Security is an extremely important part of setting up your virtual server. It is recommended that you speak to your security administrator to determine the best option. For the source, it is recommended that you avoid using the Anywhere option; this selection will put your build at risk. Select My IP or Custom IP as shown. If you are following a self-study plan, you can select the Learn More link to determine the best option. Then select Next: Review and Launch. The full details of this screen can be expanded, reviewed, or edited. If everything appears to be okay, proceed to Launch. One additional screen will appear for adding Private and/or Public Keys to access your new server. Make the appropriate selection and proceed to Launch Instances to see the build process. You can access your new server from the EC2 Dashboard.
This example gives you a window into how the AWS build process works. The other objects and services have a similar step-through process. Once you have launched your server, you should be able to access it and proceed with your development. Additional details for development are also available through the site.
Amazon's Web Services platform is an all-in-one solution for your graduation to the cloud. Not only can you manage your technical environment, but you can also manage your budget. By setting up your virtual appliances and servers appropriately, you can maximize the value of the first 12 months of your free tier. Carefully monitoring activity through alerts and notifications will help you to avoid any billing surprises. Going through the tutorials and visiting the online community will only increase your knowledge of AWS. AWS is inviting everyone to test its services on this exciting platform, so I would definitely recommend taking advantage of it. Have fun!
About the author
Cheryl Adams is a senior cloud data and infrastructure architect in the healthcare data realm. She is also the co-author of Professional Hadoop by Wrox.

Financial Derivative – Options

Packt
22 May 2015
27 min read
In this article by Michael Heydt, author of Mastering pandas for Finance, we will examine working with options data provided by Yahoo! Finance using pandas. Options are a type of financial derivative and can be very complicated to price and use in investment portfolios. Because of their level of complexity, there have been many books written that are very heavy on the mathematics of options. Our goal will not be to cover the mathematics in detail but to focus on understanding several core concepts in options, retrieving options data from the Internet, manipulating it using pandas, including determining their value, and being able to check the validity of the prices offered in the market. (For more resources related to this topic, see here.) Introducing options An option is a contract that gives the buyer the right, but not the obligation, to buy or sell an underlying security at a specific price on or before a certain date. Options are considered derivatives as their price is derived from one or more underlying securities. Options involve two parties: the buyer and the seller. The parties buy and sell the option, not the underlying security. There are two general types of options: the call and the put. Let's look at them in detail: Call: This gives the holder of the option the right to buy an underlying security at a certain price within a specific period of time. They are similar to having a long position on a stock. The buyer of a call is hoping that the value of the underlying security will increase substantially before the expiration of the option and, therefore, they can buy the security at a discount from the future value. Put: This gives the option holder the right to sell an underlying security at a certain price within a specific period of time. A put is similar to having a short position on a stock. The buyer of a put is betting that the price of the underlying security will fall before the expiration of the option and they will, thereby, be able to gain a profit by benefitting from receiving the payment in excess of the future market value. The basic idea is that one side of the party believes that the underlying security will increase in value and the other believes it will decrease. They will agree upon a price known as the strike price, where they place their bet on whether the price of the underlying security finishes above or below this strike price on the expiration date of the option. Through the contract of the option, the option seller agrees to give the buyer the underlying security on the expiry of the option if the price is above the strike price (for a call). The price of the option is referred to as the premium. This is the amount the buyer will pay to the seller to receive the option. This price of an option depends upon many factors, of which the following are the primary factors: The current price of the underlying security How long the option needs to be held before it expires (the expiry date) The strike price on the expiry date of the option The interest rate of capital in the market The volatility of the underlying security There being an adequate interest between buyer and seller around the given option The premium is often established so that the buyer can speculate on the future value of the underlying security and be able to gain rights to the underlying security in the future at a discount in the present. 
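Before going further, it may help to see the role of the premium in code. The following is a small illustrative sketch, not taken from the article's dataset, and the strike and premium values are invented for the example; it simply shows that a call buyer's profit is the payoff at expiry minus the premium paid, so the buyer's maximum loss is the premium itself.

def call_profit(price_at_maturity, strike_price, premium):
    # Profit to the call buyer: payoff at expiry minus the premium paid upfront
    return max(0.0, price_at_maturity - strike_price) - premium

# Hypothetical numbers: a $5 premium on a $100 strike
for price in (90, 100, 104, 105, 120):
    print(price, call_profit(price, strike_price=100, premium=5.0))

Below the strike the buyer loses only the $5 premium; the position breaks even at $105 (the strike plus the premium) and becomes profitable above that.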
The holder of the option, known as the buyer, is not obliged to exercise the option on its expiration date, but the writer, also referred to as the seller, however, is obliged to buy or sell the instrument if the option is exercised. Options can provide a variety of benefits such as the ability to limit risk and the advantage of providing leverage. They are often used to diversify an investment portfolio to lower risk during times of rising or falling markets. There are four types of participants in an options market: Buyers of calls Sellers of calls Buyers of puts Sellers of puts Buyers of calls believe that the underlying security will exceed a certain level and are not only willing to pay a certain amount to see whether that happens, but also lose their entire premium if it does not. Their goal is that the resulting payout of the option exceeds their initial premium and they, therefore, make a profit. However, they are willing to forgo their premium in its entirety if it does not clear the strike price. This then becomes a game of managing the risk of the profit versus the fixed potential loss. Sellers of calls are on the other side of buyers. They believe the price will drop and that the amount they receive in payment for the premium will exceed any loss in the price. Normally, the seller of a call would already own the stock. They do not believe the price will exceed the strike price and that they will be able to keep the underlying security and profit if the underlying security stays below the strike by an amount that does not exceed the received premium. Loss is potentially unbounded as the stock increases in price above the strike price, but that is the risk for an upfront receipt of cash and potential gains on loss of price in the underlying instrument. A buyer of a put is betting that the price of the stock will drop beyond a certain level. By buying a put they gain the option to force someone to buy the underlying instrument at a fixed price. By doing this, they are betting that they can force the sale of the underlying instrument at a strike price that is higher than the market price and in excess of the premium that they pay to the seller of the put option. On the other hand, the seller of the put is betting that they can make an offer on an instrument that is perceived to lose value in the future. They will offer the option for a price that gives them cash upfront, and they plan that at maturity of the option, they will not be forced to purchase the underlying instrument. Therefore, it keeps the premium as pure profit. Or, the price of the underlying instruments drops only a small amount so that the price of buying the underlying instrument relative to its market price does not exceed the premium that they received. Notebook setup The examples in this article will be based on the following configuration in IPython: In [1]:    import pandas as pd    import numpy as np    import pandas.io.data as web    from datetime import datetime      import matplotlib.pyplot as plt    %matplotlib inline      pd.set_option('display.notebook_repr_html', False)    pd.set_option('display.max_columns', 7)    pd.set_option('display.max_rows', 15)    pd.set_option('display.width', 82)    pd.set_option('precision', 3) Options data from Yahoo! Finance Options data can be obtained from several sources. Publicly listed options are exchanged on the Chicago Board Options Exchange (CBOE) and can be obtained from their website. 
Through the DataReader class, pandas also provides built-in (although in the documentation referred to as experimental) access to options data. The following command reads all currently available options data for AAPL: In [2]:    aapl_options = web.Options('AAPL', 'yahoo') aapl_options = aapl_options.get_all_data().reset_index() This operation can take a while as it downloads quite a bit of data. Fortunately, it is cached so that subsequent calls will be quicker, and there are other calls to limit the types of data downloaded (such as getting just puts). For convenience, the following command will save this data to a file for quick reload at a later time. Also, it helps with repeatability of the examples. The data retrieved changes very frequently, so the actual examples in the book will use the data in the file provided with the book. It saves the data for later use (it's commented out for now so as not to overwrite the existing file). Here's the command we are talking about: In [3]:    #aapl_options.to_csv('aapl_options.csv') This data file can be reloaded with the following command: In [4]:    aapl_options = pd.read_csv('aapl_options.csv',                              parse_dates=['Expiry']) Whether from the Web or the file, the following command restructures and tidies the data into a format best used in the examples to follow: In [5]:    aos = aapl_options.sort(['Expiry', 'Strike'])[      ['Expiry', 'Strike', 'Type', 'IV', 'Bid',          'Ask', 'Underlying_Price']]    aos['IV'] = aos['IV'].apply(lambda x: float(x.strip('%'))) Now, we can take a look at the data retrieved: In [6]:    aos   Out[6]:            Expiry Strike Type     IV   Bid   Ask Underlying_Price    158 2015-02-27     75 call 271.88 53.60 53.85           128.79    159 2015-02-27     75 put 193.75 0.00 0.01           128.79    190 2015-02-27     80 call 225.78 48.65 48.80           128.79    191 2015-02-27     80 put 171.88 0.00 0.01           128.79    226 2015-02-27     85 call 199.22 43.65 43.80           128.79 There are 1,103 rows of options data available. The data is sorted by Expiry and then Strike price to help demonstrate examples. Expiry is the data at which the particular option will expire and potentially be exercised. We have the following expiry dates that were retrieved. Options typically are offered by an exchange on a monthly basis and within a short overall duration from several days to perhaps two years. In this dataset, we have the following expiry dates: In [7]:    aos['Expiry'].unique()   Out[7]:    array(['2015-02-26T17:00:00.000000000-0700',          '2015-03-05T17:00:00.000000000-0700',          '2015-03-12T18:00:00.000000000-0600',          '2015-03-19T18:00:00.000000000-0600',          '2015-03-26T18:00:00.000000000-0600',          '2015-04-01T18:00:00.000000000-0600',          '2015-04-16T18:00:00.000000000-0600',          '2015-05-14T18:00:00.000000000-0600',          '2015-07-16T18:00:00.000000000-0600',          '2015-10-15T18:00:00.000000000-0600',          '2016-01-14T17:00:00.000000000-0700',          '2017-01-19T17:00:00.000000000-0700'], dtype='datetime64[ns]') For each option's expiration date, there are multiple options available, split between puts and calls, and with different strike values, prices, and associated risk values. As an example, the option with the index 158 that expires on 2015-02-27 is for buying a call on AAPL with a strike price of $75. The price we would pay for each share of AAPL would be the bid price of $53.60. 
Option contracts typically cover 100 units of the underlying security, and, therefore, this would mean that this option would cost 100 x $53.60, or $5,360, upfront: In [8]:    aos.loc[158]   Out[8]:    Expiry             2015-02-27 00:00:00    Strike                               75    Type                              call    IV                                 272    Bid                               53.6    Ask                               53.9    Underlying_Price                   129    Name: 158, dtype: object This $5,360 does not buy us the 100 shares of AAPL. It gives us the right to buy 100 shares of AAPL on 2015-02-27 at $75 per share. We should only buy if the price of AAPL is above $75 on 2015-02-27. If not, we will have lost our premium of $5,360, and buying below that price would only increase our loss. Also, note that these quotes were retrieved on 2015-02-25. This specific option has only two days until it expires, and that has a huge effect on the pricing: We have paid $5,360 for the option to buy 100 shares of AAPL on 2015-02-27 if the price of AAPL is above $75 on that date. The price of AAPL when the option was priced was $128.79 per share. If we were to buy 100 shares of AAPL now, we would pay $12,879. If AAPL is above $75 on 2015-02-27, we can buy 100 shares for $7,500. There is not a lot of time between the quote and the Expiry of this option. With AAPL at $128.79, it is very likely that the price will be above $75 in two days. Therefore, in two days: We can walk away even if the price is $75 or above, but since we paid $5,360, we probably wouldn't want to do that. At $75 or above, we can force execution of the option, where we give the seller $7,500 and receive 100 shares of AAPL. If the price of AAPL is still $128.79 per share, then we will have bought $12,879 worth of AAPL for $7,500 + $5,360, or $12,860 in total. Technically, we will have saved $19 over two days! But only if the price didn't drop. If for some reason AAPL dropped below $75 in two days, we would have kept our loss to our premium of $5,360. This is not great, but if we had bought $12,879 of AAPL on 2015-02-25 and it dropped to $74.99 on 2015-02-27, we would have lost $12,879 - $7,499, or $5,380. So, we actually would have saved $20 in loss by buying the call option. It is interesting how this math works out. Excluding transaction fees, options are a zero-sum game: it comes down to how much risk is involved in the option versus your upfront premium and how the market moves. If you feel you know something, it can be quite profitable. Of course, it can also be devastatingly unprofitable. We will not examine the put side of this example; suffice it to say it works out similarly from the side of the seller. Implied volatility There is one more field in our dataset that we haven't looked at yet: implied volatility (IV). We won't get into the details of the mathematics of how it is calculated, but it reflects the amount of volatility that the market has factored into the option. This is different from historical volatility (typically the standard deviation of the previous year of returns). In general, it is informative to examine the IV relative to the strike price on a particular Expiry date.
The following command shows this in tabular form for calls on 2015-02-27: In [9]:    calls1 = aos[(aos.Expiry=='2015-02-27') & (aos.Type=='call')]    calls1[:5]   Out[9]:            Expiry Strike Type     IV   Bid   Ask Underlying_Price    158 2015-02-27     75 call 271.88 53.60 53.85           128.79    159 2015-02-27     75   put 193.75 0.00   0.01           128.79    190 2015-02-27     80 call 225.78 48.65 48.80           128.79    191 2015-02-27     80   put 171.88 0.00   0.01           128.79    226 2015-02-27     85 call 199.22 43.65 43.80           128.79 It appears that as the strike price approaches the underlying price, the implied volatility decreases. Plotting this shows it even more clearly: In [10]:    ax = aos[(aos.Expiry=='2015-02-27') & (aos.Type=='call')] \            .set_index('Strike')[['IV']].plot(figsize=(12,8))    ax.axvline(calls1.Underlying_Price.iloc[0], color='g'); The shape of this curve is important as it defines points where options are considered to be either in or out of the money. A call option is referred to as in the money when the option's strike price is below the market price of the underlying instrument. A put option is in the money when the strike price is above the market price of the underlying instrument. Being in the money does not mean that you will profit; it simply means that the option is worth exercising. Where and when an option is in or out of the money can be visualized by examining the shape of its implied volatility curve. Because of this curved shape, it is generally referred to as a volatility smile, as both ends tend to turn upwards, particularly if the curve has a uniform shape around its lowest point. This is demonstrated in the following graph, which shows the nature of in/out of the money for both puts and calls: A skew on the smile demonstrates relative demand that is greater toward the option being in or out of the money. When this occurs, the skew is often referred to as a smirk. Volatility smirks Smirks can be either reverse or forward. The following graph demonstrates a reverse skew, similar to what we have seen with our AAPL 2015-02-27 call: In a reverse-skew smirk, the volatility for options at lower strikes is higher than at higher strikes. This is the case with our AAPL options expiring on 2015-02-27, which means that the in-the-money calls and out-of-the-money puts are more expensive than out-of-the-money calls and in-the-money puts. A popular explanation for the manifestation of the reverse volatility skew is that investors are generally worried about market crashes and buy puts for protection. One piece of evidence supporting this argument is the fact that the reverse skew did not show up for equity options until after the crash of 1987. Another possible explanation is that in-the-money calls have become popular alternatives to outright stock purchases, as they offer leverage and, hence, increased ROI. This leads to greater demand for in-the-money calls and, therefore, increased IV at the lower strikes. The other variant of the volatility smirk is the forward skew. In the forward-skew pattern, the IV for options at the lower strikes is lower than the IV at higher strikes. This suggests that out-of-the-money calls and in-the-money puts are in greater demand compared to in-the-money calls and out-of-the-money puts: The forward-skew pattern is common for options in the commodities market. When supply is tight, businesses would rather pay more to secure supply than to risk supply disruption.
For example, if weather reports indicate a heightened possibility of an impending frost, fear of supply disruption will cause businesses to drive up demand for out-of-the-money calls for the affected crops. Calculating payoff on options The payoff of an option is a relatively straightforward calculation based upon the type of the option and is derived from the price of the underlying security on expiry relative to the strike price. The formula for the call option payoff is as follows: payoff = max(0, price at maturity - strike price). The formula for the put option payoff is as follows: payoff = max(0, strike price - price at maturity). We will model both of these functions and visualize their payouts. The call option payoff calculation An option gives the buyer of the option the right to buy (a call option) or sell (a put option) an underlying security at a point in the future and at a predetermined price. A call option is basically a bet on whether or not the price of the underlying instrument will exceed the strike price. Your bet is the price of the option (the premium). On the expiry date of a call, the value of the option is 0 if the strike price has not been exceeded. If it has been exceeded, its value is the difference between the market price of the underlying security and the strike price. The general value of a call option can be calculated with the following function: In [11]:    def call_payoff(price_at_maturity, strike_price):        return max(0, price_at_maturity - strike_price) When the price of the underlying instrument is below the strike price, the value is 0 (out of the money). This can be seen here: In [12]:    call_payoff(25, 30)   Out[12]:    0 When it is above the strike price (in the money), it will be the difference between the price and the strike price: In [13]:    call_payoff(35, 30)   Out[13]:    5 The following function returns a DataFrame object that calculates the return for an option over a range of maturity prices. It uses np.vectorize() to efficiently apply the call_payoff() function to each item in the array of maturity prices: In [14]:    def call_payoffs(min_maturity_price, max_maturity_price,                    strike_price, step=1):        maturities = np.arange(min_maturity_price,                              max_maturity_price + step, step)        payoffs = np.vectorize(call_payoff)(maturities, strike_price)        df = pd.DataFrame({'Strike': strike_price, 'Payoff': payoffs},                          index=maturities)        df.index.name = 'Maturity Price'        return df The following command demonstrates the use of this function to calculate the payoff of an underlying security at finishing prices ranging from 10 to 25 and with a strike price of 15: In [15]:    call_payoffs(10, 25, 15)   Out[15]:                    Payoff Strike    Maturity Price                  10                   0     15    11                   0     15    12                   0     15    13                   0     15    14                   0     15    ...               ...     ...    
21                   6     15    22                  7     15    23                   8     15    24                   9     15    25                 10     15      [16 rows x 2 columns] Using this result, we can visualize the payoffs using the following function: In [16]:    def plot_call_payoffs(min_maturity_price, max_maturity_price,                          strike_price, step=1):        payoffs = call_payoffs(min_maturity_price, max_maturity_price,                              strike_price, step)        plt.ylim(payoffs.Payoff.min() - 10, payoffs.Payoff.max() + 10)        plt.ylabel("Payoff")        plt.xlabel("Maturity Price")        plt.title('Payoff of call option, Strike={0}'                  .format(strike_price))        plt.xlim(min_maturity_price, max_maturity_price)        plt.plot(payoffs.index, payoffs.Payoff.values); The payoffs are visualized as follows: In [17]:    plot_call_payoffs(10, 25, 15) The put option payoff calculation The value of a put option can be calculated with the following function: In [18]:    def put_payoff(price_at_maturity, strike_price):        return max(0, strike_price - price_at_maturity) While the price of the underlying is below the strike price, the value is 0: In [19]:    put_payoff(25, 20)   Out[19]:    0 When the price is below the strike price, the value of the option is the difference between the strike price and the price: In [20]:    put_payoff(15, 20)   Out [20]:    5 This payoff for a series of prices can be calculated with the following function: In [21]:    def put_payoffs(min_maturity_price, max_maturity_price,                    strike_price, step=1):        maturities = np.arange(min_maturity_price,                              max_maturity_price + step, step)        payoffs = np.vectorize(put_payoff)(maturities, strike_price)       df = pd.DataFrame({'Payoff': payoffs, 'Strike': strike_price},                          index=maturities)        df.index.name = 'Maturity Price'        return df The following command demonstrates the values of the put payoffs for prices of 10 through 25 with a strike price of 25: In [22]:    put_payoffs(10, 25, 15)   Out [22]:                    Payoff Strike    Maturity Price                  10                   5     15    11                   4     15    12                   3     15    13                  2     15    14                   1     15    ...               ...     ...    
21                   0     15    22                   0     15    23                   0     15    24                   0     15    25                   0      15      [16 rows x 2 columns] The following function will generate a graph of payoffs: In [23]:    def plot_put_payoffs(min_maturity_price,                        max_maturity_price,                        strike_price,                        step=1):        payoffs = put_payoffs(min_maturity_price,                              max_maturity_price,                              strike_price, step)        plt.ylim(payoffs.Payoff.min() - 10, payoffs.Payoff.max() + 10)        plt.ylabel("Payoff")      plt.xlabel("Maturity Price")        plt.title('Payoff of put option, Strike={0}'                  .format(strike_price))        plt.xlim(min_maturity_price, max_maturity_price)        plt.plot(payoffs.index, payoffs.Payoff.values); The following command demonstrates the payoffs for prices between 10 and 25 with a strike price of 15: In [24]:    plot_put_payoffs(10, 25, 15) Summary In this article, we examined several techniques for using pandas to calculate the prices of options, their payoffs, and profit and loss for the various combinations of calls and puts for both buyers and sellers. Resources for Article: Further resources on this subject: Why Big Data in the Financial Sector? [article] Building Financial Functions into Excel 2010 [article] Using indexes to manipulate pandas objects [article]

Ninject Patterns and Anti-patterns

Packt
30 Sep 2013
7 min read
(For more resources related to this topic, see here.) Dependencies can be injected in a consumer class using different patterns and injecting them into a constructor is just one of them. While there are some patterns that can be followed for injecting dependencies, there are also some patterns that are recommended to be avoided, as they usually lead to undesirable results. In this article, we will examine only those patterns and antipatterns that are somehow relevant to Ninject features. Constructor Injection Constructor Injection is the most common and recommended pattern for injecting dependencies in a class. Generally this pattern should always be used as the primary injection pattern unless we have to use other ones. In this pattern, a list of all class dependencies should be introduced in the constructor. The question is what if the class has more than one constructor. Although Ninject's strategy for selecting constructor is customizable, its default behavior is selecting the constructor with more parameters, provided all of them are resolvable by Ninject. So, although in the following code the second constructor introduces more parameters, Ninject will select the first one if it cannot resolve IService2 and it will even use the default constructor if IService1 is not registered either. But if both dependencies are registered and resolvable, Ninject will select the second constructor because it has more parameters: public class Consumer { private readonly IService1 dependency1; private readonly IService2 dependency2; public Consumer(IService1 dependency1) { this.dependency1 = dependency1; } public Consumer(IService1 dependency1, IService2 dependency2) { this.dependency1 = dependency1; this.dependency2 = dependency2; } } If the preceding class had another constructor with two resolvable parameters, Ninject would throw an ActivationException exception notifying that several constructors had the same priority. There are two approaches to override this default behavior and explicitly select a constructor. The first approach is to indicate the desired constructor in a binding as follows: Bind<Consumer>().ToConstructor(arg => new Consumer(arg.Inject<IService1>())); In the preceding example, we explicitly selected the first constructor. Using the Inject<T> method that the arg argument provides, we requested Ninject to resolve IService1 in order to be injected into the specified constructor. The second method is to indicate the desired constructor using the [Inject] attribute: [Inject] public Consumer(IService1 dependency1) { this.dependency1 = dependency1; } In the preceding example, we applied the Ninject's [Inject] attribute on the first constructor to explicitly specify that we need to initialize the class by injecting dependencies into this constructor; even though the second constructor has more parameters and the default strategy of Ninject would be to select the second one. Note that applying this attribute on more than one constructor will result in the ActivationException. Ninject is highly customizable and it is even possible to substitute the default [Inject] attribute with another one, so that we don't need to add reference to the Ninject library from our consumer classes just because of an attribute: kernel.Settings.Set("InjectAttribute",typeof(MyAttribute)); Initializer methods and properties Apart from constructor injection, Ninject supports the injection of dependencies using initializer methods and property setters. 
We can specify as many methods and properties as required using the [Inject] attribute to inject dependencies. Although the dependencies will be injected to them as soon as the class is constructed, it is not possible to predict in which order they will receive their dependencies. The following code shows how to specify a property for injection: [Inject]public IService Service{ get { return dependency; } set { dependency = value; }} Here is an example of injecting dependencies using an injector method: [Inject]public void Setup(IService dependency){ this.dependency = dependency;} Note that only public members and constructors will be injected and even the internals will be ignored unless Ninject is configured to inject nonpublic members. In Constructor Injection, the constructor is a single point where we can consume all of the dependencies as soon as the class is activated. But when we use initializer methods the dependencies will be injected via multiple points in an unpredictable order, so we cannot decide in which method all of the dependencies will be ready to consume. In order to solve this problem, Ninject offers the IInitializable interface. This interface has an Initialize method which will be called once all of the dependencies have been injected: public class Consumer:IInitializable{ private IService1 dependency1; private IService2 dependency2; [Inject] public IService Service1 { get { return dependency1; } set { dependency1 = value; } } [Inject] public IService Service2 { get { return dependency2; } set { dependency2 = value; } } public void Initialize() { // Consume all dependencies here }} Although Ninject supports injection using properties and methods, Constructor Injection should be the superior approach. First of all, Constructor Injection makes the class more reusable, because a list of all class dependencies are visible, while in the initializer property or method the user of the class should investigate all of the class members or go through the class documentations (if any), to discover its dependencies. Initialization of the class is easier while using Constructor Injection because all the dependencies get injected at the same time and we can easily consume them at the same place where the constructor initializes the class. As we have seen in the preceding examples the only case where the backing fields could be readonly was in the Constructor Injection scenario. As the readonly fields are initializable only in the constructor, we need to make them writable to be able to use initializer methods and properties. This can lead to potential mutation of backing fields. Service Locator Service Locator is a design pattern introduced by Martin Fowler regarding which there have been some controversies. Although it can be useful in particular circumstances, it is generally considered as an antipattern and preferably should be avoided. Ninject can easily be misused as a Service Locator if we are not familiar to this pattern. The following example demonstrates misusing the Ninject kernel as a Service Locator rather than a DI container: public class Consumer{ public void Consume() { var kernel = new StandardKernel(); var depenency1 = kernel.Get<IService1>(); var depenency2 = kernel.Get<IService2>(); ... }} There are two significant downsides with the preceding code. The first one is that although we are using a DI container, we are not at all implementing DI. The class is tied to the Ninject kernel while it is not really a dependency of this class. 
This class and all of its prospective consumers will always have to drag their unnecessary dependency on the kernel object and Ninject library. On the other hand, the real dependencies of class (IService1 and IService2) are invisible from the consumers, and this reduces its reusability. Even if we change the design of this class to the following one, the problems still exist: public class Consumer{ private readonly IKernel kernel; public Consumer(IKernel kernel) { this.kernel = kernel; } public void Consume() { var depenency1 = kernel.Get<IService1>(); var depenency2 = kernel.Get<IService2>(); ... }} The preceding class still depends on the Ninject library while it doesn't have to and its actual dependencies are still invisible to its consumers. It can easily be refactored using the Constructor Injection pattern: public Consumer(IService1 dependency1, IService2 dependency2){ this.dependency1 = dependency1; this.dependency2 = dependency2;} Summary In this article we studied the most common DI patterns and anti-patterns related to Ninject. Resources for Article: Further resources on this subject: Introduction to JBoss Clustering [Article] Configuring Clusters in GlassFish [Article] Designing Secure Java EE Applications in GlassFish [Article]

Java Server Faces (JSF) Tools

Packt
07 Oct 2009
5 min read
Throughout this article, we will follow the "learning by example" technique, and we will develop a completely functional JSF application that will represent a JSF registration form as you can see in the following screenshot. These kinds of forms can be seen on many sites, so it will be very useful to know how to develop them with JSF Tools. The example consists of a simple JSP page, named register.jsp, containing a JSF form with four fields (these fields will be mapped into a managed bean), as follows: personName – this is a text field for the user's name personAge – this is a text field for the user's age personPhone – this is a text field for the user's phone number personBirthDate – this is a text field for the user's birth date The information provided by the user will be properly converted and validated using JSF capabilities. Afterwards, the information will be displayed on a second JSP page, named success.jsp. Overview of JSF Java Server Faces (JSF) is a popular framework used to develop User Interfaces (UI) for server-based applications (it is also known as JSR 127 and is a part of J2EE 1.5). It contains a set of UI components totally managed through JSF support, like handling events, validation, navigation rules, internationalization, accessibility, customizability, and so on. In addition, it contains a tag library for expressing UI components within a JSP page. Among JSF features, we mention the ones that JSF provides: A set of base UI components Extension of the base UI components to create additional UI component libraries Custom UI components Reusable UI components Read/write application date to and from UI components Managing UI state across server requests Wiring client-generated events to server-side application code Multiple rendering capabilities that enable JSF UI components to render themselves differently depending on the client type JSF is tool friendly JSF is implementation agnostic Abstract away from HTTP, HTML, and JSP Speaking of JSF life cycle, you should know that every JSF request that renders a JSP involves a JSF component tree, also called a view. Each request is made up of phases. By standard, we have the following phases (shown in the following figure): The restore view is built Request values are applied Validations are processed Model values are updated The applications is invoked A response is rendered During the above phases, the events are notified to event listeners We end this overview with a few bullets regarding JSF UI components: A UI component can be anything from a simple to a complex user interface (for example, from a simple text field to an entire page) A UI component can be associated to model data objects through value binding A UI component can use helper objects, like converters and validators A UI component can be rendered in different ways, based on invocation A UI component can be invoked directly or from JSP pages using JSF tag libraries Creating a JSF project stub In this section you will see how to create a JSF project stub with JSF Tools. This is a straightforward task that is based on the following steps: From the File menu, select New | Project option. In the New Project window, expand the JBoss Tools Web node and select the JSF Project option (as shown in the following screenshot). After that, click the Next button. 
In the next window, it is mandatory to specify a name for the new project (Project Name field), a location (Location field—only if you don't use the default path), a JSF implementation (JSF Environment field), and a template (Template field). As you can see, JSF Tools offers a set of predefined templates as follows: JSFBlankWithLibs—this is a blank JSF project with complete JSF support. JSFKickStartWithLibs—this is a demo JSF project with complete JSF support. JSFKickStartWithoutLibs—this is a demo JSF project without JSF support. In this case, the JSF libraries are missing for avoiding the potential conflicts with the servers that already offer JSF support. JSFBlankWithoutLibs—this is a blank JSF project without JSF support. In this case, the JSF libraries are missing for avoiding the potential conflicts with the servers that already offer JSF support (for example, JBoss AS includes JSF support). The next screenshot is an example of how to configure our JSF project at this step. At the end, just click the Next button: This step allows you to set the servlet version (Servlet Version field), the runtime (Runtime field) used for building and compiling the application, and the server where the application will be deployed (Target Server field). Note that this server is in direct relationship with the selected runtime. The following screenshot shows an example of how to complete this step (click on the Finish button): After a few seconds, the new JSF project stub is created and ready to take shape! You can see the new project in the Package Explorer view.

Getting Started with Code::Blocks

Packt
28 Oct 2013
7 min read
(For more resources related to this topic, see here.) Why Code::Blocks? Before we go on learning more about Code::Blocks let us understand why we shall use Code::Blocks over other IDEs. It is a cross-platform Integrated Development Environment (IDE). It supports Windows, Linux, and Mac operating system. It supports GCC compiler and GNU debugger on all supported platforms completely. It supports numerous other compilers to various degrees on multiple platforms. It is scriptable and extendable. It comes with several plugins that extend its core functionality. It is lightweight on resources and doesn't require a powerful computer to run it. Finally, it is free and open source. Installing Code::Blocks on Windows Our primary focus of this article will be on Windows platform. However, we'll touch upon other platforms wherever possible. Official Code::Blocks binaries are available from www.codeblocks.org. Perform the following steps for successful installation of Code::Blocks: For installation on Windows platform download codeblocks-12.11mingw-setup.exe file from http://www.codeblocks.org/downloads/26 or from sourceforge mirror http://sourceforge.net/projects/codeblocks/files/Binaries/12.11/Windows/codeblocks-12.11mingw-setup.exe/download and save it in a folder. Double-click on this file and run it. You'll be presented with the following screen: As shown in the following screenshot click on the Next button to continue. License text will be presented. The Code::Blocks application is licensed under GNU GPLv3 and Code::Blocks SDK is licensed under GNU LGPLv3. You can learn more about these licenses at this URL—https://www.gnu.org/licenses/licenses.html. Click on I Agree to accept the License Agreement. The component selection page will be presented in the following screenshot: You may choose any of the following options: Default install: This is the default installation option. This will install Code::Block's core components and core plugins. Contrib Plugins: Plugins are small programs that extend Code::Block's functionality. Select this option to install plugins contributed by several other developers. C::B Share Config: This utility can copy all/parts of configuration file. MinGW Compiler Suite: This option will install GCC 4.7.1 for Windows. Select Full Installation and click on Next button to continue. As shown in the following screenshot installer will now prompt to select installation directory: You can install it to default installation directory. Otherwise choose Destination Folder and then click on the Install button. Installer will now proceed with installation. As shown in the following screenshot Code::Blocks will now prompt us to run it after the installation is completed: Click on the No button here and then click on the Next button. Installation will now be completed: Click on the Finish button to complete installation. A shortcut will be created on the desktop. This completes our Code::Blocks installation on Windows. Installing Code::Blocks on Linux Code::Blocks runs numerous Linux distributions. In this section we'll learn about installation of Code::Blocks on CentOS linux. CentOS is a Linux distro based on Red Hat Enterprise Linux and is a freely available, enterprise grade Linux distribution. Perform the following steps to install Code::Blocks on Linux OS: Navigate to Settings | Administration | Add/Remove Software menu option. Enter wxGTK in the Search box and hit the Enter key. As of writing wxGTK-2.8.12 is the latest wxWidgets stable release available. 
Select it and click on the Apply button to install wxGTK package via the package manager, as shown in the following screenshot. Download packages for CentOS 6 from this URL—http://www.codeblocks.org/downloads/26. Unpack the .tar.bz2 file by issuing the following command in shell: tar xvjf codeblocks-12.11-1.el6.i686.tar.bz2 Right-click on the codeblocks-12.11-1.el6.i686.rpm file as shown in the following screenshot and choose the Open with Package Installer option. The following window will be displayed. Click on the Install button to begin installation, as shown in the following screenshot: You may be asked to enter the root password if you are installing it from a user account. Enter the root password and click on the Authenticate button. Code::Blocks will now be installed. Repeat steps 4 to 6 to install other rpm files. We have now learned to install Code::Blocks on the Windows and Linux platforms. We are now ready for C++ development. Before doing that we'll learn about the Code::Blocks user interface. First run On the Windows platform navigate to the Start | All Programs | CodeBlocks | CodeBlocks menu options to launch Code::Blocks. Alternatively you may double-click on the shortcut displayed on the desktop to launch Code::Blocks, as in the following screenshot: On Linux navigate to Applications | Programming | Code::Blocks IDE menu options to run Code::Blocks. Code::Blocks will now ask the user to select the default compiler. Code::Blocks supports several compilers and hence, is able to detect the presence of other compilers. The following screenshot shows that Code::Blocks has detected GNU GCC Compiler (which was bundled with the installer and has been installed). Click on it to select and then click on Set as default button, as shown in the following screenshot: Do not worry about the items highlighted in red in the previous screenshot. Red colored lines indicate Code::Blocks was unable to detect the presence of a particular compiler. Finally, click on the OK button to continue with the loading of Code::Blocks. After the loading is complete the Code::Blocks window will be shown. The following screenshot shows main window of Code::Blocks. Annotated portions highlight different User Interface (UI) components: Now, let us understand more about different UI components: Menu bar and toolbar: All Code::Blocks commands are available via menu bar. On the other hand toolbars provide quick access to commonly used commands. Start page and code editors: Start page is the default page when Code::Blocks is launched. This contains some useful links and recent project and file history. Code editors are text containers to edit C++ (and other language) source files. These editors offer syntax highlighting—a feature that highlights keywords in different colors. Management pane: This window shows all open files (including source files, project files, and workspace files). This pane is also used by other plugins to provide additional functionalities. In the preceding screenshot FileManager plugin is providing a Windows Explorer like facility and Code Completion plugin is providing details of currently open source files. Log windows: Log messages from different tools, for example, compiler, debugger, document parser, and so on, are shown here. This component is also used by other plugins. Status bar: This component shows various status information of Code::Blocks, for example, file path, file encoding, line numbers, and so on. 
Introduction to important toolbars Toolbars provide easier access to different functions of Code::Blocks. Amongst the several toolbars following ones are most important. Main toolbar The main toolbar holds core component commands. From left to right there are new file, open file, save, save all, undo, redo, cut, copy, paste, find, and replace buttons. Compiler toolbar The compiler toolbar holds commonly used compiler related commands. From left to right there are build, run, build and run, rebuild, stop build, build target buttons. Compilation of C++ source code is also called a build and this terminology will be used throughout the article. Debugger toolbar The debugger toolbar holds commonly used debugger related commands. From left to right there are debug/continue, run to cursor, next line, step into, step out, next instruction, step into instruction, break debugger, stop debugger, debugging windows, and various info buttons. Summary In this article we have learned to download and install Code::Blocks. We also learnt about different interface elements. Resources for Article: Further resources on this subject: OpenGL 4.0: Building a C++ Shader Program Class [Article] Application Development in Visual C++ - The Tetris Application [Article] Building UI with XAML for Windows 8 Using C [Article]