
How-To Tutorials - Application Development

357 Articles

Oracle Web RowSet - Part 1

Packt | 22 Oct 2009 | 6 min read
The ResultSet interface requires a persistent connection with a database to invoke the insert, update, and delete row operations on the database table data. The RowSet interface extends the ResultSet interface and is a container for tabular data that may operate without being connected to the data source. Thus, the RowSet interface reduces the overhead of a persistent connection with the database. In J2SE 5.0, five new implementations of RowSet—JdbcRowSet, CachedRowSet, WebRowSet, FilteredRowSet, and JoinRowSet—were introduced. The WebRowSet interface extends the RowSet interface and is the XML document representation of a RowSet object. A WebRowSet object represents a set of fetched database table rows, which may be modified without being connected to the database.

Support for Oracle Web RowSet is a new feature in the Oracle Database 10g driver. Oracle Web RowSet removes the requirement for a persistent connection with the database. A connection is required only for retrieving data from the database with a SELECT query and for updating data in the database after all the required row operations on the retrieved data have been performed. Oracle Web RowSet is used for queries and modifications on the data retrieved from the database. Oracle Web RowSet, as an XML document representation of a RowSet, facilitates the transfer of data. In the Oracle Database 10g and 11g JDBC drivers, Oracle Web RowSet is implemented in the oracle.jdbc.rowset package. The OracleWebRowSet class represents an Oracle Web RowSet. The data in the Web RowSet may be modified without connecting to the database. The database table may be updated with the OracleWebRowSet class after the modifications to the Web RowSet have been made. A database JDBC connection is required only for retrieving data from the database and for updating the database. An XML document representation of the data in a Web RowSet may be obtained for data exchange.

In this article, the Web RowSet feature in the Oracle 10g database JDBC driver is implemented in JDeveloper 10g. An example Web RowSet will be created from a database. The Web RowSet will be modified and stored in the database table. In this article, we will learn the following:

- Creating an Oracle Web RowSet object
- Adding a row to Oracle Web RowSet
- Modifying the database table with Web RowSet

In the second half of the article, we will cover the following:

- Reading a row from Oracle Web RowSet
- Updating a row in Oracle Web RowSet
- Deleting a row from Oracle Web RowSet
- Updating the database table with the modified Oracle Web RowSet

Setting the Environment

We will use an Oracle database to generate an updatable OracleWebRowSet object. Therefore, install Oracle database 10g including the sample schemas. Connect to the database with the OE schema:

    SQL> CONNECT OE/<password>

Create an example database table, Catalog, with the following SQL script:

    CREATE TABLE OE.Catalog(Journal VARCHAR(25), Publisher VARCHAR(25),
      Edition VARCHAR(25), Title VARCHAR(45), Author VARCHAR(25));
    INSERT INTO OE.Catalog VALUES('Oracle Magazine', 'Oracle Publishing',
      'July-August 2005', 'Tuning Undo Tablespace', 'Kimberly Floss');
    INSERT INTO OE.Catalog VALUES('Oracle Magazine', 'Oracle Publishing',
      'March-April 2005', 'Starting with Oracle ADF', 'Steve Muench');

Configure JDeveloper 10g for the Web RowSet implementation. Create a project in JDeveloper: select File | New | General | Application. In the Create Application window, specify an Application Name and click on Next. In the Create Project window, specify a Project Name and click on Next.
A project is added in the Applications Navigator. Next, we will set the project libraries. Select Tools | Project Properties and, in the Project Properties window, select Libraries | Add Library to add a library. Add the Oracle JDBC library to the project libraries. If a version of the Oracle JDBC drivers prior to the Oracle Database 10g (R2) JDBC drivers is used, create a library from the Oracle Web RowSet implementation classes JAR file: C:\JDeveloper\10.1.3\jdbc\lib\ocrs12.jar. The ocrs12.jar is required only for JDBC drivers prior to the Oracle Database 10g (R2) JDBC drivers. In the Oracle Database 10g (R2) JDBC drivers, the Oracle RowSet implementation classes are packaged in ojdbc14.jar. In the Oracle Database 11g JDBC drivers, the Oracle RowSet implementation classes are packaged in ojdbc5.jar and ojdbc6.jar.

In the Add Library window, select the User node and click on New. In the Create Library window, specify a Library Name, select the Class Path node, and click on Add Entry. Add an entry for ocrs12.jar. As Web RowSet was introduced in J2SE 5.0, if J2SE 1.4 is being used we also need to add an entry for the RowSet implementations JAR file, rowset.jar. Download the JDBC RowSet Implementations 1.0.1 zip file, jdbc_rowset_tiger-1_0_1-mrel-ri.zip, from http://java.sun.com/products/jdbc/download.html#rowset1_0_1 and extract the JDBC RowSet zip file to a directory. Click on OK in the Create Library window. Click on OK in the Add Library window. A library for the Web RowSet application is added.

Now configure an OC4J data source. Select Tools | Embedded OC4J Server Preferences. A data source may be configured globally or for the current workspace. If a global data source is created using Global | Data Sources, the data source is configured in the C:\JDeveloper\10.1.3\jdev\system\oracle.j2ee.10.1.3.36.73\embedded-oc4j\config\data-sources.xml file. If a data source is configured for the current workspace using Current Workspace | Data Sources, the data source is configured in the application's data-sources.xml file. For example, the data source file for the WebRowSetApp application is WebRowSetApp-data-sources.xml. In the Embedded OC4J Server Preferences window, configure either a global data source or a data source in the current workspace. A global data source definition is available to all applications deployed in the OC4J server instance. A managed-data-source element is added to the data-sources.xml file:

    <managed-data-source name='OracleDataSource'
        connection-pool-name='Oracle Connection Pool'
        jndi-name='jdbc/OracleDataSource'/>
    <connection-pool name='Oracle Connection Pool'>
      <connection-factory factory-class='oracle.jdbc.pool.OracleDataSource'
          user='OE' password='pw'
          url="jdbc:oracle:thin:@localhost:1521:ORCL">
      </connection-factory>
    </connection-pool>

Add a JSP, GenerateWebRowSet.jsp, to the WebRowSet project. Select File | New | Web Tier | JSP | JSP. Click on OK. Select J2EE 1.3 or J2EE 1.4 in the Web Application window and click on Next. In the JSP File window, specify a File Name and click on Next. Select the default settings in the Error Page Options page and click on Next. Select the default settings in the Tag Libraries window and click on Next. Select the default options in the HTML Options window and click on Next. Click on Finish in the Finish window.
Next, configure the web.xml deployment descriptor to include a reference to the data source resource configured in the data-sources.xml file, as shown in the following listing:

    <resource-ref>
      <res-ref-name>jdbc/OracleDataSource</res-ref-name>
      <res-type>javax.sql.DataSource</res-type>
      <res-auth>Container</res-auth>
    </resource-ref>
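With the data source and resource reference in place, the JSP can obtain a connection and populate a Web RowSet. The following is a minimal sketch, not taken from the article, that assumes the JNDI name and Catalog table defined above; it uses the standard javax.sql.rowset.WebRowSet operations (setCommand, execute, writeXml, acceptChanges) through the OracleWebRowSet class:

    import java.io.StringWriter;
    import java.sql.Connection;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import oracle.jdbc.rowset.OracleWebRowSet;

    // Hypothetical helper, sketched for use inside GenerateWebRowSet.jsp
    public class WebRowSetSketch {
        public static String fetchCatalogAsXml() throws Exception {
            // Look up the data source declared in data-sources.xml/web.xml
            InitialContext ctx = new InitialContext();
            DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/OracleDataSource");

            OracleWebRowSet webRowSet = new OracleWebRowSet();
            webRowSet.setCommand("SELECT * FROM OE.Catalog");
            Connection conn = ds.getConnection();
            try {
                webRowSet.execute(conn);   // connection used only for the SELECT
            } finally {
                conn.close();              // disconnected from here on
            }

            // Rows may now be inserted, updated, or deleted offline;
            // writeXml produces the XML document representation
            StringWriter xml = new StringWriter();
            webRowSet.writeXml(xml);
            return xml.toString();
        }
    }

A later call to webRowSet.acceptChanges(conn) would reconnect only long enough to propagate any pending row modifications back to the Catalog table.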


Control templates, Visual State Manager, and Event Handlers in Silverlight 4

Packt | 16 Apr 2010 | 7 min read
Skinning a control

So far, you've seen that while styles can change the look of a control, they can only go so far. No matter how many changes we make, the buttons still look like old-fashioned buttons. Surely, there must be a way to customize a control further to match our creative vision. There is a way; it's called skinning. Controls in Silverlight are extremely flexible and customizable. This flexibility stems from the fact that controls have both a VisualTree and a LogicalTree. The VisualTree deals with all the visual elements in a control, while the LogicalTree deals with all the logical elements. All controls in Silverlight come with a default template, which defines what a control should look like. You can easily override this default template by redefining a control's visual tree with a custom one. Designers can either work directly with XAML in Blend or use a design tool that supports exporting to XAML. Expression Design is one such tool. You can also import artwork from Adobe Illustrator and Adobe Photoshop from within Blend. In our scenario, let us pretend that there is a team of graphic designers. From time to time the graphic designers will provide us with visual elements and, if we're lucky, snippets of XAML. In this case, the designers have sent us the XAML for a rectangle and gradient for us to base our control on:

    <Rectangle Stroke="#7F646464" Height="43" Width="150"
        StrokeThickness="2" RadiusX="15" RadiusY="15" VerticalAlignment="Top">
      <Rectangle.Fill>
        <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
          <GradientStop Color="#FFEE9D9D" Offset="0.197"/>
          <GradientStop Color="#FFFF7D7D" Offset="0.847"/>
          <GradientStop Color="#FFF2DADA" Offset="0.066"/>
          <GradientStop Color="#FF7E4F4F" Offset="1"/>
        </LinearGradientBrush>
      </Rectangle.Fill>
    </Rectangle>

After inputting the above XAML, you will be presented with this image. We need to make this rectangle the template for our buttons.

Time for action – Skinning a control

We're going to take the XAML snippet above and skin our buttons with it. In order to achieve this we will need to do the following:

1. Open up the CakeNavigationButtons project in Blend.
2. In the MainPage.xaml file, switch to XAML view, either by clicking the XAML button on the upper-right corner of the art board or choosing View | Active Document View | XAML from the menu bar.
3. Type in the following XAML after the closing tag for the StackPanel (</StackPanel>):

    <Rectangle Stroke="#7F646464" Height="43" Width="150"
        StrokeThickness="2" RadiusX="15" RadiusY="15" VerticalAlignment="Top">
      <Rectangle.Fill>
        <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
          <GradientStop Color="#FFEE9D9D" Offset="0.197"/>
          <GradientStop Color="#FFFF7D7D" Offset="0.847"/>
          <GradientStop Color="#FFF2DADA" Offset="0.066"/>
          <GradientStop Color="#FF7E4F4F" Offset="1"/>
        </LinearGradientBrush>
      </Rectangle.Fill>
    </Rectangle>

4. Switch back to Design view, either by clicking on the appropriate button on the upper-right corner of the art board or choosing View | Active Document View | Design View from the menu bar.
5. Right-click on the rectangle and click on Make Into Control.
6. In the dialog box, choose Button, change the Name (Key) field to navButtonStyle, and click OK.
7. You are now in template editing mode. There are two on-screen indicators that you are in this mode: one is the Objects and Timeline tab, and the other is the MainControl.xaml breadcrumb at the top of the art board.
8. Click on the up button to exit template editing mode.
9. Delete the button that our Rectangle was converted into.
10. Select all the buttons in the StackPanel by clicking on the first one and then Shift+clicking on the last one.
11. With all the buttons selected, go to the Properties tab and type Style into the search box. Using the techniques you've learned in this chapter, change the style to navButtonStyle. The result is still not quite what we're looking for, but it's close. We need to increase the font size again; fortunately, we know how easy that is in Blend.
12. Click on one of the buttons and choose Object | Edit Style | Edit Current from the menu bar to get into style editing mode. Make note of all the visual indicators.
13. In the Properties tab, change the FontSize to 18, the Cursor to Hand, the Height to 45, and the Width to 200. You should see the changes immediately. The cursor change will only be noticeable at run time.
14. Exit the template editing mode.
15. There is a slight problem with the last button; the font is a little too large. Click on the button and use the Properties tab to change the FontSize to 12.
16. Run the project. Run your mouse over the buttons. The button no longer reacts when you mouse over it; we'll fix that next.

What just happened?

We just took a plain old button and turned it into something a little more in line with the graphic designers' vision. But how did we do it?

When in doubt, look at the XAML

The nice thing about Silverlight is that you can always take a look at the XAML to get a better understanding of what's going on. There are many places where things can "hide" in a tool like Blend or even Visual Studio. The raw naked XAML, however, bares all. For starters, we took a chunk of XAML and, using Blend, told Silverlight that we wanted to "take control" over how this button looks. This data was encapsulated into a Style and we told all our buttons to use our new style. When the new style was created, we lost some of our formatting data. We then inserted it back in and added a few more properties. If you're really curious to see what's going on, let's take a closer look at the XAML that Blend just generated for us:

    <Style TargetType="Button">
      <Setter Property="FontSize" Value="18.667"/>
      <Setter Property="Background" Value="Red"/>
      <Setter Property="FontStyle" Value="Italic"/>
      <Setter Property="FontWeight" Value="Bold"/>
      <Setter Property="Cursor" Value="Hand"/>
      <Setter Property="Margin" Value="5"/>
    </Style>
    <Style x:Key="smallerTextStyle" TargetType="Button">
      <Setter Property="FontSize" Value="9"/>
    </Style>
    <Style x:Key="navButtonStyle" TargetType="Button">
      <Setter Property="Template">
        <Setter.Value>
          <ControlTemplate TargetType="Button">
            <Grid>
              <Rectangle RadiusY="15" RadiusX="15" Stroke="#7F646464" StrokeThickness="2">
                <Rectangle.Fill>
                  <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                    <GradientStop Color="#FFEE9D9D" Offset="0.197"/>
                    <GradientStop Color="#FFFF7D7D" Offset="0.847"/>
                    <GradientStop Color="#FFF2DADA" Offset="0.066"/>
                    <GradientStop Color="#FF7E4F4F" Offset="1"/>
                  </LinearGradientBrush>
                </Rectangle.Fill>
              </Rectangle>
              <ContentPresenter
                  HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}"
                  VerticalAlignment="{TemplateBinding VerticalContentAlignment}"/>
            </Grid>
          </ControlTemplate>
        </Setter.Value>
      </Setter>
      <Setter Property="FontSize" Value="24"/>
      <Setter Property="Cursor" Value="Hand"/>
      <Setter Property="Height" Value="45"/>
      <Setter Property="Width" Value="200"/>
    </Style>

You'll immediately notice how verbose XAML can be.
We've not done a great deal of work, yet we've generated a lot of XAML. This is where a tool like Blend really saves us all those keystrokes. The next thing you'll see is that we're actually setting the Template property inside of a Setter node of a Style definition. It's not until toward the end of the Style definition that we see the Rectangle with which we started. There's also a lot of code here devoted to something called the Visual State Manager. Prior to us changing the control's template, you'll remember that when you moved your mouse over any of the buttons, they reacted by changing color. This was nice, subtle feedback for the user. Now that it's gone, we really miss it, and so will our users. If you carefully study the XAML, it should come as no surprise that the button doesn't do anything other than just sit there: we've not defined anything for any of the states listed here. The nodes are blank. Let's do that now.
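As a hedged sketch of where this is heading (not the book's own solution, which follows in the chapter), a MouseOver state could be filled in with a short Storyboard. This assumes the template's Rectangle has been given a hypothetical name, navRect, so the animation can target its fill; the markup sits inside the template's root Grid:

    <VisualStateManager.VisualStateGroups>
      <VisualStateGroup x:Name="CommonStates">
        <!-- Leave Normal empty so the control returns to its default look -->
        <VisualState x:Name="Normal"/>
        <VisualState x:Name="MouseOver">
          <Storyboard>
            <!-- Animate the first gradient stop of navRect's fill on hover -->
            <ColorAnimation Duration="0:0:0.2" To="#FFFFC0C0"
                Storyboard.TargetName="navRect"
                Storyboard.TargetProperty="(Shape.Fill).(GradientBrush.GradientStops)[0].(GradientStop.Color)"/>
          </Storyboard>
        </VisualState>
      </VisualStateGroup>
    </VisualStateManager.VisualStateGroups>

Blend normally generates the empty state nodes for you, so in practice only the Storyboard needs to be supplied.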


WCF – Windows Communication Foundation

Packt | 16 Oct 2009 | 17 min read
What is WCF?

WCF is the acronym for Windows Communication Foundation. It is Microsoft's latest technology that enables applications in a distributed environment to communicate with each other. WCF is Microsoft's unified programming model for building service-oriented applications. It enables developers to build secure, reliable, transacted solutions that integrate across platforms and interoperate with existing investments. WCF is built on the Microsoft .NET Framework and simplifies the development of connected systems. It unifies a broad array of distributed-systems capabilities in a composable, extensible architecture that supports multiple transports, messaging patterns, encodings, network topologies, and hosting models. It is the next version of several existing products—ASP.NET's web methods (ASMX) and Microsoft Web Services Enhancements (WSE) for Microsoft .NET, .NET Remoting, Enterprise Services, and System.Messaging. The purpose of WCF is to provide a single programming model that can be used to create services on the .NET platform for organizations.

Why is WCF used for SOA?

As we have seen in the previous section, WCF is an umbrella technology that covers ASMX web services, .NET Remoting, WSE, Enterprise Services, and System.Messaging. It is designed to offer a manageable approach to distributed computing, broad interoperability, and direct support for service orientation. WCF supports many styles of distributed application development by providing a layered architecture. At its base, the WCF channel architecture provides asynchronous, untyped message-passing primitives. Built on top of this base are protocol facilities for secure, reliable, transacted data exchange and a broad choice of transport and encoding options.

Let us take an example to see why WCF is a good approach for SOA. Suppose a company is designing a service to get loan information. This service could be used by the internal call center application, an Internet web application, and a third-party Java J2EE application such as a banking system. For interactions with the call center client application, performance is important. For communication with the J2EE-based application, however, interoperability becomes the highest goal. The security requirements are also quite different between the local Windows-based application and the J2EE-based application running on another operating system. Even transactional requirements might vary, with only the internal application being allowed to make transactional requests. With these complex requirements, it is not easy to build the desired service with any single existing technology. For example, the ASMX technology may serve well for interoperability, but its performance may not be ideal. .NET Remoting would be a good choice from the performance perspective, but it is not good at interoperability. Enterprise Services could be used for managing object lifetimes and defining distributed transactions, but Enterprise Services supports only a limited set of communication options.

Now with WCF, it is much easier to implement this service. As WCF has unified a broad array of distributed-systems capabilities, the get-loan service can be built with WCF for all of its application-to-application communication. The following shows how WCF addresses each of these requirements:

- Because WCF can communicate using web service standards, interoperability with other platforms that also support SOAP, such as the leading J2EE-based application servers, is straightforward.
  You can also configure and extend WCF to communicate with web services using messages not based on SOAP, for example, simple XML formats such as RSS.
- Performance is of paramount concern for most businesses. WCF was developed with the goal of being one of the fastest distributed application platforms developed by Microsoft. To allow for optimal performance when both parties in a communication are built on WCF, the wire encoding used in this case is an optimized binary version of an XML Information Set. Using this option makes sense for communication with the call center client application, because it is also built on WCF and performance is an important concern.
- Managing object lifetimes, defining distributed transactions, and other aspects of Enterprise Services are now provided by WCF. They are available to any WCF-based application, which means that the get-loan service can use them with any of the other applications that it communicates with.
- Because it supports a large set of the WS-* specifications, WCF helps to provide reliability, security, and transactions when communicating with any platform that supports these specifications.
- The WCF option for queued messaging, built on Message Queuing, allows applications to use persistent queuing without using another set of application programming interfaces.

The result of this unification is greater functionality and significantly reduced complexity.

WCF architecture

The following diagram illustrates the major layers of the Windows Communication Foundation (WCF) architecture. This diagram is taken from the Microsoft web site (http://msdn.microsoft.com/en-us/library/ms733128.aspx). The Contracts layer defines various aspects of the message system. For example, the Data Contract describes every parameter that makes up every message that a service can create or consume. The Service runtime layer contains the behaviors that occur only during the actual operation of the service, that is, the runtime behaviors of the service. The Messaging layer is composed of channels. A channel is a component that processes a message in some way, for example, authenticating a message. In its final form, a service is a program. Like other programs, a service must be run in an executable format. This is known as the hosting application. In the next section, we will explain these concepts in detail.

Basic WCF concepts—WCF ABCs

There are many terms and concepts around WCF, such as address, binding, contract, endpoint, behavior, hosting, and channels. Understanding these terms is very helpful when using WCF.

Address

The WCF Address is a specific location for a service. It is the specific place to which a message will be sent. All WCF services are deployed at a specific address, listening at that address for incoming requests. A WCF Address is normally specified as a URI, with the first part specifying the transport mechanism and the hierarchical part specifying the unique location of the service. For example, http://www.myweb.com/myWCFServices/SampleService can be an address for a WCF service. This WCF service uses HTTP as its transport protocol, and it is located on the server www.myweb.com, with a unique service path of myWCFServices/SampleService. The following diagram illustrates the three parts of a WCF service address.

Binding

Bindings are used to specify the transport, encoding, and protocol details required for clients and services to communicate with each other. Bindings are what WCF uses to generate the underlying wire representation of the endpoint.
So, most of the details of the binding must be agreed upon by the parties that are communicating. The easiest way to achieve this is for clients of a service to use the same binding that the service uses. A binding is made up of a collection of binding elements. Each element describes some aspect of how the service communicates with clients. A binding must include at least one transport binding element, at least one message-encoding binding element (which can be provided by the transport binding element by default), and any number of other protocol binding elements. The process that builds a runtime out of this description allows each binding element to contribute code to that runtime. WCF provides bindings that contain common selections of binding elements. These can either be used with their default settings, or the default values can be modified according to user requirements. These system-provided bindings have properties that allow direct control over the binding elements and their settings. The following are some examples of the system-provided bindings: BasicHttpBinding, WSHttpBinding, WSDualHttpBinding, WSFederationHttpBinding, NetTcpBinding, NetNamedPipeBinding, NetMsmqBinding, NetPeerTcpBinding, and MsmqIntegrationBinding. Each one of these built-in bindings has predefined required elements for a common task and is ready to be used in your project. For instance, the BasicHttpBinding uses HTTP as the transport for sending SOAP 1.1 messages, and it has attributes and elements such as receiveTimeout, sendTimeout, maxMessageSize, and maxBufferSize. You can accept the default settings of its attributes and elements, or overwrite them as needed.

Contract

A WCF contract is a set of specifications that define the interfaces of a WCF service. A WCF service communicates with other applications according to its contracts. There are several types of WCF contracts, such as Service Contract, Operation Contract, Data Contract, Message Contract, and Fault Contract.

Service contract

A service contract is the interface of the WCF service. Basically, it tells others what the service can do. It may include service-level settings, such as the name of the service, the namespace of the service, and the corresponding callback contracts of the service. Inside the interface, it can define a bunch of methods, or service operations, for specific tasks. Normally, a WCF service has at least one service contract.

Operation contract

An operation contract is defined within a service contract. It defines the parameters and return type of an operation. An operation can take data of a primitive (native) data type, such as an integer, as a parameter, or it can take a message, which should be defined as a message contract type. Just as a service contract is an interface, an operation contract is a definition of an operation. It has to be implemented in order for the service to function as a WCF service. An operation contract also defines operation-level settings, such as the transaction flow of the operation, the direction of the operation (one-way, two-way, or both ways), and the fault contract of the operation.
The following is an example of an operation contract:

    [WCF::FaultContract(typeof(MyWCF.EasyNorthwind.FaultContracts.ProductFault))]
    MyWCF.EasyNorthwind.MessageContracts.GetProductResponse
    GetProduct(MyWCF.EasyNorthwind.MessageContracts.GetProductRequest request);

In this example, the operation contract's name is GetProduct, and it takes one input parameter, which is of type GetProductRequest (a message contract), and has one return value, which is of type GetProductResponse (another message contract). It may return a fault message, which is of type ProductFault (a fault contract), to the client applications. We will cover message contracts and fault contracts in the following sections.

Message contract

If an operation contract needs to pass a message as a parameter or return a message, the type of these messages will be defined as message contracts. A message contract defines the elements of the message, as well as any message-related settings, such as the level of message security, and also whether an element should go to the header or to the body. The following is a message contract example:

    namespace MyWCF.EasyNorthwind.MessageContracts
    {
        /// <summary>
        /// Service Contract Class - GetProductResponse
        /// </summary>
        [WCF::MessageContract(IsWrapped = false)]
        public partial class GetProductResponse
        {
            private MyWCF.EasyNorthwind.DataContracts.Product product;

            [WCF::MessageBodyMember(Name = "Product")]
            public MyWCF.EasyNorthwind.DataContracts.Product Product
            {
                get { return product; }
                set { product = value; }
            }
        }
    }

In this example, the namespace of the message contract is MyWCF.EasyNorthwind.MessageContracts, and the message contract's name is GetProductResponse. This message contract has one member, which is of type Product.

Data contract

Data contracts are the data types of the WCF service. All data types used by the WCF service must be described in metadata to enable other applications to interoperate with the service. A data contract can be used by an operation contract as a parameter or return type, or it can be used by a message contract to define elements. If a WCF service uses only primitive (native) data types, it is not necessary to define any data contract. The following is an example of a data contract:

    namespace MyWCF.EasyNorthwind.DataContracts
    {
        /// <summary>
        /// Data Contract Class - Product
        /// </summary>
        [WcfSerialization::DataContract(Namespace = "http://MyCompany.com/ProductService/EasyWCF/2008/05", Name = "Product")]
        public partial class Product
        {
            private int productID;
            private string productName;

            [WcfSerialization::DataMember(Name = "ProductID", IsRequired = false, Order = 0)]
            public int ProductID
            {
                get { return productID; }
                set { productID = value; }
            }

            [WcfSerialization::DataMember(Name = "ProductName", IsRequired = false, Order = 1)]
            public string ProductName
            {
                get { return productName; }
                set { productName = value; }
            }
        }
    }

In this example, the namespace of the data contract is MyWCF.EasyNorthwind.DataContracts, the name of the data contract is Product, and this data contract has two members (ProductID and ProductName).

Fault contract

In any WCF service operation contract, if an error can be returned to the caller, the caller should be warned of that error. These error types are defined as fault contracts. An operation can have zero or more fault contracts associated with it.
The following is a fault contract example:

    namespace MyWCF.EasyNorthwind.FaultContracts
    {
        /// <summary>
        /// Data Contract Class - ProductFault
        /// </summary>
        [WcfSerialization::DataContract(Namespace = "http://MyCompany.com/ProductService/EasyWCF/2008/05", Name = "ProductFault")]
        public partial class ProductFault
        {
            private string faultMessage;

            [WcfSerialization::DataMember(Name = "FaultMessage", IsRequired = false, Order = 0)]
            public string FaultMessage
            {
                get { return faultMessage; }
                set { faultMessage = value; }
            }
        }
    }

In this example, the namespace of the fault contract is MyWCF.EasyNorthwind.FaultContracts, the name of the fault contract is ProductFault, and the fault contract has only one member (FaultMessage).

Endpoint

Messages are sent between endpoints. Endpoints are places where messages are sent or received (or both), and they define all of the information required for the message exchange. A service exposes one or more application endpoints (as well as zero or more infrastructure endpoints). A service can expose this information as metadata that clients can process to generate appropriate WCF clients and communication stacks. When needed, the client generates an endpoint that is compatible with one of the service's endpoints. A WCF service endpoint has an address, a binding, and a service contract (the WCF ABCs). The endpoint's address is a network address where the endpoint resides. It describes, in a standards-based way, where messages should be sent. Each endpoint normally has one unique address, but sometimes two or more endpoints can share the same address. The endpoint's binding specifies how the endpoint communicates with the world, including things such as transport protocol (TCP, HTTP), encoding (text, binary), and security requirements (SSL, SOAP message security). The endpoint's contract specifies what the endpoint communicates, and is essentially a collection of messages organized in operations that have basic Message Exchange Patterns (MEPs) such as one-way, duplex, or request/reply. The following diagram shows the components of a WCF service endpoint.

Behavior

A WCF behavior is a type, or settings, to extend the functionality of the original type. There are many types of behaviors in WCF, such as service behavior, binding behavior, contract behavior, security behavior, and channel behavior. For example, a new service behavior can be defined to specify the transaction timeout of the service, the maximum number of concurrent instances of the service, and whether the service publishes metadata. Behaviors are configured in the WCF service configuration file.

Hosting

A WCF service is a component that can be called by other applications. It must be hosted in an environment in order to be discovered and used by others. The WCF host is an application that controls the lifetime of the service. With .NET 3.0 and beyond, there are several ways to host the service.

Self hosting

A WCF service can be self-hosted, which means that the service runs as a standalone application and controls its own lifetime. This is the most flexible and easiest way of hosting a WCF service, but its availability and features are limited.

Windows services hosting

A WCF service can also be hosted as a Windows service. A Windows service is a process managed by the operating system, and it is automatically started when Windows is started (if it is configured to do so). However, it lacks some critical features (such as versioning) for WCF services.

IIS hosting

A better way of hosting a WCF service is to use IIS.
This is the traditional way of hosting a web service. IIS, by nature, has many useful features, such as process recycling, idle shutdown, process health monitoring, message-based activation, high availability, easy manageability, versioning, and deployment scenarios. All of these features are required for enterprise-level WCF services.

Windows Activation Services hosting

The IIS hosting method, however, comes with several limitations in the service-orientation world; the dependency on HTTP is the main culprit. With IIS hosting, many of WCF's flexible options can't be utilized. This is the reason why Microsoft specifically developed a new method, called Windows Activation Services, to host WCF services. Windows Process Activation Service (WAS) is the new process activation mechanism for Windows Server 2008 that is also available on Windows Vista. It retains the familiar IIS 6.0 process model (application pools and message-based process activation) and hosting features (such as rapid failure protection, health monitoring, and recycling), but it removes the dependency on HTTP from the activation architecture. IIS 7.0 uses WAS to accomplish message-based activation over HTTP. Additional WCF components also plug into WAS to provide message-based activation over the other protocols that WCF supports, such as TCP, MSMQ, and named pipes. This allows applications that use the non-HTTP communication protocols to use the IIS features, such as process recycling, rapid fail protection, and the common configuration systems, that were previously available only to HTTP-based applications. This hosting option requires that WAS be properly configured, but it does not require you to write any hosting code as part of the application. [Microsoft MSDN, Hosting Services, retrieved on 3/6/2008 from http://msdn2.microsoft.com/en-us/library/ms730158.aspx]

Channels

As we have seen in the previous sections, a WCF service has to be hosted in an application on the server side. On the client side, the client applications have to specify the bindings to connect to the WCF services. The binding elements are interfaces, and they have to be implemented in concrete classes. The concrete implementation of a binding element is called a channel. The binding represents the configuration, and the channel is the implementation associated with that configuration. Therefore, there is a channel associated with each binding element. Channels stack on top of one another to create the concrete implementation of the binding—the channel stack. The WCF channel stack is a layered communication stack with one or more channels that process messages. At the bottom of the stack is a transport channel that is responsible for adapting the channel stack to the underlying transport (for example, TCP, HTTP, SMTP, and other types of transport). Channels provide a low-level programming model for sending and receiving messages. This programming model relies on several interfaces and other types collectively known as the WCF channel model. The following diagram shows a simple channel stack.

Metadata

The metadata of a service describes the characteristics of the service that an external entity needs to understand in order to communicate with the service. Metadata can be consumed by the ServiceModel Metadata Utility Tool (Svcutil.exe) to generate a WCF client and the accompanying configuration that a client application can use to interact with the service.
The metadata exposed by the service includes XML schema documents, which define the data contract of the service, and WSDL documents, which describe the methods of the service. Though WCF services will always have metadata, it is possible to hide the metadata from outsiders. If you do so, you have to pass the metadata to the client side by other means. This practice is not common, but it gives your services an extra layer of security. When enabled via the configuration settings through the metadata behavior, metadata for the service can be retrieved by inspecting the service and its endpoints. The following configuration setting in a WCF service configuration file will enable metadata publishing for the HTTP transport protocol:

    <serviceMetadata httpGetEnabled="true" />
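With that behavior enabled, Svcutil.exe can generate the client proxy and configuration from the service's metadata. A hedged example invocation, reusing the sample address from the Address section above:

    svcutil.exe http://www.myweb.com/myWCFServices/SampleService?wsdl /out:SampleServiceClient.cs /config:app.config

The generated SampleServiceClient.cs contains a proxy class for each service contract, and app.config receives the matching endpoint and binding entries.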


Silverlight 5 LOB Development: Validation, Advanced Topics, and MVVM

Packt | 09 Mar 2012 | 8 min read
(For more resources on Silverlight, see here.)

Validation

One of the most important parts of a Silverlight application is the correct implementation of validations in our business logic. These can be simple details, such as the fact that the client must provide their name and e-mail address to sign up, or that before selling a book, it must be in stock. In RIA Services, validations can be defined on two levels:

- In entities, via DataAnnotations.
- In our Domain Service, server or asynchronous validations via Invoke.

DataAnnotations

The namespace System.ComponentModel.DataAnnotations implements a series of attributes allowing us to add validation rules to the properties of our entities. The following are the most notable ones:

- DataTypeAttribute: Specifies a particular type of data, such as a date or an e-mail
- EnumDataTypeAttribute: Ensures that the value exists in an enumeration
- RangeAttribute: Designates minimum and maximum constraints
- RegularExpressionAttribute: Uses a regular expression to determine valid values
- RequiredAttribute: Specifies that a value must be provided
- StringLengthAttribute: Designates a maximum and minimum number of characters
- CustomValidationAttribute: Uses a custom method for validation

The following code shows us how to add a field as "required":

    [Required()]
    public string Name
    {
        get { return this._name; }
        set { (...) }
    }

In the UI layer, the control linked to this field (a TextBox, in this case) automatically detects and displays the error, and the way the error is displayed can be customized. These validations are based on the throwing of exceptions. They are captured by user controls and bound to data elements. If there are errors, these are shown in a friendly way. When executing the application in debug mode with Visual Studio, it is possible to find that the IDE captures the exceptions. To avoid this, refer to the following link, where the IDE configuration is explained: http://bit.ly/riNdmp.

Where can validations be added? The answer is in the metadata definition of the entities, in our Domain Service, within the server project. Going back to our example, the server project is SimpleDB.Web and the Domain Service is MyDomainService.metadata.cs. These validations are automatically copied to the entities definition file and the context found on the client side. In the Simple.DB.Web.g.cs file, when the hidden folder Generated_Code is opened, you will be surprised to find that some validations are already implemented, for example, the required field, field length, and so on. These are inferred from the Entity Framework model.

Simple validations

For validations that are already generated, let's see a simple example of how to implement those of the "required" field and "maximum length":

    [Required()]
    [StringLength(60)]
    public string Name
    {
        get { return this._name; }
        set { (...) }
    }

Now, we will implement the syntactic validation for credit cards (format dddd-dddd-dddd-dddd). To do so, use the regular expression validator and add it to the server file MyDomainService.metadata.cs, as shown in the following code:

    [RegularExpression(@"\d{4}-\d{4}-\d{4}-\d{4}",
        ErrorMessage = "Credit card not valid format should be: 9999-9999-9999-9999")]
    public string CreditCard { get; set; }

To know how regular expressions work, refer to the following link: http://bit.ly/115Td0, and refer to this free tool to try them in a quick way: http://bit.ly/1ZcGFC.
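To make the attribute list above concrete, the following is a hedged sketch, with property names that are illustrative rather than from the sample project, combining a few of these validators on one metadata class:

    using System.ComponentModel.DataAnnotations;

    public class ClientMetadataSketch
    {
        // Required, with a bounded length
        [Required()]
        [StringLength(60, MinimumLength = 2)]
        public string Name { get; set; }

        // Must be a well-formed e-mail address
        [DataType(DataType.EmailAddress)]
        public string Email { get; set; }

        // Numeric range constraint
        [Range(1, 999)]
        public int BooksInStock { get; set; }
    }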
Custom and shared validations

Basic validations are acceptable for 70 percent of validation scenarios, but there are still 30 percent of validations which do not fit these patterns. What do you do then? RIA Services offers CustomValidationAttribute. It permits the creation of a method which performs a validation defined by the developer. The benefits are listed below:

- It is code: the necessary logic can be implemented to perform the validations.
- It can be written so that the validation is reusable in other modules (for instance, the validation of an IBAN [International Bank Account Number]).
- You can choose whether a validation is executed only on the server side (for example, a validation requiring database reads) or is also copied to the client.

To validate the checksum of the CreditCard field, follow these steps:

1. Add to the SimpleDB.Web project the class named ClientCustomValidation. Within this class, define a static method returning ValidationResult, which accepts the value of the field to evaluate as a parameter and returns the validation result.

    public class ClientCustomValidation
    {
        public static ValidationResult ValidMasterCard(string strcardNumber)
    }

2. Implement the summarized validation method (the part related to the result callback is returned).

    public static ValidationResult ValidMasterCard(string strcardNumber)
    {
        // Let us remove the "-" separator
        string cardNumber = strcardNumber.Replace("-", "");
        // We need to keep track of the entity fields that are
        // affected, so the UI controls that have this property
        // bound can display the error message when it applies
        List<string> AffectedMembers = new List<string>();
        AffectedMembers.Add("CreditCard");
        (...)
        // Validation succeeded returns success
        // Validation failed provides error message and indicates
        // the entity fields that are affected
        return (sum % 10 == 0) ? ValidationResult.Success :
            new ValidationResult("Failed to validate", AffectedMembers);
    }

To make the validation simpler, only MasterCard has been covered. To know more and cover more card types, refer to the page http://bit.ly/aYx39u. In order to find examples of valid numbers, go to http://bit.ly/gZpBj.

3. Go to the file MyDomainService.metadata.cs and, in the Client entity, add the following to the CreditCard field:

    [CustomValidation(typeof(ClientCustomValidation), "ValidMasterCard")]
    public string CreditCard { get; set; }

If it is executed now and you try to enter an invalid value in the CreditCard field, it won't be marked as an error. What happens? The validation is only executed on the server side. If it is intended to be executed on the client side as well, rename the file called ClientCustomValidation.cs to ClientCustomValidation.shared.cs. In this way, the validation will be copied to the Generated_Code folder and the validation will be launched. In the code generated on the client side, the entity validation is associated:

    /// <summary>
    /// Gets or sets the 'CreditCard' value.
    /// </summary>
    [CustomValidation(typeof(ClientCustomValidation), "ValidMasterCard")]
    [DataMember()]
    [RegularExpression(@"\d{4}-\d{4}-\d{4}-\d{4}",
        ErrorMessage = "Creditcard not valid format should be: 9999-9999-9999-9999")]
    [StringLength(30)]
    public string CreditCard
    {

This is quite interesting. However, what happens if more than one field has to be checked in the validation? In this case, one more parameter is added to the validation method. It is ValidationContext, and through this parameter, the instance of the entity we are dealing with can be accessed.
    public static ValidationResult ValidMasterCard(string strcardNumber,
        ValidationContext validationContext)
    {
        client currentClient = (client)validationContext.ObjectInstance;

Entity-level validations

Field validation is quite interesting, but sometimes rules have to be applied at a higher level, that is, the entity level. RIA Services implements some machinery to perform this kind of validation. Only a custom validation has to be defined in the appropriate entity class declaration. Following the sample we're working on, let us implement a validation which checks that at least one of the two payment methods (PayPal or credit card) is provided. To do so, go to ClientCustomValidation.shared.cs (in the SimpleDB.Web project) and add the following static function to the ClientCustomValidation class:

    public static ValidationResult ValidatePaymentInformed(client CurrentClient)
    {
        bool atLeastOnePaymentInformed =
            ((CurrentClient.PayPalAccount != null &&
              CurrentClient.PayPalAccount != string.Empty) ||
             (CurrentClient.CreditCard != null &&
              CurrentClient.CreditCard != string.Empty));
        return (atLeastOnePaymentInformed) ? ValidationResult.Success :
            new ValidationResult("One payment method must be informed at least");
    }

Next, open the MyDomainService.metadata file and add, at the class level, the following annotation to enable that validation:

    [CustomValidation(typeof(ClientCustomValidation), "ValidatePaymentInformed")]
    [MetadataTypeAttribute(typeof(client.clientMetadata))]
    public partial class client

When executing and trying the application, you will realize that the validation is not performed. This is due to the fact that, unlike validations at the field level, entity validations are only launched client-side when calling EndEdit or TryValidateObject. The logic is to first check that the fields are well informed and then make the appropriate validations. In this case, a button will be added that performs the validation, forcing it at the entity level. To know more about validation on entities, go to http://bit.ly/qTr9hz.

Define the command launching the validation on the current entity in the ViewModel, as in the following code:

    private RelayCommand _validateCommand;
    public RelayCommand ValidateCommand
    {
        get
        {
            if (_validateCommand == null)
            {
                _validateCommand = new RelayCommand(() =>
                {
                    // Let us clear the current validation list
                    CurrentSelectedClient.ValidationErrors.Clear();
                    var validationResults = new List<ValidationResult>();
                    ValidationContext vcontext =
                        new ValidationContext(CurrentSelectedClient, null, null);
                    // Let us run the validation
                    Validator.TryValidateObject(CurrentSelectedClient,
                        vcontext, validationResults);
                    // Add the errors to the entity's validation error list
                    foreach (var res in validationResults)
                    {
                        CurrentSelectedClient.ValidationErrors.Add(res);
                    }
                }, (() => (CurrentSelectedClient != null)));
            }
            return _validateCommand;
        }
    }

Define the button in the window and bind it to the command:

    <Button Content="Validate" Command="{Binding Path=ValidateCommand}"/>

While executing, you will see that the fields stay blank, even if we click the button. Nonetheless, when adding a breakpoint, the validation is shown. What happens is, there is a missing element showing the result of that validation. In this case, the choice will be to add a header whose DataContext points to the current entity. If entity validations fail, they will be shown in this element. For more information on how to show errors, check the link http://bit.ly/ad0JyD. The TextBox added will show the entity validation errors.
[Screenshot: the final result of the entity-level validation]


EJB 3 Entities

Packt | 16 Oct 2009 | 6 min read
The JPA can be regarded as a higher level of abstraction sitting on top of JDBC. Under the covers, the persistence engine converts JPA statements into lower-level JDBC statements.

EJB 3 Entities

In JPA, any class or POJO (Plain Old Java Object) can be converted to an entity with very few modifications. The following listing shows an entity, Customer.java, with attributes id, which is unique for a Customer instance, and firstName and lastName.

    package ejb30.entity;

    import javax.persistence.Entity;
    import javax.persistence.Id;

    @Entity
    public class Customer implements java.io.Serializable {
        private int id;
        private String firstName;
        private String lastName;

        public Customer() {}

        @Id
        public int getId() { return id; }
        public void setId(int id) { this.id = id; }

        public String getFirstname() { return firstName; }
        public void setFirstname(String firstName) { this.firstName = firstName; }

        public String getLastname() { return lastName; }
        public void setLastname(String lastName) { this.lastName = lastName; }

        public String toString() {
            return "[Customer Id =" + id + ",first name=" + firstName +
                   ",last name=" + lastName + "]";
        }
    }

The class follows the usual JavaBean rules. The instance variables are non-public and are accessed by clients through appropriately named getter and setter accessor methods. Only a couple of annotations have been added to distinguish this entity from a POJO. Annotations specify entity metadata. They are not an intrinsic part of an entity but describe how an entity is persisted. The @Entity annotation indicates to the persistence engine that the annotated class, in this case Customer, is an entity. The annotation is placed immediately before the class definition and is an example of a class-level annotation. We can also have property-based and field-based annotations, as we shall see. The @Id annotation specifies the primary key of the entity. The id attribute is a primary key candidate. Note that we have placed the annotation immediately before the corresponding getter method, getId(). This is an example of a property-based annotation. A property-based annotation must be placed immediately before the corresponding getter method, and not the setter method. Where property-based annotations are used, the persistence engine uses the getter and setter methods to access and set the entity state. An alternative to property-based annotations is field-based annotations. An example of this is shown later. Note that all annotations within an entity, other than class-level annotations, must be all property-based or all field-based. The final requirement for an entity is the presence of a no-arg constructor. Our Customer entity also implements the java.io.Serializable interface. This is not essential, but it is good practice because the Customer entity has the potential of becoming a detached entity. Detached entities must implement the Serializable interface. At this point we remind the reader that, as throughout EJB 3, XML deployment descriptors are an alternative to entity metadata annotations.

Comparison with EJB 2.x Entity Beans

An EJB 3 entity is a POJO and not a component, so it is referred to as an entity and not an entity bean. In EJB 2.x the corresponding construct is an entity bean component with the same artifacts as session beans, namely an XML deployment descriptor file, a remote or local interface, a home or localhome interface, and the bean class itself. The remote or local interface contains getter and setter method definitions.
The home or local interface contains definitions for the create() and findByPrimaryKey() methods, and optionally other finder method definitions. As with session beans, the entity bean class contains callback methods such as ejbCreate(), ejbLoad(), ejbStore(), ejbRemove(), ejbActivate(), ejbPassivate(), and setEntityContext(). The EJB 3 entity, being a POJO, can run outside a container. Its clients are always local to the JVM. The EJB 2.x entity bean is a distributed object that needs a container to run, but it can have clients from outside its JVM. Consequently, EJB 3 entities are more reusable and easier to test than EJB 2.x entity beans. In EJB 2.x we need to decide whether the persistence aspects of an entity bean are handled by the container (Container Managed Persistence, or CMP) or by the application (Bean Managed Persistence, or BMP). In the case of CMP, the entity bean is defined as an abstract class with abstract getter and setter method definitions. At deployment, the container creates a concrete implementation of this abstract entity bean class. In the case of BMP, the entity bean is defined as a class. The getter and setter methods need to be coded. In addition, the ejbCreate(), ejbLoad(), ejbStore(), ejbFindByPrimaryKey(), and any other finder methods need to be coded using JDBC.

Mapping an Entity to a Database Table

We can map entities onto just about any relational database. GlassFish includes an embedded Derby relational database. If we want GlassFish to access another relational database, Oracle say, then we need to use the GlassFish admin console to set up an Oracle data source. We also need to refer to this Oracle data source in the persistence.xml file. We will describe the persistence.xml file later in this article. These steps are not required if we use the GlassFish default Derby data source. All the examples in this article will use the Derby database. EJB 3 makes heavy use of defaulting for describing entity metadata. In this section we describe a few of these defaults. First, by default, the persistence engine maps the entity name to a relational table name. So in our example the table name is CUSTOMER. If we want to map the Customer entity to another table, we will need to use the @Table annotation, which we shall see later. By default, property or field names are mapped to column names. So ID, FIRSTNAME, and LASTNAME are the column names corresponding to the id, firstname, and lastname entity attributes. If we want to change this default behavior, we will need to use the @Column annotation, which we shall see later. JDBC rules are used for mapping Java primitives to relational datatypes. So a String will be mapped to VARCHAR for a Derby database and VARCHAR2 for an Oracle database. An int will be mapped to INTEGER for a Derby database and NUMBER for an Oracle database. The size of a column mapped from a String defaults to 255, for example VARCHAR(255) for Derby or VARCHAR2(255) for Oracle. If we want to change this column size, then we need to use the length element of the @Column annotation, which we shall see later. To summarize, if we are using the GlassFish container with the embedded Derby database, the Customer entity will map onto the following table:

    CUSTOMER
    ID         INTEGER       PRIMARY KEY
    FIRSTNAME  VARCHAR(255)
    LASTNAME   VARCHAR(255)

Most persistence engines, including the GlassFish default persistence engine, TopLink, have a schema generation option, although this is not required by the JPA specification.
In the case of GlassFish, if a flag is set when the application is deployed to the container, then the container will create the mapped table in the database. Otherwise, the table is assumed to already exist in the database.
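The @Table and @Column annotations are described as coming later in the article; as a hedged preview, overriding the defaults discussed above might look like the following (the table and column names are illustrative only):

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.Id;
    import javax.persistence.Table;

    @Entity
    @Table(name = "CUST")               // map to CUST instead of the default CUSTOMER
    public class Customer implements java.io.Serializable {
        private int id;
        private String firstName;

        public Customer() {}

        @Id
        public int getId() { return id; }
        public void setId(int id) { this.id = id; }

        // map to FIRST_NAME VARCHAR(50) instead of FIRSTNAME VARCHAR(255)
        @Column(name = "FIRST_NAME", length = 50)
        public String getFirstname() { return firstName; }
        public void setFirstname(String firstName) { this.firstName = firstName; }
    }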


Create a Quick Application in CakePHP: Part 2

Packt | 18 Nov 2009 | 7 min read
Editing a Task

Now that we can add tasks to CakeTooDoo, the next thing that we will be doing is to have the ability to edit tasks. This is necessary because the users should be able to tick off a task when it has been completed. Also, if the users are not happy with the title of a task, they can change it. To have these features in CakeTooDoo, we will need to add another action to our Tasks controller and also add a view for this action.

Time for Action: Creating the Edit Task Form

1. Open the file tasks_controller.php and add a new action named edit, as shown in the following code:

    function edit($id = null) {
        if (!$id) {
            $this->Session->setFlash('Invalid Task');
            $this->redirect(array('action'=>'index'), null, true);
        }
        if (empty($this->data)) {
            $this->data = $this->Task->find(array('id' => $id));
        } else {
            if ($this->Task->save($this->data)) {
                $this->Session->setFlash('The Task has been saved');
                $this->redirect(array('action'=>'index'), null, true);
            } else {
                $this->Session->setFlash('The Task could not be saved. Please, try again.');
            }
        }
    }

2. Inside the directory /CakeTooDoo/app/views/tasks, create a new file named edit.ctp and add the following code to it:

    <?php echo $form->create('Task');?>
    <fieldset>
        <legend>Edit Task</legend>
        <?php
            echo $form->hidden('id');
            echo $form->input('title');
            echo $form->input('done');
        ?>
    </fieldset>
    <?php echo $form->end('Save');?>

3. We will be accessing the Edit Task form from the List All Tasks page. So, let's add a link from the List All Tasks page to the Edit Task page. Open the index.ctp file in the /CakeTooDoo/app/views/tasks directory, and replace the HTML comment <!-- different actions on tasks will be added here later --> with the following code:

    <?php echo $html->link('Edit', array('action'=>'edit', $task['Task']['id'])); ?>

4. Now open the List All Tasks page in the browser by pointing it to http://localhost/CakeTooDoo/tasks/index, and we will see an Edit link beside all the tasks. Click on the Edit link of the task you want to edit, and this will take you to the Edit Task form, as shown below.

5. Now let us add links in the Edit Task form page to the List All Tasks and Add New Task pages. Add the following code to the end of edit.ctp in /CakeTooDoo/app/views/tasks:

    <?php echo $html->link('List All Tasks', array('action'=>'index')); ?><br />
    <?php echo $html->link('Add Task', array('action'=>'add')); ?>

What Just Happened?

We added a new action named edit in the Tasks controller. Then we went on to add the view file edit.ctp for this action. Lastly, we linked the other pages to the Edit Task page using the HTML helper. When accessing this page, we need to tell the action which task we are interested in editing. This is done by passing the task id in the URL. So, if we want to edit the task with the id of 2, we need to point our browser to http://localhost/CakeTooDoo/tasks/edit/2. When such a request is made, Cake forwards this request to the Tasks controller's edit action and passes the value of the id to the first parameter of the edit action. If we check the edit action, we will notice that it accepts a parameter named $id. The task id passed in the URL is stored in this parameter. When a request is made to the edit action, the first thing that it does is to check whether any id has been supplied or not. To let users edit a task, it needs to know which task the user wants to edit. It cannot continue if there is no id supplied.
So, if $id is undefined, the action stores an error message in the session and redirects to the index action, which will show the list of current tasks along with the error message.

If $id is defined, the edit action then checks whether there is any data stored in $this->data. If no data is stored in $this->data, it means that the user has not yet edited the task. And so, the desired task is fetched from the Task model and stored in $this->data in the line:

$this->data = $this->Task->find(array('id' => $id));

Once that is done, the view of the edit action is rendered, displaying the task information. The view fetches the task information to be displayed from $this->data. The view of the edit action is very similar to that of the add action, with a single difference: it has an extra line with echo $form->hidden('id');. This creates an HTML hidden input with the value of the task id that is being edited.

Once the user edits the task and clicks on the Save button, the edited data is resent to the edit action and saved in $this->data. Having data in $this->data confirms that the user has edited and submitted the changed data. Thus, if $this->data is not empty, the edit action tries to save the data by calling the Task model's save() function: $this->Task->save($this->data). This is the same function that we used to add a new task in the add action.

You may ask how the save() function of the model knows when to add a new record and when to edit an existing one. If the form data has a hidden id field, the function knows that it needs to edit an existing record with that id. If no id field is found, the function adds a new record. Once the data has been successfully updated, a success message is stored in the session and the action redirects to the index action. Of course, the index page will show the success message.

Adding Data Validation

If you have come this far, by now you should have a working CakeTooDoo. It has the ability to add a task, list all the tasks with their statuses, and edit a task to change its status and title. But we are still not happy with it. We want CakeTooDoo to be a quality application, and making a quality application with CakePHP is as easy as eating a cake.

A very important aspect of any web application (or software in general) is to make sure that the users do not enter inputs that are invalid. For example, suppose a user mistakenly adds a task with an empty title; this is not desirable because without a title we cannot identify a task. We would want our application to check whether the user enters a title. If they do not enter a title, CakeTooDoo should not allow the user to add or edit a task, and should show the user a message stating the problem. Adding these checks is what we call Data Validation.

No matter how big or small our applications are, it is very important that we have proper data validation in place. But adding data validation can be a painful and time-consuming task, especially if we have a complex application with lots of forms. Thankfully, CakePHP comes with a built-in data validation feature that can really make our lives much easier.

Time for Action: Adding Data Validation to Check for Empty Title

In the Task model that we created in /CakeTooDoo/app/models, add the following code inside the Task Model class.
The Task Model will look like this:

<?php
class Task extends AppModel {
    var $name = 'Task';
    var $validate = array(
        'title' => array(
            'rule' => VALID_NOT_EMPTY,
            'message' => 'Title of a task cannot be empty'
        )
    );
}
?>

Now open the Add Task form in the browser by pointing it to http://localhost/CakeTooDoo/tasks/add, and try to add a task with an empty title. It will show the following error message:
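Validation in CakePHP is declarative: each field in the $validate array maps to one or more rules. As a sketch of how the title check could be extended with a second rule (this assumes CakePHP 1.2's multiple-rules-per-field syntax; the rule keys, length limit, and messages below are our own illustration, not part of the CakeTooDoo code):

// A sketch only: multiple validation rules for the title field,
// assuming CakePHP 1.2-style $validate syntax.
var $validate = array(
    'title' => array(
        'required' => array(
            'rule' => VALID_NOT_EMPTY,
            'message' => 'Title of a task cannot be empty'
        ),
        'length' => array(
            'rule' => array('maxLength', 255),
            'message' => 'Title cannot be longer than 255 characters'
        )
    )
);

Each rule is checked in turn, and the message of the first failing rule is shown next to the form field, exactly as in the empty-title case above.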
article-image-using-datastore-transactions-google-app
Packt
30 Nov 2010
12 min read
Save for later

Using Datastore Transactions in Google App

Google App Engine Java and GWT Application Development: build powerful, scalable, and interactive web applications in the cloud.

- Comprehensive coverage of building scalable, modular, and maintainable applications with GWT and GAE using Java
- Leverage the Google App Engine services and enhance your app functionality and performance
- Integrate your application with Google Accounts, Facebook, and Twitter
- Safely deploy, monitor, and maintain your GAE applications
- A practical guide with a step-by-step approach that helps you build an application in stages

As the App Engine documentation states, "A transaction is a Datastore operation or a set of Datastore operations that either succeed completely, or fail completely. If the transaction succeeds, then all of its intended effects are applied to the Datastore. If the transaction fails, then none of the effects are applied."

The use of transactions can be key to the stability of a multiprocess application (such as a web app) whose different processes share the same persistent Datastore. Without transactional control, the processes can overwrite each other's data updates midstream, essentially stomping all over each other's toes.

Many database implementations support some form of transactions, and you may be familiar with RDBMS transactions. App Engine Datastore transactions have a different set of requirements and usage model than you may be used to.

First, it is important to understand that a "regular" Datastore write on a given entity is atomic, in the sense that if you are updating multiple fields in that entity, they will either all be updated, or the write will fail and none of the fields will be updated. Thus, a single update can essentially be considered a small, implicit transaction, one that you as the developer do not explicitly declare. If one single update is initiated while another update on that entity is in progress, this can generate a "concurrency failure" exception. In the more recent versions of App Engine, such failures on single writes are now retried transparently by App Engine, so that you rarely need to deal with them in application-level code.

However, often your application needs stronger control over the atomicity and isolation of its operations, as multiple processes may be trying to read and write to the same objects at the same time. Transactions provide this control.

For example, suppose we are keeping a count of some value in a "counter" field of an object, which various methods can increment. It is important to ensure that if one Servlet reads the "counter" field and then updates it based on its current value, no other request has updated the same field between the time that its value is read and when it is updated. Transactions let you ensure that this is the case: if a transaction succeeds, it is as if it were done in isolation, with no other concurrent processes 'dirtying' its data.

Another common scenario: you may be making multiple changes to the Datastore, and you may want to ensure that the changes either all go through atomically, or none do. For example, when adding a new Friend to a UserAccount, we want to make sure that if the Friend is created, any related UserAccount object changes are also performed.

While a Datastore transaction is ongoing, no other transactions or operations can see the work being done in that transaction; it becomes visible only if the transaction succeeds.
Additionally, queries inside a transaction see a consistent "snapshot" of the Datastore as it was when the transaction was initiated. This consistent snapshot is preserved even after the in-transaction writes are performed. Unlike some other transaction models, with App Engine, a within-transaction read after a write will still show the Datastore as it was at the beginning of the transaction.

Datastore transactions can operate only on entities that are in the same entity group. We discuss entity groups later in this article.

Transaction commits and rollbacks

To specify a transaction, we need the concepts of a transaction commit and rollback. A transaction must make an explicit "commit" call when all of its actions have been completed. On successful transaction commit, all of the create, update, and delete operations performed during the transaction are effected atomically. If a transaction is rolled back, none of its Datastore modifications will be performed.

If you do not commit a transaction, it will be rolled back automatically when its Servlet exits. However, it is good practice to wrap a transaction in a try/finally block, and explicitly perform a rollback if the commit was not performed for some reason. This could occur, for example, if an exception was thrown. If a transaction commit fails, as would be the case if the objects under its control had been modified by some other process since the transaction was started, the transaction is automatically rolled back.

Example: a JDO transaction

With JDO, a transaction is initiated and terminated as follows:

import javax.jdo.PersistenceManager;
import javax.jdo.Transaction;
...
PersistenceManager pm = PMF.get().getPersistenceManager();
Transaction tx;
...
try {
    tx = pm.currentTransaction();
    tx.begin();
    // Do the transaction work
    tx.commit();
} finally {
    if (tx.isActive()) {
        tx.rollback();
    }
}

A transaction is obtained by calling the currentTransaction() method of the PersistenceManager. Then, initiate the transaction by calling its begin() method. To commit the transaction, call its commit() method. The finally clause in the example above checks to see if the transaction is still active, and does a rollback if that is the case.

While the preceding code is correct as far as it goes, it does not check to see if the commit was successful, and retry if it was not. We will add that next.

App Engine transactions use optimistic concurrency

In contrast to some other transactional models, the initiation of an App Engine transaction is never blocked. However, when the transaction attempts to commit, if there has been a modification in the meantime (by some other process) of any objects in the same entity group as the objects involved in the transaction, the transaction commit will fail. That is, the commit not only fails if the objects in the transaction have been modified by some other process, but also if any objects in its entity group have been modified. For example, if one request were to modify a FeedInfo object while its FeedIndex child was involved in a transaction as part of another request, that transaction would not successfully commit, as those two objects share an entity group.

App Engine uses an optimistic concurrency model. This means that there is no check when the transaction initiates as to whether the transaction's resources are currently involved in some other transaction, and no blocking on transaction start.
The commit simply fails if it turns out that these resources have been modified elsewhere after the transaction was initiated. Optimistic concurrency tends to work well in scenarios where quick response is valuable (as is the case with web apps) but contention is rare, and thus transaction failures are relatively rare.

Transaction retries

With optimistic concurrency, a commit can fail simply due to concurrent activity on the shared resource. In that case, if the transaction is retried, it is likely to succeed. So, one thing missing from the previous example is that it does not take any action if the transaction commit did not succeed. Typically, if a commit fails, it is worth simply retrying the transaction. If there is some contention for the objects in the transaction, it will probably be resolved when it is retried.

PersistenceManager pm = PMF.get().getPersistenceManager();
// ...
try {
    for (int i = 0; i < NUM_RETRIES; i++) {
        pm.currentTransaction().begin();
        // ...do the transaction work ...
        try {
            pm.currentTransaction().commit();
            break;
        } catch (JDOCanRetryException e1) {
            if (i == (NUM_RETRIES - 1)) {
                throw e1;
            }
        }
    }
} finally {
    if (pm.currentTransaction().isActive()) {
        pm.currentTransaction().rollback();
    }
    pm.close();
}

As shown in the example above, you can wrap a transaction in a retry loop, where NUM_RETRIES is set to the number of times you want to re-attempt the transaction. If a commit fails, a JDOCanRetryException will be thrown. If the commit succeeds, the for loop will be terminated.

If a transaction commit fails, this likely means that the Datastore has changed in the interim. So, next time through the retry loop, be sure to start over in gathering any information required to perform the transaction.

Transactions and entity groups

An entity's entity group is determined by its key. When an entity is created, its key can be defined as a child of another entity's key, which becomes its parent. The child is then in the same entity group as the parent. That child's key could in turn be used to define another entity's key, which becomes its child, and so on. An entity's key can be viewed as a path of ancestor relationships, traced back to a root entity with no parent. Every entity with the same root is in the same entity group. If an entity has no parent, it is its own root.

Because entity group membership is determined by an entity's key, and the key cannot be changed after the object is created, this means that entity group membership cannot be changed either.

As introduced earlier, a transaction can only operate on entities from the same entity group. If you try to access entities from different groups within the same transaction, an error will occur and the transaction will fail.

In App Engine, JDO owned relationships place the parent and child entities in the same entity group. That is why, when constructing an owned relationship, you cannot explicitly persist the children ahead of time, but must let the JDO implementation create them for you when the parent is made persistent. JDO will define the keys of the children in an owned relationship such that they are the child keys of the parent object key. This means that the parent and children in a JDO owned relationship can always be safely used in the same transaction. (The same holds with JPA owned relationships.) So in the Connectr app, for example, you could create a transaction that encompasses work on a UserAccount object and its list of Friends; they will all be in the same entity group.
But you could not include a Friend from a different UserAccount in that same transaction; it will not be in the same entity group. This App Engine constraint on transactions (that they can only encompass members of the same entity group) is enforced in order to allow transactions to be handled in a scalable way across App Engine's distributed Datastores. Entity group members are always stored together, not distributed.

Creating entities in the same entity group

As discussed earlier, one way to place entities in the same entity group is to create a JDO owned relationship between them; JDO will manage the child key creation so that the parent and children are in the same entity group.

To explicitly create an entity with an entity group parent, you can use the App Engine KeyFactory.Builder class. This is the approach used in the FeedIndex constructor example shown previously. Recall that you cannot change an object's key after it is created, so you have to make this decision when you are creating the object. Your "child" entity must use a primary key of type Key or String-encoded Key; these key types allow parent path information to be encoded in them. As you may recall, it is required to use one of these two types of keys for JDO owned relationship children, for the same reason.

If the data class of the object for which you want to create an entity group parent uses an app-assigned string ID, you can build its key as follows:

// You can construct a Builder from the parent's entity name and ID:
KeyFactory.Builder keyBuilder =
    new KeyFactory.Builder(Class1.class.getSimpleName(), parentIDString);

// Alternatively, pass the parent Key object:
Key pkey = ...;
KeyFactory.Builder keyBuilder = new KeyFactory.Builder(pkey);

// Then construct the child key
keyBuilder.addChild(Class2.class.getSimpleName(), childIDString);
Key ckey = keyBuilder.getKey();

Create a new KeyFactory.Builder using the key of the desired parent. You may specify the parent key as either a Key object or via its entity name (the simple name of its class) and its app-assigned (String) or system-assigned (numeric) ID, as appropriate. Then, call the addChild method of the Builder with its arguments: the entity name and the app-assigned ID string that you want to use. Then, call the getKey() method of the Builder. The generated child key encodes parent path information. Assign the result to the child entity's key field.

When the entity is persisted, its entity group parent will be that entity whose key was used as the parent. This is the approach we showed previously in the constructor of FeedIndex, creating its key using its parent FeedInfo key. See http://code.google.com/appengine/docs/java/javadoc/com/google/appengine/api/datastore/KeyFactory.Builder.html for more information on key construction.

If the data class of the object for which you want to create an entity group parent uses a system-assigned ID, then (because you don't know this ID ahead of time) you must go about creating the key in a different way. Create an additional field in your data class for the parent key, of the appropriate type for the parent key, as shown in the following code:

@PrimaryKey
@Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
private Key key;
...
@Persistent
@Extension(vendorName = "datanucleus", key = "gae.parent-pk", value = "true")
private String parentKey;

Assign the parent key to this field prior to creating the object. When the object is persisted, the data object's primary key field will be populated using the parent key as the entity group parent.
You can use this technique with any child key type.
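To make the explicit construction concrete, here is a minimal sketch (it assumes, as in the article's example, that the parent kind FeedInfo uses an app-assigned string ID; the class name, method name, and ID values are our own illustration):

import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;

public class ChildKeySketch {
    // Build a child key whose entity group parent is an
    // existing FeedInfo entity identified by its feed URL.
    public static Key childKeyFor(String feedUrl, String indexId) {
        Key parentKey = KeyFactory.createKey("FeedInfo", feedUrl);
        // The child key encodes the parent path, so an entity
        // persisted with this key joins the parent's entity group.
        return new KeyFactory.Builder(parentKey)
            .addChild("FeedIndex", indexId)
            .getKey();
    }
}

Because the parent path is baked into the key at construction time, the resulting entity can safely participate in the same transactions as its FeedInfo parent.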

article-image-understanding-expression-blend-and-how-use-it-silverlight-4
Packt
16 Apr 2010
5 min read
Save for later

Understanding Expression Blend and How to Use it with Silverlight 4

Creating applications in Expression Blend

What we've done so far falls short of some of the things you may have already seen and done in Silverlight. Hand editing XAML, assisted by Intellisense, works just fine to a point, but to create anything complex requires another tool to assist with turning our vision into code.

Intellisense is a feature of Visual Studio and Blend that auto-completes text when you start typing a keyword, method, or variable name.

Expression Blend may scare off developers at first with its radically different interface, but if you look more closely, you'll see that Blend has a lot in common with Visual Studio. For starters, both tools use the same Solution and Project file format. That means it's 100% compatible and enables tighter integration between developers and designers. You could even have the same project open in both Visual Studio and in Blend at the same time. Just be prepared to see the File Modified dialog box like the one below when switching between the two applications:

If you've worked with designers on a project before, they typically mock up an interface in a graphics program and ship it off to the development team. Many times, a simple graphic embellishment can cause us developers to develop heartburn. Anyone who's ever had to implement a rounded corner in HTML knows the special kind of frustration that it brings along. Here's the good news: those days are over with Silverlight.

A crash course in Expression Blend

In the following screenshot, our CakeNavigationButtons project is loaded into Expression Blend. Blend can be a bit daunting at first for developers who are used to Visual Studio, as Blend's interface is dense with a lot of subtle cues. Solutions and projects are opened in Blend in the same manner as in Visual Studio.

Just like in Visual Studio, you can customize Expression Blend's interface to suit your preference. You can move tabs around, and dock and undock them to create a workspace that works best for you, as the following screenshot demonstrates:

If you look at the CakeNavigationButtons project, on the left hand side of the application window you have the toolbar, which is substantially different from the toolbox in Visual Studio. The toolbar in Blend more closely resembles the toolbar in graphics editing software such as Adobe Photoshop or Adobe Illustrator. If you move the mouse over each button, you will see a tooltip that tells you what that button does, as well as the button's keyboard shortcut.

In the upper-left corner, you'll notice a tab labeled Projects. This is functionally equivalent to the Solution Explorer in Visual Studio. The asterisk next to MainPage.XAML indicates that the file has not been saved. Examine the next screenshot to see Blend's equivalent to Visual Studio's Solution Explorer:

If we look at the following screenshot, we find the Document tab control and the design surface, which Blend calls the art board. On the upper-right of the art board, there are three small buttons to switch between Design view, XAML view, and Split view. On the lower edge of the art board, there are controls to modify the view of the design surface. You can zoom in to take a closer look, turn on snap grid visibility, and turn the snapping to snap lines on or off.

If we then move to the upper-right corner of the next screen, we will see the Properties tab, which is a much more evolved version of the Properties tab in Visual Studio. As you can see in this screenshot, the color picker has a lot more to offer.
There's also a search feature that narrows down the items in the tab based on the property name you type in.

At the lower left side of the next screen, there is the Objects and Timeline view, which shows the object hierarchy of the open document. Since we have the MainPage.XAML of our CakeNavigationButtons project open, the view has a StackPanel with six Buttons, all inside a grid named LayoutRoot inside of a UserControl. Clicking on an item in this view selects the item on the art board, and vice versa. Expression Blend is an intricate and rich application.

Time for action – styles revisited

Earlier in this chapter, we created and referenced a style directly in the XAML in Visual Studio. Let's modify the style we made in Blend to see how to do it graphically. To do this, we will need to:

1. Open up the CakeNavigationButtons solution in Expression Blend.
2. In the upper right corner, there are three tabs (Properties, Resources, and Data). On the Resources tab, expand the tree node marked [UserControl] and click on the button highlighted below to edit the [Button default] resource.
3. Your art board should look something like this:
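For orientation, the resource being edited here is an ordinary XAML Style element. A minimal sketch of the kind of markup involved follows (the key name and setter values are illustrative placeholders, not the project's actual resource):

<UserControl.Resources>
    <!-- Illustrative style resource; the key and values are placeholders -->
    <Style x:Key="navButtonStyle" TargetType="Button">
        <Setter Property="FontSize" Value="14"/>
        <Setter Property="Margin" Value="4"/>
    </Style>
</UserControl.Resources>

Whatever you change graphically in Blend's style editor ends up as Setter entries like these, which is why the project remains fully editable from Visual Studio as well.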

Packt
14 Apr 2014
4 min read
Save for later

The Fabric library – the deployment and development task manager

Essentially, Fabric is a tool that allows the developer to execute arbitrary Python functions via the command line, and it also provides a set of functions for executing shell commands on remote servers via SSH. Combining these two things offers developers a powerful way to administer the application workflow without having to remember the series of commands that need to be executed on the command line. The library documentation can be found at http://fabric.readthedocs.org/.

Installing the library in PTVS is straightforward. Like all other libraries, to insert this library into a Django project, right-click on the Python 2.7 node in Python Environments of the Solution Explorer window. Then, select the Install Python Package entry.

The Python environment contextual menu

Clicking on it brings up the Install Python Package modal window, as shown in the following screenshot:

It's important to use easy_install to download from the Python Package Index. This will bring the precompiled versions of the library into the system instead of the plain Python C libraries that have to be compiled on the system.

Once the package is installed in the system, you can start creating tasks that can be executed outside your application from the command line. First, create a configuration file, fabfile.py, for Fabric. This file contains the tasks that Fabric will execute. The previous screenshot shows a really simple task: it prints out the string hello world once it's executed. You can execute it from the command prompt by using the Fabric command fab, as shown in the following screenshot:

Now that you know that the system is working fine, you can move on to the juicy part, where you can make some tasks that interact with a remote server through SSH. Create a task that connects to a remote machine and finds out the type of OS that runs on it.

The env object provides a way to add credentials to Fabric in a programmatic way.

We have defined a Python function, host_type, that runs a POSIX command, uname -s, on the remote machine. We also set up a couple of variables to tell Fabric which remote machine we are connecting to (env.hosts) and the password that has to be used to access that machine (env.password). It's never a good idea to put plain passwords into the source code, as is shown in the preceding screenshot example.

Now, we can execute the host_type task in the command line as follows:

The Fabric library connects to the remote machine with the information provided and executes the command on the server. Then, it brings back the result of the command itself in the output part of the response.

We can also create tasks that accept parameters from the command line. Create a task that echoes a message on the remote machine, starting with a parameter, as shown in the following screenshot:

The following are two examples of how the task can be executed:

We can also create a helper function that executes an arbitrary command on the remote machine as follows:

def execute(cmd):
    run(cmd)

We are also able to upload a file to the remote server by using put:

The first argument of put is the local file you want to upload and the second one is the destination folder's filename. Let's see what happens:

Deploying process with Fabric

The possibilities of using Fabric are really endless, since the tasks can be written in plain Python language.
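Since the task definitions above survive only as screenshots, here is a minimal fabfile.py sketch reconstructing them (Fabric 1.x API assumed; the host address, password, and file names are placeholders, and, as noted above, real credentials should never live in source code):

# fabfile.py - a sketch of the tasks described in the text
from fabric.api import env, run, put

env.hosts = ['192.168.1.100']  # placeholder remote host
env.password = 'secret'        # placeholder; do not hardcode real passwords

def hello():
    # The first, local-only task: prints hello world.
    print('hello world')

def host_type():
    # Runs a POSIX command on the remote machine over SSH.
    run('uname -s')

def echo(message):
    # A task that accepts a parameter from the command line,
    # executed as: fab echo:"some message"
    run('echo %s' % message)

def execute(cmd):
    # Helper that executes an arbitrary remote command.
    run(cmd)

def upload():
    # Uploads a local file to a destination path on the remote machine.
    put('local_file.txt', '/tmp/remote_file.txt')

With this file in the current directory, fab hello, fab host_type, or fab echo:"hello world" can be run from the command prompt exactly as shown in the screenshots.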
This provides the opportunity to automate many operations and focus more on development instead of on how to deploy your code to servers and maintain them.

Summary

This article provided you with an in-depth look at remote task management and schema migrations using the third-party Python library Fabric.

Resources for Article:

Further resources on this subject:
- Through the Web Theming using Python [Article]
- Web Scraping with Python [Article]
- Python Data Persistence using MySQL [Article]

article-image-nhibernate-30-using-named-queries-data-access-layer
Packt
15 Oct 2010
4 min read
Save for later

NHibernate 3.0: Using named queries in the data access layer

Getting ready

1. Download the latest release of the Common Service Locator from http://commonservicelocator.codeplex.com, and extract Microsoft.Practices.ServiceLocation.dll to your solution's libs folder.
2. Complete the previous recipe, Setting up an NHibernate repository.
3. Following the Fast testing with SQLite in-memory database recipe in the previous article, create a new NHibernate test project named Eg.Core.Data.Impl.Test.
4. Include the Eg.Core.Data.Impl assembly as an additional mapping assembly in your test project's App.Config with the following XML:

   <mapping assembly="Eg.Core.Data.Impl"/>

How to do it...

1. In the Eg.Core.Data project, add a folder for the Queries namespace.
2. Add the following IQuery interfaces:

   public interface IQuery { }

   public interface IQuery<TResult> : IQuery
   {
       TResult Execute();
   }

3. Add the following IQueryFactory interface:

   public interface IQueryFactory
   {
       TQuery CreateQuery<TQuery>() where TQuery : IQuery;
   }

4. Change the IRepository interface to implement the IQueryFactory interface, as shown in the following code:

   public interface IRepository<T> : IEnumerable<T>, IQueryFactory
       where T : Entity
   {
       void Add(T item);
       bool Contains(T item);
       int Count { get; }
       bool Remove(T item);
   }

5. In the Eg.Core.Data.Impl project, change the NHibernateRepository constructor and add the _queryFactory field, as shown in the following code:

   private readonly IQueryFactory _queryFactory;

   public NHibernateRepository(ISessionFactory sessionFactory,
       IQueryFactory queryFactory)
       : base(sessionFactory)
   {
       _queryFactory = queryFactory;
   }

6. Add the following method to NHibernateRepository:

   public TQuery CreateQuery<TQuery>() where TQuery : IQuery
   {
       return _queryFactory.CreateQuery<TQuery>();
   }

7. In the Eg.Core.Data.Impl project, add a folder for the Queries namespace.
8. To the Eg.Core.Data.Impl project, add a reference to Microsoft.Practices.ServiceLocation.dll.
9. To the Queries namespace, add this QueryFactory class:

   public class QueryFactory : IQueryFactory
   {
       private readonly IServiceLocator _serviceLocator;

       public QueryFactory(IServiceLocator serviceLocator)
       {
           _serviceLocator = serviceLocator;
       }

       public TQuery CreateQuery<TQuery>() where TQuery : IQuery
       {
           return _serviceLocator.GetInstance<TQuery>();
       }
   }

10. Add the following NHibernateQueryBase class:

    public abstract class NHibernateQueryBase<TResult>
        : NHibernateBase, IQuery<TResult>
    {
        protected NHibernateQueryBase(ISessionFactory sessionFactory)
            : base(sessionFactory) { }

        public abstract TResult Execute();
    }

11. Add an INamedQuery interface, as shown in the following code:

    public interface INamedQuery
    {
        string QueryName { get; }
    }

12. Add a NamedQueryBase class, as shown in the following code:

    public abstract class NamedQueryBase<TResult>
        : NHibernateQueryBase<TResult>, INamedQuery
    {
        protected NamedQueryBase(ISessionFactory sessionFactory)
            : base(sessionFactory) { }

        public override TResult Execute()
        {
            var nhQuery = GetNamedQuery();
            return Transact(() => Execute(nhQuery));
        }

        protected abstract TResult Execute(IQuery query);

        protected virtual IQuery GetNamedQuery()
        {
            var nhQuery = session.GetNamedQuery(
                ((INamedQuery) this).QueryName);
            SetParameters(nhQuery);
            return nhQuery;
        }

        protected abstract void SetParameters(IQuery nhQuery);

        public virtual string QueryName
        {
            get { return GetType().Name; }
        }
    }

13. In Eg.Core.Data.Impl.Test, add a test fixture named QueryTests inherited from NHibernateFixture.
14. Add the following test and three helper methods:

    [Test]
    public void NamedQueryCheck()
    {
        var errors = new StringBuilder();
        var queryObjectTypes = GetNamedQueryObjectTypes();
        var mappedQueries = GetNamedQueryNames();
        foreach (var queryType in queryObjectTypes)
        {
            var query = GetQuery(queryType);
            if (!mappedQueries.Contains(query.QueryName))
            {
                errors.AppendFormat(
                    "Query object {0} references non-existent " +
                    "named query {1}.",
                    queryType, query.QueryName);
                errors.AppendLine();
            }
        }
        if (errors.Length != 0)
            Assert.Fail(errors.ToString());
    }

    private IEnumerable<Type> GetNamedQueryObjectTypes()
    {
        var namedQueryType = typeof(INamedQuery);
        var queryImplAssembly = typeof(BookWithISBN).Assembly;
        var types = from t in queryImplAssembly.GetTypes()
                    where namedQueryType.IsAssignableFrom(t)
                          && t.IsClass
                          && !t.IsAbstract
                    select t;
        return types;
    }

    private IEnumerable<string> GetNamedQueryNames()
    {
        var nhCfg = NHConfigurator.Configuration;
        var mappedQueries = nhCfg.NamedQueries.Keys
            .Union(nhCfg.NamedSQLQueries.Keys);
        return mappedQueries;
    }

    private INamedQuery GetQuery(Type queryType)
    {
        return (INamedQuery) Activator.CreateInstance(
            queryType, new object[] { SessionFactory });
    }

15. For our example query, in the Queries namespace of Eg.Core.Data, add the following interface:

    public interface IBookWithISBN : IQuery<Book>
    {
        string ISBN { get; set; }
    }

16. Add the implementation to the Queries namespace of Eg.Core.Data.Impl using the following code:

    public class BookWithISBN : NamedQueryBase<Book>, IBookWithISBN
    {
        public BookWithISBN(ISessionFactory sessionFactory)
            : base(sessionFactory) { }

        public string ISBN { get; set; }

        protected override void SetParameters(NHibernate.IQuery nhQuery)
        {
            nhQuery.SetParameter("isbn", ISBN);
        }

        protected override Book Execute(NHibernate.IQuery query)
        {
            return query.UniqueResult<Book>();
        }
    }

17. Finally, add the embedded resource mapping, BookWithISBN.hbm.xml, to Eg.Core.Data.Impl with the following XML code:

    <?xml version="1.0" encoding="utf-8" ?>
    <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
      <query name="BookWithISBN">
        <![CDATA[ from Book b where b.ISBN = :isbn ]]>
      </query>
    </hibernate-mapping>
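As a quick usage sketch, assuming the QueryFactory has been registered with the service locator and an IRepository<Book> has been wired up as above (the variable name and ISBN value are illustrative):

// Resolve the query object through the repository and execute it.
var query = bookRepository.CreateQuery<IBookWithISBN>();
query.ISBN = "978-1-849513-04-3";
Book book = query.Execute();

Because NamedQueryBase derives the query name from the type name, the call above runs the named query BookWithISBN from the embedded mapping file, and the NamedQueryCheck test guards against query objects that reference a named query that was never mapped.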
article-image-using-business-rules-define-decision-points-oracle-soa-suite-part-1
Packt
28 Oct 2009
11 min read
Save for later

Using Business Rules to Define Decision Points in Oracle SOA Suite: Part 1

The advantage of separating out decision points as external rules is that we not only ensure that each rule is used in a consistent fashion, but in addition make it simpler and quicker to modify; that is, we only have to modify a rule once and can do this with almost immediate effect, thus increasing the agility of our solution.

Business Rule concepts

Before we implement our first rule, let's briefly introduce the key components which make up a Business Rule. These are:

- Facts: Represent the data or business objects that rules are applied to.
- Rules: A rule consists of two parts: an IF part, which consists of one or more tests to be applied to fact(s), and a THEN part, which lists the actions to be carried out should the test evaluate to true.
- Rule Set: As the name implies, it is just a set of one or more related rules that are designed to work together.
- Dictionary: A dictionary is the container of all components that make up a business rule; it holds all the facts, rule sets, and rules for a business rule. In addition, a dictionary may also contain functions, variables, and constraints. We will introduce these in more detail later in this article.

To execute a business rule, you submit one or more facts to the rules engine. It will apply the rules to the facts; that is, each fact will be tested against the IF part of the rule, and if it evaluates to true, the engine will perform the specified actions for that fact. This may result in the creation of new facts or the modification of existing facts (which may result in further rule evaluation).

Leave approval rule

To begin with, we will write a simple rule to automatically approve a leave request that is of type Vacation and only for one day's duration. A pretty trivial example, but once we've done this we will look at how to extend this rule to handle more complex examples.

Using the Rule Author

In SOA Suite 10.1.3 you use the Rule Author, which is a browser-based interface for defining your business rules. To launch the Rule Author within your browser, go to the following URL:

http://<host name>:<port number>/ruleauthor/

This will bring up the Rule Author Log In screen. Here you need to log in as a user that belongs to the rule-administrators role. You can either log in as the user oc4jadmin (default password Welcome1), which automatically belongs to this group, or define your own user.

Creating a Rule Repository

Within Oracle Business Rules, all of our definitions (that is, facts, constraints, variables, and functions) and rule sets are defined within a dictionary. A dictionary is held within a repository. A repository can contain multiple dictionaries and can also contain multiple versions of a dictionary. So, before we can write any rules, we need to either connect to an existing repository or create a new one.

Oracle Business Rules supports two types of repository: File based and WebDAV. For simplicity we will use a File based repository, though typically in production you would want to use a WebDAV based repository, as this makes it simpler to share rules between multiple BPEL processes.

WebDAV is short for Web-based Distributed Authoring and Versioning. It is an extension to HTTP that allows users to collaboratively edit and manage files (that is, business rules in our case) over the Web.
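Looking ahead, the leave approval rule we are about to build can be written informally in the IF/THEN form described above (this is orientation pseudocode, not Rule Author syntax, and treating a request whose start and end dates are equal as one day long is our simplification):

IF   leaveRequest.leaveType = "Vacation"
AND  leaveRequest.startDate = leaveRequest.endDate
THEN set leaveRequest.requestStatus to "Approved"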
To create a File based repository, click on the Repository tab within the Rule Author; this will display the Repository Connect screen as shown in the following screenshot:

From here we can either connect to an existing repository (WebDAV or File based) or create and connect to a new File based repository. For our purposes, select a Repository Type of File, specify the full path name of where you want to create the repository, and then click Create.

To use a WebDAV repository, you will first need to create this externally from the Rule Author. Details on how to do this can be found in Appendix B of the Oracle Business Rules User Guide (http://download.oracle.com/docs/cd/B25221_04/web.1013/b15986/toc.htm). From a development perspective it can often be more convenient to develop your initial business rules in a file repository. Once complete, you can then export the rules from the file repository and import them into a WebDAV repository.

Creating a dictionary

Once we have connected to a repository, the next step is to create a dictionary. Click on the Create tab, circled in the following screenshot, and this will bring up the Create Dictionary screen. Enter a New Dictionary Name (for example, LeaveApproval) and click Create. This will create and load the dictionary so it's ready to use. Once you have created a dictionary, the next time you connect to the repository you will select the Load tab (next to the Create tab) to load it.

Defining facts

Before we can define any rules, we first need to define the facts that the rules will be applied to. Click on the Definitions tab; this will bring up the page which summarizes all the facts defined within the current dictionary. You will see from this that the rule engine supports three types of facts: Java Facts, XML Facts, and RL Facts. The type of fact that you want to use really depends on the context in which you will be using the rules engine. For example, if you are calling the rule engine from Java, then you would work with Java Facts, as this provides a more integrated way of combining the two components. As we are using the rule engine with BPEL, it makes sense to use XML Facts.

Creating XML Facts

The Rule Author uses XML Schemas to generate JAXB 1.0 classes, which are then imported to generate the corresponding XML Facts. For our example we will use the Leave Request schema, shown as follows for convenience:

<?xml version="1.0" encoding="windows-1252"?>
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns="http://schemas.packtpub.com/LeaveRequest"
            targetNamespace="http://schemas.packtpub.com/LeaveRequest"
            elementFormDefault="qualified">
  <xsd:element name="leaveRequest" type="tLeaveRequest"/>
  <xsd:complexType name="tLeaveRequest">
    <xsd:sequence>
      <xsd:element name="employeeId" type="xsd:string"/>
      <xsd:element name="fullName" type="xsd:string"/>
      <xsd:element name="startDate" type="xsd:date"/>
      <xsd:element name="endDate" type="xsd:date"/>
      <xsd:element name="leaveType" type="xsd:string"/>
      <xsd:element name="leaveReason" type="xsd:string"/>
      <xsd:element name="requestStatus" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:schema>

Using JAXB, particularly when used in conjunction with BPEL, places a number of constraints on how we define our XML Schemas, including:

- When defining rules, the Rule Author can only work with globally defined types. This is because it's unable to introspect the properties (that is, attributes and elements) of global elements.
- Within BPEL you can only define variables based on globally defined elements.
The net result is that any facts we want to pass from BPEL to the rules engine (or vice versa) must be defined as global elements for BPEL and have a corresponding global type definition so that we can define rules against them. The simplest way to achieve this is to define a global type (for example, tLeaveRequest in the above schema) and then define a corresponding global element based on that type (for example, leaveRequest in the above schema).

Even though it is perfectly acceptable with XML Schemas to use the same name for both elements and types, it presents problems for JAXB, hence the approach taken above where we have prefixed every type definition with t, as in tLeaveRequest. Fortunately, this approach corresponds to best practice for XML Schema design.

The final point you need to be aware of is that when creating XML Facts, the JAXB processor maps the type xsd:decimal to java.lang.BigDecimal and xsd:integer to java.lang.BigInteger. This means you can't use the standard operators (for example >, >=, <=, and <) within your rules to compare properties of these types. To simplify your rules, within your XML Schemas use xsd:double in place of xsd:decimal and xsd:int in place of xsd:integer.

To generate XML Facts, from the XML Fact Summary screen (shown previously), click Create; this will display the XML Schema Selector page as shown:

Here we need to specify the location of the XML Schema; this can either be an absolute path to an xsd file containing the schema or a URL. Next we need to specify a temporary JAXB Class Directory in which the generated JAXB classes are to be created. Finally, for the Target Package Name we can optionally specify a unique name that will be used as the Java package name for the generated classes. If we leave this blank, the package name will be automatically generated based on the target namespace of the XML Schema using the JAXB XML-to-Java mapping rules. For example, our leave request schema has a target namespace of http://schemas.packtpub.com/LeaveRequest; this will result in a package name of com.packtpub.schemas.leaverequest.

Next click on Add Schema; this will cause the Rule Author to generate the JAXB classes for our schema in the specified directory. This will update the XML Fact Summary screen to show details of the generated classes; expand the class navigation tree until you can see the list of all the generated classes, as shown in the following screenshot:

Select the top level node (that is, com) to specify that we want to import all the generated classes. We need to import the TLeaveRequest class, as this is the one we will use to implement rules, and the LeaveRequest class, as we need this to pass facts of this type from BPEL to the rules engine. The ObjectFactory class is optional, but we will need it if we want to generate new LeaveRequest facts within our rule sets. Although we don't need to do this at the moment, it makes sense to import it now in case we need it in the future.

Once we have selected the classes to be imported, click Import (circled in the previous screenshot) to load them into the dictionary. The Rule Author will display a message to confirm that the classes have been successfully imported. If you check the list of generated JAXB classes, you will see that the imported classes are shown in bold.
In the process of importing your facts, the Rule Author will assign default aliases to each fact and a default alias to all properties that make up a fact, where a property corresponds to either an element or an attribute in the XML Schema.

Using aliases

Oracle Business Rules allows you to specify your own aliases for facts and properties in order to define more business-friendly names which can then be used when writing rules. For XML Facts, if you have followed standard naming conventions when defining your XML Schemas, we typically find that the default aliases are clear enough, and that if you start defining aliases it can actually cause more confusion unless applied consistently across all facts.

Hiding facts and properties

The Rule Author lets you hide facts and properties so that they don't appear in the drop-downs within the Rule Author. For facts which have a large number of properties, hiding some of these can be worthwhile, as it can simplify the creation of rules. Another obvious use of this might be to hide all the facts based on elements, since we won't be implementing any rules directly against these. However, any facts you hide will also be hidden from BPEL, so you won't be able to pass facts of these types from BPEL to the rules engine (or vice versa). In reality, the only fact you will typically want to hide will be the ObjectFactory (as you will have one of these per XML Schema that you import).

Saving the rule dictionary

As you define your business rules, it makes sense to save your work at regular intervals. To save the dictionary, click on the Save Dictionary link in the top right hand corner of the Rule Author page. This will bring up the Save Dictionary page. Here, either click on the Save button to update the current version of the dictionary with your changes or, if you want to save the dictionary as a new version or under a new dictionary name, click on the Save As link and amend the dictionary name and version as appropriate.

Packt
08 Mar 2013
19 min read
Save for later

Painting – Multi-finger Paint

What is multi-touch?

The genesis of multi-touch on Mac OS X was the ability to perform two finger scrolling on a trackpad. The technology was further refined on mobile touch screen devices such as the iPod Touch, iPhone, and iPad. And it has also matured on the Mac OS X platform to allow the use of a multi-touch or magic trackpad combined with one or more fingers and a motion to interact with the computer. Gestures are intuitive and allow us to control what is on the screen with fluid motions. Some of the things that we can do using multi-touch are as follows:

- Two finger scrolling: This is done by placing two fingers on the trackpad and dragging in a line.
- Tap or pinch to zoom: This is done by tapping once with a single finger, or by placing two fingers on the trackpad and dragging them closer to each other.
- Swipe to navigate: This is done by placing one or more fingers on the trackpad and quickly dragging in any direction, followed by lifting all the fingers.
- Rotate: This is done by placing two fingers on the trackpad and turning them in a circular motion while keeping them on the trackpad.

But these gestures just touch the surface of what is possible with multi-touch hardware. The magic trackpad can detect and track all 10 of our fingers with ease. There are plenty of new things that can be done with multi-touch; we are just waiting for someone to invent them.

Implementing a custom view

Multi-touch events are sent to NSView objects. So before we can invent that great new multi-touch thing, we first need to understand how to implement a custom view. Essentially, a custom view is a subclass of NSView that overrides some of the behavior of the NSView object. Primarily, it will override the drawRect: method and some of the event handling methods.

Time for action – creating a GUI with a custom view

By now we should be familiar with creating new Xcode projects, so some of the steps here are very high level. Let's get started!

1. Create a new Xcode project with Automatic Reference Counting enabled and these options set as follows:

   Option              Value
   Product Name        Multi-Finger Paint
   Company Identifier  com.yourdomain
   Class Prefix        Your initials

2. After Xcode creates the new project, design an icon and drag it in to the App Icon field on the TARGET Summary. Remember to set the Organization in the Project Document section of the File inspector.
3. Click on the filename MainMenu.xib in the project navigator.
4. Select the Multi-Finger Paint window and in the Size inspector change its Width and Height to 700 and 600 respectively. Enable both the Minimum Size and Maximum Size Constraints values.
5. From the Object Library, drag a custom view into the window.
6. In the Size inspector, change the Width and Height of the custom view to 400 and 300 respectively. Center the window using the guides that appear.
7. From the File menu, select New, then select the File… option.
8. Select the Mac OS X Cocoa Objective-C class template and click on the Next button.
9. Name the class BTSFingerView and select a subclass of NSView. It is very important that the subclass is NSView. If we make a mistake and select the wrong subclass, our App won't work.
10. Click on the button titled Create to create the .h and .m files.
11. Click on the filename BTSFingerView.m and look at it carefully. It should look similar to the following code:

//
// BTSFingerView.m
// Multi-Finger Paint
//
// Created by rwiebe on 12-05-23.
// Copyright (c) 2012 BurningThumb Software. All rights reserved.
//

#import "BTSFingerView.h"

@implementation BTSFingerView

- (id)initWithFrame:(NSRect)frame
{
    self = [super initWithFrame:frame];

    if (self) {
        // Initialization code here.
    }

    return self;
}

- (void)drawRect:(NSRect)dirtyRect
{
    // Drawing code here.
}

@end

By default, custom views do not receive events (keyboard, mouse, trackpad, and so on), but we need our custom view to receive events. To ensure our custom view will receive events, add the following code to the BTSFingerView.m file to accept first responder:

/*
** - (BOOL) acceptsFirstResponder
**
** Make sure the view will receive
** events.
**
** Input: none
**
** Output: YES to accept, NO to reject
*/
- (BOOL) acceptsFirstResponder
{
    return YES;
}

And, still in the BTSFingerView.m file, modify the initWithFrame: method to allow the view to accept touch events from the trackpad as follows:

- (id)initWithFrame:(NSRect)frame
{
    self = [super initWithFrame:frame];

    if (self) {
        // Initialization code here.

        // Accept trackpad events
        [self setAcceptsTouchEvents: YES];
    }

    return self;
}

Once we are sure our custom view will receive events, we can start the process of drawing its content. This is done in the drawRect: method. Add the following code to the drawRect: method to clear it with a transparent color and draw a focus ring if the view is first responder:

/*
** - (void)drawRect:(NSRect)dirtyRect
**
** Draw the view content
**
** Input: dirtyRect - the rectangle to draw
**
** Output: none
*/
- (void)drawRect:(NSRect)dirtyRect
{
    // Drawing code here.

    // Preserve the graphics state
    // so that other things we draw
    // don't get focus rings
    [NSGraphicsContext saveGraphicsState];

    // Color the background transparent
    [[NSColor clearColor] set];

    // If this view has accepted first responder
    // it should draw the focus ring
    if ([[self window] firstResponder] == self) {
        NSSetFocusRingStyle(NSFocusRingAbove);
    }

    // Fill the view with fully transparent
    // color so that we can see through it
    // to whatever is below
    [[NSBezierPath bezierPathWithRect:[self bounds]] fill];

    // Restore the graphics state
    // so that other things we draw
    // don't get focus rings
    [NSGraphicsContext restoreGraphicsState];
}

Next, we need to go back into the .xib file and select our custom view, and then select the Identity inspector, where we will see that in the section titled Custom Class, the Class field contains NSView as the class. Finally, to connect this object to our new custom view program code, we need to change the Class to BTSFingerView, as shown in the following screenshot:

What just happened?

We created our Xcode project and implemented a custom NSView object that will receive events. When we run the project, we notice that the focus ring is drawn, so we can be confident the view has accepted first responder status.

How to receive multi-touch events

Because our custom view accepts first responder, the Mac OS will automatically send events to it. We can override the methods that process the events that we want to handle in our view. Specifically, we can override the following methods to handle multi-touch events in our custom view:

- (void)touchesBeganWithEvent:(NSEvent *)event
- (void)touchesMovedWithEvent:(NSEvent *)event
- (void)touchesEndedWithEvent:(NSEvent *)event
- (void)touchesCancelledWithEvent:(NSEvent *)event

Time for action – drawing our fingers

When the multi-touch or magic trackpad is touched, our custom view methods will be invoked and we will be able to draw the placement of our fingers on the trackpad in our custom view.
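One piece of background before the steps: an NSTouch reports its location as a normalizedPosition in the range 0.0 to 1.0 relative to the trackpad, so drawing code must scale it into view coordinates. A minimal sketch of that conversion follows (a hypothetical helper for illustration, not part of the book's listings; the drawing code later performs the same arithmetic inline):

// Hypothetical helper: convert a touch's normalized position
// (0.0 to 1.0 on the trackpad) into this view's coordinate space.
- (NSPoint)viewPointForTouch:(NSTouch *)touch
{
    NSPoint l_point = [touch normalizedPosition];

    l_point.x *= [self bounds].size.width;
    l_point.y *= [self bounds].size.height;

    return l_point;
}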
1. In Xcode, click on the filename BTSFingerView.h in the project navigator and add the following highlighted property:

//
// BTSFingerView.h
// Multi-Finger Paint
//
// Created by rwiebe on 12-05-23.
// Copyright (c) 2012 BurningThumb Software. All rights reserved.
//

#import <Cocoa/Cocoa.h>

@interface BTSFingerView : NSView

// A reference to the object that will
// store the currently active touches
@property (strong) NSMutableDictionary *m_activeTouches;

@end

2. In Xcode, click on the file BTSFingerView.m in the project navigator and add the following program code to synthesize the property:

//
// BTSFingerView.m
// Multi-Finger Paint
//
// Created by rwiebe on 12-05-23.
// Copyright (c) 2012 BurningThumb Software. All rights reserved.
//

#import "BTSFingerView.h"

@implementation BTSFingerView

// Synthesize the object that will
// store the currently active touches
@synthesize m_activeTouches;

3. Add the following code to the initWithFrame: method in the BTSFingerView.m file to create the dictionary object that will be used to store the active touch objects:

- (id)initWithFrame:(NSRect)frame
{
    self = [super initWithFrame:frame];

    if (self) {
        // Initialization code here.

        // Create the mutable dictionary that
        // will hold the list of currently active
        // touch events
        m_activeTouches = [[NSMutableDictionary alloc] init];
    }

    return self;
}

4. Add the following code to the BTSFingerView.m file to add began touches to the dictionary of active touches:

/**
** - (void)touchesBeganWithEvent:(NSEvent *)event
**
** Invoked when a finger touches the trackpad
**
** Input: event - the touch event
**
** Output: none
*/
- (void)touchesBeganWithEvent:(NSEvent *)event
{
    // Get the set of began touches
    NSSet *l_touches =
        [event touchesMatchingPhase:NSTouchPhaseBegan inView:self];

    // For each began touch, add the touch
    // to the active touches dictionary
    // using its identity as the key
    for (NSTouch *l_touch in l_touches) {
        [m_activeTouches setObject:l_touch forKey:l_touch.identity];
    }

    // Redisplay the view
    [self setNeedsDisplay:YES];
}
5. Add the following code to the BTSFingerView.m file to update moved touches in the dictionary of active touches:

/**
** - (void)touchesMovedWithEvent:(NSEvent *)event
**
** Invoked when a finger moves on the trackpad
**
** Input: event - the touch event
**
** Output: none
*/
- (void)touchesMovedWithEvent:(NSEvent *)event
{
    // Get the set of moved touches
    NSSet *l_touches =
        [event touchesMatchingPhase:NSTouchPhaseMoved inView:self];

    // For each moved touch, update the touch
    // in the active touches dictionary
    // using its identity as the key
    for (NSTouch *l_touch in l_touches) {
        // Update the touch only if it is found
        // in the active touches dictionary
        if ([m_activeTouches objectForKey:l_touch.identity]) {
            [m_activeTouches setObject:l_touch forKey:l_touch.identity];
        }
    }

    // Redisplay the view
    [self setNeedsDisplay:YES];
}

6. Add the following code to the BTSFingerView.m file to remove the touch from the dictionary of active touches when the touch ends:

/**
** - (void)touchesEndedWithEvent:(NSEvent *)event
**
** Invoked when a finger lifts off the trackpad
**
** Input: event - the touch event
**
** Output: none
*/
- (void)touchesEndedWithEvent:(NSEvent *)event
{
    // Get the set of ended touches
    NSSet *l_touches =
        [event touchesMatchingPhase:NSTouchPhaseEnded inView:self];

    // For each ended touch, remove the touch
    // from the active touches dictionary
    // using its identity as the key
    for (NSTouch *l_touch in l_touches) {
        [m_activeTouches removeObjectForKey:l_touch.identity];
    }

    // Redisplay the view
    [self setNeedsDisplay:YES];
}

7. Add the following code to the BTSFingerView.m file to remove the touch from the dictionary of active touches when the touch is cancelled:

/**
** - (void)touchesCancelledWithEvent:(NSEvent *)event
**
** Invoked when a touch is cancelled
**
** Input: event - the touch event
**
** Output: none
*/
- (void)touchesCancelledWithEvent:(NSEvent *)event
{
    // Get the set of cancelled touches
    NSSet *l_touches =
        [event touchesMatchingPhase:NSTouchPhaseCancelled inView:self];

    // For each cancelled touch, remove the touch
    // from the active touches dictionary
    // using its identity as the key
    for (NSTouch *l_touch in l_touches) {
        [m_activeTouches removeObjectForKey:l_touch.identity];
    }

    // Redisplay the view
    [self setNeedsDisplay:YES];
}

8. When we touch the trackpad, we are going to draw a "finger cursor" in our custom view. We need to decide how big we want that cursor to be and what color we want it to be. Then we can add a series of #define statements to the file named BTSFingerView.h to define those values:

// Define the size of the cursor that
// will be drawn in the view for each
// finger on the trackpad
#define D_FINGER_CURSOR_SIZE 20

// Define the color values that will
// be used for the finger cursor
#define D_FINGER_CURSOR_RED 1.0
#define D_FINGER_CURSOR_GREEN 0.0
#define D_FINGER_CURSOR_BLUE 0.0
#define D_FINGER_CURSOR_ALPHA 0.5

9. Now we can add the program code to our drawRect: implementation that will draw the finger cursors in the custom view.
// For each active touch
for (NSTouch *l_touch in m_activeTouches.allValues) {

    // Create a rectangle reference to hold the
    // location of the cursor
    NSRect l_cursor;

    // Determine the touch point
    NSPoint l_touchNP = [l_touch normalizedPosition];

    // Calculate the pixel position of the touch point
    l_touchNP.x = l_touchNP.x * [self bounds].size.width;
    l_touchNP.y = l_touchNP.y * [self bounds].size.height;

    // Calculate the rectangle around the cursor
    l_cursor.origin.x = l_touchNP.x - (D_FINGER_CURSOR_SIZE / 2);
    l_cursor.origin.y = l_touchNP.y - (D_FINGER_CURSOR_SIZE / 2);
    l_cursor.size.width = D_FINGER_CURSOR_SIZE;
    l_cursor.size.height = D_FINGER_CURSOR_SIZE;

    // Set the color of the cursor
    [[NSColor colorWithDeviceRed: D_FINGER_CURSOR_RED
                           green: D_FINGER_CURSOR_GREEN
                            blue: D_FINGER_CURSOR_BLUE
                           alpha: D_FINGER_CURSOR_ALPHA] set];

    // Draw the cursor as a circle
    [[NSBezierPath bezierPathWithOvalInRect: l_cursor] fill];
}

What just happened?

We implemented the methods required to keep track of the touches and to draw the location of the touches in our custom view. If we run the App now, move the mouse pointer over the view area, and then touch the trackpad, we will see red circles that track our fingers being drawn in the view, as shown in the following screenshot:

What is an NSBezierPath?

A Bezier path consists of straight and curved line segments that can be used to draw recognizable shapes. In our program code, we use Bezier paths to draw a rectangle and a circle, but a Bezier path can be used to draw many other shapes.
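For instance, as a rough illustration only (this fragment is not part of the Multi-Finger Paint project), the same class can outline a triangle built from three straight line segments:

// A sketch of using NSBezierPath for another shape:
// build a triangle from three line segments and stroke it
NSBezierPath *l_trianglePath = [NSBezierPath bezierPath];

// Start at the bottom-left corner
[l_trianglePath moveToPoint:NSMakePoint(10.0, 10.0)];

// Add the two remaining sides
[l_trianglePath lineToPoint:NSMakePoint(50.0, 90.0)];
[l_trianglePath lineToPoint:NSMakePoint(90.0, 10.0)];

// Close the path back to the starting point
[l_trianglePath closePath];

// Outline the triangle with a 2-pixel line
[l_trianglePath setLineWidth:2.0];
[l_trianglePath stroke];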
How to manage the mouse cursor

One of the interesting things about the trackpad and the mouse is the association between a single finger touch and the movement of the mouse cursor. Essentially, Mac OS X treats a single finger movement as if it were a mouse movement. The problem with this is that when we move just a single finger on the trackpad, the mouse cursor will move away from our NSView, causing it to lose focus, so that when we lift our finger we need to move the mouse cursor back to our NSView to receive touch events.

Time for action — detaching the mouse cursor from the mouse hardware

The solution to this problem is to detach the mouse cursor from the mouse hardware (typically called capturing the mouse) whenever a touch event is active so that the cursor is not moved by touch events. In addition, since a "stuck" mouse cursor may be cause for concern to our App user, we can hide the mouse cursor when touches are active.

In Xcode, click on the file named BTSFingerView.h in the project navigator and add the following flag to the interface:

@interface BTSFingerView : NSView {
    // Define a flag so that touch methods can behave
    // differently depending on the visibility of
    // the mouse cursor
    BOOL m_cursorIsHidden;
}

In Xcode, click on the file named BTSFingerView.m in the project navigator. Add the following code to the beginning of the touchesBeganWithEvent: method to detach and hide the mouse cursor when a touch begins. We only want to do this one time, so it is guarded by a BOOL flag and an if statement to make sure we don't do it for every touch that begins.

- (void)touchesBeganWithEvent:(NSEvent *)event
{
    // If the mouse cursor is not already hidden,
    if (NO == m_cursorIsHidden) {
        // Detach the mouse cursor from the mouse
        // hardware so that moving the mouse (or a
        // single finger) will not move the cursor
        CGAssociateMouseAndMouseCursorPosition(false);

        // Hide the mouse cursor
        [NSCursor hide];

        // Remember that we detached and hid the
        // mouse cursor
        m_cursorIsHidden = YES;
    }

Add the following code to the end of the touchesEndedWithEvent: method to attach and unhide the mouse cursor when all touches end. We use a BOOL flag to remember the state of the cursor so that the touchesBeganWithEvent: method will re-hide it when the next touch begins.

    // If there are no remaining active touches
    if (0 == [m_activeTouches count]) {
        // Attach the mouse cursor to the mouse
        // hardware so that moving the mouse (or a
        // single finger) will move the cursor
        CGAssociateMouseAndMouseCursorPosition(true);

        // Show the mouse cursor
        [NSCursor unhide];

        // Remember that we attached and unhid the
        // mouse cursor so that the next touch that
        // begins will detach and hide it
        m_cursorIsHidden = NO;
    }

    // Redisplay the view
    [self setNeedsDisplay:YES];
}

Add the following code to the end of the touchesCancelledWithEvent: method to attach and unhide the mouse cursor when all touches end. Again, we use the BOOL flag to remember the state of the cursor so that the touchesBeganWithEvent: method will re-hide it when the next touch begins.

    // If there are no remaining active touches
    if (0 == [m_activeTouches count]) {
        // Attach the mouse cursor to the mouse
        // hardware so that moving the mouse (or a
        // single finger) will move the cursor
        CGAssociateMouseAndMouseCursorPosition(true);

        // Show the mouse cursor
        [NSCursor unhide];

        // Remember that we attached and unhid the
        // mouse cursor so that the next touch that
        // begins will detach and hide it
        m_cursorIsHidden = NO;
    }

    // Redisplay the view
    [self setNeedsDisplay:YES];
}

While we are looking at the movement of the mouse, we also notice that the focus ring for our custom view is being drawn regardless of whether or not the mouse cursor is over our view. Since touch events will only be sent to our view if the mouse cursor is over it, we want to change the program code so that the focus ring only appears when the mouse cursor is over the custom view. This is something we can do with another BOOL flag.

Add the following code to the file to define a BOOL flag that will allow us to determine if the mouse cursor is over our custom view:

// Define a flag so that view methods can behave
// differently depending on the position of the
// mouse cursor
BOOL m_mouseIsInFingerView;

In the file named BTSFingerView.m, add the following code to create a tracking rectangle that matches the bounds of our custom view. Once the tracking rectangle is active, the methods mouseEntered: and mouseExited: will be automatically invoked as the mouse cursor enters and exits our custom view.
/**
 ** - (void)viewDidMoveToWindow
 **
 ** Informs the receiver that it has been added to
 ** a new view hierarchy.
 **
 ** We need to make sure the view window is valid
 ** and when it is, we can add the tracking rect
 **
 ** Once the tracking rect is added the mouseEntered:
 ** and mouseExited: events will be sent to our view
 **
 */
- (void)viewDidMoveToWindow
{
    // Is the view's window valid
    if ([self window] != nil) {
        // Add a tracking rect such that the
        // mouseEntered: and mouseExited: methods
        // will be automatically invoked
        [self addTrackingRect:[self bounds]
                        owner:self
                     userData:NULL
                 assumeInside:NO];
    }
}

In the file named BTSFingerView.m, add the following code to implement the mouseEntered: and mouseExited: methods. In those methods, we set the BOOL flag so that the drawRect: method knows whether or not to draw the focus ring.

/**
 ** - (void)mouseEntered:
 **
 ** Informs the receiver that the mouse cursor
 ** entered a tracking rectangle
 **
 ** Since we only have a single tracking rect
 ** we know the mouse is over our custom view
 **
 */
- (void)mouseEntered:(NSEvent *)theEvent
{
    // Set the flag so that other methods know
    // the mouse cursor is over our view
    m_mouseIsInFingerView = YES;

    // Redraw the view so that the focus ring
    // will appear
    [self setNeedsDisplay:YES];
}

/**
 ** - (void)mouseExited:
 **
 ** Informs the receiver that the mouse cursor
 ** exited a tracking rectangle
 **
 ** Since we only have a single tracking rect
 ** we know the mouse is not over our custom view
 **
 */
- (void)mouseExited:(NSEvent *)theEvent
{
    // Set the flag so that other methods know
    // the mouse cursor is not over our view
    m_mouseIsInFingerView = NO;

    // Redraw the view so that the focus ring
    // will not appear
    [self setNeedsDisplay:YES];
}

Finally, in the drawRect: method, change the program code that draws the focus ring so that it is drawn only if the mouse cursor is in the tracking rectangle:

// If this view has accepted first responder
// it should draw the focus ring but only if
// the mouse cursor is over this view
if ( ([[self window] firstResponder] == self) &&
     (YES == m_mouseIsInFingerView) ) {
    NSSetFocusRingStyle(NSFocusRingAbove);
}

What just happened?

We implemented the program code that will prevent the mouse cursor from moving out of our custom view when touch events are active. In doing so, we noticed that our focus ring behavior could be improved, so we added additional program code to ensure the focus ring is visible only when the mouse pointer is over our view.

Performing 2D drawing in a custom view

Mac OS X provides a number of ways to perform drawing, ranging from very simple to very complex. For our multi-finger painting program we are going to use the Core Graphics APIs designed to draw a path. We are going to collect each stroke as a series of points and construct a path from those points so that we can draw the stroke. Each active touch event will have a corresponding active stroke object that needs to be drawn in our custom view. When a stroke is finished, and the App user lifts the finger, we are going to send the finished stroke to another custom view so that it is drawn only one time and not each time fingers move. The optimization of using the second view will ensure our finger tracking is not slowed down too much by drawing.

Before we can begin drawing, we need to create two new objects that will be used to store individual points and strokes. The program code for these two objects is not shown here, but the objects are included in the Multi-Finger Paint Xcode project.
The two objects are as follows:

BTSPoint
BTSStroke

The BTSPoint object is a wrapper for an NSPoint structure. The NSPoint structure needs to be wrapped in an object so that it can be stored in an NSArray object. It has a single instance variable:

NSPoint m_point;

It implements the following methods, which allow it to be initialized, return the point (x and y), return just the x value, or return just the y value. For more information on the object, we can read the source code file in the project:

- (id) initWithNSPoint:(NSPoint)a_point;
- (NSPoint) point;
- (CGFloat)x;
- (CGFloat)y;

The BTSStroke object is a wrapper for an array of BTSPoint objects, a color, and a stroke width. It is used to store strokes that are drawn in our custom NSView. It has the following instance variables and properties:

float m_red;
float m_green;
float m_blue;
float m_alpha;
float m_width;

@property (strong) NSMutableArray *m_points;

It implements the following methods, which allow it to be initialized, a new point to be added, the array of points to be returned, any of the color components to be returned, and the stroke width to be returned. For more information on the object, we can read the source code file in the project:

- (id) initWithWidth:(float)a_width
                 red:(float)a_red
               green:(float)a_green
                blue:(float)a_blue
               alpha:(float)a_alpha;
- (void) addPoint:(BTSPoint *)a_point;
- (NSMutableArray *) points;
- (float)red;
- (float)green;
- (float)blue;
- (float)alpha;
- (float)width;
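Although the actual source files ship with the project, here is a minimal sketch of how BTSPoint could be implemented, reconstructed purely from the signatures above; the file in the Xcode project may differ in detail:

// BTSPoint.m: a minimal sketch reconstructed from the
// signatures above, not the project's actual source
#import "BTSPoint.h"

@implementation BTSPoint

// Remember the wrapped NSPoint structure
- (id) initWithNSPoint:(NSPoint)a_point
{
    self = [super init];
    if (self) {
        m_point = a_point;
    }
    return self;
}

// Return the wrapped point
- (NSPoint) point
{
    return m_point;
}

// Return just the x value
- (CGFloat)x
{
    return m_point.x;
}

// Return just the y value
- (CGFloat)y
{
    return m_point.y;
}

@end

And once a stroke has collected its points, its path might be rebuilt for drawing roughly like this; the variable a_stroke is a hypothetical BTSStroke reference used only for illustration:

// A sketch of how a finished stroke might be drawn:
// walk the points of a hypothetical BTSStroke (a_stroke)
// and connect them with an NSBezierPath
NSBezierPath *l_strokePath = [NSBezierPath bezierPath];

// Start the path at the first recorded point
BTSPoint *l_firstPoint = [[a_stroke points] objectAtIndex:0];
[l_strokePath moveToPoint:[l_firstPoint point]];

// Add a line segment to each subsequent point
for (BTSPoint *l_point in [a_stroke points]) {
    [l_strokePath lineToPoint:[l_point point]];
}

// Use the width stored in the stroke and draw it
[l_strokePath setLineWidth:[a_stroke width]];
[l_strokePath stroke];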
Writing Your First Lines of CoffeeScript

Packt
29 Aug 2013
9 min read
(For more resources related to this topic, see here.)

Following along with the examples

I implore you to open up a console as you read this article and try out the examples for yourself. You don't strictly have to; I'll show you any important output from the example code. However, following along will make you more comfortable with the command-line tools, give you a chance to write some CoffeeScript yourself, and most importantly, will give you an opportunity to experiment. Try changing the examples in small ways to see what happens. If you're confused about a piece of code, playing around and looking at the outcome will often help you understand what's really going on.

The easiest way to follow along is to simply open up a CoffeeScript console. Just run this from the command line to get an interactive console:

coffee

If you'd like to save all your code to return to later, or if you wish to work on something more complicated, you can create files instead and run those. Give your files the .coffee extension, and run them like this:

coffee my_sample_code.coffee

Seeing the compiled JavaScript

The golden rule of CoffeeScript, according to the CoffeeScript documentation, is: It's just JavaScript. This means that it is a language that compiles down to JavaScript in a simple fashion, without any complicated extra moving parts. This also means that it's easy, with a little practice, to understand how the CoffeeScript you are writing will compile into JavaScript. Your JavaScript expertise is still applicable, but you are freed from the tedious parts of the language. You should understand how the generated JavaScript will work, but you do not need to actually write the JavaScript.

To this end, we'll spend a fair amount of time, especially in this article, comparing CoffeeScript code to the compiled JavaScript results. It's like peeking behind the wizard's curtain! The new language features won't seem so intimidating once you know how they work, and you'll find you have more trust in CoffeeScript when you can check in on the code it's generating. After a while, you won't even need to check in at all.

I'll show you the corresponding JavaScript for most of the examples in this article, but if you write your own code, you may want to examine the output. This is a great way to experiment and learn more about the language! Unfortunately, if you're using the CoffeeScript console to follow along, there isn't a great way to see the compiled output (most of the time, it's nice to have all that out of sight—just not right now!). You can see the compiled JavaScript in several other easy ways, though. The first is to put your code in a file and compile it. The other is to use the Try CoffeeScript tool on http://coffeescript.org/. It brings up an editor right in the browser that updates the output as you type.

CoffeeScript basics

Let's get started! We'll begin with something simple:

x = 1 + 1

You can probably guess what JavaScript this will compile to:

var x;
x = 1 + 1;

Statements

One of the very first things you will notice about CoffeeScript is that there are no semicolons. Statements are ended by a new line. The parser usually knows if a statement should be continued on the next line.
You can explicitly tell it to continue to the next line by using a backslash at the end of the first line:

x = 1 \
+ 1

It's also possible to stretch function calls across multiple lines, as is common in "fluent" JavaScript interfaces:

"foo"
  .concat("barbaz")
  .replace("foobar", "fubar")

You may occasionally wish to place more than one statement on a single line (for purely stylistic purposes). This is the one time when you will use a semicolon in CoffeeScript:

x = 1; y = 2

Both of these situations are fairly rare. The vast majority of the time, you'll find that one statement per line works great. You might feel a pang of loss for your semicolons at first, but give it time. The calluses on your pinky finger will fall off, your eyes will adjust to the lack of clutter, and soon enough you won't remember what good you ever saw in semicolons.

Variables

CoffeeScript variables look a lot like JavaScript variables, with one big difference: no var! CoffeeScript puts all variables in the local scope by default.

x = 1
y = 2
z = x + y

compiles to:

var x, y, z;
x = 1;
y = 2;
z = x + y;

Believe it or not, this is one of my absolute top favorite things about CoffeeScript. It's so easy to accidentally introduce variables to the global scope in JavaScript and create subtle problems for yourself. You never need to worry about that again; from now on, it's handled automatically. Nothing is getting into the global scope unless you want it there.

If you really want to put a variable in the global scope and you're really sure it's a good idea, you can easily do this by attaching it to the top-level object. In the CoffeeScript console, or in Node.js programs, this is the global object:

global.myGlobalVariable = "I'm so worldly!"

In a browser, we use the window object instead:

window.myGlobalVariable = "I'm so worldly!"

Comments

Any line that begins with a # is a comment. Anything after a # in the middle of a line will also be a comment.

# This is a comment.
"Hello" # This is also a comment

Most of the time, CoffeeScripters use only this style, even for multiline comments.

# Most multiline comments simply wrap to the
# next line, each begun with a # and a space.

It is also possible (but rare in the CoffeeScript world) to use a block comment, which begins and ends with ###. The lines in between these characters do not need to begin with a #.

###
This is a block comment. You can get
artistic in here.
<(^^)>
###

Regular comments are not included in the compiled JavaScript, but block comments are, delineated by /* */.

Calling functions

Function invocation can look very familiar in CoffeeScript:

console.log("Hello, planet!")

Other than the missing semicolon, that's exactly like JavaScript, right? But function invocation can also look different:

console.log "Hello, planet!"

Whoa! Now we're in unfamiliar ground. This will work exactly the same as the previous example, though. Any time you call a function with arguments, the parentheses are optional. This also works with more than one argument:

Math.pow 2, 3

While you might be a little nervous writing this way at first, I encourage you to try it and give yourself time to become comfortable with it. Idiomatic CoffeeScript style eliminates parentheses whenever it's sensible to do so.

What do I mean by "sensible"? Well, imagine you're reading your code for the first time, and ask yourself which style makes it easiest to comprehend. Usually it's most readable without parentheses, but there are some occasions when your code is complex enough that judicious use of parentheses will help.
Use your best judgment, and everything will turn out fine.

There is one exception to the optional parentheses rule. If you are invoking a function with no arguments, you must use parentheses:

Date.now()

Why? The reason is simple. CoffeeScript preserves JavaScript's treatment of functions as first-class citizens.

myFunc = Date.now   #=> myFunc holds a function object that hasn't been executed
myDate = Date.now() #=> myDate holds the result of the function's execution

CoffeeScript's syntax is looser, but it must still be unambiguous. When no arguments are present, it's not clear whether you want to access the function object or execute the function. Requiring parentheses makes it clear which one you want, and still allows both kinds of functionality. This is part of CoffeeScript's philosophy of not deviating from the fundamentals of the JavaScript language. If functions were always executed instead of returned, CoffeeScript would no longer act like JavaScript, and it would be hard for you, the seasoned JavaScripter, to know what to expect. This way, once you understand a few simple concepts, you will know exactly what your code is doing.

From this discussion, we can extract a more general principle: parentheses are optional, except when necessary to avoid ambiguity. Here's another situation in which you might encounter ambiguity: nested function calls.

Math.max 2, 3, Math.min 4, 5, 6

Yikes! What's happening there? Well, you can easily clear this up by adding parentheses. You may add parentheses to all the function calls, or you may add just enough to resolve the ambiguity:

# These two calls are equivalent
Math.max(2, 3, Math.min(4, 5, 6))
Math.max 2, 3, Math.min(4, 5, 6)

This makes it clear that you wish min to take 4, 5, and 6 as arguments. If you wished 6 to be an argument to max instead, you would place the parentheses differently.

# These two calls are equivalent
Math.max(2, 3, Math.min(4, 5), 6)
Math.max 2, 3, Math.min(4, 5), 6

Precedence

Actually, the original version I showed you is valid CoffeeScript too! You just need to understand the precedence rules that CoffeeScript uses for functions. Arguments are assigned to functions from the inside out. Another way to think of this is that an argument belongs to the function that it's nearest to. So our original example is equivalent to the first variation we used, in which 4, 5, and 6 are arguments to min:

# These two calls are equivalent
Math.max 2, 3, Math.min 4, 5, 6
Math.max 2, 3, Math.min(4, 5, 6)

The parentheses are only absolutely necessary if our desired behavior doesn't match CoffeeScript's precedence—in this case, if we wanted 6 to be an argument to max. This applies to an unlimited level of nesting:

threeSquared = Math.pow 3, Math.floor Math.min 4, Math.sqrt 5

Of course, at some point the elimination of parentheses turns from the question of if you can to if you should. You are now a master of the intricacies of CoffeeScript function-call parsing, but the other programmers reading your code might not be (and even if they are, they might prefer not to puzzle out what your code is doing). Avoid parentheses in simple cases, and use them judiciously in the more complicated situations.
Process Driven SOA Development

Packt
13 Sep 2010
9 min read
(For more resources on Oracle, see here.)

Business Process Management and SOA

One of the major benefits of a Service-Oriented Architecture is its ability to align IT with business processes. Business processes are important because they define the way business activities are performed. Business processes change as the company evolves and improves its operations. They also change in order to make the company more competitive.

Today, IT is an essential part of business operations. Companies are simply unable to do business without IT support. However, this places a high level of responsibility on IT. An important part of this responsibility is the ability of IT to react to changes in a quick and efficient manner. Ideally, IT must instantly respond to business process changes. In most cases, however, IT is not flexible enough to adapt application architecture to the changes in business processes quickly. Software developers require time to modify application behavior. In the meantime, the company is stuck with old processes. In a highly competitive marketplace such delays are dangerous, and the threat is exacerbated by a reliance on traditional software development to make quick changes within an increasingly complex IT architecture.

The major problem with traditional approaches to software development is the huge semantic gap between IT and the process models. The traditional approach to software development has been focused on functionalities rather than on end-to-end support for business processes. It usually requires the definition of use cases, sequence diagrams, class diagrams, and other artifacts, which bring us to the actual code in a programming language such as Java, C#, C++, and so on.

SOA reduces the semantic gap by introducing a development model that aligns the IT development cycle with the business process lifecycle. In SOA, business processes can be executed directly and integrated with existing applications through services. To understand this better, let's look at the four phases of the SOA lifecycle:

Process modeling: This is the phase in which process analysts work with process owners to analyze the business process and define the process model. They define the activity flow, information flow, roles, and business documents. They also define business policies and constraints, business rules, and performance measures. Performance measures are often called Key Performance Indicators (KPIs). Examples of KPIs include activity turnaround time, activity cost, and so on. Usually Business Process Modeling Notation (BPMN) is used in this phase.

Process implementation: This is the phase in which developers work with process analysts to implement the business process, with the objective of providing end-to-end support for the process. In an SOA approach, the process implementation phase includes process implementation with the Business Process Execution Language (BPEL) and process decomposition to the services, implementation or reuse of services, and integration.

Process execution and control: This is the actual execution phase, in which the process participants execute various activities of the process. In the end-to-end support for business processes, it is very important that IT drives the process and directs process participants to execute activities, and not vice versa, where the actual process drivers are employees. In SOA, processes execute on a process server.
Process control is an important part of this phase, during which process supervisors or process managers control whether the process is executing optimally. If delays occur, exceptions arise, resources are unavailable, or other problems develop, process supervisors or managers can take corrective actions.

Process monitoring and optimization: This is the phase in which process owners monitor the KPIs of the process using Business Activity Monitoring (BAM). Process analysts, process owners, process supervisors, and key users examine the process and analyze the KPIs while taking into account changing business conditions. They examine business issues and make optimizations to the business process.

The following figure shows how a process enters this cycle, and goes through the various stages:

Once optimizations have been identified and selected, the process returns to the modeling phase, where optimizations are applied. Then the process is re-implemented and the whole lifecycle is repeated. This is referred to as an iterative-incremental lifecycle, because the process is improved at each stage.

Organizational aspects of SOA development

SOA development, as described in the previous section, differs considerably from traditional development. SOA development is process-centric and keeps the modeler and the developer focused on the business process and on end-to-end support for the process, thereby efficiently reducing the gap between business and IT.

The success of the SOA development cycle relies on correct process modeling. Only when processes are modeled in detail can we develop end-to-end support that will work. Exceptional process flows also have to be considered. This can be a difficult task, one that is beyond the scope of the IT department (particularly when viewed from the traditional perspective).

To make process-centric SOA projects successful, some organizational changes are required. Business users with a good understanding of the process must be motivated to actively participate in the process modeling. Their active participation must not be taken for granted, lest they find other work "more useful," particularly if they do not see the added value of process modeling. Therefore, a concise explanation as to why process modeling makes sense can be a very valuable time investment. A good strategy is to gain top management support. It makes enormous sense to explain two key factors to top management—first, why a process-centric approach and end-to-end support for processes makes sense, and second, why the IT department cannot successfully complete the task without the participation of business users. Usually top management will understand the situation rather quickly and will instruct business users to participate.

Obviously, the proposed process-centric development approach must become an ongoing activity. This will require the formalization of certain organizational structures. Otherwise, it will be necessary to seek approval for each and every project. We have already seen that the proposed approach outgrows the organizational limits of the IT department. Many organizations establish a BPM/SOA Competency Center, which includes business users and all the other profiles required for SOA development. This also includes the process analyst, process implementation, service development, and presentation layer groups, as well as SOA governance. Perhaps the greatest responsibility of SOA development is to orchestrate the aforementioned groups so that they work towards a common goal.
This is the responsibility of the project manager, who must work in close connection with the governance group. Only in this way can SOA development be successful, both in the short term (developing end-to-end applications for business processes), and in the long term (developing a flexible, agile IT architecture that is aligned with business needs).

Technology aspects of SOA development

SOA introduces technologies and languages that enable the SOA development approach. Particularly important are BPMN, which is used for business process modeling, and BPEL, which is used for business process execution.

BPMN is the key technology for process modeling. The process analyst group must have in-depth knowledge of BPMN and process modeling concepts. When modeling processes for SOA, they must be modeled in detail. Using SOA, we model business processes with the objective of implementing them in BPEL and executing them on the process server. Process models can be made executable only if all the relevant information is captured that is needed for the actual execution. We must identify individual activities that are atomic from the perspective of the execution. We must model exceptional scenarios too. Exceptional scenarios define how the process behaves when something goes wrong—and in the real world, business processes can and do go wrong. We must model how to react to exceptional situations and how to recover appropriately.

Next, we automate the process. This requires mapping of the BPMN process model into the executable representation in BPEL. This is the responsibility of the process implementation group. BPMN can be converted to BPEL almost automatically and vice versa, which guarantees that the process map is always in sync with the executable code. However, the executable BPEL process also has to be connected with the business services. Each process activity is connected with the corresponding business service. Business services are responsible for fulfilling the individual process activities.
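To make this mapping more tangible, the following fragment is a deliberately simplified, hypothetical sketch of what such an executable BPEL process can look like. All names here (OrderProcess, checkCredit, and so on) are invented for illustration; a real process would be generated from the BPMN model and wired to the actual business services.

<process name="OrderProcess"
         targetNamespace="http://example.com/bpel/order"
         xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable"
         xmlns:tns="http://example.com/bpel/order">

  <!-- The client that starts the process and one business service -->
  <partnerLinks>
    <partnerLink name="client" partnerLinkType="tns:OrderPLT"
                 myRole="OrderProvider"/>
    <partnerLink name="creditService" partnerLinkType="tns:CreditPLT"
                 partnerRole="CreditProvider"/>
  </partnerLinks>

  <variables>
    <variable name="orderRequest" messageType="tns:OrderRequestMessage"/>
    <variable name="creditReply" messageType="tns:CreditReplyMessage"/>
  </variables>

  <sequence>
    <!-- The process instance starts when an order arrives -->
    <receive partnerLink="client" operation="submitOrder"
             variable="orderRequest" createInstance="yes"/>

    <!-- One process activity connected to its business service -->
    <invoke partnerLink="creditService" operation="checkCredit"
            inputVariable="orderRequest" outputVariable="creditReply"/>

    <!-- Return the result to the client -->
    <reply partnerLink="client" operation="submitOrder"
           variable="creditReply"/>
  </sequence>
</process>

Each invoke activity in such a process corresponds to one activity in the BPMN model, which is what keeps the process map and the executable code in sync.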
SOA development is most efficient if you have a portfolio of business services that can be reused, and which includes lower-level and intermediate technical services. Business services can be developed from scratch, exposed from existing systems, or outsourced. This task is the responsibility of the service development group. In theory, it makes sense for the service development group to first develop all business services. Only then would the process implementation group start to compose those services into the process. However, in the real world this is often not the case, because you will probably not have the luxury of time to develop the services first and only then start the processes. And even if you do have enough time, it would be difficult to know which business services will be required by processes. Therefore, both groups usually work in parallel, which is a great challenge. It requires interaction between them and strict, concise supervision of the SOA governance group and the project manager; otherwise, the results of both groups (the process implementation group and the service development group) will be incompatible.

Once you have successfully implemented the process, it can be deployed on the process server. In addition to executing processes, a process server provides other valuable information, including a process audit trail, lists of successfully completed processes, and a list of terminated or failed processes. This information is helpful in controlling the process execution and in taking any necessary corrective measures.

The services and processes communicate using the Enterprise Service Bus (ESB). The services and processes are registered in the UDDI-compliant service registry. Another part of the architecture is the rule engine, which serves as a central place for business rules. For processes with human tasks, user interaction is obviously important, and is connected to identity management.

The SOA platform also provides BAM. BAM helps to measure the key performance indicators of the process, and provides valuable data that can be used to optimize processes. The ultimate goal of each BAM user is to optimize process execution, to improve process efficiency, and to sense and react to important events. BAM ensures that we start optimizing processes where it makes most sense. Traditionally, process optimization has been based on simulation results, or even worse, on guessing where bottlenecks might be. BAM, on the other hand, gives more reliable and accurate data, which leads to better decisions about where to start with optimizations.

The following figure illustrates the SOA layers:
Flex Multi-List Selector using List Control, DataGrid, and the Accordion

Packt
02 Mar 2010
3 min read
Instead of files and directories, I'm going to use states, counties, and cities. Essentially, this application will be used to give the user an easy way to select a city. Flex offers many components that can help us build this application. The controls I immediately consider for the job are the List control, the DataGrid, and the Accordion (in combination with the List). The List is the obvious control to start with because it represents the data in the right way - a list of states, counties, and cities. The reason I also considered the DataGrid and the Accordion (with the List) is because they both have a header. I want an easy way to label the three columns/lists 'States', 'Counties', and 'Cities'.

With that said, I selected the Accordion with the List option. Using this option also allows for future expansion of the tool. For instance, one could adapt the tool to add country, then state, county, and city. The Accordion naturally has this grouping capability.

With that said, our first code block contains our basic UI. The structure is pretty simple. The layout of the application is vertical. I've added an HBox which contains the main components of the application. The basic structure of each list is a List control inside a Canvas container, which is inside of an Accordion control. The Canvas is there because Accordions must have a container as a child, and a List is not a part of the container package. We repeat this three times, once for each column, and give each the appropriate name.

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml"
                horizontalGap="0" layout="vertical">

    <mx:HBox width="100%" height="100%">

        <!-- States -->
        <mx:Accordion id="statesAccoridon" width="100%" height="100%">
            <mx:Canvas width="100%" height="100%" label="States">
                <mx:List id="statesList" width="100%" height="100%"
                         dataProvider="{locations.state.@name}"
                         click="{selectCounties()}"/>
            </mx:Canvas>
        </mx:Accordion>

        <!-- Counties -->
        <mx:Accordion id="countiesAccoridon" width="100%" height="100%">
            <mx:Canvas width="100%" height="100%" label="Counties">
                <mx:List id="countiesList" width="100%" height="100%"
                         click="selectCities()"/>
            </mx:Canvas>
        </mx:Accordion>

        <!-- Cities -->
        <mx:Accordion id="citiesAccoridon" width="100%" height="100%">
            <mx:Canvas width="100%" height="100%" label="Cities">
                <mx:List id="citiesList" width="100%" height="100%"/>
            </mx:Canvas>
        </mx:Accordion>

    </mx:HBox>

    <!-- Selected City -->
    <mx:Label text="{citiesList.selectedItem}"/>

    <mx:Script>
        <![CDATA[
            public function selectCounties():void {
                countiesList.dataProvider =
                    locations.state.(@name==statesList.selectedItem).counties.county.@name;
            }

            public function selectCities():void {
                citiesList.dataProvider =
                    locations.state.(@name==statesList.selectedItem).counties.county.(@name==countiesList.selectedItem).cities.city.@name;
            }
        ]]>
    </mx:Script>

</mx:Application>

I've set the width and height of all containers to 100%. This will make it easy to later embed this application into a web page or other Flex application as a module. Also notice how the dataProvider attribute is only set for the statesList. This is because the countiesList and the citiesList are not populated until a state is selected. These dataProviders are set using ActionScript and are triggered by the click event listeners for both objects. Here is what the start of our selector looks like: