
How-To Tutorials - Programming

1083 Articles
Packt
16 Oct 2009
7 min read

Simplifying Parallelism Complexity in C#

Specializing the algorithms for segmentation with classes

So far, we have been developing applications that split work into multiple independent jobs, and we created classes to generalize the algorithms for segmentation. We simplified the creation of segmented and parallelized algorithms, generalizing behaviors to simplify our code and to avoid repeating the same code in every new application. However, we did not do that using inheritance, a very powerful object-oriented capability that simplifies code re-use.

C# is an object-oriented programming language that supports inheritance and offers many possibilities to specialize behaviors, simplifying our code and avoiding some synchronization problems related to parallel programming. How can we use C#'s object-oriented capabilities to define specific segmented algorithms, prepared for running each piece in an independent thread, using ParallelAlgorithm and ParallelAlgorithmPiece as the base classes? The answer is very simple: by using inheritance and the factory method class creational pattern (also known as virtual constructor). Thus, we can advance towards a complete framework that simplifies the algorithm optimization process.

Again, we can combine multithreading with object-oriented capabilities to reduce our development time and avoid synchronization problems. Besides, using classes to specialize the process of splitting a linear algorithm into many pieces makes it easier for developers to focus on generating truly independent parts that will work well in a multithreading environment, while avoiding side effects.

Time for action – Preparing the parallel algorithm classes for the factory method

You made the necessary changes to the ParallelAlgorithmPiece and ParallelAlgorithm classes to find planets similar to Mars in the images corresponding to different galaxies. NASA's CIO was impressed with your parallel programming capabilities. Nevertheless, he is an object-oriented guru, and he advised you to apply the factory method pattern to specialize the parallel algorithm classes for each new algorithm. That would make the code simpler, more re-usable, and easier to maintain, so he asked you to do so. The NASA scientists would then bring you another huge image-processing challenge for your parallel programming capabilities: a sunspot analyzer. If you resolve this problem using the factory method pattern, or something like it, he will hire you! However, be careful, because you must avoid some synchronization problems!

First, we are going to create a new project with tailored versions of the ParallelAlgorithmPiece and ParallelAlgorithm classes. This way, we will later be able to inherit from these classes and apply the factory method pattern to specialize parallel algorithms:

1. Create a new C# project using the Windows Forms Application template in Visual Studio or Visual C# Express. Use SunspotsAnalyzer as the project's name.
2. Open the code for Program.cs. Replace the line [STAThread] with the following line (before the Main method declaration):

   [MTAThread]

3. Copy the file that contains the original code of the ParallelAlgorithmPiece and ParallelAlgorithm classes (ParallelAlgorithm.cs) and include it in the project.
4. Add the abstract keyword before the declarations of the ParallelAlgorithmPiece and ParallelAlgorithm classes, as shown in the following lines (we do not want to create instances directly from these abstract classes):

   abstract class ParallelAlgorithmPiece
   abstract class ParallelAlgorithm

5. Change the ThreadMethod method declaration in the ParallelAlgorithmPiece class (add the abstract keyword to force us to override it in subclasses):

   public abstract void ThreadMethod(object poThreadParameter);

6. Add the following public abstract method to create each parallel algorithm piece in the ParallelAlgorithm class (the key to the factory method pattern):

   public abstract ParallelAlgorithmPiece CreateParallelAlgorithmPiece(int priThreadNumber);

7. Add the following constructor with a parameter to the ParallelAlgorithmPiece class:

   public ParallelAlgorithmPiece(int priThreadNumberToAssign)
   {
       priThreadNumber = priThreadNumberToAssign;
   }

8. Copy the original code of the ParallelAlgorithmPiece class's CreatePieces method and paste it into the ParallelAlgorithm class (we move it to allow the creation of parallel algorithm pieces of different subclasses). Replace the lloPieces[i].priBegin and lloPieces[i].priEnd private variable accesses with their corresponding public property accesses, lloPieces[i].piBegin and lloPieces[i].piEnd.
9. Change the new CreatePieces method declaration in the ParallelAlgorithm class (remove the static clause and add the virtual keyword to allow us to override it in subclasses and to access instance variables):

   public virtual List<ParallelAlgorithmPiece> CreatePieces(long priTotalElements, int priTotalParts)

10. Replace the line lloPieces[i] = new ParallelAlgorithmPiece(); in the CreatePieces method in the ParallelAlgorithm class with the following line of code (now the creation is encapsulated in a method, and a serious bug is also corrected, which we will explain later):

   lloPieces.Add(CreateParallelAlgorithmPiece(i));

11. Comment out the following line of code in the CreatePieces method in the ParallelAlgorithm class (now the new ParallelAlgorithmPiece constructor assigns the value to piThreadNumber):

   // lloPieces[i].piThreadNumber = i;

12. Replace the line prloPieces = ParallelAlgorithmPiece.CreatePieces(priTotalElements, priTotalParts); in the CreateThreads method in the ParallelAlgorithm class with the following line of code (now the creation is done in the new CreatePieces method):

   prloPieces = CreatePieces(priTotalElements, priTotalParts);

13. Change the StartThreadsAsync method declaration in the ParallelAlgorithm class (add the virtual keyword to allow us to override it in subclasses):

   public virtual void StartThreadsAsync()

14. Change the CollectResults method declaration in the ParallelAlgorithm class (add the abstract keyword to force us to override it in subclasses):

   public abstract void CollectResults();

What just happened?

The code required to create subclasses that implement algorithms, following a variation of the factory method class creational pattern, is now held in the ParallelAlgorithmPiece and ParallelAlgorithm classes. Thus, when we create new classes that inherit from these two classes, we can easily implement a parallel algorithm.
We just fill in the gaps and override some methods, and can then focus on the algorithm's problems instead of working hard on the splitting techniques. We also solved some bugs present in the previous versions of these classes. Using the C# programming language's excellent object-oriented capabilities, we can avoid many problems related to concurrency and simplify the development of high-performance parallel algorithms. Nevertheless, we must master many object-oriented design patterns to help us reduce the complexity added by multithreading and concurrency.

Defining the class to instantiate

One of the main problems that arises when generalizing an algorithm is that the generalized code coordinating the parallel algorithm must create instances of the subclasses that represent the pieces. Using the concepts introduced by the factory method class creational pattern, we solved this problem with great simplicity. We made the necessary changes to the ParallelAlgorithmPiece and ParallelAlgorithm classes to implement a variation of this design pattern.

First, we added a constructor to the ParallelAlgorithmPiece class with the thread or piece number as a parameter. The constructor assigns the received value to the priThreadNumber private variable, accessed through the piThreadNumber property:

   public ParallelAlgorithmPiece(int priThreadNumberToAssign)
   {
       priThreadNumber = priThreadNumberToAssign;
   }

The subclasses will be able to override this constructor to add any additional initialization code.

We had to move the CreatePieces method from the ParallelAlgorithmPiece class to the ParallelAlgorithm class. We did this because each ParallelAlgorithm subclass knows which ParallelAlgorithmPiece subclass to create for each piece representation. Thus, we also made the method virtual, to allow it to be overridden in subclasses. Besides, it is now an instance method and not a static one.

There was an intentional bug left in the previous CreatePieces method. As you must master list and collection management in C# in order to master parallel programming, you should be able to detect and solve this little problem: the method assigned the list's capacity, but did not add elements to the list. Hence, we must call the Add method with the result of the new CreateParallelAlgorithmPiece method:

   lloPieces.Add(CreateParallelAlgorithmPiece(i));

The creation is now encapsulated in this method, which is virtual and allows subclasses to override it. The original implementation is shown in the following lines:

   public virtual ParallelAlgorithmPiece CreateParallelAlgorithmPiece(int priThreadNumber)
   {
       return (new ParallelAlgorithmPiece(priThreadNumber));
   }

It returns a new ParallelAlgorithmPiece instance, passing the thread or piece number as a parameter. By overriding this method, we can return instances of any subclass of ParallelAlgorithmPiece. Thus, we let the ParallelAlgorithm subclasses decide which class to instantiate. This is the principle of the factory method design pattern: it lets a class defer instantiation to subclasses. Hence, each new implementation of a parallel algorithm will have its own ParallelAlgorithm and ParallelAlgorithmPiece subclasses.

We made the additional changes needed to keep conceptual integrity with this new approach for the two classes that define the behavior of a parallel algorithm that splits work into pieces using multithreading capabilities.
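The book's classes are written in C#, but the factory-method idea behind them can be sketched in any object-oriented language. The following self-contained Java sketch (all names are illustrative, not the book's) mirrors the structure: an abstract base pair, a factory method that subclasses override, and a CreatePieces loop that adds each created piece to the list rather than merely reserving capacity.

```java
// Hypothetical Java sketch of the factory-method pattern applied to
// algorithm pieces, mirroring the C# ParallelAlgorithm/ParallelAlgorithmPiece
// design described above. Names are illustrative only.
import java.util.ArrayList;
import java.util.List;

abstract class AlgorithmPiece {
    final int threadNumber;
    AlgorithmPiece(int threadNumber) { this.threadNumber = threadNumber; }
    abstract String run();
}

abstract class Algorithm {
    // The factory method: subclasses decide which piece class to instantiate.
    abstract AlgorithmPiece createPiece(int threadNumber);

    List<AlgorithmPiece> createPieces(int totalParts) {
        // Reserving capacity alone is not enough (the "intentional bug"):
        // each piece must actually be added to the list.
        List<AlgorithmPiece> pieces = new ArrayList<>(totalParts);
        for (int i = 0; i < totalParts; i++) {
            pieces.add(createPiece(i));
        }
        return pieces;
    }
}

class SunspotPiece extends AlgorithmPiece {
    SunspotPiece(int n) { super(n); }
    @Override String run() { return "sunspot piece " + threadNumber; }
}

class SunspotAlgorithm extends Algorithm {
    @Override AlgorithmPiece createPiece(int n) { return new SunspotPiece(n); }
}

public class FactoryMethodSketch {
    public static void main(String[] args) {
        Algorithm algo = new SunspotAlgorithm();
        for (AlgorithmPiece p : algo.createPieces(3)) {
            System.out.println(p.run());
        }
    }
}
```

The base class never names a concrete piece type; SunspotAlgorithm's override of createPiece is the single point where instantiation is deferred to the subclass.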

Packt
16 Oct 2009
5 min read

SOA with Java Business Integration (part 1)

SOA—The Motto

We have been doing integration for many decades in a proprietary or ad hoc manner. Today, the buzzword is SOA and, in the integration space, we are talking about Service Oriented Integration (SOI). Let us look into the essentials of SOA and see whether the existing standards and APIs are sufficient in the integration space.

Why We Need SOA

We have been using multiple technologies for developing application components; a few of them are listed as follows:

- Remote Procedure Call (RPC)
- Common Object Request Broker Architecture (CORBA)
- Distributed Component Object Model (DCOM)
- .NET Remoting
- Enterprise JavaBeans (EJB)
- Java Remote Method Invocation (RMI)

One drawback, seen in almost all these technologies, is their inability to interoperate. In other words, if a .NET Remoting component has to send bytes to a Java RMI component, there are workarounds, but they may not work all the time. Next, all the technologies listed above follow good object-oriented principles (OOP), especially hiding implementation details behind interfaces. This provides loose coupling between the provider and the consumer, which is very important, especially in distributed computing environments. Now the question is: are these interfaces abstract enough? To rephrase, can a Java RMI runtime make sense of a .NET interface? Along these lines, we can point out a full list of doubts and deficiencies that exist in today's computing environment. This is where SOA brings new promises.

What is SOA

SOA is a set of architectural patterns, principles, and best practices for implementing software components in such a way that we overcome many of the deficiencies identified in traditional programming paradigms. SOA speaks about services implemented against abstract interfaces, where only the abstract interface is exposed to the outside world. Hence, consumers are unaware of any implementation details.
Moreover, the abstract model is neutral with respect to platform and technology. This means that components or services implemented on any platform or technology can interoperate. A few more features of SOA:

- Standards-based (the WS-* specifications)
- Services are autonomous and coarse grained
- Providers and consumers are loosely coupled

The list is not exhaustive, but plenty of literature on SOA is available, so let us not repeat it here. Instead, we will look at the importance of SOA in the integration context.

SOA and Web Services

SOA doesn't mandate any specific platform, technology, or even a specific method of software engineering, but time has proven that web services are a viable technology for implementing SOA. However, we need to be cautious: using web services doesn't lead to SOA by itself. Rather, since web services are based on industry-accepted standards like WSDL, SOAP, and XML, they are one of the best available means of attaining SOA. In SOA with web services, providers and consumers agree on a common interface described in the Web Services Description Language (WSDL). Data is normally exchanged over the HTTP protocol, in the Simple Object Access Protocol (SOAP) format.

WSDL

WSDL is the language of web services, used to specify the service contract agreed upon by the provider and consumer. It is XML-formatted information, mainly intended to be machine processable (but human readable too, since it is XML). When we host a web service, it is normal to retrieve the WSDL from the web service endpoint. There are two main approaches to working with WSDL:

- Start from the WSDL, create and host the web service, and open the service to clients; tools like wsdl2java help us do this.
- Start from the types already available, generate the WSDL, and then continue; tools like java2wsdl help us here.

Let us now quickly run through the main sections within a WSDL.
A WSDL structure is as shown here:

   <?xml version="1.0" encoding="UTF-8"?>
   <wsdl:definitions targetNamespace="http://versionXYZ.ws.servicemix.esb.binildas.com" ...>
     <wsdl:types>
       <schema elementFormDefault="qualified"
               targetNamespace="http://version20061231.ws.servicemix.esb.binildas.com">
       </schema>
     </wsdl:types>
     <wsdl:message name="helloResponse">
       <!-- other code goes here -->
     </wsdl:message>
     <wsdl:portType name="IHelloWeb">
       <!-- other code goes here -->
     </wsdl:portType>
     <wsdl:binding name="HelloWebService20061231SoapBinding" type="impl:IHelloWeb">
       <!-- other code goes here -->
     </wsdl:binding>
     <wsdl:service name="IHelloWebService">
       <wsdl:port binding="impl:HelloWebService20061231SoapBinding" name="HelloWebService20061231">
         <wsdlsoap:address location="http://localhost:8080/AxisEndToEnd20061231/services/HelloWebService20061231"/>
       </wsdl:port>
     </wsdl:service>
   </wsdl:definitions>

We will now run through the main sections of a typical WSDL:

- types: The data types exchanged are expressed here as an XML schema.
- message: This section details the message formats (or documents) exchanged.
- portType: The portType can be looked at as the abstract interface definition for the exposed service.
- binding: The portType has to be mapped to specific data formats and protocols, which are detailed in the binding section.
- port: The port gives the URL of the service endpoint.
- service: A service can contain a collection of port elements.

Since JBI is based on WSDL, we will deal with many WSDL instances.
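Because a WSDL is plain namespaced XML, its sections can be inspected with any XML parser. The following sketch uses the JDK's built-in DOM parser on a trimmed, hypothetical stand-in for the structure above (the class name and the embedded WSDL string are illustrative, not from the book) to pull out the portType, the abstract interface of the contract.

```java
// Minimal sketch: locating WSDL sections with the JDK DOM parser.
// The WSDL string is a trimmed, hypothetical stand-in for the one shown above.
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

public class WsdlSections {
    static final String WSDL =
        "<wsdl:definitions xmlns:wsdl='http://schemas.xmlsoap.org/wsdl/'"
      + " targetNamespace='urn:example'>"
      + "  <wsdl:types/>"
      + "  <wsdl:message name='helloResponse'/>"
      + "  <wsdl:portType name='IHelloWeb'/>"
      + "  <wsdl:binding name='HelloSoapBinding'/>"
      + "  <wsdl:service name='IHelloWebService'/>"
      + "</wsdl:definitions>";

    public static String portTypeName() throws Exception {
        DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
        f.setNamespaceAware(true); // WSDL elements live in the WSDL namespace
        Document doc = f.newDocumentBuilder()
            .parse(new ByteArrayInputStream(WSDL.getBytes(StandardCharsets.UTF_8)));
        // The portType element names the abstract interface of the service
        return doc.getElementsByTagNameNS("http://schemas.xmlsoap.org/wsdl/", "portType")
                  .item(0).getAttributes().getNamedItem("name").getNodeValue();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("portType: " + portTypeName());
    }
}
```

The namespace-aware lookup matters: matching on the raw tag name `wsdl:portType` would tie the code to one prefix choice, while matching on the namespace URI works for any prefix.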

Packt
16 Oct 2009
6 min read

SOA with Java Business Integration (part 2)

Provider—Consumer Contract

In the JBI environment, the Provider and Consumer always interact based on a services model. A service interface is the common aspect between them; WSDL 1.1 and 2.0 are used to define the contract through the service interface. The following figure represents the two parts of the WSDL representation of a service.

In the abstract model, WSDL describes the propagation of a message through a type system. A message has a sequence and cardinality specified by its Message Exchange Pattern (MEP); a message can also be a fault message. An MEP is associated with one or more messages using an operation. An interface can contain a single operation or a group of operations, represented in an abstract fashion, independent of wire formats and transport protocols. An interface in the abstract model is bound to a specific wire format and transport protocol via a binding. A binding is associated with a network address in an endpoint, and a single service in the concrete model aggregates multiple endpoints implementing common interfaces.

Detached Message Exchange

JBI-based message exchange occurs between a Provider and a Consumer in a detached fashion. This means that the Provider and Consumer never interact directly; in technical terms, they never share the same thread context of execution. Instead, the Provider and Consumer use the JBI NMR as an intermediary. The Consumer sends a request message to the NMR. The NMR, using intelligent routers, decides on the best-matched service provider and dispatches the message on behalf of the Consumer. The Provider component can be a different component, or the same component as the Consumer itself. The Provider can be an SE or a BC and, based on its type, it will execute the business process by itself or delegate the actual processing to the remotely bound component. The response message is sent back to the NMR by the Provider, and the NMR in turn passes it back to the Consumer.
This completes the message exchange. The following figure represents the JBI-based message exchange. There are multiple patterns by which messages are exchanged, which we will review shortly.

Provider—Consumer Role

Though a JBI component can function as a Consumer, a Provider, or both, there is a clear-cut distinction between the Provider and Consumer roles. These roles may be performed by bindings or engines, in any combination of the two. When a binding acts as a service Provider, an external service is implied. Similarly, when a binding acts as a service Consumer, an external consumer is implied. In the same way, the use of a Service Engine in either role implies a local actor for that role. This is shown in the following figure.

The Provider and Consumer interact with each other through the NMR. When they interact, they perform the following distinct responsibilities (not necessarily in this order):

- Provider: Once deployed, JBI activates the service provider endpoint.
- Provider: The Provider then publishes the service description in WSDL format.
- Consumer: The Consumer discovers the required service. This can happen at design time (static binding) or at run time (dynamic binding).
- Consumer: Invokes the discovered service.
- Provider and Consumer: Send and respond to message exchanges according to the MEP and the state of the message exchange instance.
- Provider: Provides the service by responding to invocations.
- Provider and Consumer: Respond with a status (fault or done) to complete the message exchange.

During run-time activation, a service provider activates the actual services it provides, making them known to the NMR. The NMR can then route service invocations to that service:

   javax.jbi.component.ComponentContext context; // initialized via AOP
   javax.jbi.messaging.DeliveryChannel channel = context.getDeliveryChannel();
   javax.jbi.servicedesc.ServiceEndpoint serviceEndpoint = null;
   if (service != null && endpoint != null) {
       serviceEndpoint = context.activateEndpoint(service, endpoint);
   }

The Provider creates a WSDL-described service, available through an endpoint. As described in the Provider—Consumer contract, the service implements a WSDL-based interface, which is a collection of operations. The Consumer creates a message exchange to send a message that invokes a particular service. Since consumers and providers only share the abstract service definition, they are decoupled from each other. Moreover, several services can implement the same WSDL interface. Hence, if a consumer sends a message for a particular interface, JBI might find more than one endpoint conforming to the interface, and can thus route to the best-fit endpoint.

Message Exchange

A message exchange is the "message packet" transferred between a consumer and a provider in a service invocation. It represents a container for normalized messages, described by an exchange pattern. Thus, a message exchange encapsulates the following:

- The normalized message
- Message exchange metadata
- Message exchange state

A message exchange is therefore the JBI-local portion of a service invocation.

Service Invocation

An end-to-end interaction between a service consumer and a service provider is a service invocation. Service consumers employ one or more service invocation patterns. Service invocation through a JBI infrastructure is based on a 'pull' model, where a component accepts message exchange instances when it is ready. Thus, once a message exchange instance is created, it is sent back and forth between the two participating components, and this continues until the status of the message exchange instance is set to either 'done' or 'error' and it is sent one last time between the two components.
Message Exchange Patterns (MEP)

Service consumers interact with service providers, exchanging messages using one or more service invocation patterns. The MEP defines the names, sequence, and cardinality of the messages in an exchange. There are many service invocation patterns and, from a JBI perspective, any JBI-compliant ESB implementation must support the following four:

- One-Way: The service consumer issues a request to the service provider. No error (fault) path is provided.
- Reliable One-Way: The service consumer issues a request to the service provider. The provider may respond with a fault if it fails to process the request.
- Request-Response: The service consumer issues a request to the service provider, with the expectation of a response. The provider may respond with a fault if it fails to process the request.
- Request Optional-Response: The service consumer issues a request to the service provider, which may result in a response. Both consumer and provider have the option of generating a fault in response to a message received during the interaction.

These service invocations map to four different MEPs, listed as follows.

In-Only MEP

The In-Only MEP is used for one-way exchanges. The following figure explains the In-Only MEP diagrammatically. In the normal In-Only scenario, the sequence of operations is as follows:

1. The service consumer initiates the exchange with a message.
2. The service provider responds with the status to complete the message exchange.

Since the consumer issues a request to the provider with no error (fault) path, any errors at the provider level will not be propagated to the consumer.
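The detached, status-terminated exchange described above can be illustrated with a small stand-alone simulation. The sketch below is NOT the javax.jbi API: a tiny in-memory queue plays the role of the NMR, and all class and method names are hypothetical. It shows the In-Only shape: the consumer hands a one-way message to the intermediary, the provider pulls it when ready, and only a status (done or error) flows back.

```java
// Hypothetical simulation of the In-Only MEP: a queue stands in for the NMR,
// the provider pulls the exchange when ready, and the exchange terminates
// with a status rather than a response message. Not the real JBI API.
import java.util.ArrayDeque;
import java.util.Queue;

public class InOnlySimulation {
    enum Status { ACTIVE, DONE, ERROR }

    static class MessageExchange {
        final String inMessage;
        Status status = Status.ACTIVE;
        MessageExchange(String inMessage) { this.inMessage = inMessage; }
    }

    // Stand-in for the NMR: holds exchanges until the other side pulls them.
    static final Queue<MessageExchange> nmr = new ArrayDeque<>();

    public static MessageExchange run(String payload) {
        // Consumer: create the exchange and send it to the intermediary.
        nmr.add(new MessageExchange(payload));
        // Provider: pull the exchange when ready, process it, set the status.
        MessageExchange me = nmr.poll();
        me.status = me.inMessage.isEmpty() ? Status.ERROR : Status.DONE;
        // In-Only: only the status goes back; there is no response message.
        return me;
    }

    public static void main(String[] args) {
        System.out.println(run("hello").status);
    }
}
```

The key point the simulation preserves is that the consumer and provider never call each other directly; the queue decouples them, just as the NMR decouples the thread contexts of real JBI components.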

Packt
16 Oct 2009
4 min read

Asterisk Gateway Interface Scripting with PHP

PHP-CLI vs PHP-CGI

Most Linux distributions include both versions of PHP, especially modern distributions such as CentOS or Mandriva. When writing AGI scripts with PHP, it is imperative that you use PHP-CLI, and not PHP-CGI. Why is this so important? The main issue is that PHP-CLI and PHP-CGI handle their STDIN (standard input) slightly differently, which makes reading channel variables via PHP-CGI slightly more problematic.

The php.ini configuration file

The PHP interpreter includes a configuration file that defines a set of defaults for the interpreter. For your scripts to work efficiently, the following must be set, either via the php.ini file or by your PHP script:

   ob_implicit_flush(false);
   set_time_limit(5);
   error_log = filename;
   error_reporting(0);

The above settings perform the following:

- ob_implicit_flush(false); — Sets PHP output buffering to false, to make sure that output from your AGI script to Asterisk is not buffered and does not take longer to execute.
- set_time_limit(5); — Sets a time limit on your AGI scripts to verify that they don't run beyond a reasonable execution time. There is no rule of thumb for the actual value; it is highly dependent on your implementation. Depending on your system and applications, the maximum time limit may be set to any value; however, we suggest that you verify your scripts and are able to work with a maximum limit of 30 seconds.
- error_log = filename; — Excellent for debugging purposes; always creates a log file.
- error_reporting(0); — Does not report errors to the error_log; change the value to enable different logging parameters. Check the PHP website for additional information.

AGI script permissions

All AGI scripts must be located in the directory /var/lib/asterisk/agi-bin, which is Asterisk's default directory for AGI scripts.
All AGI scripts should have the execute permission, and should be owned by the user running Asterisk. If you are unfamiliar with these, consult your system administrator for additional information.

The structure of a PHP-based AGI script

Every PHP-based AGI script takes the following form:

   #!/usr/bin/php -q
   <?php
   $stdin = fopen('php://stdin', 'r');
   $stdout = fopen('php://stdout', 'w');
   $stdlog = fopen('my_agi.log', 'w');
   /* Operational code starts here */
   ?>

Upon execution, Asterisk transmits a set of information to our AGI script via STDIN. Handling that input is best performed in the following manner:

   #!/usr/bin/php -q
   <?php
   $stdin = fopen('php://stdin', 'r');
   $stdout = fopen('php://stdout', 'w');
   $stdlog = fopen('my_agi.log', 'w');
   /* Handling execution input from Asterisk */
   while (!feof($stdin)) {
       $temp = fgets($stdin);
       $temp = str_replace("\n", "", $temp);
       $s = explode(":", $temp);
       $agivar[$s[0]] = trim($s[1]);
       if ($temp == "") {
           break;
       }
   }
   /* Operational code starts here */
   ?>

Once we have handled the inbound information from the Asterisk server, we can start our actual operational flow.

Communication between Asterisk and AGI

The communication between Asterisk and an AGI script is performed via STDIN and STDOUT (standard output). Let's examine the following diagram. In the diagram, ASC refers to our AGI script, while AST refers to Asterisk itself. As you can see, the entire flow is fairly simple: it is just a set of I/O queries and responses carried through the STDIN/STDOUT data streams.

Let's now examine a slightly more complicated example. The figure shows an example that includes two new elements in our AGI logic: access to a database, and information provided via a web service. For example, the image illustrates something that may be used as a connection between the telephony world and a dating service.
This leads to an immediate conclusion: just as AGI is capable of connecting to almost any type of information source, depending solely on the implementation of the AGI script and not on Asterisk, Asterisk is capable of interfacing with almost any type of information source via out-of-band facilities. Enough talking! Let's write our first AGI script.
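The start-up handshake the PHP loop handles — "name: value" lines from Asterisk, terminated by a blank line — is language-agnostic, so the same parsing can be sketched in any language. Here is a hypothetical Java rendering (class name and sample variable values are illustrative), with the sample string standing in for Asterisk's STDIN:

```java
// Hypothetical sketch of parsing the AGI start-up variables, mirroring the
// PHP loop above: collect "name: value" lines into a map until a blank line.
// The sample input in main() stands in for what Asterisk writes to STDIN.
import java.util.LinkedHashMap;
import java.util.Map;

public class AgiEnvParser {
    public static Map<String, String> parse(String input) {
        Map<String, String> agiVars = new LinkedHashMap<>();
        for (String line : input.split("\n")) {
            if (line.isEmpty()) {
                break; // a blank line ends the AGI variable block
            }
            String[] parts = line.split(":", 2);
            if (parts.length == 2) {
                agiVars.put(parts[0].trim(), parts[1].trim());
            }
        }
        return agiVars;
    }

    public static void main(String[] args) {
        String sample = "agi_request: my_agi.php\nagi_channel: SIP/1000-1\n\nlater: ignored\n";
        System.out.println(parse(sample));
    }
}
```

Splitting on the first colon only (the `2` limit) keeps values that themselves contain colons, such as channel names, intact; the PHP version's `explode(":", $temp)` has the same intent.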

Packt
16 Oct 2009
4 min read

Arrays and Control Structures in Object-Oriented JavaScript

Arrays

Now that you know the basic primitive data types in JavaScript, it's time to move to a more interesting data structure: the array. To declare a variable that contains an empty array, you use square brackets with nothing between them:

   >>> var a = [];
   >>> typeof a;
   "object"

typeof returns "object", but don't worry about this for the time being; we'll get to that when we take a closer look at objects. To define an array that has three elements, you do this:

   >>> var a = [1,2,3];

When you simply type the name of the array in the Firebug console, it prints the contents of the array:

   >>> a
   [1, 2, 3]

So what is an array, exactly? It's simply a list of values. Instead of using one variable to store one value, you can use one array variable to store any number of values as elements of the array. Now the question is how to access each of these stored values. The elements contained in an array are indexed with consecutive numbers starting from zero. The first element has index (or position) 0, the second has index 1, and so on. Here's the three-element array from the previous example:

   Index   Value
   0       1
   1       2
   2       3

In order to access an array element, you specify the index of that element inside square brackets. So a[0] gives you the first element of the array a, a[1] gives you the second, and so on:

   >>> a[0]
   1
   >>> a[1]
   2

Adding/Updating Array Elements

Using the index, you can also update elements of the array. The next example updates the third element (index 2) and prints the contents of the new array:

   >>> a[2] = 'three';
   "three"
   >>> a
   [1, 2, "three"]

You can add more elements by addressing an index that didn't exist before:

   >>> a[3] = 'four';
   "four"
   >>> a
   [1, 2, "three", "four"]

If you add a new element but leave a gap in the array, the elements in between are all assigned the undefined value.
Check out this example:

   >>> var a = [1,2,3];
   >>> a[6] = 'new';
   "new"
   >>> a
   [1, 2, 3, undefined, undefined, undefined, "new"]

Deleting Elements

In order to delete an element, you can use the delete operator. It doesn't actually remove the element, but sets its value to undefined. After the deletion, the length of the array does not change:

   >>> var a = [1, 2, 3];
   >>> delete a[1];
   true
   >>> a
   [1, undefined, 3]

Arrays of arrays

An array can contain any type of value, including other arrays:

   >>> var a = [1, "two", false, null, undefined];
   >>> a
   [1, "two", false, null, undefined]
   >>> a[5] = [1,2,3]
   [1, 2, 3]
   >>> a
   [1, "two", false, null, undefined, [1, 2, 3]]

Let's see an example where you have an array of two elements, each of them being an array:

   >>> var a = [[1,2,3],[4,5,6]];
   >>> a
   [[1, 2, 3], [4, 5, 6]]

The first element of the array is a[0], and it is an array itself:

   >>> a[0]
   [1, 2, 3]

To access an element in the nested array, you refer to the element index in another set of square brackets:

   >>> a[0][0]
   1
   >>> a[1][2]
   6

Note also that you can use the array notation to access individual characters inside a string:

   >>> var s = 'one';
   >>> s[0]
   "o"
   >>> s[1]
   "n"
   >>> s[2]
   "e"

There are more ways to have fun with arrays, but let's stop here for now, remembering that:

- An array is a data store
- An array contains indexed elements
- Indexes start from zero and increment by one for each element in the array
- To access array elements, we use the index in square brackets
- An array can contain any type of data, including other arrays

Packt
16 Oct 2009
2 min read

A Primer to AGI: Asterisk Gateway Interface

How does AGI work

Let's examine the following diagram: As the previous diagram illustrates, an AGI script communicates with Asterisk via two standard data streams—STDIN (Standard Input) and STDOUT (Standard Output). From the AGI script's point of view, any input coming in from Asterisk is considered STDIN, while output to Asterisk is considered STDOUT.

The idea of using STDIN/STDOUT data streams with applications isn't a new one, even if you're a junior-level programmer. Think of it as handling any input from Asterisk with a read directive and sending output to Asterisk with a print or echo directive. When thinking about it in such a simplistic manner, it is clear that AGI scripts can be written in any scripting or programming language, ranging from BASH scripting, through PERL/PHP scripting, to even writing C/C++ programs to perform the same task.

Let's now examine how an AGI script is invoked from within the Asterisk dialplan:

exten => _X.,1,AGI(some_script_name.agi,param1,param2,param3)

As you can see, the invocation is similar to the invocation of any other Asterisk dialplan application. However, there is one major difference between a regular dialplan application and an AGI script—the resources an AGI script consumes. While an internal application consumes a well-known set of resources from Asterisk, an AGI script simply hands over control to an external process. Thus, the resources required to execute the external AGI script are now unknown, while at the same time, Asterisk consumes the resources for managing the execution of the AGI script.

OK, so BASH isn't much of a resource hog, but what about Java? This means that the choice of programming language for your AGI scripts is important. Choosing the wrong programming language can often lead to slow systems and, in most cases, non-operational systems.
While one may argue that the underlying programming language has a direct impact on the performance of your AGI application, it is imperative to understand the impact of each choice. To be more exact, it's not the language itself but the technology of the language runtime that matters. The following table tries to distinguish between three programming-language families and their applicability to AGI development.
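The STDIN/STDOUT exchange described above can be sketched in a few lines. The snippet below is illustrative, not from the article: Asterisk opens the AGI script and sends a block of "agi_*: value" header lines on STDIN, terminated by a blank line, after which the script replies with commands on STDOUT. The header text and the helper name here are assumptions for the sketch.

```javascript
// Hypothetical minimal AGI handshake sketch (illustrative names, not from
// the article). Asterisk sends "agi_*: value" lines on STDIN, terminated
// by a blank line; the script replies with commands on STDOUT.

// Parse the agi_* header block into a plain object.
function parseAgiHeader(text) {
  const env = {};
  for (const line of text.split('\n')) {
    const match = line.match(/^agi_(\w+):\s*(.*)$/);
    if (match) {
      env[match[1]] = match[2];
    }
  }
  return env;
}

// Example header block, shaped like what Asterisk would send.
const header = 'agi_request: some_script_name.agi\n' +
               'agi_channel: SIP/1000-00000001\n' +
               'agi_extension: 500\n\n';

const env = parseAgiHeader(header);
// A real script would now issue commands on STDOUT, e.g.:
//   process.stdout.write('SAY NUMBER 42 ""\n');
console.log(env.extension); // -> 500
```

A real AGI script would wire parseAgiHeader to process.stdin and then loop, writing commands and reading Asterisk's "200 result=..." replies.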
JBI Binding Components in NetBeans IDE 6

Packt
16 Oct 2009
4 min read
Binding Components

Service Engines are pluggable components that connect to the Normalized Message Router (NMR) to perform business logic for clients. Binding components are also standard JSR 208 components that plug into the NMR, and they provide transport independence to the NMR and the Service Engines. The role of binding components is to isolate communication protocols from the JBI container, so that Service Engines are completely decoupled from the communication infrastructure. For example, the BPEL Service Engine can receive requests to initiate a BPEL process while reading files on the local file system. It can receive these requests from SOAP messages, from a JMS message, or from any of the other binding components installed into the JBI container.

A Binding Component is a JSR 208 component that provides protocol-independent transport services to other JBI components. The following figure shows how binding components fit into the JBI container architecture. In this figure, we can see that the role of a binding component is to send and receive messages, both internally and externally, from the Normalized Message Router, using protocols specific to the binding component. We can also see that any number of binding components can be installed into the JBI container. The figure also shows that, like Service Engines (SE), binding components do not communicate directly with other binding components or with Service Engines. All communication between individual binding components, and between binding components and Service Engines, is performed by sending standard messages through the Normalized Message Router.

NetBeans Support for Binding Components

The following table lists which binding components are installed into the JBI container with NetBeans 5.5 and NetBeans 6.0. As is the case with Service Engines, binding components can be managed within the NetBeans IDE.
The list of binding components installed into the JBI container can be displayed by expanding the Servers | Sun Java System Application Server 9 | JBI | Binding Components node within the Services explorer. The lifecycle of binding components can be managed by right-clicking on a binding component and selecting a lifecycle operation: Start, Stop, Shutdown, or Uninstall. The properties of an individual binding component can also be obtained by selecting the Properties option from the context menu, as shown in the following figure.

Now that we've discussed what binding components are and how they communicate, both internally and externally, with the Normalized Message Router, let's take a closer look at some of the more common binding components and how they are accessed and managed from within the NetBeans IDE.

File Binding Component

The file binding component provides a communications mechanism for JBI components to interact with the file system. It can act both as a Provider, by checking for new files to process, and as a Consumer, by outputting files for other processes or components.

The figure above shows the file binding component acting as a Provider of messages. In this scenario, a message has been sent to the JBI container and picked up by a protocol-specific binding component (for example, a SOAP message has been received). A JBI process then occurs within the JBI container, which may include routing the message between many different binding components and Service Engines, depending upon the process. Finally, after the JBI process has completed, the results of the process are sent to the file binding component, which writes them out to a file.

The figure above shows the file binding component acting as a Consumer of messages. In this situation, the file binding component periodically polls the file system, looking for files with a specified filename pattern in a specified directory.
When the binding component finds a file that matches its criteria, it reads in the file and starts the JBI process, which may again cause the input message to be routed between many different binding components and Service Engines. Finally, in this example, the results of the JBI process are output via a binding component. Of course, it is possible for a binding component to act as both a provider and a consumer within the same JBI process. In this case, the file binding component would initially be responsible for reading an input message from the file system; after any JBI processing has occurred, it would then write out the results of the process to a file. Within the NetBeans Enterprise Pack, the entire set of properties for the file binding component can be edited within the Properties window. The properties for the binding component are displayed when either the input or output messages are selected from the WSDL in a composite application, as shown in the following figure.
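The decoupling described above, where components never call each other directly but only exchange messages through the router, can be sketched outside of JBI. The class and method names below are purely illustrative and are not part of the JSR 208 API.

```javascript
// Toy sketch of the routing idea (illustrative names, NOT the JSR 208 API):
// components never reference each other; they only hand messages to the router.
class NormalizedMessageRouter {
  constructor() { this.endpoints = new Map(); }
  register(name, component) { this.endpoints.set(name, component); }
  send(target, message) {
    const component = this.endpoints.get(target);
    return component.accept(message, this); // delivery always goes via the router
  }
}

// A "binding component": translates an external event (a file) into a message.
const fileBinding = {
  accept(message, router) {
    return router.send('bpel-engine', { payload: message.fileContents });
  }
};

// A "service engine": pure business logic, unaware of transports.
const bpelEngine = {
  accept(message) {
    return `processed: ${message.payload}`;
  }
};

const router = new NormalizedMessageRouter();
router.register('file-binding', fileBinding);
router.register('bpel-engine', bpelEngine);

const result = router.send('file-binding', { fileContents: 'order.xml' });
console.log(result); // -> processed: order.xml
```

Swapping the file binding for a SOAP or JMS binding changes nothing on the engine side, which is exactly the transport independence the article describes.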

User Input Validation in Tapestry 5

Packt
16 Oct 2009
9 min read
Adding Validation to Components

The Start page of the web application Celebrity Collector has a login form that expects the user to enter some values into its two fields. But what if the user didn't enter anything and still clicked on the Log In button? Currently, the application will decide that the credentials are wrong, and the user will be redirected to the Registration page and receive an invitation to register. This logic does make some sense, but it isn't the best line of action, as the button might have been pressed by mistake. These two fields, User Name and Password, are actually mandatory, and if no value was entered into them, it should be considered an error. All we need to do for this is to add a required validator to every field, as seen in the following code:

<tr>
  <td>
    <t:label t:for="userName">Label for the first text box</t:label>:
  </td>
  <td>
    <input type="text" t:id="userName" t:type="TextField"
           t:label="User Name" t:validate="required"/>
  </td>
</tr>
<tr>
  <td>
    <t:label t:for="password">The second label</t:label>:
  </td>
  <td>
    <input type="text" t:id="password" t:label="Password"
           t:type="PasswordField" t:validate="required"/>
  </td>
</tr>

Just one additional attribute for each component, and let's see how this works now. Run the application, leave both fields empty, and click on the Log In button. Here is what you should see: both fields, including their labels, are now clearly marked as an error. We even have a graphical marker for the problematic fields. However, one thing is missing—a clear explanation of what exactly went wrong. To display such a message, one more component needs to be added to the page. Modify the page template, as done here:

<t:form t:id="loginForm">
  <t:errors/>
  <table>

The Errors component is very simple, but one important thing to remember is that it should be placed inside the Form component, which in turn surrounds the validated components. Let's run the application again and try to submit an empty form.
Now the result should look like this: this kind of feedback doesn't leave any space for doubt, does it? If you see that the error messages are strongly misplaced to the left, it means that an error in the default.css file that comes with the Tapestry distribution still hasn't been fixed. To override the faulty style, define it in our application's styles.css file like this:

DIV.t-error LI {
  margin-left: 20px;
}

Do not forget to make the stylesheet available to the page. I hope you will agree that the effort we had to make to get user input validated is close to zero. But let's see what Tapestry has done in response:

- Every form component has a ValidationTracker object associated with it. It is provided automatically; we do not need to care about it. Basically, ValidationTracker is the place where any validation problems, if they happen, are recorded.
- As soon as we use the t:validate attribute for a component in the form, Tapestry will assign to that component one or more validators; the number and type of them will depend on the value of the t:validate attribute (more about this later).
- As soon as a validator decides that the value entered for the component is not valid, it records an error in the ValidationTracker. Again, this happens automatically.
- If there are any errors recorded in ValidationTracker, Tapestry will redisplay the form, decorating the fields with erroneous input and their labels appropriately.
- If there is an Errors component in the form, it will automatically display error messages for all the errors in ValidationTracker. The error messages for standard validators are provided by Tapestry, while the name of the component mentioned in the message is taken from its label.

A lot of very useful functionality comes with the framework and works for us "out of the box", without any configuration or set-up! Tapestry comes with a set of validators that should be sufficient for most needs.
Let's have a more detailed look at how to use them.

Validators

The following validators come with the current distribution of Tapestry 5:

- Required—checks if the value of the validated component is not null or an empty string.
- MinLength—checks if the string (the value of the validated component) is not shorter than the specified length. You will see how to pass the length parameter to this validator shortly.
- MaxLength—same as above, but checks if the string is not too long.
- Min—ensures that the numeric value of the validated component is not less than the specified value, passed to the validator as a parameter.
- Max—as above, but ensures that the value does not exceed the specified limit.
- Regexp—checks if the string value fits the specified pattern.

We can use several validators for one component. Let's see how all this works together. First of all, let's add another component to the Registration page template:

<tr>
  <td><t:label t:for="age"/>:</td>
  <td><input type="text" t:type="textfield" t:id="age"/></td>
</tr>

Also, add the corresponding property to the Registration page class, age, of type double. It could be an int indeed, but I want to show that the Min and Max validators can work with fractional numbers too. Besides, someone might decide to enter their age as 23.4567. This would be weird, but not against the laws. Finally, add an Errors component to the form at the Registration page, so that we can see error messages:

<t:form t:id="registrationForm">
  <t:errors/>
  <table>

Now we can test all the available validators on one page. Let's specify the validation rules first:

- Both User Name and Password are required. Also, they should not be shorter than three characters and not longer than eight characters.
- Age is required, and it should not be less than five (change this number if you've got a prodigy in your family) and not more than 120 (as that would probably be a mistake).
- Email address is not required, but if entered, it should match a common pattern.
Here are the changes to the Registration page template that will implement the specified validation rules:

<td>
  <input type="text" t:type="textfield" t:id="userName"
         t:validate="required,minlength=3,maxlength=8"/>
</td>
...
<td>
  <input type="text" t:type="passwordfield" t:id="password"
         t:validate="required,minlength=3,maxlength=8"/>
</td>
...
<td>
  <input type="text" t:type="textfield" t:id="age"
         t:validate="required,min=5,max=120"/>
</td>
...
<input type="text" t:type="textfield" t:id="email" t:validate="regexp"/>

As you see, it is very easy to pass a parameter to a validator, like min=5 or maxlength=8. But where do we specify a pattern for the Regexp validator? The answer is: in the message catalog. Let's add the following line to the app.properties file:

email-regexp=^([a-zA-Z0-9_.-])+@(([a-zA-Z0-9-])+\.)+([a-zA-Z0-9]{2,4})+$

This will serve as the regular expression for all Regexp validators applied to components with ID email throughout the application. Run the application, go to the Registration page, and try to submit the empty form. Here is what you should see: it looks all right, but the message for age could be more sensible, something like "You are too young! You should be at least 5 years old." We'll deal with this later. For now, enter a very long username, only two characters for the password, and an age above the upper limit, and see how the messages change. Again, it looks good, except for the message about age. Next, enter some valid values for User Name, Password, and Age. Then click on the check box to subscribe to the newsletter. In the text box for email, enter some invalid value and click on Submit. Here is the result: yes, the validation worked properly, but the error message is absolutely unacceptable. Let's deal with this, but first make sure that any valid email address will pass the validation.

Providing Custom Error Messages

We can provide custom messages for validators in the application's (or page's) message catalog.
For such messages we use keys that are made of the validated component's ID, the name of the validator, and the "message" postfix. Here is an example of what we could add to the app.properties file to change the error messages for the Min and Max validators of the Age component, as well as the message used for the email validation:

email-regexp-message=Email address is not valid.
age-min-message=You are too young! You should be at least 5 years old.
age-max-message=People do not live that long!

Still better, instead of hard-coding the required minimal age into the message, we could insert into the message the parameter that was passed to the Min validator (following the rules of java.text.Format), like this:

age-min-message=You are too young! You should be at least %s years old.

If you run the application now and submit an invalid value for age, the error message will be much better. You might want to make sure that the other error messages have changed too. We can now successfully validate values entered into separate fields, but what if the validity of the input depends on how two or more different values relate to each other? For example, at the Registration page we want the two versions of the password to be the same, and if they are not, this should be considered invalid input and reported appropriately. Before dealing with this problem, however, we need to look more thoroughly at the different events generated by the Form component.
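The email pattern stored in the message catalog is engine-agnostic, so it can be sanity-checked outside Tapestry before being wired into the application. The quick check below is purely illustrative (in JavaScript rather than Java) and assumes the pattern as given in the article; the sample addresses are made up.

```javascript
// Sanity-check the article's email pattern outside Tapestry.
// The pattern is copied from the message catalog entry; the sample
// addresses are illustrative test scaffolding.
const emailRegexp = /^([a-zA-Z0-9_.-])+@(([a-zA-Z0-9-])+\.)+([a-zA-Z0-9]{2,4})+$/;

const samples = [
  ['stoyan@example.com', true],          // plain address
  ['first.last@mail.example.org', true], // dots in local part, subdomain
  ['not-an-email', false],               // no @ at all
  ['missing@tld', false],                // domain has no dot-separated TLD
];

for (const [address, expected] of samples) {
  console.log(address, emailRegexp.test(address) === expected ? 'ok' : 'MISMATCH');
}
```

Note that without the backslash before the dot in the domain group, the dot would match any character and the pattern would accept malformed domains.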

Flex 101 with Flash Builder 4: Part 1

Packt
16 Oct 2009
11 min read
The article is intended for developers who have never used Flex before and would like to work through a "Hello World" kind of tutorial. It does not aim to cover Flex and FB4 in detail, but rather focuses on the mechanics of FB4 and on getting an application running with minimal effort. For developers familiar with Flex and the predecessor to Flash Builder 4 (Flex Builder 2 or 3), it contains an introduction to FB4 and some differences in the way you go about building Flex applications using FB4. Even if you have not programmed before and are looking to understand how to make a start in developing applications, this will serve as a good starting point.

The Flex Ecosystem

The Flex ecosystem is a set of libraries, tools, languages, and a deployment runtime that provides an end-to-end framework for designing, developing, and deploying RIAs. All of these together are being branded as part of the Flash platform. The latest release, Flex 4, puts special effort into the designer-to-developer workflow: graphic designers address layout, skinning, effects, and the general look and feel of your application, and developers then take over to address the application logic, events, and so on. To understand this at a high level, take a look at the diagram shown below. This is a very simplified diagram, and the intention is to project a 10,000 ft view of the development, compilation, and execution process. Let us understand the diagram now:

1. The developer will typically work in the Flash Builder application. Flash Builder is the Integrated Development Environment (IDE) that provides an environment for coding, compiling, and running/debugging your Flex-based applications.
2. Your Flex application will typically consist of MXML and ActionScript code. ActionScript is an ECMAScript-compatible object-oriented language, whereas MXML is an XML-based markup language. Using MXML you can define and lay out your visual components, such as buttons, combo boxes, data grids, and others.
3. Your application logic will typically be coded inside ActionScript classes and methods. While coding your Flex application, you will make use of the Flex framework classes that provide most of the core functionality. Additional libraries, like the Flex charting libraries and third-party components, can be used in your application too.
4. Flash Builder compiles all of this into object byte code that can be executed inside the Flash Player.
5. Flash Player is the runtime host that executes your application.

This is a high-level introduction to the ecosystem, and as we work through the samples later in the article, things will start falling into place.

Flash Builder 4

Flash Builder is the new name for the development IDE previously known as Flex Builder. The latest release is 4, and it is currently in public beta. Flash Builder 4 is based on the Eclipse IDE, so if you are familiar with Eclipse-based tools, you will be able to find your way around quite easily. Flash Builder 4, like Flex Builder 3 before it, is a commercial product, and you need to purchase a development license. FB4 is currently in public beta and is available as a 30-day evaluation. Through the rest of the article, we will use FB4 exclusively to build and run the sample applications. Let us now look at setting up FB4.

Setting up your Development Environment

To set up Flash Builder 4, follow these steps:

1. The first step should be installing Flash Player 10 on your system. We will be developing with the Flex 4 SDK that comes along with Flash Builder 4, and it requires Flash Player 10. You can download the latest version of Flash Player from here: http://www.adobe.com/products/flashplayer/
2. Download Flash Builder 4 Public Beta from http://labs.adobe.com/technologies/flashbuilder4/. The page is shown below.
3. After the download, run the installer program and proceed with the rest of the installation.
4. Launch the Adobe Flash Builder Beta.
It will first prompt with a message that it is a trial version, as shown below. To continue in evaluation mode, select the option highlighted above and click Next. This will launch the Flash Builder IDE. Let us start coding with the Flash Builder 4 IDE. We will stick to tradition and write the "Hello World" application.

Hello World using Flash Builder 4

In this section, we will develop a basic Hello World application. While the application does not do much, it will help you get comfortable with the Flash Builder IDE.

Launch the Flash Builder IDE. We will be creating a Flex Project; Flash Builder will help us create the project that will contain all our files. To create a new Flex Project, click on File → New → Flex Project as shown below. This will bring up a dialog in which you will need to specify more details about the Flex Project that you plan to develop. You will need to provide at least the following information:

- Project Name: This is the name of your project. Enter a name that you want over here; in our case, we have named our project MyFirstFB4App.
- Application Type: We can develop both a web version and a desktop version of our application using Flash Builder. The web application will run inside a web browser and execute within the Flash Player plug-in; we will go with the Web option here. The desktop application runs inside the Adobe Integrated Runtime environment and can have more desktop-like features; we will skip that option for now.

We will let the other options remain as they are. We will use the Flex 4.0 SDK, and since we are currently not integrating with any server-side layer, we will leave that option as None/Other. Click on Finish at this point to create your Flex Project. This will create a main application file called MyFirstFB4App.mxml. We will come back to our coding a little later, but first we must familiarize ourselves with the Flash Builder IDE.

Let us first look at the Package Explorer to understand the files that have been created for the Flex Project:

- It consists of the main source file MyFirstFB4App.mxml. This is the main application file, or in other words, the bootstrap. All your source files (MXML and ActionScript code, along with assets like images) should go under the src folder. They can optionally be placed in packages too.
- The Flex 4.0 framework consists of several libraries that you compile your code against. You will end up using its framework code, components (visual and non-visual), and other classes. These classes are packaged in library files with the extension .swc. You do not typically need to do anything with them.
- Optionally, you can also use third-party components written by other companies and developers that are not part of the Flex framework. These libraries are packaged as .swc files too, and they can be placed in the libs folder.
- The typical cycle is to write and compile your code, that is, build your project. If your build is successful, the object code is generated in the bin-debug folder. When you deploy your application to a web server, you will need to pick up the contents of this folder.
- The html-template folder contains some boilerplate code for the container HTML page from which your object code will be referenced. It is possible to customize this, but we will not discuss that for now.

Double-click the MyFirstFB4App.mxml file. This is our main application file; the code listing is given below:

<?xml version="1.0" encoding="utf-8"?>
<s:Application minWidth="1024" minHeight="768">
</s:Application>

As discussed before, you will typically write one or more MXML files that will contain your visual components (although there can be non-visual components also).
By visual components, we mean controls like button, combo box, list, tree, and others. The file can also contain layout components and containers that help you lay out your design as per the application screen design. To view which components you can place on the main application canvas, select the Design view. In the lower half of the left pane you will see the Components tab, which covers most needs of your application's visual design.

Click on the Controls tree node. You will see several controls that you can use; from these, we will use the Button control for this application. Simply select the Button control and drag it to the Design view canvas. This will drop an instance of the Button control on the Design view.

Select the Button to see its Properties panel. The Properties panel is where you can set several attributes of a control at design time. In case the Properties panel is not visible, you can get to it by selecting Window → Properties from the main menu. In the Properties panel, we can change several key attributes:

- All controls can be uniquely identified and addressed in your code via the ID attribute. This is a unique name that you need to provide; go ahead and give it some meaningful name. In our case, we name it btnSayHello.
- Next, we can change the label so that instead of Button, it displays a message, for example, Say Hello.
- Finally, we want to wire some code such that if the button is clicked, we can perform some action, like displaying a message box saying Hello World. To do that, click the icon next to the "On click" edit field. It will offer two options; select the option for Generate Event Handler. This will generate the code and switch to the Source view.

The code is listed below for your reference:

<?xml version="1.0" encoding="utf-8"?>
<s:Application minWidth="1024" minHeight="768">
  <fx:Script>
    <![CDATA[
      protected function btnSayHello_clickHandler(event:MouseEvent):void
      {
        // TODO Auto-generated method stub
      }
    ]]>
  </fx:Script>
  <s:Button x="17" y="14" label="Button" id="btnSayHello"
            click="btnSayHello_clickHandler(event)"/>
</s:Application>

There are a few things to note here:

- As mentioned, most of your application logic will be written in ActionScript, and that is exactly what Flash Builder has generated for you. All such code is typically added inside a scripting block marked with the <fx:Script> tag. You can place your ActionScript methods here for use by the rest of the application.
- When we clicked on Generate Event Handler, Flash Builder generated the event handler code for us. This code is in ActionScript and was placed inside the <fx:Script> block. It adds a function, btnSayHello_clickHandler, that is invoked when the click event is fired on the button. Notice that the method has an empty body, that is, no implementation yet.

Let us run the application now to see what it looks like. To run the application, click on the Run icon in the main toolbar of Flash Builder. This will launch the web application. Clicking the Say Hello button will not do anything at this point, since there is no code written inside the handler, as we saw above. To display the message box, we add the code shown below (only the Script section is shown):

<fx:Script>
  <![CDATA[
    import mx.controls.Alert;
    protected function btnSayHello_clickHandler(event:MouseEvent):void
    {
      Alert.show("Hello World");
    }
  ]]>
</fx:Script>

We use one of the classes (called Alert) from the Flex framework. As in any other language, we need to specify which package the class comes from so that the compiler can resolve it. The Alert class belongs to the mx.controls package, and it has a static method called show() which takes a single parameter of type String: the message to be displayed, in our case "Hello World". To run this, press Ctrl+S to save your file (or choose File → Save from the main menu) and click on the Run icon in the main toolbar. This will launch the application, and on clicking the Say Hello button, you will see the Hello World alert window.

Flex 101 with Flash Builder 4: Part 2

Packt
16 Oct 2009
7 min read
Using Flash Builder Data Services In this section, we will write the same application that we have written in part one. However this time we will not write the code ourselves instead we'll let the Flash Builder generate the code for us. Flash Builder comes with powerful new features that ease integration with external services. It can auto generate client code that can invoke external services for us and bind the results to existing User Interface components. Let us look at building the same application that we developed in the previous section via the new Data Services and Binding wizardry that Flash Builder provides. Let us create a new Flex Project as shown below: Name this Flex Project as YahooNewsWithDataServices as shown below. We will go with the default settings for the other fields. Click on Finish button. You will get the standard boilerplate code that is produced for the main Application MXML file—YahooNewsWithDataServices.mxml. Switch to the Design View and the Properties tab as shown below. Modify the values for the Layout to spark.layouts.VerticalLayout and the Width and Height to 100%. This means that the main Application window will occupy the maximum area available on your screen and by choosing a Vertical Layout, all visual components dragged to the main Application Canvas will be arranged in a vertical fashion. Stay in Design view. From the Components Tab shown below select Data Controls → DataGrid Drag it onto the canvas on the right side. It will appear as shown below: Keep the DataGrid selected and go to the Properties Tab as shown below. Give it an ID value of dgYahooNews. Set its Width and Height to 100% so that it will occupy the entire parent container like the Application Window. Then click on the Configure Columns button. Clicking on the Configure Columns button will bring up the current columns that are added by default. The screenshot is shown below. We do not need any of these columns at this point. 
So select each of the columns and click on the Delete button. Finally click on the OK button. Save your work at this point through the Ctrl-S key combination. At this point in time, we will simply create an Application screen with a datagrid on it. Now we need to connect it to the Yahoo News Service and bind the datagrid rows/columns to the data being returned from the Service. In the previous section, we had seen how to do this by writing code. In this case, we will let Flash Builder generate the connectivity code and also do the binding for us. First step is to select the Connect to Data/Service option from the main menu as shown below: This will bring up the wizard as shown in the following screenshot. Flash Builder allows us to connect to several types of backend systems. In our case, we wish to connect to the Yahoo Most Emailed News RSS Service that is available at: http://rss.news.yahoo.com/rss/mostemailed. So we will choose the HTTP Service as shown below and click on the Next button. The next step will be a configuration screen, where we will provide the following information: Service Name: Give the Service a name that you like which u can refer to later. We name it YahooNewsService. Operations: We give an operation name called getNews. The HTTP Method is GET and in the field for URL, you need to provide the RSS Feed URL http://rss.news.yahoo.com/rss/mostemailed You can enter more than one operation name here. Alternately, if you were invoking a HTTP Service using the POST method, you could even specify the Parameters by adding parameters via the Add button.   Click on Finish. This will display a Message window as shown below which tells you that there is still some incomplete work or a few last steps remaining before you can complete the definition of the service and bind it to a UI Component (in our case it is the DataGrid). Click on OK to dismiss the message. Once again, switch to Design view and click on the button for the Data Provider shown below. 
What we are going to do is associate/bind the result of the service invocation to the DataGrid via its data provider. This will bring up the Bind To Data window as shown below. It should be clear by now that we can select the service and the operation whose result we wish to bind to our DataGrid. Flash Builder automatically shows us the service name and an operation name. If more services are defined, it will allow us to select the appropriate service and its respective operations.

Note that it asks us to Configure Return Type as shown. This is because Flash Builder needs to understand the data that will be returned by the service invocation, which also helps it map the result data types to appropriate ActionScript data structures. Click on the Configure Return Type button. This will bring up the Configure Operation Return Type window. Give a name for the custom data type as we have done below and click on the Next button.

The next step is to specify the kind of data that will be returned. Flash Builder simplifies this by allowing us to give the complete URL so that it can invoke it and determine the structure that is returned. If you already have a sample response, you could even choose the third option. In our case, we will go with the second option of entering a complete URL as shown below. We also specify the RSS feed URL. Click on the Next button.

This will invoke the RSS feed and retrieve the HTTP response. Since the response returned is XML, Flash Builder is able to map it generically to a tree-like structure as shown below. The structure is a typical RSS XML structure. The root node, rss, is shown in the Select Node field and all of its child nodes are shown in the grid below it. Since we are only interested in the RSS items, and in extracting the title and category, we first navigate to and choose the item field in the Select Node field.
This will show all the child nodes of the item field as shown below. Delete all the fields except title and category. To delete a field, select it and click on the Delete button. Click on the Finish button. This will bring up the original Bind To Data form. Click on the OK button.

We are all set now. Save your work and click on the Run button on the main toolbar. Flash Builder will launch the application. The DataGrid, on creation, will invoke the HTTP service and the news items will be retrieved. The title and category fields are shown for each news item as shown below:

Summary

This article provided an introduction to Flex 4 via its development environment, Flash Builder. While the applications that we covered in the article are not too practical, they should give you a glimpse of the power of the Flex framework and its tools, and of the developer productivity that is one of its key strengths. Interested readers are encouraged to use this as a starting point in the world of developing Rich Internet Applications with Flex.
Packt
16 Oct 2009
8 min read

Drools JBoss Rules 5.0 Flow (Part 2)
Transfer Funds work item

We'll now jump almost to the end of our process. After a loan is approved, we need a way of transferring the specified sum of money to the customer's account. This can be done with rules or, even better, with pure Java, as this task is procedural in nature. We'll create a custom work item so that we can easily reuse this functionality in other ruleflows. Note that if it were a once-off task, it would probably be better suited to an action node. The Transfer Funds node in the loan approval process is a custom work item.

A new custom work item can be defined using the following four steps (we'll see how they are accomplished later on):

Create a work item definition. This will be used by the Eclipse ruleflow editor and by the ruleflow engine to set and get parameters. For example, the following is an extract from the default WorkDefinitions.conf file that comes with Drools. It describes the 'Email' work definition. The configuration is written in MVEL. MVEL allows one to construct complex object graphs in a very concise format. This file contains a list of maps, List<Map<String, Object>>. Each map defines the properties of one work definition. The properties are: name, parameters (that this work item works with), displayName, icon, and customEditor (the last three are used when displaying the work item in the Eclipse ruleflow editor). A custom editor is opened after double-clicking on the ruleflow node.

import org.drools.process.core.datatype.impl.type.StringDataType;
[
  [
    "name" : "Email",
    "parameters" : [
      "From" : new StringDataType(),
      "To" : new StringDataType(),
      "Subject" : new StringDataType(),
      "Body" : new StringDataType()
    ],
    "displayName" : "Email",
    "icon" : "icons/import_statement.gif",
    "customEditor" : "org.drools.eclipse.flow.common.editor.editpart.work.EmailCustomEditor"
  ]
]

Code listing 13: Excerpt from the default WorkDefinitions.conf file.

A work item's parameters property is a map of parameter names and their value wrappers.
The value wrapper must implement the org.drools.process.core.datatype.DataType interface.

Register the work definitions with the knowledge base configuration. This will be shown in the next section.

Create a work item handler. This handler represents the actual behavior of a work item. It will be invoked whenever the ruleflow execution reaches this work item node. All handlers must implement the org.drools.runtime.process.WorkItemHandler interface. It defines two methods: one for executing the work item and another for aborting it. Drools comes with some default work item handler implementations, for example, a handler for sending emails: org.drools.process.workitem.email.EmailWorkItemHandler. This handler needs a working SMTP server. It must be set through the setConnection method before registering the work item handler with the work item manager (next step). Another default work item handler was shown in code listing 2 (in the first part): SystemOutWorkItemHandler.

Register the work item handler with the work item manager.

After reading this, you may ask: why doesn't the work item definition also specify the handler? It is because a work item can have one or more work item handlers that can be used interchangeably. For example, in a test case, we may want to use a different work item handler than in the production environment. We'll now follow this four-step process and create a Transfer Funds custom work item.

Work item definition

Our transfer funds work item will have three input parameters: source account, destination account, and the amount to transfer. Its definition is as follows:

import org.drools.process.core.datatype.impl.type.ObjectDataType;
[
  [
    "name" : "Transfer Funds",
    "parameters" : [
      "Source Account" : new ObjectDataType("droolsbook.bank.model.Account"),
      "Destination Account" : new ObjectDataType("droolsbook.bank.model.Account"),
      "Amount" : new ObjectDataType("java.math.BigDecimal")
    ],
    "displayName" : "Transfer Funds",
    "icon" : "icons/transfer.gif"
  ]
]

Code listing 14: Work item definition from the BankingWorkDefinitions.conf file.

The Transfer Funds work item definition above declares the usual properties. It doesn't have a custom editor, as was the case with the email work item. All of the parameters are of the ObjectDataType type. This is a wrapper that can wrap any type. In our case, we are wrapping the Account and BigDecimal types. We've also specified an icon that will be displayed in the ruleflow editor's palette and in the ruleflow itself. The icon should be 16x16 pixels in size.

Work item registration

First, make sure that the BankingWorkDefinitions.conf file is on your classpath. We now have to tell Drools about our new work item. This can be done by creating a drools.rulebase.conf file with the following contents:

drools.workDefinitions = WorkDefinitions.conf BankingWorkDefinitions.conf

Code listing 15: Contents of the drools.rulebase.conf file (all on one line).

When Drools starts up, it scans the classpath for configuration files. Configuration specified in the drools.rulebase.conf file overrides the default configuration. In this case, only the drools.workDefinitions setting is being overridden. We already know that the WorkDefinitions.conf file contains the default work items such as email and log. We want to keep those and just add ours. As can be seen from the code listing above, the drools.workDefinitions setting accepts a list of configurations. They must be separated by a space. When we now open the ruleflow editor in Eclipse, the ruleflow palette should contain our new Transfer Funds work item. If you want to know more about the file-based configuration resolution process, you can look into the org.drools.util.ChainedProperties class.

Work item handler

Next, we'll implement the work item handler. It must implement the org.drools.runtime.process.WorkItemHandler interface, which defines two methods: executeWorkItem and abortWorkItem. The implementation is as follows:

/**
 * work item handler responsible for transferring amount from
 * one account to another using bankingService.transfer method
 * input parameters: 'Source Account', 'Destination Account'
 * and 'Amount'
 */
public class TransferWorkItemHandler implements WorkItemHandler {
  BankingService bankingService;

  public void executeWorkItem(WorkItem workItem,
      WorkItemManager manager) {
    Account sourceAccount = (Account) workItem
        .getParameter("Source Account");
    Account destinationAccount = (Account) workItem
        .getParameter("Destination Account");
    BigDecimal sum = (BigDecimal) workItem
        .getParameter("Amount");
    try {
      bankingService.transfer(sourceAccount,
          destinationAccount, sum);
      manager.completeWorkItem(workItem.getId(), null);
    } catch (Exception e) {
      e.printStackTrace();
      manager.abortWorkItem(workItem.getId());
    }
  }

  /**
   * does nothing as this work item cannot be aborted
   */
  public void abortWorkItem(WorkItem workItem,
      WorkItemManager manager) {
  }
}

Code listing 16: Work item handler (TransferWorkItemHandler.java file).

The executeWorkItem method retrieves the three declared parameters and calls the bankingService.transfer method (the implementation of this method won't be shown). If all went OK, the manager is notified that this work item has been completed. It needs the ID of the work item and, optionally, a result parameter map. In our case, it is set to null. If an exception happens during the transfer, the manager is told to abort this work item. The abortWorkItem method on our handler doesn't do anything, because this work item cannot be aborted. Please note that the work item handler must be thread-safe. Many ruleflow instances may reuse the same work item instance.
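The thread-safety requirement is worth a closer look. Because one handler instance is shared by many ruleflow instances, keeping per-execution data in a field is a bug. The following plain-Java sketch (the class and method names are illustrative, not the Drools API) shows the failure mode and the safe pattern:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrates why a work item handler must not keep per-execution state in
// fields: a single handler instance is shared by all ruleflow instances.
public class HandlerStateSketch {

    // UNSAFE: the amount read in begin() is stored in a field, so a second
    // execution that starts before the first one finishes overwrites it.
    static class StatefulHandler {
        private Object amount;
        void begin(Map<String, Object> params) {
            amount = params.get("Amount");
        }
        Object complete() {
            return amount;
        }
    }

    // SAFE: everything the execution needs stays in parameters and locals.
    static class StatelessHandler {
        Object execute(Map<String, Object> params) {
            return params.get("Amount"); // no shared mutable state
        }
    }

    static Object demonstrateLostUpdate() {
        StatefulHandler shared = new StatefulHandler();
        Map<String, Object> first = new HashMap<String, Object>();
        first.put("Amount", "4000.00");
        Map<String, Object> second = new HashMap<String, Object>();
        second.put("Amount", "99.00");
        shared.begin(first);
        shared.begin(second);     // a second ruleflow instance interleaves
        return shared.complete(); // the first instance now sees "99.00"
    }

    public static void main(String[] args) {
        System.out.println(demonstrateLostUpdate()); // prints 99.00, not 4000.00
    }
}
```

This is why TransferWorkItemHandler above keeps only the injected service (set once, before registration) in a field, and reads everything execution-specific from the WorkItem parameters.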
Work item handler registration

The transfer work item handler can be registered with a WorkItemManager as follows:

TransferWorkItemHandler transferHandler =
    new TransferWorkItemHandler();
transferHandler.setBankingService(bankingService);
session.getWorkItemManager().registerWorkItemHandler(
    "Transfer Funds", transferHandler);

Code listing 17: TransferWorkItemHandler registration (DefaultLoanApprovalServiceTest.java file).

A new instance of this handler is created and the banking service is set. The handler is then registered with the WorkItemManager of a session. Next, we need to 'connect' this work item into our ruleflow, that is, set its parameters once it is executed. We need to set the source account, the destination account, and the amount to be transferred. We'll use the in-parameter mappings of Transfer Funds to set these parameters: the Source Account parameter is mapped to the loanSourceAccount ruleflow variable, the Destination Account parameter is set to the destination account of the loan, and the Amount parameter is set to the loan amount.

Testing the transfer work item

This test will verify that the Transfer Funds work item is correctly executed with all of the parameters set, and that it calls the bankingService.transfer method with the correct parameters. For this test, the bankingService service will be mocked with the jMock library (jMock is a lightweight mock object library for Java; more information can be found at http://www.jmock.org/). First, we need to set up the banking service mock object in the following manner:

mockery = new JUnit4Mockery();
bankingService = mockery.mock(BankingService.class);

Code listing 18: jMock setup of the bankingService mock object (DefaultLoanApprovalServiceTest.java file).

Next, we can write our test. We are expecting one invocation of the transfer method with loanSourceAccount and the loan's destination and amount properties.
The test will then set up the transfer work item as in code listing 17, start the process, and approve the loan (more about this is discussed in the next section). The test also verifies that the Transfer Funds node has been executed. The test method's implementation is as follows:

@Test
public void transferFunds() {
  mockery.checking(new Expectations() {
    {
      one(bankingService).transfer(loanSourceAccount,
          loan.getDestinationAccount(), loan.getAmount());
    }
  });
  setUpTransferWorkItem();
  setUpLowAmount();
  startProcess();
  approveLoan();

  assertTrue(trackingProcessEventListener.isNodeTriggered(
      PROCESS_LOAN_APPROVAL, NODE_WORK_ITEM_TRANSFER));
}

Code listing 19: Test for the Transfer Funds work item (DefaultLoanApprovalServiceTest.java file).

The test should execute successfully.
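The interchangeability of handlers that this test relies on comes from dispatch by work item name: the same "Transfer Funds" definition can be backed by a production handler or a test stub. A minimal plain-Java sketch of the idea (simplified types, not the actual Drools classes):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of why work item definitions and handlers are registered
// separately: the manager dispatches by work item name, so a test can swap
// in a fake handler without touching the ruleflow.
public class HandlerRegistrySketch {

    interface Handler {
        String execute(Map<String, Object> params);
    }

    private final Map<String, Handler> handlers = new HashMap<String, Handler>();

    public void registerWorkItemHandler(String workItemName, Handler handler) {
        handlers.put(workItemName, handler);
    }

    public String dispatch(String workItemName, Map<String, Object> params) {
        Handler h = handlers.get(workItemName);
        if (h == null) {
            throw new IllegalStateException("No handler for: " + workItemName);
        }
        return h.execute(params);
    }

    public static void main(String[] args) {
        HandlerRegistrySketch manager = new HandlerRegistrySketch();
        // Production would register the real transfer handler; a test stubs it.
        manager.registerWorkItemHandler("Transfer Funds", new Handler() {
            public String execute(Map<String, Object> params) {
                return "transferred " + params.get("Amount");
            }
        });
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("Amount", "4000.00");
        System.out.println(manager.dispatch("Transfer Funds", params));
    }
}
```

In the real API, session.getWorkItemManager().registerWorkItemHandler(...) plays the role of the registration method above.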
Packt
16 Oct 2009
7 min read

Getting Started with Scratch 1.4 (Part 2)
Add sprites to the stage

In the first part, we learned that if we want something done in Scratch, we tell a sprite by using blocks in the scripts area. A single sprite can't be responsible for carrying out all our actions, which means we'll often need to add sprites to accomplish our goals. We can add sprites to the stage in one of the following four ways: paint new sprite, choose new sprite from file, get surprise sprite, or duplicate a sprite. Duplicating a sprite is not in the scope of this article. The buttons to insert a new sprite using the other three methods are directly above the sprites list.

Let's be surprised. Click on get surprise sprite (the button with the "?" on it). If the second sprite covers up the first sprite, grab one of them with your mouse and drag it around the screen to reposition it. If you don't like the sprite that popped up, delete it by selecting the scissors from the tool bar and clicking on the sprite. Then click on get surprise sprite again. Each sprite has a name that displays beneath its icon. See the previous screenshot for an example. Right now, our sprites are cleverly named Sprite1 and Sprite2.

Get new sprites

The paint new sprite option allows you to draw a sprite using the Paint Editor when you need a sprite that you can't find anywhere else. You can also create sprites using third-party graphics programs, such as Adobe Photoshop, GIMP, and Tux Paint. If you create a sprite in a different program, then you need to import the sprite using the choose new sprite from file option. Scratch also bundles many sprites with the installation, and the choose new sprite from file option will allow you to select one of the included files. The bundled sprites are categorized into Animals, Fantasy, Letters, People, Things, and Transportation, as seen in the following screenshot. If you look at the screenshot carefully, you'll notice the folder path lists Costumes, not sprites. A costume is really a sprite.
If you want to be surprised, then use the get surprise sprite option to add a sprite to the project. This option picks a random entry from the gallery of bundled sprites. We can also add a new sprite by duplicating a sprite that's already in the project: right-click on the sprite in the sprites list and choose duplicate (control-click on a Mac). As the name implies, this creates a clone of the sprite. The method we use to add a new sprite depends on what we are trying to do and what we need for our project.

Time for action – spin sprite spin

Let's get our sprites spinning:

1. To start, click on Sprite1 in the sprites list. This will let us edit the script for Sprite1.
2. From the Motion palette, drag the turn clockwise 15 degrees block into the script for Sprite1 and snap it in place after the if on edge, bounce block. Change the value on the turn block to 5.
3. From the sprites list, click on Sprite2.
4. From the Motion palette, drag the turn clockwise 15 degrees block into the scripts area.
5. Find the repeat 10 block in the Control palette and snap it around the turn clockwise 15 degrees block. Wrap the script in the forever block. Place the when space key pressed block on top of the entire stack of blocks.
6. From the Looks palette, snap the say hello for 2 secs block onto the bottom of the repeat block and above the forever block.
7. Change the value on the repeat block to 100. Change the value on the turn clockwise 15 degrees block to 270. Change the value on the say block to "I'm getting dizzy!"
8. Press the space bar and watch the second sprite spin. Click the flag and set the second sprite on a trip around the stage.

What just happened?

We have two sprites on the screen acting independently of each other. It seems simple enough, but let's step through our script. Our cat got bored bouncing in a straight line across the stage, so we introduced some rotation. Now as the cat walked, it turned five degrees each time the blocks in the forever loop ran.
This caused the cat to walk in an arc. As the cat bounced off the stage, it got a new trajectory. We told Sprite2 to turn 270 degrees for 100 consecutive times. Then the sprite stopped for two seconds and displayed a message, "I'm getting dizzy!" Because the script was wrapped in a forever block, Sprite2 started tumbling again. We used the space bar as the control to set Sprite2 in motion. However, you noticed that Sprite1 did not start until we clicked the flag. That's because we programmed Sprite1 to start when the flag was clicked.

Have a go hero

Make Sprite2 less spastic. Instead of turning 270 degrees, try a smaller value, such as 5.

Sometimes we need inspiration

So far, we've had a cursory introduction to Scratch, and we've created a few animations to illustrate some basic concepts. However, now is a good time to pause and talk about inspiration. Sometimes we learn by examining the work of other people and adapting that work to create something new, which leads to creative solutions. When we want to see what other people are doing with Scratch, we have two places to turn. First, our Scratch installation contains dozens of sample projects. Second, the Scratch web site at http://scratch.mit.edu maintains a thriving community of Scratchers.

Browse Scratch's projects

Scratch includes several categories of projects: Animation, Games, Greetings, Interactive Art, Lists, Music and Dance, Names, Simulations, Speak up, and Stories.

Time for action – spinner

Let's dive right in. From the Scratch interface, click the Open button to display the Open Project dialog box, as seen in the following screenshot. Click on the Examples button. Select Simulations and click OK. Select Spinner and click OK to load the Spinner project. Follow the instructions on the screen and spin the arrow by clicking on the arrow.

We're going to edit the spinner wheel. From the sprites list, click on Stage. From the scripts area, click the Backgrounds tab.
Click Edit on background number 1 to open the Paint Editor. Select a unique color from the color palette, such as purple. Click on the paint bucket in the toolbar, then click on one of the triangles in the circle to change its color. The paint bucket is highlighted in the following screenshot. Click OK to return to our project.

What just happened?

We opened a community project called Spinner that came bundled with Scratch. When we clicked on the arrow, it spun and randomly selected a color from the wheel. We got our first look at a project that uses a background for the stage, and we modified the background using Scratch's built-in image editor. The Paint Editor in Scratch provides a basic but functional image editing environment. Using the Paint Editor, we can create a new sprite or background and modify an existing one. This can be useful if we are working with a sprite or background that someone else has created.

Costume versus background

A costume defines the look of a sprite, while a background defines the look of the stage. A sprite may have multiple costumes, just as the stage can have multiple backgrounds. When we want to work with the backgrounds on the stage, we use the switch to background and next background blocks. We use the switch to costume and next costume blocks when we want to manipulate a sprite's costume. Actually, if you look closely at the available Looks blocks when you're working with a sprite, you'll realize that you can't select backgrounds. Likewise, if you're working with the stage, you can't select costumes.
Packt
16 Oct 2009
10 min read

Drools JBoss Rules 5.0 Flow (Part 1)
Loan approval service

Loan approval is a complex process starting with the customer requesting a loan. This request comes with information such as the amount to be borrowed, the duration of the loan, and the destination account to which the borrowed amount will be transferred. Only existing customers can apply for a loan. The process starts with validating the request. Upon successful validation, a customer rating is calculated. Only customers with a certain rating are allowed to have loans. The loan is processed by a bank employee. As soon as an approved event is received from a supervisor, the loan is approved and money can be transferred to the destination account. An email is sent to inform the customer about the outcome.

Model

If we look at this process from the domain modeling perspective, in addition to the model that we already have, we'll need a Loan class. An instance of this class will be a part of the context of this process. The screenshot above shows the Loan Java Bean for holding loan-related information. The Loan bean defines three properties: amount (of type BigDecimal), destinationAccount (of type Account; if the loan is approved, the amount will be transferred to this account), and durationYears (which represents the period over which the customer will repay this loan).

Loan approval ruleflow

We'll now represent this process as a ruleflow, shown in the following figure. Try to remember this figure, because we'll be referring back to it throughout this article. The preceding figure shows the loan approval process (loanApproval.rf file). You can use the Ruleflow Editor that comes with the Drools Eclipse plugin to create this ruleflow. The rest of the article will be a walk through this ruleflow, explaining each node in more detail. The process starts with the Validate Loan ruleflow group. Rules in this group will check the loan for missing required values and do other, more complex validation.
Each validation rule simply inserts a Message into the knowledge session. The next node, called Validated?, is an XOR-type split node. The ruleflow will continue through the no errors branch if there are no error or warning messages in the knowledge session; the split node constraint for this branch says:

not Message()

Code listing 1: Validated? split node no errors branch constraint (loanApproval.rf file).

For this to work, we need to import the Message type into the ruleflow. This can be done from the Constraint editor; just click on the Imports... button. The import statements are common to the whole ruleflow. Whenever we use a new type in the ruleflow (constraints, actions, and so on), it needs to be imported. The otherwise branch is a "catch all" type branch (it is set to 'always true'). It has a higher priority number, which means that it will be checked after the no errors branch.

The .rf files are pure XML files that conform to a well-defined XSD schema. They can be edited with any XML editor.

Invalid loan application form

If the validation didn't pass, an email is sent to the customer and the loan approval process finishes as Not Valid. This can be seen in the otherwise branch. There are two nodes: Email and Not Valid.

Email work item

A work item is a node that encapsulates some piece of work. This can be an interaction with another system or some logic that is easier to write using standard Java. Each work item represents a piece of logic that can be reused in many systems. We can also look at work items as a ruleflow alternative to DSLs. By default, Drools Flow comes with various generic work items, for example, Email (for sending emails), Log (for logging messages), Finder (for finding files on a file system), Archive (for archiving files), and Exec (for executing programs/system commands). In a real application, you'd probably want to use a different work item than a generic one for sending an email.
For example, a custom work item that inserts a record into your loan repository. Each work item can take multiple parameters. In the case of email, these are: From, To, Subject, Text, and others. Values for these parameters can be specified at ruleflow creation time or at runtime. By double-clicking on the Email node in the ruleflow, a Custom Work Editor is opened (see the following screenshot). Please note that not all work items have a custom editor. In the first tab (not visible), we can specify the recipients and the source email address. In the second tab (visible), we can specify the email's subject and body.

If you look closer at the body of the email, you'll notice two placeholders. They have the following syntax: #{placeholder}. A placeholder can contain any MVEL code and has access to all of the ruleflow variables (we'll learn more about ruleflow variables later in this article). This allows us to customize the work item parameters based on runtime conditions. As can be seen from the screenshot above, we use two placeholders: customer.firstName and errorList. customer and errorList are ruleflow variables. The first one represents the current Customer object and the second one is a ValidationReport. When the ruleflow execution reaches this email work item, these placeholders are evaluated and replaced with the actual values (by calling the toString method on the result).

Fault node

The second node in the otherwise branch of the loan approval process ruleflow is a fault node. A fault node is similar to an end node. It accepts one incoming connection and has no outgoing connections. When the execution reaches this node, a fault is thrown with the given name. We could, for example, register a fault handler that will generate a record in our reporting database. However, we won't register a fault handler, and in that case, it will simply indicate that this ruleflow finished with an error.

Test setup

We'll now write a test for the otherwise branch.
First, let's set up the test environment. A new session is created in the setup method along with some test data: a valid Customer with one Account is requesting a Loan. The setup method will create a valid loan configuration, and the individual tests can then change this configuration in order to test various exceptional cases.

@Before
public void setUp() throws Exception {
  session = knowledgeBase.newStatefulKnowledgeSession();
  trackingProcessEventListener =
      new TrackingProcessEventListener();
  session.addEventListener(trackingProcessEventListener);
  session.getWorkItemManager().registerWorkItemHandler(
      "Email", new SystemOutWorkItemHandler());
  loanSourceAccount = new Account();

  customer = new Customer();
  customer.setFirstName("Bob");
  customer.setLastName("Green");
  customer.setEmail("bob.green@mail.com");
  Account account = new Account();
  account.setNumber(123456789l);
  customer.addAccount(account);
  account.setOwner(customer);

  loan = new Loan();
  loan.setDestinationAccount(account);
  loan.setAmount(BigDecimal.valueOf(4000.0));
  loan.setDurationYears(2);
}

Code listing 2: Test setup method called before every test execution (DefaultLoanApprovalServiceTest.java file).

A tracking ruleflow event listener is created and added to the knowledge session. This event listener will record the execution path of a ruleflow, storing all of the executed ruleflow nodes in a list. TrackingProcessEventListener overrides the beforeNodeTriggered method and gets the node to be executed by calling event.getNodeInstance(). loanSourceAccount represents the bank's account for sourcing loans. The setup method also registers an Email work item handler. A work item handler is responsible for the execution of the work item (in this case, connecting to the mail server and sending out emails). However, the SystemOutWorkItemHandler implementation that we've used is only a dummy implementation that writes some information to the console. It is useful for our testing purposes.
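The tracking listener described above can be sketched as a simple recorder. The real class implements Drools' process event listener interface and extracts the node from the event; this stand-in (illustrative names, not the actual API) reduces the event to a node ID:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a tracking process event listener: it records every node that is
// about to be triggered, so tests can later assert on the execution path.
public class TrackingListenerSketch {

    private final List<Long> triggeredNodes = new ArrayList<Long>();

    // In Drools, this would be beforeNodeTriggered(ProcessNodeTriggeredEvent);
    // here the event is reduced to the node id for clarity.
    public void beforeNodeTriggered(long nodeId) {
        triggeredNodes.add(nodeId);
    }

    public boolean isNodeTriggered(long nodeId) {
        return triggeredNodes.contains(nodeId);
    }

    public static void main(String[] args) {
        TrackingListenerSketch listener = new TrackingListenerSketch();
        listener.beforeNodeTriggered(20L); // e.g. the Validated? split node
        listener.beforeNodeTriggered(21L); // e.g. the Not Valid fault node
        System.out.println(listener.isNodeTriggered(21L)); // true
    }
}
```

Tests then assert against the recorded path rather than against node names, which keeps them stable when the ruleflow is renamed.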
Testing the 'otherwise' branch of the 'Validated?' node

We'll now test the otherwise branch, which sends an email informing the applicant about missing data and ends with a fault. Our test (the following code) will set up a loan request that will fail the validation. It will then verify that the fault node was executed and that the ruleflow process was aborted.

@Test
public void notValid() {
  session.insert(new DefaultMessage());
  startProcess();

  assertTrue(trackingProcessEventListener.isNodeTriggered(
      PROCESS_LOAN_APPROVAL, NODE_FAULT_NOT_VALID));
  assertEquals(ProcessInstance.STATE_ABORTED,
      processInstance.getState());
}

Code listing 3: Test method for testing the Validated? node's otherwise branch (DefaultLoanApprovalServiceTest.java file).

By inserting a message into the session, we're simulating a validation error. The ruleflow should end up in the otherwise branch. Next, the test above calls the startProcess method. Its implementation is as follows:

private void startProcess() {
  Map<String, Object> parameterMap =
      new HashMap<String, Object>();
  parameterMap.put("loanSourceAccount", loanSourceAccount);
  parameterMap.put("customer", customer);
  parameterMap.put("loan", loan);
  processInstance = session.startProcess(
      PROCESS_LOAN_APPROVAL, parameterMap);
  session.insert(processInstance);
  session.fireAllRules();
}

Code listing 4: Utility method for starting the ruleflow (DefaultLoanApprovalServiceTest.java file).

The startProcess method starts the loan approval process. It also sets loanSourceAccount, loan, and customer as ruleflow variables. The resulting process instance is, in turn, inserted into the knowledge session. This will enable our rules to make more sophisticated decisions based on the state of the current process instance. Finally, all of the rules are fired. We're already supplying three variables to the ruleflow; however, we haven't declared them yet. Let's fix this.
Ruleflow variables can be added through Eclipse's Properties editor, as can be seen in the following screenshot (just click on the ruleflow canvas; this should give the focus to the ruleflow itself). Each variable needs a name, a type, and, optionally, a value. The preceding screenshot shows how to set the loan ruleflow variable. Its Type is set to Object and ClassName is set to the full type name, droolsbook.bank.model.Loan. The other two variables are set in a similar manner.

Now back to the test from code listing 3. It verifies that the correct nodes were triggered and that the process ended in the aborted state. The isNodeTriggered method takes the process ID, which is stored in a constant called PROCESS_LOAN_APPROVAL. The method also takes the node ID as its second argument. This node ID can be found in the Properties view after clicking on the fault node. The node ID, NODE_FAULT_NOT_VALID, is a constant of type long defined as a property of this test class:

static final long NODE_FAULT_NOT_VALID = 21;
static final long NODE_SPLIT_VALIDATED = 20;

Code listing 5: Constants that hold the fault and Validated? node IDs (DefaultLoanApprovalServiceTest.java file).

By using the node ID, we can change the node's name and other properties without breaking this test (the node ID is least likely to change). Also, if we're performing bigger refactorings involving node ID changes, we have only one place to update: the test's constants.

Ruleflow unit testing

Drools Flow support for unit testing isn't the best. With every test, we have to run the full process from start to end. We'll make it easier with some helper methods that will set up a state that exercises different parts of the flow, for example, a loan with a high amount to borrow or a customer with a low rating. Ideally, we should be able to test each node in isolation. Simply start the ruleflow in a particular node.
We would just set the parameters needed for a particular test and verify that the node executed as expected. Drools support for snapshots may resolve some of these issues; however, we'd first have to create all of the snapshots we need before executing the individual test methods. Another alternative is to dig deeper into the Drools internal API, but this is not recommended: the internal API can change in the next release without any notice.
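The helper-method approach mentioned above might look like the following sketch. The Loan class here is a minimal stand-in for the book's droolsbook.bank.model.Loan (its real API is an assumption), and the amount threshold is invented for illustration.

```java
import java.math.BigDecimal;

// Minimal stand-in for the domain class used by the tests
// (not the book's actual Loan implementation).
class Loan {
    private BigDecimal amount;
    public void setAmount(BigDecimal amount) { this.amount = amount; }
    public BigDecimal getAmount() { return amount; }
}

// Helper methods prepare domain state that steers the ruleflow down
// the branch under test, instead of starting the flow mid-way.
class LoanApprovalTestHelpers {
    // A loan whose amount is high enough to exercise the
    // "high amount to borrow" branch of the flow.
    static Loan highAmountLoan() {
        Loan loan = new Loan();
        loan.setAmount(new BigDecimal("300000"));
        return loan;
    }
}
```

Each test method would call one of these helpers before startProcess, keeping the per-branch setup in a single, named place.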
Human-readable Rules with Drools JBoss Rules 5.0 (Part 2)

Packt
16 Oct 2009
5 min read
Drools Agenda

Before we talk about how to manage rule execution order, we have to understand the Drools Agenda. When an object is inserted into the knowledge session, Drools tries to match this object against all of the possible rules. If a rule has all of its conditions met, its consequence can be executed; we say that the rule is activated. Drools records this event by placing the rule onto its agenda (a collection of activated rules). As you may imagine, many rules can be activated, and also deactivated, depending on what objects are in the rule session. After the fireAllRules method call, Drools picks one rule from the agenda and executes its consequence. That may or may not cause further activations or deactivations, and this continues until the agenda is empty. The purpose of the agenda is to manage the execution order of rules.

Methods for managing rule execution order

The following are the methods for managing the rule execution order (from the user's perspective). They can be viewed as alternatives to ruleflow. All of them are defined as rule attributes.

salience: This is the most basic one. Every rule has a salience value; by default it is set to 0. Rules with a higher salience value fire first. The problem with this approach is that it is hard to maintain. If we want to add a new rule with some priority, we may have to shift the priorities of existing rules. It is often hard to figure out why a rule has a certain salience, so we have to comment every salience value. It creates an invisible dependency on other rules.

activation-group: This used to be called xor-group. When two or more rules with the same activation group are on the agenda, Drools will fire just one of them.

agenda-group: Every rule has an agenda group. By default it is MAIN; however, this can be overridden. This allows us to partition the Drools Agenda into multiple groups that can be executed separately.

The figure above shows the partitioned Agenda with activated rules.
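To illustrate how these attributes are declared, a DRL rule header might look like the following. The rule name, fact type, and values are made up for illustration and are not taken from the book's example.

```
rule "Gold customer discount"
    salience 10                  // fires before rules with the default salience of 0
    agenda-group "pricing"       // only eligible while the "pricing" group has focus
    activation-group "discounts" // at most one rule from this group will fire
when
    $c : Customer( level == "gold" )
then
    // consequence goes here
end
```

All three attributes sit between the rule name and the when keyword, alongside any other rule attributes.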
The matched rules come in from the left and go into the Agenda. One rule at a time is chosen from the Agenda and then executed/fired. At runtime, we can set the active agenda group programmatically (through the getAgenda().getAgendaGroup(String agendaGroup).setFocus() method of KnowledgeRuntime) or declaratively, by setting the rule attribute auto-focus to true. When a rule with this attribute set to true is activated, the active agenda group is automatically changed to that rule's agenda group. Drools maintains a stack of agenda groups. Whenever the focus is set to a different agenda group, Drools pushes this group onto the stack. When there are no rules to fire in the current agenda group, Drools pops the stack and sets the agenda group to the next one. Agenda groups are similar to ruleflow groups, with the exception that ruleflow groups are not stacked. Note that only one instance of each of these attributes is allowed per rule (for example, a rule can only be in one ruleflow-group; however, it can also define salience within that group).

Ruleflow

As we've already said, a ruleflow can externalize the execution order from the rule definitions. Rules just define a ruleflow-group attribute, which is similar to agenda-group. It is then used to define the execution order. A simple ruleflow (in the example.rf file) is shown in the following screenshot:

The preceding screenshot shows a ruleflow opened with the Drools Eclipse plugin. On the left-hand side are the components that can be used when building a ruleflow. On the right-hand side is the ruleflow itself. It has a Start node that goes to a ruleflow group called Group 1. After the group finishes execution, an Action is executed, then the flow continues to another ruleflow group called Group 2, and finally it finishes at an End node. Ruleflow definitions are stored in a file with the .rf extension. This file has an XML format and defines the structure and layout for presentational purposes.
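The agenda-group stacking behavior described above can be sketched as a toy model in plain Java. This is not the Drools API; the class and method names here are invented for illustration, and real activations come from rule matching, not from explicit calls.

```java
import java.util.*;

// Toy model of the agenda-group stack: setting focus pushes a group;
// when the current group runs out of activations, it is popped and
// firing continues with the next group on the stack (MAIN at the bottom).
class AgendaGroupStack {
    private final Deque<String> stack = new ArrayDeque<>(List.of("MAIN"));
    private final Map<String, Queue<String>> activations = new HashMap<>();

    void addActivation(String group, String rule) {
        activations.computeIfAbsent(group, g -> new ArrayDeque<>()).add(rule);
    }

    void setFocus(String group) {
        stack.push(group); // focused group now fires first
    }

    // Fire rules group by group, popping each group once it is exhausted.
    List<String> fireAllRules() {
        List<String> fired = new ArrayList<>();
        while (!stack.isEmpty()) {
            Queue<String> current =
                activations.getOrDefault(stack.peek(), new ArrayDeque<>());
            if (current.isEmpty()) {
                stack.pop(); // nothing left here; fall back to the next group
                continue;
            }
            fired.add(current.poll());
        }
        return fired;
    }
}
```

With an activation in MAIN and one in a "validation" group that has the focus, the "validation" rule fires first, then control falls back to MAIN, mirroring the push/pop behavior described above.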
Another useful rule attribute for managing which rules can be activated is lock-on-active. It is a special form of the no-loop attribute. It can be used in combination with ruleflow-group or agenda-group. If it is set to true and an agenda/ruleflow group becomes active/focused, it discards any further activations for the rule until a different group becomes active. Please note that activations already on the agenda will be fired.

A ruleflow consists of various nodes. Each node has a name, a type, and other specific attributes. You can see and change these attributes by opening the standard Properties view in Eclipse while editing the ruleflow file. The basic node types are as follows:

Start
End
Action
RuleFlowGroup
Split
Join

They are discussed in the following sections.

Start
This is the initial node; the flow begins here. Each ruleflow needs one start node. This node has no incoming connection—just one outgoing connection.

End
This is a terminal node. When execution reaches this node, the whole ruleflow is terminated (all of the active nodes are canceled). This node has one incoming connection and no outgoing connections.

Action
Used to execute some arbitrary block of code. It is similar to a rule consequence: it can reference global variables and can specify a dialect.

RuleFlowGroup
This node activates the ruleflow-group specified by its RuleFlowGroup attribute. It should match the value of the ruleflow-group rule attribute.
Getting Started with Scratch 1.4 (Part 1)

Packt
16 Oct 2009
6 min read
Before we create any code, let's make sure we speak the same language.

The interface at a glance

When we encounter software that's unfamiliar to us, we often wonder, "Where do I begin?" Together, we'll answer that question and click through some important sections of the Scratch interface so that we can quickly start creating our own projects. Now, open Scratch and let's begin.

Time for action – first step

When we open Scratch, we notice that the development environment roughly divides into three distinct sections, as seen in the following screenshot. Moving from left to right, we have the following sections in sequential order:

Blocks palette
Script editor
Stage

Let's see if we can get our cat moving:

In the blocks palette, click on the Looks button.
Drag the switch to costume block onto the scripts area.
Now, in the blocks palette, click on the Control button.
Drag the when flag clicked block to the scripts area and snap it on top of the switch to costume block, as illustrated in the following screenshot.

How to snap two blocks together?
As you drag a block onto another block, a white line displays to indicate that the block you are dragging can be added to the script. When you see the white line, release your mouse to snap the block in place.

In the scripts area, click on the Costumes tab to display the sprite's costumes.
Click on costume2 to change the sprite on the stage. Now, click back on costume1 to change how the sprite displays on the stage.
Directly beneath the stage is a sprites list. The current list displays Sprite1 and Stage. Click on the sprite named Stage and notice that the scripts area changes. Click back on Sprite1 in the sprites list and again note the change to the scripts area.
Click on the flag above the stage to set our first Scratch program in motion. Watch closely, or you might miss it.

What just happened?

Congratulations! You created your first Scratch project. Let's take a closer look at what we did just now.
As we clicked through the blocks palette, we saw that the available blocks changed depending on whether we chose Motion, Looks, or Control. Each set of blocks is color-coded to help us easily identify them in our scripts. The first block we added to the script instructed the sprite to display costume2. The second block provided a way to control our script by clicking on the flag. Blocks with a smooth top are called hats in Scratch terminology because they can be placed only at the top of a stack of blocks.

Did you look closely at the blocks as you snapped the control block into the looks block? The bottom of the when flag clicked block had a protrusion, like a puzzle piece, that fits the indent on the top of the switch to costume block. As children, most of us probably played a game where we needed to put the round peg into the round hole. Building a Scratch program is just that simple: we see instantly how one block may or may not fit into another block. Stack blocks have indents on top and bumps on the bottom that allow blocks to lock together to form a sequence of actions that we call a script. A block depicting its indent and bump can be seen in the following screenshot:

When we clicked on the Costumes tab, we learned that our cat had two costumes, or appearances. Clicking on a costume caused the cat on the stage to change its appearance. As we clicked around the sprites list, we discovered our project had two sprites: a cat and a stage. And the script we created for the cat didn't transfer to the stage. We finished the exercise by clicking on the flag. The change was subtle, but our cat appeared to take its first step when it switched to costume2.

Basics of a Scratch project

Inside every Scratch project, we find the following ingredients: sprites, costumes, blocks, scripts, and a stage. It's how we mix the ingredients with our imagination that creates captivating stories, animations, and games.
Sprites bring our program to life, and every project has at least one. Throughout the book, we'll learn how to add and customize sprites. A sprite wears a costume; change the costume and you change the way the sprite looks. If the sprite happens to be the stage, the costume is known as a background. Blocks are just categories of instructions that include motion, looks, sound, pen, control, sensing, operators, and variables. Scripts define a set of blocks that tell a sprite exactly what to do. Each block represents an instruction or piece of information that affects the sprite in some way.

We're all actors on Scratch's stage

Think of each sprite in a Scratch program as an actor. Each actor walks onto the stage and recites a set of lines from the script. How each actor interacts with another actor depends on the words the director chooses. On Scratch's stage, every object, even the stone in the corner, is a sprite capable of contributing to the story. As directors, we have full creative control.

Time for action – save your work

It's a good practice to get in the habit of saving your work. Save your work early, and save it often:

To save your new project, click the disk icon at the top of the Scratch window or click File | Save As.
A Save Project dialog box opens and asks you for a location and a New Filename.
Enter some descriptive information for your project by supplying the Project author and notes About this project in the fields provided.

Set the cat in motion

Even though our script contains only two blocks, we have a problem. When we click on the flag, the sprite switches to a different costume and stops. If we try to click on the flag again, nothing appears to happen, and we can't get back to the first costume unless we go to the Costumes tab and select costume1. That's not fun. In our next exercise, we're going to switch between both costumes and create a lively animation.