
How-To Tutorials - Programming

1083 Articles

Implementing Stacks using JavaScript

Packt
22 Oct 2014
10 min read
In this article by Loiane Groner, author of the book Learning JavaScript Data Structures and Algorithms, we will discuss stacks. (For more resources related to this topic, see here.)

A stack is an ordered collection of items that follows the LIFO (short for Last In First Out) principle. The addition of new items and the removal of existing items take place at the same end. The end of the stack is known as the top and the opposite end is known as the base. The newest elements are near the top, and the oldest elements are near the base. We have several examples of stacks in real life, for example, a pile of books or a stack of trays from a cafeteria or food court. A stack is also used by compilers in programming languages and by computer memory to store variables and method calls.

Creating a stack

We are going to create our own class to represent a stack. Let's start from the basics and declare our class:

function Stack() {
  //properties and methods go here
}

First, we need a data structure that will store the elements of the stack. We can use an array to do this:

var items = [];

Next, we need to declare the methods available for our stack:

- push(element(s)): This adds a new item (or several items) to the top of the stack.
- pop(): This removes the top item from the stack. It also returns the removed element.
- peek(): This returns the top element from the stack. The stack is not modified (it does not remove the element; it only returns the element for information purposes).
- isEmpty(): This returns true if the stack does not contain any elements and false if the size of the stack is bigger than 0.
- clear(): This removes all the elements of the stack.
- size(): This returns how many elements the stack contains. It is similar to the length property of an array.

The first method we will implement is the push method. This method will be responsible for adding new elements to the stack, with one very important detail: we can only add new items to the top of the stack, meaning at the end of the stack. The push method is represented as follows:

this.push = function(element){
  items.push(element);
};

As we are using an array to store the elements of the stack, we can use the push method from the JavaScript array class.

Next, we are going to implement the pop method. This method will be responsible for removing items from the stack. As the stack uses the LIFO principle, the last item that we added is the one that is removed. For this reason, we can use the pop method from the JavaScript array class. The pop method is represented as follows:

this.pop = function(){
  return items.pop();
};

With the push and pop methods being the only methods available for adding and removing items from the stack, the LIFO principle will apply to our own Stack class.

Now, let's implement some additional helper methods for our class. If we would like to know what the last item added to our stack was, we can use the peek method. This method will return the item from the top of the stack:

this.peek = function(){
  return items[items.length-1];
};

As we are using an array to store the items internally, we can obtain the last item from an array using length - 1. For example, in the previous diagram, we have a stack with three items; therefore, the length of the internal array is 3. The last position used in the internal array is 2. As a result, length - 1 (3 - 1) is 2!
The next method is the isEmpty method, which returns true if the stack is empty (no item has been added) and false otherwise:

this.isEmpty = function(){
  return items.length == 0;
};

Using the isEmpty method, we can simply verify whether the length of the internal array is 0.

Similar to the length property from the array class, we can also implement length for our Stack class. For collections, we usually use the term "size" instead of "length". And again, as we are using an array to store the items internally, we can simply return its length:

this.size = function(){
  return items.length;
};

Finally, we are going to implement the clear method. The clear method simply empties the stack, removing all its elements. The simplest way of implementing this method is as follows:

this.clear = function(){
  items = [];
};

An alternative implementation would be calling the pop method until the stack is empty.

And we are done! Our Stack class is implemented. Just to make our lives easier during the examples, to help us inspect the contents of our stack, let's implement a helper method called print that is going to output the content of the stack on the console:

this.print = function(){
  console.log(items.toString());
};

And now we are really done!

The complete Stack class

Let's take a look at how our Stack class looks after its full implementation:

function Stack() {

  var items = [];

  this.push = function(element){
    items.push(element);
  };

  this.pop = function(){
    return items.pop();
  };

  this.peek = function(){
    return items[items.length-1];
  };

  this.isEmpty = function(){
    return items.length == 0;
  };

  this.size = function(){
    return items.length;
  };

  this.clear = function(){
    items = [];
  };

  this.print = function(){
    console.log(items.toString());
  };
}

Using the Stack class

Before we dive into some examples, we need to learn how to use the Stack class. The first thing we need to do is instantiate the Stack class we just created. Next, we can verify whether it is empty (the output is true because we have not added any elements to our stack yet):

var stack = new Stack();
console.log(stack.isEmpty()); //outputs true

Next, let's add some elements to it (let's push the numbers 5 and 8; you can add any element type to the stack):

stack.push(5);
stack.push(8);

If we call the peek method, the output will be the number 8 because it was the last element that was added to the stack:

console.log(stack.peek()); // outputs 8

Let's also add another element:

stack.push(11);
console.log(stack.size()); // outputs 3
console.log(stack.isEmpty()); //outputs false

We added the element 11. If we call the size method, it will give the output as 3, because we have three elements in our stack (5, 8, and 11). Also, if we call the isEmpty method, the output will be false (we have three elements in our stack). Finally, let's add another element:

stack.push(15);

The following diagram shows all the push operations we have executed so far and the current status of our stack:

Next, let's remove two elements from the stack by calling the pop method twice:

stack.pop();
stack.pop();
console.log(stack.size()); // outputs 2
stack.print(); // outputs [5, 8]

Before we called the pop method twice, our stack had four elements in it. After the execution of the pop method two times, the stack now has only two elements: 5 and 8.
The following diagram exemplifies the execution of the pop method:

Decimal to binary

Now that we know how to use the Stack class, let's use it to solve some Computer Science problems. You are probably already aware of the decimal base. However, binary representation is very important in Computer Science as everything in a computer is represented by binary digits (0 and 1). Without the ability to convert back and forth between decimal and binary numbers, it would be a little bit difficult to communicate with a computer. To convert a decimal number to a binary representation, we can divide the number by 2 (binary is a base 2 number system) until the division result is 0. As an example, we will convert the number 10 into binary digits. This conversion is one of the first things you learn in college (Computer Science classes). The following is our algorithm:

function divideBy2(decNumber){
  var remStack = new Stack(),
      rem,
      binaryString = '';

  while (decNumber > 0){ //{1}
    rem = Math.floor(decNumber % 2); //{2}
    remStack.push(rem); //{3}
    decNumber = Math.floor(decNumber / 2); //{4}
  }

  while (!remStack.isEmpty()){ //{5}
    binaryString += remStack.pop().toString();
  }

  return binaryString;
}

In this code, while the division result is not zero (line {1}), we get the remainder of the division (mod) and push it to the stack (lines {2} and {3}), and finally, we update the number that will be divided by 2 (line {4}). An important observation: JavaScript has a numeric data type, but it does not distinguish integers from floating points. For this reason, we need to use the Math.floor function to obtain only the integer value from the division operations. And finally, we pop the elements from the stack until it is empty, concatenating the elements that were removed from the stack into a string (line {5}).

We can try the previous algorithm and output its result on the console using the following code:

console.log(divideBy2(233));
console.log(divideBy2(10));
console.log(divideBy2(1000));

We can easily modify the previous algorithm to make it work as a converter from decimal to any base. Instead of dividing the decimal number by 2, we can pass the desired base as an argument to the method and use it in the divisions, as shown in the following algorithm:

function baseConverter(decNumber, base){
  var remStack = new Stack(),
      rem,
      baseString = '',
      digits = '0123456789ABCDEF'; //{6}

  while (decNumber > 0){
    rem = Math.floor(decNumber % base);
    remStack.push(rem);
    decNumber = Math.floor(decNumber / base);
  }

  while (!remStack.isEmpty()){
    baseString += digits[remStack.pop()]; //{7}
  }

  return baseString;
}

There is one more thing we need to change. In the conversion from decimal to binary, the remainders will be 0 or 1; in the conversion from decimal to octal, the remainders will be from 0 to 7; but in the conversion from decimal to hexadecimal, the remainders can be 0 to 9 plus the letters A to F (values 10 to 15). For this reason, we need to convert these values as well (lines {6} and {7}). We can use the previous algorithm and output its result on the console as follows:

console.log(baseConverter(100345, 2));
console.log(baseConverter(100345, 8));
console.log(baseConverter(100345, 16));
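The excerpt does not show what these calls print. Assuming the implementations above, the expected console output is as follows (added here as a quick check; the values are not part of the original text):

console.log(divideBy2(233));            // 11101001
console.log(divideBy2(10));             // 1010
console.log(divideBy2(1000));           // 1111101000
console.log(baseConverter(100345, 2));  // 11000011111111001
console.log(baseConverter(100345, 8));  // 303771
console.log(baseConverter(100345, 16)); // 187F9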
Summary

In this article, we learned about the stack data structure. We implemented our own algorithm that represents a stack and we learned how to add and remove elements from it using the push and pop methods. We also covered a very famous example of how to use a stack.

Resources for Article:

Further resources on this subject:
- Organizing Backbone Applications - Structure, Optimize, and Deploy [article]
- Introduction to Modern OpenGL [article]
- Customizing the Backend Editing in TYPO3 Templates [article]


BPMS Components

Packt
19 Aug 2014
8 min read
In this article by Mariano Nicolas De Maio, the author of jBPM6 Developer Guide, we will look into the various components of a Business Process Management (BPM) system. (For more resources related to this topic, see here.) BPM systems are pieces of software created with the sole purpose of guiding your processes through the BPM cycle. They were originally monolithic systems in charge of every aspect of a process, where they had to be heavily migrated from visual representations to executable definitions. They've come a long way from there, but we usually relate them to the same old picture in our heads when a system that runs all your business processes is mentioned. Nowadays, nothing is further from the truth. Modern BPM Systems are not monolithic environments; they're coordination agents. If a task is finished, they will know what to do next. If a decision needs to be made regarding the next step, they manage it. If a group of tasks can be concurrent, they turn them into parallel tasks. If a process's execution is efficient, they will perform the processing 0.1 percent of the time in the process engine and 99.9 percent of the time on tasks in external systems. This is because they will have no heavy executions within, only derivations to other systems. Also, they will be able to do this from nothing but a specific diagram for each process and specific connectors to external components. In order to empower us to do so, they need to provide us with a structure and a set of tools that we'll start defining to understand how BPM systems' internal mechanisms work, and specifically, how jBPM6 implements these tools. Components of a BPMS All big systems become manageable when we divide their complexities into smaller pieces, which makes them easier to understand and implement. BPM systems apply this by dividing each function in a different module and interconnecting them within a special structure that (in the case of jBPM6) looks something like the following figure: BPMS' internal structure Each component in the preceding figure resolves one particular function inside the BPMS architecture, and we'll see a detailed explanation on each one of them. The execution node The execution node, as seen from a black box perspective, is the component that receives the process definitions (a description of each step that must be followed; from here on, we'll just refer to them as processes). Then, it executes all the necessary steps in the established way, keeping track of each step, variable, and decision that has to be taken in each process's execution (we'll start calling these process instances). The execution node along with its modules are shown in the following figure: The execution node is composed of a set of low-level modules: the semantic module and the process engine. The semantic module The semantic module is in charge of defining each of the specific language semantics, that is, what each word means and how it will be translated to the internal structures that the process engine can execute. It consists of a series of parsers to understand different languages. It is flexible enough to allow you to extend and support multiple languages; it also allows the user to change the way already defined languages are to be interpreted for special use cases. It is a common component of most of the BPMSes out there, and in jBPM6, it allows you to add the extensions of the process interpretations to the module. 
This is so that you can add your own language parsers, and define your very own text-based process definition language or extend existing ones. The process engine The process engine is the module that is in charge of the actual execution of our business processes. It creates new process instances and keeps track of their state and their internal steps. Its job is to expose methods to inject process definitions and to create, start, and continue our process instances. Understanding how the process engine works internally is a very important task for the people involved in BPM's stage 4, that is, runtime. This is where different configurations can be used to improve performance, integrate with other systems, provide fault tolerance, clustering, and many other functionalities. Process Engine structure In the case of jBPM6, process definitions and process instances have similar structures but completely different objectives. Process definitions only show the steps it should follow and the internal structures of the process, keeping track of all the parameters it should have. Process instances, on the other hand, should carry all of the information of each process's execution, and have a strategy for handling each step of the process and keep track of all its actual internal values. Process definition structures These structures are static representations of our business processes. However, from the process engine's internal perspective, these representations are far from the actual process structure that the engine is prepared to handle. In order for the engine to get those structures generated, it requires the previously described semantic module to transform those representations into the required object structure. The following figure shows how this parsing process happens as well as the resultant structure: Using a process modeler, business analysts can draw business processes by dragging-and-dropping different activities from the modeler palette. For jBPM6, there is a web-based modeler designed to draw Scalable Vector Graphics (SVG) files; this is a type of image file that has the particularity of storing the image information using XML text, which is later transformed into valid BPMN2 files. Note that both BPMN2 and jBPM6 are not tied up together. On one hand, the BPMN2 standard can be used by other process engine provides such as Activiti or Oracle BPM Suite. Also, because of the semantic module, jBPM6 could easily work with other parsers to virtually translate any form of textual representation of a process to its internal structures. In the internal structures, we have a root component (called Process in our case, which is finally implemented in a class called RuleFlowProcess) that will contain all the steps that are represented inside the process definition. From the jBPM6 perspective, you can manually create these structures using nothing but the objects provided by the engine. 
Inside the jBPM6-Quickstart project, you will find a code snippet doing exactly this in the createProcessDefinition() method of the ProgrammedProcessExecutionTest class:

//Process Definition
RuleFlowProcess process = new RuleFlowProcess();
process.setId("myProgramaticProcess");

//Start Task
StartNode startTask = new StartNode();
startTask.setId(1);

//Script Task
ActionNode scriptTask = new ActionNode();
scriptTask.setId(2);
DroolsAction action = new DroolsAction();
action.setMetaData("Action", new Action() {
    @Override
    public void execute(ProcessContext context) throws Exception {
        System.out.println("Executing the Action!!");
    }
});
scriptTask.setAction(action);

//End Task
EndNode endTask = new EndNode();
endTask.setId(3);

//Adding the connections to the nodes and the nodes to the processes
new ConnectionImpl(startTask, "DROOLS_DEFAULT", scriptTask, "DROOLS_DEFAULT");
new ConnectionImpl(scriptTask, "DROOLS_DEFAULT", endTask, "DROOLS_DEFAULT");
process.addNode(startTask);
process.addNode(scriptTask);
process.addNode(endTask);

Using this code, we can manually create the object structures to represent the process shown in the following figure: This process contains three components: a start node, a script node, and an end node. In this case, this simple process is in charge of executing a simple action. The start and end tasks simply specify a sequence. Even if this is a correct way to create a process definition, it is not the recommended one (unless you're making a low-level functionality test). Real-world, complex processes are better off being designed in a process modeler, with visual tools, and exported to standard representations such as BPMN 2.0. The output of both cases is the same: a process object that will be understandable by the jBPM6 runtime. While we analyze how the process instance structures are created and how they are executed, this will do.

Process instance structures

Process instances represent the running processes and all the information being handled by them. Every time you want to start a process execution, the engine will create a process instance. Each particular instance will keep track of all the activities that are being created by its execution. In jBPM6, the structure is very similar to that of the process definitions, with one root structure (the ProcessInstance object) in charge of keeping all the information and NodeInstance objects to keep track of live nodes. The following code shows a simplification of the methods of the ProcessInstance implementation:

public class RuleFlowProcessInstance implements ProcessInstance {
    public RuleFlowProcess getRuleFlowProcess() { ... }
    public long getId() { ... }
    public void start() { ... }
    public int getState() { ... }
    public void setVariable(String name, Object value) { ... }
    public Collection<NodeInstance> getNodeInstances() { ... }
    public Object getVariable(String name) { ... }
}

After its creation, the engine calls the start() method of ProcessInstance. This method seeks StartNode of the process and triggers it. Depending on the execution of the path and how different nodes connect between each other, other nodes will get triggered until they reach a safe state where the execution of the process is completed or awaiting external data. You can access the internal parameters that the process instance has through the getVariable and setVariable methods. They provide local information from the particular process instance scope.
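The excerpt does not show how a definition is actually launched. As a loose sketch only (it assumes the process has been packaged and deployed through a kmodule.xml so that a default KieSession can be created from the classpath; the hand-built definition above would need to be registered with the engine separately), starting a process instance through the public KIE API looks roughly like this:

import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;
import org.kie.api.runtime.process.ProcessInstance;

public class StartProcessSketch {
    public static void main(String[] args) {
        // Boot the KIE runtime from the classpath (expects a kmodule.xml under META-INF)
        KieServices ks = KieServices.Factory.get();
        KieContainer container = ks.getKieClasspathContainer();
        KieSession ksession = container.newKieSession(); // default session from kmodule.xml

        // Start the process deployed under the id used in the snippet above
        ProcessInstance instance = ksession.startProcess("myProgramaticProcess");
        System.out.println("Process state: " + instance.getState());

        ksession.dispose();
    }
}

The startProcess() call returns the ProcessInstance type discussed above, whose state and variables can then be inspected.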
Summary

In this article, we saw the basic components required to set up a BPM system. With these components in place, we are ready to explore, in more detail, the structure and working of a BPM system.

Resources for Article:

Further resources on this subject:
- jBPM for Developers: Part 1 [Article]
- Configuring JBoss Application Server 5 [Article]
- JBoss jBPM Concepts and jBPM Process Definition Language (jPDL) [Article]


OData on Mobile Devices

Packt
02 Aug 2012
8 min read
With the continuous evolution of mobile operating systems, smart mobile devices (such as smartphones or tablets) play increasingly important roles in everyone's daily work and life. The iOS (from Apple Inc., for iPhone, iPad, and iPod Touch devices), Android (from Google) and Windows Phone 7 (from Microsoft) operating systems have shown us the great power and potential of modern mobile systems. In the early days of the Internet, web access was mostly limited to fixed-line devices. However, with the rapid development of wireless network technology (such as 3G), Internet access has become a common feature for mobile or portable devices. Modern mobile OSes, such as iOS, Android, and Windows Phone have all provided rich APIs for network access (especially Internet-based web access). For example, it is quite convenient for mobile developers to create a native iPhone program that uses a network API to access remote RSS feeds from the Internet and present the retrieved data items on the phone screen. And to make Internet-based data access and communication more convenient and standardized, we often leverage some existing protocols, such as XML or JSON, to help us. Thus, it is also a good idea if we can incorporate OData services in mobile application development so as to concentrate our effort on the main application logic instead of the details about underlying data exchange and manipulation. In this article, we will discuss several cases of building OData client applications for various kinds of mobile device platforms. The first four recipes will focus on how to deal with OData in applications running on Microsoft Windows Phone 7. And they will be followed by two recipes that discuss consuming an OData service in mobile applications running on the iOS and Android platforms. Although this book is .NET developer-oriented, since iOS and Android are the most popular and dominating mobile OSes in the market, I think the last two recipes here would still be helpful (especially when the OData service is built upon WCF Data Service on the server side). Accessing OData service with OData WP7 client library What is the best way to consume an OData service in a Windows Phone 7 application? The answer is, by using the OData client library for Windows Phone 7 (OData WP7 client library). Just like the WCF Data Service client library for standard .NET Framework based applications, the OData WP7 client library allows developers to communicate with OData services via strong-typed proxy and entity classes in Windows Phone 7 applications. Also, the latest Windows Phone SDK 7.1 has included the OData WP7 client library and the associated developer tools in it. In this recipe, we will demonstrate how to use the OData WP7 client library in a standard Windows Phone 7 application. Getting ready The sample WP7 application we will build here provides a simple UI for users to view and edit the Categories data by using the Northwind OData service. The application consists of two phone screens, shown in the following screenshot: Make sure you have installed Windows Phone SDK 7.1 (which contains the OData WP7 client library and tools) on the development machine. You can get the SDK from the following website: http://create.msdn.com/en-us/home/getting_started The source code for this recipe can be found in the ch05ODataWP7ClientLibrarySln directory. How to do it... Create a new ASP.NET web application that contains the Northwind OData service. 
Add a new Windows Phone Application project in the same solution (see the following screenshot). Select Windows Phone OS 7.1 as the Target Windows Phone OS Version in the New Windows Phone Application dialog box (see the following screenshot). Click on the OK button, to finish the WP7 project creation. The following screenshot shows the default WP7 project structure created by Visual Studio: Create a new Windows Phone Portrait Page (see the following screenshot) and name it EditCategory.xaml. Create the OData client proxy (against the Northwind OData service) by using the Visual Studio Add Service Reference wizard. Add the XAML content for the MainPage.xaml page (see the following XAML fragment). <Grid x_Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0"> <ListBox x_Name="lstCategories" ItemsSource="{Binding}"> <ListBox.ItemTemplate>> <DataTemplate> <Grid> <Grid.ColumnDefinitions> <ColumnDefinition Width="60" /> <ColumnDefinition Width="260" /> <ColumnDefinition Width="140" /> </Grid.ColumnDefinitions> <TextBlock Grid.Column="0" Text="{Binding Path=CategoryID}" FontSize="36" Margin="5"/> <TextBlock Grid.Column="1" Text="{Binding Path=CategoryName}" FontSize="36" Margin="5" TextWrapping="Wrap"/> <HyperlinkButton Grid.Column="2" Content="Edit" HorizontalAlignment="Right" NavigateUri="{Binding Path=CategoryID, StringFormat='/EditCategory.xaml? ID={0}'}" FontSize="36" Margin="5"/> <Grid> <DataTemplate> <ListBox.ItemTemplate> <ListBox> <Grid> Add the code for loading the Category list in the code-behind file of the MainPage. xaml page (see the following code snippet). public partial class MainPage : PhoneApplicationPage { ODataSvc.NorthwindEntities _ctx = null; DataServiceCollection _categories = null; ...... private void PhoneApplicationPage_Loaded(object sender, RoutedEventArgs e) { Uri svcUri = new Uri("http://localhost:9188/NorthwindOData.svc"); _ctx = new ODataSvc.NorthwindEntities(svcUri); _categories = new DataServiceCollection(_ctx); _categories.LoadCompleted += (o, args) => { if (_categories.Continuation != null) _categories.LoadNextPartialSetAsync(); else { this.Dispatcher.BeginInvoke( () => { ContentPanel.DataContext = _categories; ContentPanel.UpdateLayout(); } ); } }; var query = from c in _ctx.Categories select c; _categories.LoadAsync(query); } } Add the XAML content for the EditCategory.xamlpage (see the following XAML fragment). <Grid x_Name="ContentPanel" Grid.Row="1" Margin="12,0,12,0"> <StackPanel> <TextBlock Text="{Binding Path=CategoryID, StringFormat='Fields of Categories({0})'}" FontSize="40" Margin="5" /> <Border> <StackPanel> <TextBlock Text="Category Name:" FontSize="24" Margin="10" /> <TextBox x_Name="txtCategoryName" Text="{Binding Path=CategoryName, Mode=TwoWay}" /> <TextBlock Text="Description:" FontSize="24" Margin="10" /> <TextBox x_Name="txtDescription" Text="{Binding Path=Description, Mode=TwoWay}" /> </StackPanel> </Border> <StackPanel Orientation="Horizontal" HorizontalAlignment="Center"> <Button x_Name="btnUpdate" Content="Update" HorizontalAlignment="Center" Click="btnUpdate_Click" /> <Button x_Name="btnCancel" Content="Cancel" HorizontalAlignment="Center" Click="btnCancel_Click" /> </StackPanel> </StackPanel> </Grid> Add the code for editing the selected Category item in the code-behind file of the EditCategory.xaml page. In the PhoneApplicationPage_Loaded event, we will load the properties of the selected Category item and display them on the screen (see the following code snippet). 
private void PhoneApplicationPage_Loaded(object sender, RoutedEventArgs e) { EnableControls(false); Uri svcUri = new Uri("http://localhost:9188/NorthwindOData. svc"); _ctx = new ODataSvc.NorthwindEntities(svcUri); var id = int.Parse(NavigationContext.QueryString["ID"]); var query = _ctx.Categories.Where(c => c.CategoryID == id); _categories = new DataServiceCollection(_ctx); _categories.LoadCompleted += (o, args) => { if (_categories.Count <= 0) { MessageBox.Show("Failed to retrieve Category item."); NavigationService.GoBack(); } else { EnableControls(true); ContentPanel.DataContext = _categories[0]; ContentPanel.UpdateLayout(); } }; _categories.LoadAsync(query); } The code for updating changes (against the Category item) is put in the Click event of the Update button (see the following code snippet). private void btnUpdate_Click(object sender, RoutedEventArgs e) { EnableControls(false); _ctx.UpdateObject(_categories[0]); _ctx.BeginSaveChanges( (ar) => { this.Dispatcher.BeginInvoke( () => { try { var response = _ctx.EndSaveChanges(ar); NavigationService.Navigate(new Uri("/MainPage.xaml", UriKind.Relative)); } catch (Exception ex) { MessageBox.Show("Failed to save changes."); EnableControls(true); } } ); }, null ); } Select the WP7 project and launch it in Windows Phone Emulator (see the following screenshot). Depending on the performance of the development machine, it might take a while to start the emulator. Running a WP7 application in Windows Phone Emulator is very helpful especially when the phone application needs to access some web services (such as WCF Data Service) hosted on the local machine (via the Visual Studio test web server). How it works... Since the OData WP7 client library (and tools) has been installed together with Windows Phone SDK 7.1, we can directly use the Visual Studio Add Service Reference wizard to generate the OData client proxy in Windows Phone applications. And the generated OData proxy is the same as what we used in standard .NET applications. Similarly, all network access code (such as the OData service consumption code in this recipe) has to follow the asynchronous programming pattern in Windows Phone applications. There's more... In this recipe, we use the Windows Phone Emulator for testing. If you want to deploy and test your Windows Phone application on a real device, you need to obtain a Windows Phone developer account so as to unlock your Windows Phone device. Refer to the walkthrough: App Hub - windows phone developer registration walkthrough,available at http://go.microsoft.com/fwlink/?LinkID=202697


Building Your First Bean in ColdFusion

Packt
22 Oct 2010
10 min read
  Object-Oriented Programming in ColdFusion Break free from procedural programming and learn how to optimize your applications and enhance your skills using objects and design patterns Fast-paced easy-to-follow guide introducing object-oriented programming for ColdFusion developers Enhance your applications by building structured applications utilizing basic design patterns and object-oriented principles Streamline your code base with reusable, modular objects Packed with example code and useful snippets         What is a Bean? Although the terminology evokes the initial reaction of cooking ingredients or a tin of food from a supermarket, the Bean is an incredibly important piece of the object-oriented design pattern. The term 'Bean' originates from the Java programming language, and for those developers out there who enjoy their coffee as much as I do, the thought process behind it will make sense: Java = Coffee = Coffee Bean = Bean. A Bean is basically the building block for your object. Think of it as a blueprint for the information you want each object to hold and contain. In relation to other ColdFusion components, the Bean is a relatively simple CFC that primarily has two roles in life: to store information or a collection of values to return the information or collection of values when required But what is it really? Typically, a ColdFusion bean is a single CFC built to encapsulate and store a single record of data, and not a record-set query result, which would normally hold more than one record. This is not to say that the information within the Bean should only be pulled from one record within a database table, or that the data needs to be only a single string—far from it. You can include information in your Bean from any source at your disposal; however, the Bean can only ever contain one set of information. Your Bean represents a specific entity. This entity could be a person, a car, or a building. Essentially, any 'single' object can be represented by a bean in terms of development. The Bean holds information about the entity it is written for. Imagine we have a Bean to represent a person, and this Bean will hold details on that individual's name, age, hair color, and so on. These details are the properties for the entity, and together they make up the completed Bean for that person. In reality, the idea of the Bean itself is incredibly similar to a structure. You could easily represent the person entity in the form of a structure, as follows: <!---Build an empty structure to emulate a Person entity.---> <cfset stuPerson = { name = '', age = '', hairColor = ''} /> Listing: 3.1 – Creating an entity structure This seems like an entirely feasible way to hold your data, right? To some extent it is. You have a structure, complete with properties for the object/entity, wrapped up into one tidy package. 
You can easily update the structure to hold the properties for the individual, and retrieve the information for each property, as seen in the following code example: <!---Build an empty structure to emulate a Person entity.---> <cfset stuPerson = { name = '', age = '', hairColor = ''} /> <cfdump var="#stuPerson#" label="Person - empty data" /> <!---Update the structure with data and display the output---> <cfset StructUpdate( stuPerson, 'name', 'Matt Gifford') /> <cfset StructUpdate( stuPerson, 'hairColor', 'Brown') /> <br /> <cfdump var="#stuPerson#" label="Person - data added" /> <br /> <cfoutput> Name: #stuPerson.name#<br /> Hair: #stuPerson.hairColor# </cfoutput> Listing 3.2 – Populating the entity structure Although the structure is an incredibly simple method of retaining and accessing data, particularly when looking at the code, they do not suit the purpose of a blueprint for an entity very well, and as soon as you have populated the structure it is no longer a blueprint, but a specific entity. Imagine that you were reading data from the database and wanted to use the structure for every person who was drawn out from the query. Sure enough, you could create a standard structure that was persistent in the Application scope, for example. You could then loop through the query and populate the structure with the recordset results. For every person object you wanted, you could run ColdFusion's built-in Duplicate() function to create a copy of the original 'base' structure, and apply it to a new variable. Or perhaps, the structure might need to be written again on every page it is required in, or maybe written on a separate .cfm page that is included into the template using cfinclude. Perhaps over time, your application will grow, requirements will change, and extra details will need to be stored. You would then be faced with the task of changing and updating every instance of the structures across your entire application to include additional keys and values, or remove some from the structure. This route could possibly have you searching for code, testing, and debugging at every turn, and would not be the best method to optimize your development time and to enhance the scalability of your application. Taking the time to invest in your code base and development practices from the start will greatly enhance your application, development time, and go some way to reduce unnecessary headaches caused by spaghetti code and lack of structure. The benefit of using beans By creating a Bean for each entity within your application, you have created a specific blueprint for the data we wish to hold for that entity. The rules of encapsulation are adhered to, and nothing is hardcoded into our CFC. We have already seen how our objects are created, and how we can pass variables and data into them, which can be through the init() method during instantiation or perhaps as an argument when calling an included function within the component. Every time you need to use the blueprint for the Person class, you can simply create an instance of the Bean. You instantly have a completely fresh new object ready to populate with specific properties, and you can create a fully populated Person object in just one line of code within your application. The main purpose of a Bean in object-oriented development is to capture and encapsulate a variety of different objects, be they structures, arrays, queries, or strings for example, into one single object, which is the Bean itself. 
The Bean can then be passed around your application where required, containing all of the included information, instead of the application itself sending the many individual objects around or storing each one in, for example, the Application or Session scope, which could get messy. This creates a nicely packaged container that holds all of the information we need to send and use within our applications, and acts as a much easier way to manage the data we are passing around. If your blueprints need updating, for example more properties need to be added to the objects, you only have one file to modify, the CFC of the Bean itself. This instantly removes the problematic issues of having to search for every instance or every structure for your object throughout your entire code base. A Bean essentially provides you with a consistent and elegant interface, which will help you to organize your data into objects, removing the need to create, persist, and replicate ad-hoc structures. Creating our first Bean Let's look at creating a Bean for use with the projects table in the database. We'll continue with the Person Bean as the primary example, and create the CFC to handle person objects. An introduction to UML Before we start coding the component, let's have a quick look at a visual representation of the object using Unified Modeling Language (UML). UML is a widely-used method to display, share, and reference objects, Classes, workflows, and structures in the world of object-oriented programming, which you will come into contact with during your time with OOP development. The modeling language itself is incredibly detailed and in-depth, and can express such a wide array of details and information. Person object in UML In this example, let's take a look at the basics of UML and the visual representation of the Person component that we will create, which looks like this: At first glances, you can instantly see what variables and functions our component consists of. With most UML objects, it is broken into segments for easier digestion. The actual name of the component is clearly visible within the top section of the diagram. In the second section, we include the variables that will be included within our object. These have a '-' character in front of them, to indicate that these variables are private and are hidden within the component (they are not accessible externally). These variables are followed by the variable type, separated by a colon (':'). This lets you easily see which variable type is expected. In this example, we can see that all of the variables are strings. In the bottom section of the diagram we include the function references, which contain all methods within the component. All of the functions are prefixed with a '+' to indicate that they are publically accessible, and so are available to be called externally from the component itself. For any functions that require parameters, they are included inside the parenthesis. If a function returns a value, the returnType is specified after the ':'. Based upon this UML diagram, let's create the core wrapper for the CFC, and create the constructor method—the init() function. Create a new file called Person.cfc and save this file in the following location within your project folder: com/packtApp/oop/beans. 
<cfcomponent displayname="Person" output="false" hint="I am the Person Class.">

  <cfproperty name="firstName" type="string" default="" />
  <cfproperty name="lastName" type="string" default="" />
  <cfproperty name="gender" type="string" default="" />
  <cfproperty name="dateofbirth" type="string" default="" />
  <cfproperty name="hairColor" type="string" default="" />

  <!--- Pseudo-constructor --->
  <cfset variables.instance = {
    firstName = '',
    lastName = '',
    gender = '',
    dateofbirth = '',
    hairColor = ''
  } />

  <cffunction name="init" access="public" output="false" returntype="any" hint="I am the constructor method for the Person Class.">
    <cfargument name="firstName" required="true" type="String" default="" hint="I am the first name." />
    <cfargument name="lastName" required="true" type="String" default="" hint="I am the last name." />
    <cfargument name="gender" required="true" type="String" default="" hint="I am the gender." />
    <cfargument name="dateofbirth" required="true" type="String" default="" hint="I am the date of birth." />
    <cfargument name="hairColor" required="true" type="String" default="" hint="I am the hair color." />
    <cfreturn this />
  </cffunction>

</cfcomponent>

Listing 3.3 - com/packtApp/oop/beans/Person.cfc

Here, we have the init() method for the Person.cfc and the arguments defined for each property within the object. The bean will hold the values of its properties within the variables.instance structure, which we have defined above the init() method as a pseudo-constructor.
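The listing above only defines the blueprint. As a quick, hypothetical usage sketch (not from the book; the argument values are made up), the bean could be instantiated from application code in a single call like this:

<cfset objPerson = createObject('component', 'com.packtApp.oop.beans.Person').init(
    firstName = 'Matt',
    lastName = 'Gifford',
    gender = 'Male',
    dateofbirth = '1980-01-01',
    hairColor = 'Brown'
) />
<!--- Inspect the new bean instance --->
<cfdump var="#objPerson#" label="Person bean instance" />

Each call to createObject().init() produces a fresh, independent Person object, which is exactly the "one line of code" benefit described earlier.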


Discovering Python's parallel programming tools

Packt
20 Jun 2014
3 min read
(For more resources related to this topic, see here.)

The Python threading module

The Python threading module offers a layer of abstraction over the lower-level _thread module. It provides functions that help the programmer during the hard task of developing parallel systems based on threads. The threading module's official documentation can be found at http://docs.python.org/3/library/threading.html?highlight=threading#module-threading.

The Python multiprocessing module

The multiprocessing module aims at providing a simple API for the use of parallelism based on processes. This module is similar to the threading module, and it simplifies alternating between processes without major difficulties. The approach based on processes is very popular within the Python users' community as it is an alternative to answering questions on the use of CPU-bound threads and the GIL present in Python. The multiprocessing module's official documentation can be found at http://docs.python.org/3/library/multiprocessing.html?highlight=multiprocessing#multiprocessing.

The parallel Python module

The parallel Python module is external and offers a rich API for the creation of parallel and distributed systems making use of the processes approach. This module promises to be light and easy to install, and integrates with other Python programs. The parallel Python module can be found at http://parallelpython.com. Among its features, we may highlight the following:

- Automatic detection of the optimal configuration
- The fact that the number of worker processes can be changed during runtime
- Dynamic load balancing
- Fault tolerance
- Auto-discovery of computational resources

Celery – a distributed task queue

Celery is an excellent Python module that's used to create distributed systems and has excellent documentation. It makes use of at least three different types of approach to run tasks in concurrent form: multiprocessing, Eventlet, and Gevent. This work will, however, concentrate its efforts on the use of the multiprocessing approach. The link between one approach and another is a configuration issue, and it remains as a study so that the reader is able to establish comparisons with his/her own experiments. The Celery module can be obtained on the official project page at http://celeryproject.org.
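The modules above are only described, not demonstrated, in this excerpt. The following is a minimal sketch (not from the article) contrasting the two standard-library approaches it mentions; the word-counting task is an arbitrary placeholder:

import threading
from multiprocessing import Pool


def count_words(text):
    # Trivial stand-in for a real task
    return len(text.split())


def worker(text):
    print(threading.current_thread().name, count_words(text))


if __name__ == "__main__":
    # threading: lightweight concurrency within a single process
    threads = [threading.Thread(target=worker, args=(t,))
               for t in ("a b c", "d e", "f g h i")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # multiprocessing: process-based parallelism that sidesteps the GIL for CPU-bound work
    with Pool(processes=2) as pool:
        print(pool.map(count_words, ["a b c", "d e", "f g h i"]))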
Summary

In this article, we had a short introduction to some Python modules, built-in and external, which make a developer's life easier when building up parallel systems.

Resources for Article:

Further resources on this subject:
- Getting Started with Spring Python [Article]
- Python Testing: Installing the Robot Framework [Article]
- Getting Up and Running with MySQL for Python [Article]


Images, colors, and backgrounds

Packt
07 Oct 2013
5 min read
(For more resources related to this topic, see here.)

The following screenshot (Images and colors) shows the final result of this article: Images and colors

The following is the corresponding drawing.kv code:

64. # File name: drawing.kv (Images and colors)
65. <DrawingSpace>:
66.     canvas:
67.         Ellipse:
68.             pos: 10,10
69.             size: 80,80
70.             source: 'kivy.png'
71.         Rectangle:
72.             pos: 110,10
73.             size: 80,80
74.             source: 'kivy.png'
75.         Color:
76.             rgba: 0,0,1,.75
77.         Line:
78.             points: 10,10,390,10
79.             width: 10
80.             cap: 'square'
81.         Color:
82.             rgba: 0,1,0,1
83.         Rectangle:
84.             pos: 210,10
85.             size: 80,80
86.             source: 'kivy.png'
87.         Rectangle:
88.             pos: 310,10
89.             size: 80,80

This code starts with an Ellipse (line 67) and a Rectangle (line 71). We use the source property, which inserts an image to decorate the polygon. The image kivy.png is 80 x 80 pixels with a white background (without any alpha/transparency channel). The result is shown in the first two columns of the previous screenshot (Images and colors).

In line 75, we use the context instruction Color to change the color (with the rgba property: red, green, blue, and alpha) of the coordinate space context. This means that the next VertexInstructions will be drawn with the color changed by rgba. A ContextInstruction changes the current coordinate space context. In the previous screenshot, the blue bar at the bottom (line 77) has a transparent blue (line 76) instead of the default white (1,1,1,1) as seen in the previous examples. We set the shape of the line's ends to a square with the cap property (line 80). We change the color again in line 81. After that, we draw two more rectangles, one with the kivy.png image and the other without it. In the previous screenshot (Images and colors), you can see that the white part of the image has become as green as the basic Rectangle on the left. Be very careful with this. The Color instruction acts as a light that is illuminating the kivy.png image. This is why you can still see the Kivy logo on the background instead of it being all covered by the color.

There is another important detail to notice in the previous screenshot. There is a blue line that crosses the first two polygons in front and then crosses behind the last two. This illustrates the fact that the instructions are executed in order, and this might bring some unwanted results. In this example we have full control of the order, but for more complicated scenarios Kivy provides an alternative. We can specify three Canvas instances (canvas.before, canvas, and canvas.after) for each Widget. They are useful to organize the order of execution to guarantee that the background component remains in the background, or to bring some of the elements to the foreground. The following drawing.kv file shows an example of these three sets (lines 92, 98, and 104) of instructions:

90. # File name: drawing.kv (Before and After Canvas)
91. <DrawingSpace>:
92.     canvas.before:
93.         Color:
94.             rgba: 1,0,0,1
95.         Rectangle:
96.             pos: 0,0
97.             size: 100,100
98.     canvas:
99.         Color:
100.            rgba: 0,1,0,1
101.        Rectangle:
102.            pos: 100,0
103.            size: 100,100
104.    canvas.after:
105.        Color:
106.            rgba: 0,0,1,1
107.        Rectangle:
108.            pos: 200,0
109.            size: 100,100
110.    Button:
111.        text: 'A very very very long button'
112.        pos_hint: {'center_x': .5, 'center_y': .5}
113.        size_hint: .9,.1

In each set, a Rectangle of a different color is drawn (lines 95, 101, and 107). The following diagram illustrates the execution order of the canvas.
The number on the top-left margin of each code block indicates the order of execution (Execution order of the canvas).

Please note that we didn't define any canvas, canvas.before, or canvas.after for the Button, but Kivy does. The Button is a Widget and it displays graphics on the screen. For example, the gray background is just a Rectangle. That means that it has instructions in its internal Canvas instances. The following screenshot shows the result (executed with python drawing.py --size=300x100): Before and after canvas

The graphics of the Button (the child) are covered up by the graphics of the instructions in canvas.after. But what is executed between canvas.before and canvas? It could be the code of a base class when we are working with inheritance and we want to add instructions in the subclass that should be executed before the base class Canvas instances. A practical example of this will be covered when we apply them in the last section of this article in the comic creator project. The canvas.before will also be useful when we study how to dynamically add instructions to Canvas instances. For now, it is sufficient to understand that there are three sets of instructions (Canvas instances) that provide some flexibility when we are displaying graphics on the screen. We will now explore some more context instructions related to three basic transformations.
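The examples are run with python drawing.py --size=300x100, but the Python side is not shown in this excerpt. Below is a minimal sketch of what such a file could look like; the RelativeLayout base class is an assumption (any layout that honors pos_hint would do), and the kv rules come from the drawing.kv listings above:

# File name: drawing.py (sketch, not from the article)
from kivy.app import App
from kivy.uix.relativelayout import RelativeLayout


class DrawingSpace(RelativeLayout):
    # The canvas instructions come from the <DrawingSpace> rule in drawing.kv
    pass


class DrawingApp(App):
    def build(self):
        # Kivy loads drawing.kv automatically because the App subclass is named DrawingApp
        return DrawingSpace()


if __name__ == '__main__':
    DrawingApp().run()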
Summary

In this article we learned how to add images and colors to shapes and how to position graphics at a front or back level.

Resources for Article:

Further resources on this subject:
- Easily Writing SQL Queries with Spring Python [Article]
- Python Testing: Installing the Robot Framework [Article]
- Advanced Output Formats in Python 2.6 Text Processing [Article]

RESTful Services JAX-RS 2.0

Packt
26 Sep 2013
16 min read
Representational State Transfer Representational State Transfer (REST) is a style of information application architecture that aligns the distributed applications to the HTTP request and response protocols, in particular matching Hypermedia to the HTTP request methods and Uniform Resource Identifiers (URI). Hypermedia is the term that describes the ability of a system to deliver self-referential content, where related contextual links point to downloadable or stream-able digital media, such as photographs, movies, documents, and other data. Modern systems, especially web applications, demonstrate through display of text that a certain fragment of text is a link to the media. Hypermedia is the logical extension of the term hypertext, which is a text that contains embedded references to other text. These embedded references are called links, and they immediately transfer the user to the other text when they are invoked. Hypermedia is a property of media, including hypertext, to immediately link other media and text. In HTML, the anchor tag <a> accepts a href attribute, the so-called hyperlink parameter. The World Wide Web is built on the HTTP standards, Versions 1.0 and 1.1, which define specific enumerations to retrieve data from a web resource. These operations, sometimes called Web Methods, are GET, POST, PUT, and DELETE. Representational State Transfer also reuses these operations to form semantic interface to a URI. Representational State Transfer, then, is both a style and architecture for building network enabled distributed applications. It is governed by the following constraints: Client/Server: A REST application encourages the architectural robust principle of separation of concerns by dividing the solution into clients and servers. A standalone, therefore, cannot be a RESTful application. This constraint ensures the distributed nature of RESTful applications over the network. Stateless: A REST application exhibits stateless communication. Clients cannot and should not take advantage of any stored context information in the server and therefore the full data of each request must be sent to the server for processing. Cache: A REST application is able to declare which data is cacheable or not cacheable. This constraint allows the architect to set the performance level for the solution, in other words, a trade-off. Caching data to the web resource, allows the business to achieve a sense of latency, scalability, and availability. The counter point to improved performance through caching data is the issue of expiration of the cache at the correct time and sequence, when do we delete stale data? The cache constraint also permits successful implementation providers to develop optimal frameworks and servers. Uniform Interface: A REST application emphasizes and maintains a unique identifier for each component and there are a set of protocols for accessing data. The constraint allows general interaction to any REST component and therefore anyone or anything can manipulate a REST component to access data. The drawback is the Uniform Interface may be suboptimal in ease-of-use and cognitive load to directly provide a data structure and remote procedure function call. Layered Style: A REST application can be composed of functional processing layers in order to simplify complex flows of data between clients and servers. Layered style constraint permits modularization of function with the data and in itself is another sufficient example of separation of concerns. 
The layered style is an approach that benefits load-balancing servers, caching content, and scalability. Code-on-Demand: A REST application can optionally supply downloadable code on demand for the client to execute. The code could be byte-codes from the JVM, such as a Java Applet or a JavaFX WebStart application, or it could be JavaScript code with, say, JSON data. Downloadable code is definitely a clear security risk, which means that the solution architect must assume responsibility for sandboxing Java classes, profiling data, and applying certificate signing in all instances. Therefore, code-on-demand is a disadvantage in a public domain service, and this constraint in a REST application is typically only seen inside the firewalls of corporations.
In terms of the Java platform, the Java EE standard covers REST applications through the JAX-RS specification, and this article covers Version 2.0.
JAX-RS 2.0 features
For Java EE 7, the JAX-RS 2.0 specification has the following new features:
Client-side API for invoking RESTful server-side remote endpoints
Support for Hypermedia linkage
Tighter integration with the Bean Validation framework
Asynchronous API for both server-side and client-side invocations
Container filters on the server side for processing incoming requests and outbound responses
Client filters on the client side for processing outgoing requests and incoming responses
Reader and writer interceptors to handle specific content types
Architectural style
The REST style is simply a Uniform Resource Identifier and the application of the HTTP request methods, which invoke resources that generate an HTTP response. Fielding himself says that REST does not necessarily require HTTP as the network layer, and the style of architecture can be built on any other network protocol. Let's look at those methods again with a fictional URL (http://fizzbuzz.com/):
POST: A REST style application creates or inserts an entity with the supplied data. The client can assume new data has been inserted into the underlying backend database, and the server returns a new URI to reference the data.
PUT: A REST style application replaces the entity in the database with the supplied data.
GET: A REST style application retrieves the entity associated with the URI; it can be a collection of URIs representing entities or it can be the actual properties of the entity.
DELETE: A REST style application deletes the entity associated with the URI from the backend database.
The user should note that PUT and DELETE are idempotent operations, meaning they can be repeated endlessly and the result is the same in steady state conditions. The GET operation is a safe operation; it has no side effects on the server-side data.
REST style for collections of entities
Let's take a real example with the URL http://fizzbuzz.com/resources/, which represents the URI of a collection of resources. Resources could be anything, such as books, products, or cast iron widgets.
GET: Retrieves the collection of entities by URI under the link http://fizzbuzz.com/resources, and they may include additional data.
POST: Creates a new entity in the collection under the URI http://fizzbuzz.com/resources. The URI is automatically assigned and returned by this service call, which could be something like http://fizzbuzz.com/resources/WKT54321.
PUT: Replaces the entire collection of entities under the URI http://fizzbuzz.com/resources.
DELETE: Deletes the entire collection of entities under the URI http://fizzbuzz.com/resources.
As a reminder, a URI is a series of characters that identifies a particular resource on the World Wide Web. A URI, then, allows different clients to uniquely identify a resource on the web, or a representation. A URI is a combination of a Uniform Resource Name (URN) and a Uniform Resource Locator (URL). You can think of a URN like a person's name, a way of naming an individual, and a URL is similar to a person's home address, which is the way to go and visit them sometime. In the modern world, non-technical people are accustomed to desktop web browsing through URLs. However, the web URL is a special case of a generalized URI. A diagram that illustrates HTML5 RESTful communication between a JAX-RS 2.0 client and server is as follows:
REST style for single entities
Assuming we have a URI reference to a single entity like http://fizzbuzz.com/resources/WKT54321, the methods behave as follows:
GET: Retrieves the entity referenced by the URI http://fizzbuzz.com/resources/WKT54321.
POST: Creates a new sub-entity under the URI http://fizzbuzz.com/resources/WKT54321. There is a subtle difference here, as this call does something else. It is not often used, except to create Master-Detail records. The URI of the sub-entity is automatically assigned and returned by this service call, which could be something like http://fizzbuzz.com/resources/WKT54321/D1023.
PUT: Replaces the entire entity referenced by the URI http://fizzbuzz.com/resources/WKT54321. If the entity does not exist, then the service creates it.
DELETE: Deletes the entity under the URI reference http://fizzbuzz.com/resources/WKT54321.
Now that we understand the REST style, we can move on to the JAX-RS API properly.
Consider carefully your REST hierarchy of resources
The key to building a REST application is to target the users of the application instead of blindly converting the business domain into an exposed middleware. Does the user need the whole detail of every object and responsibility in the application? On the other hand, is the design not spreading enough information for the intended audience to do their work?
Servlet mapping
In order to enable JAX-RS in a Java EE application, the developer must set up the configuration in the web deployment descriptor file. JAX-RS requires a specific servlet mapping to be enabled, which triggers the provider to search for annotated classes. The standard API for JAX-RS lies in the Java package javax.ws.rs and in its subpackages. Interestingly, the Java API for REST style interfaces sits underneath the Java Web Services package javax.ws. For your web applications, you must configure a Java servlet with just the fully qualified name of the class javax.ws.rs.core.Application. The servlet must be mapped to a root URL pattern in order to intercept REST style requests. The following is an example of such a configuration:
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
             http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
         version="3.1" metadata-complete="false">
  <display-name>JavaEE Handbook 7 JAX RS Basic</display-name>
  <servlet>
    <servlet-name>javax.ws.rs.core.Application</servlet-name>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <servlet-mapping>
    <servlet-name>javax.ws.rs.core.Application</servlet-name>
    <url-pattern>/rest/*</url-pattern>
  </servlet-mapping>
</web-app>
In the web deployment descriptor we just saw, only the servlet name is defined, javax.ws.rs.core.Application. Do not define the servlet class.
The second step maps the URL pattern to the REST endpoint path. In this example, any URL that matches the simplified Glob pattern /rest/* is mapped. This is known as the application path. A JAX-RS application is normally packaged up as a WAR file and deployed to a Java EE application server or a servlet container that conforms to the Web Profile. The application classes are stored in the /WEB-INF/classes folder and required libraries are found under the /WEB-INF/lib folder. Therefore, JAX-RS applications share the same configuration conventions as servlet applications. The JAX-RS specification does recommend, but does not enforce, that conformant providers follow the general principles of Servlet 3.0 pluggability and discoverability of REST style endpoints. Discoverability is achieved in the recommendation through class scanning of the WAR file, and it is up to the provider to make this feature available. An application can create a subclass of the javax.ws.rs.core.Application class. The Application class is a concrete class and looks like the following code:
public class Application {
    public Application() { /* ... */ }
    public java.util.Set<Class<?>> getClasses() { /* ... */ }
    public java.util.Set<Object> getSingletons() { /* ... */ }
    public java.util.Map<String,Object> getProperties() { /* ... */ }
}
Implementing a custom Application subclass is a special case of providing maximum control of RESTful services for your business application. The developer must provide a set of classes that represent JAX-RS endpoints. The engineer must supply a list of singleton objects, if any, and do something useful with the properties. In the default implementation of javax.ws.rs.core.Application, the methods getClasses() and getSingletons() return empty sets, and the getProperties() method returns an empty map collection. By returning empty sets, the provider assumes that all relevant JAX-RS resource and provider classes that are scanned and found will be added to the JAX-RS application. The majority of the time, I suspect, you, the developer, will want to rely on annotations to specify the REST endpoints, and there are Servlet Context listeners and Servlet Filters to configure application-wide behavior, for example the startup sequence of a web application. So how can we register our own custom Application subclass? The answer is to just subclass the core class with your own class. The following code explains what we just read:
package com.fizbuzz.services;

@javax.ws.rs.ApplicationPath("rest")
public class GreatApp extends javax.ws.rs.core.Application {
    // Do your custom thing here
}
In the custom GreatApp class, you can now configure logic for initialization. Note the use of the @ApplicationPath annotation to configure the REST style URL. You still have to associate your custom Application subclass into the web deployment descriptor with an XML servlet name definition.
Remember the Application Configuration
A very common error for first time JAX-RS developers is to forget that the web deployment descriptor really requires a servlet mapping to a javax.ws.rs.core.Application type.
Now that we know how to initialize a JAX-RS application, let us dig deeper and look at defining REST style endpoints.
Mapping JAX-RS resources
JAX-RS resources are configured through the resource path, which suffixes the application path. Here is the constitution of the URL:
http://<hostname>:<port>/<web_context>/<application_path>/<resource_path>
The <hostname> is the host name of the server.
The <port> refers to the port number, which is optional; the default port is 80.
The <web_context> is the servlet context, which is the context for the deployed web application.
The <application_path> is the configured URI pattern as specified by the @ApplicationPath annotation or by the servlet configuration of the Application type in the web deployment descriptor.
The <resource_path> is the resource path to the REST style resource.
The final fragment, <resource_path>, defines the URL pattern that maps a REST style resource. The resource path is configured by the annotation javax.ws.rs.Path.
Test-Driven Development with JAX-RS
Let us write a unit test to verify the simplest JAX-RS service. It follows a REST style resource around a list of books. There are only four books in this endpoint and the only thing the user/client can do at the start is to access the list of books by author and title. The client invokes the REST style endpoint, otherwise known as a resource, with an HTTP GET request. The following is the code for the class RestfulBookService:
package je7hb.jaxrs.basic;

import javax.annotation.*;
import javax.ws.rs.*;
import java.util.*;

@Path("/books")
public class RestfulBookService {
    private List<Book> products = Arrays.asList(
        new Book("Sir Arthur Dolan Coyle", "Sherlock Holmes and the Hounds of the Baskervilles"),
        new Book("Dan Brown", "Da Vinci Code"),
        new Book("Charles Dickens", "Great Expectations"),
        new Book("Robert Louis Stevenson", "Treasure Island"));

    @GET
    @Produces("text/plain")
    public String getList() {
        StringBuffer buf = new StringBuffer();
        for (Book b: products) {
            buf.append(b.title);
            buf.append('\n');
        }
        return buf.toString();
    }

    @PostConstruct
    public void acquireResource() { /* ... */ }

    @PreDestroy
    public void releaseResource() { /* ... */ }

    static class Book {
        public final String author;
        public final String title;

        Book(String author, String title) {
            this.author = author;
            this.title = title;
        }
    }
}
The annotation @javax.ws.rs.Path declares the class as a REST style endpoint for a resource. The @Path annotation is assigned to the class itself. The path argument defines the relative URL pattern for this resource, namely /books. The method getList() is the interesting one. It is annotated with both @javax.ws.rs.GET and @javax.ws.rs.Produces. The @GET is one of the six annotations that conform to the HTTP web request methods. It indicates that the method is associated with the HTTP GET protocol request. The @Produces annotation indicates the MIME content that this resource will generate. In this example, the MIME content is text/plain. The other methods on the resource bring CDI into the picture. In the example, we are injecting post construction and pre destruction methods into the bean. This is the only class that we require for a simple REST style application from the server side. Invoking the web resource with an HTTP GET request like http://localhost:8080/mywebapp/rest/books should produce a plain text output with the list of titles like the following:
Sherlock Holmes and the Hounds of the Baskervilles
Da Vinci Code
Great Expectations
Treasure Island
So how do we test this REST style interface then? We could use the Arquillian Framework directly, but this means our tests have to be built specifically in a project and it complicates the build process. Arquillian uses another open source project in the JBoss repository called ShrinkWrap. The framework allows the construction of various types of virtual Java archives in a programmatic fashion.
Let's look at the unit test class RestfulBookServiceTest in the following code:
package je7hb.jaxrs.basic;
// imports omitted

public class RestfulBookServiceTest {
    @Test
    public void shouldAssembleAndRetrieveBookList() throws Exception {
        WebArchive webArchive = ShrinkWrap.create(WebArchive.class, "test.war")
            .addClasses(RestfulBookService.class)
            .setWebXML(new File("src/main/webapp/WEB-INF/web.xml"))
            .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");
        File warFile = new File(webArchive.getName());
        new ZipExporterImpl(webArchive).exportTo(warFile, true);

        SimpleEmbeddedRunner runner =
            SimpleEmbeddedRunner.launchDeployWarFile(warFile, "mywebapp", 8080);
        try {
            URL url = new URL("http://localhost:8080/mywebapp/rest/books");
            InputStream inputStream = url.openStream();
            BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream));
            List<String> lines = new ArrayList<>();
            String text = null;
            int count = 0;
            while ((text = reader.readLine()) != null) {
                lines.add(text);
                ++count;
                System.out.printf("**** OUTPUT **** text[%d] = %s\n", count, text);
            }
            assertFalse(lines.isEmpty());
            assertEquals("Sherlock Holmes and the Hounds of the Baskervilles", lines.get(0));
            assertEquals("Da Vinci Code", lines.get(1));
            assertEquals("Great Expectations", lines.get(2));
            assertEquals("Treasure Island", lines.get(3));
        } finally {
            runner.stop();
        }
    }
}
In the unit test method, shouldAssembleAndRetrieveBookList(), we first assemble a virtual web archive with the explicit name test.war. The WAR file contains the RestfulBookService service, the web deployment descriptor file web.xml, and an empty beans.xml file, which, if you remember, is only there to trigger the CDI container into life for this web application. With the virtual web archive, we export the WAR as a physical file with the utility ZipExporterImpl class from the ShrinkWrap library, which creates the file test.war in the project root folder. Next, we fire up the SimpleEmbeddedRunner utility. It deploys the web archive to an embedded GlassFish container. Essentially, this is the boilerplate needed to deliver a test result. We then get to the heart of the test itself: we construct a URL to invoke the REST style endpoint, which is http://localhost:8080/mywebapp/rest/books. We read the output from the service endpoint with standard Java I/O, line by line, into a list collection of Strings. Once we have the list of lines, we assert each line against the expected output from the REST style service. Because we acquired an expensive resource, like an embedded GlassFish container, we are careful to release it, which is the reason why we surround the critical code with a try-finally block statement. When the execution comes to the end of the test method, we ensure the embedded GlassFish container is shut down.
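The resource we just tested only serves the collection. To mirror the single-entity URI style described earlier in this article, the same class could expose a second method keyed by a path parameter. The following is only a minimal sketch under the assumption that a book is addressed by its position in the list; the getBook name and the index parameter are illustrative and not part of the original example, and the method additionally needs the imports javax.ws.rs.PathParam, javax.ws.rs.WebApplicationException, and javax.ws.rs.core.Response:
// Hypothetical addition to RestfulBookService: look up one book by its list position
@GET
@Path("{index}")
@Produces("text/plain")
public String getBook(@PathParam("index") int index) {
    // Indexes outside the fixed list of four books get the conventional 404 response
    if (index < 0 || index >= products.size()) {
        throw new WebApplicationException(Response.Status.NOT_FOUND);
    }
    Book book = products.get(index);
    return book.author + ": " + book.title;
}
With that method in place, a GET request to http://localhost:8080/mywebapp/rest/books/1 would answer with the author and title of the second book, while the original getList() method keeps serving the whole collection at /rest/books.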
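One of the JAX-RS 2.0 features listed earlier is the client-side API for invoking RESTful endpoints. The unit test above reads the endpoint with raw java.net.URL and stream handling; the same call can be expressed with the new javax.ws.rs.client types. The following is a minimal sketch, assuming the embedded container from the test is already running on port 8080 and that a JAX-RS 2.0 implementation is available on the classpath; the BookServiceClient class name is illustrative:
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.MediaType;

public class BookServiceClient {
    public static void main(String[] args) {
        // Bootstrap a client instance from the JAX-RS 2.0 runtime
        Client client = ClientBuilder.newClient();
        try {
            // Build the request fluently: base URI, resource path, accepted media type
            WebTarget target = client.target("http://localhost:8080/mywebapp/rest")
                                     .path("books");
            String titles = target.request(MediaType.TEXT_PLAIN)
                                  .get(String.class);
            // The response body is the newline-separated list of titles
            System.out.println(titles);
        } finally {
            // Clients hold connections and threads, so always close them
            client.close();
        }
    }
}
Inside the JUnit test, this fluent call could replace the BufferedReader block, with the assertions then splitting the returned text on line breaks before comparing the four titles.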
Maximizing everyday debugging
Getting ready
For this article, you will just need a premium version of VS2013, or you may use VS Express for Windows Desktop. Be sure to run your choice on a machine using a 64-bit edition of Windows. Note that Edit and Continue previously existed for 32-bit code.
How to do it…
Both features are now supported by C#/VB, but we will be using C# for our examples. The features being demonstrated are compiler-based features, so feel free to use code from one of your own projects if you prefer. To see how Edit and Continue can benefit 64-bit development, perform the following steps:
Create a new C# Console Application using the default name.
To ensure the demonstration is running with 64-bit code, we need to change the default solution platform. Click on the drop-down arrow next to Any CPU and select Configuration Manager…:
When the Configuration Manager dialog opens, we can create a new Project Platform targeting 64-bit code. To do this, click on the drop-down menu for Platform and select <New...>:
When <New...> is selected, it will present the New Project Platform dialog box. Select x64 as the new platform type:
Once x64 has been selected, you will return to the Configuration Manager. Verify that x64 remains active under Platform and then click on Close to close this dialog. The main IDE window will now indicate that x64 is active:
Now, let's add some code to demonstrate the new behavior. Replace the existing code in your blank class file so that it looks like the following listing:
class Program
{
    static void Main(string[] args)
    {
        int w = 16;
        int h = 8;
        int area = calcArea(w, h);
        Console.WriteLine("Area: " + area);
    }

    private static int calcArea(int width, int height)
    {
        return width / height;
    }
}
Let's set some breakpoints so that we are able to inspect values during execution. First, add a breakpoint to the Main method's Console line. Add a second breakpoint to the calcArea method's return line. You can do this by either clicking on the left side of the editor window's border or by right-clicking on the line and selecting Breakpoint | Insert Breakpoint:
If you are not sure where to click, use the right-click method and then practice toggling the breakpoint by left-clicking on the breakpoint marker. Feel free to use any method that you find most convenient.
Once the two breakpoints are added, Visual Studio will mark their location as shown in the following screenshot (the arrow indicates where you may click to toggle the breakpoint):
With the breakpoint markers now set, let's debug the program. Begin debugging by either pressing F5 or clicking on the Start button on the toolbar:
Once debugging starts, the program will quickly execute until stopped by the first breakpoint. Let's first take a look at Edit and Continue. Visual Studio will stop at the calcArea method's return line. Astute readers will notice an error (marked by 1 in the following screenshot) present in the calculation, as the area value returned should be width * height. Make the correction. Before continuing, note the variables listed in the Autos window (marked by 2 in the following screenshot). If you don't see Autos, it can be made visible by pressing Ctrl + D, A or through Debug | Windows | Autos while debugging. After correcting the area calculation, advance the debugging step by pressing F10 twice. (Alternatively, advance by selecting the menu item Debug | Step Over twice.) Visual Studio will advance to the declaration for area.
Note that you were able to edit your code and continue debugging without restarting. The Autos window will update to display the function's return value, which is 128 (the value for area has not been assigned yet): There's more… Programmers who write C++ already have the ability to see the return values of functions; this just brings .NET developers into the fold. Your development experience won't have to suffer based on the languages chosen for your projects. The Edit and Continue functionality is also available for ASP.NET projects. New projects created in VS2013 will have Edit and Continue enabled by default. Existing projects imported to VS2013 will usually need this to be enabled if it hasn't been already. To do so, right-click on your ASP.NET project in Solution Explorer and select Properties (alternatively, it is also available via Project | <Project Name> Properties…). Navigate to the Web option and scroll to the bottom to check the Enable Edit and Continue checkbox. The following screenshot shows where this option is located on the properties page: Summary In this article, we learned how to use the Edit and Continue feature. Using this feature enables you to make changes to your project without having to immediately recompile your project. This simplifies debugging and enables a bit of exploration. You also saw how the Autos window can display the values of variables as you step through your program’s execution. Resources for Article: Further resources on this subject: Using the Data Pager Control in Visual Studio 2008 [article] Load Testing Using Visual Studio 2008: Part 1 [article] Creating a Simple Report with Visual Studio 2008 [article]
Performance enhancements to ASP.NET
ASP.NET performance Performance is one of the primary goals for any system. For a server, the throughput /time actually specifies the performance of the hardware or software in the system. It is important to increase performance and decrease the amount of hardware used for the throughput. There must be a balance between the two. Performance is one of the key elements of web development. In the last phase of ASP.NET 4.5, performance was one of the key concerns for Microsoft. They made a few major changes to the ASP.NET system to make it more performant. Performance comes directly, starting from the CPU utilization all the way back to the actual code you write. Each CPU cycle you consume for producing your response will affect performance. Consuming a large number of CPU cycles will lead you to add more and more CPUs to avoid site unavailability. As we are moving more and more towards the cloud system, performance is directly related to the cost. Here, CPU cycles costs money. Hence to make a more cost effective system running on the cloud, unnecessary CPU cycles should always be avoided. .NET 4.5 addresses the problem to its core to support background GC. The background GC for the server introduces support for concurrent collection without blocking threads; hence the performance of the site is not compromised because of garbage collection as well. The multicore JIT in addition improves the start-up time of pages without additional work. By the way, some of the improvements in technology can be really tangible to developers as well as end users. They can be categorized as follows. CPU and JIT improvements ASP.NET feature improvements The first category is generally intangible while the second case is tangible. The CPU and JIT improvements, as we have already discussed, are actually related to server performance. JIT compilations are not tangible to the developers which means they will automatically work on the system rather than any code change while the second category is actually related to code. We will focus here mainly on the tangible improvements in this recipe. Getting ready To get started, let us start Visual Studio 2012 and create an ASP.NET project. If you are opening the project for the first time, you can choose the ASP.NET Web Forms Application project template. Visual Studio 2012 comes with a cool template which virtually creates the layout of a blank site. Just create the project and run it and you will be presented with a blank site with all the default behaviors you need. This is done without writing a single line of code. Now, if you look into Solution Explorer, the project is separated into folders, each representing its identification. For instance, the Scripts folder includes all the JavaScript associated with the site. You can also see the Themes folder in Content, which includes the CSS files. Generally, for production-level sites, we have large numbers of JavaScript and CSS files that are sometimes very big and they download to the client browser when the site is initially loaded. We specify the file path using the script or link path. If you are familiar with web clients you will know that, the websites request these files in a separate request after getting the HTTP response for the page. As most browsers don't support parallel download, the download of each file adds up to the response. Even during the completion of each download there is a pause, which we call network latency. 
So, if you see the entire page response of a website, you will see that a large amount of response time is actually consumed by the download of these external files rather than the actual website response. Let us create a page on the website and add few JavaScript files and see the response time using Fiddler: The preceding screenshot shows how the browser requests the resources. Just notice, the first request is the actual request made by the client that takes half of a second, but the second half of that second is consumed by the requests made by the response from the server. The server responds with a number of CSS and JavaScript requests in the header, which have eventually been called to the same web server one by one. Sometimes, if the JavaScript is heavy, it takes a lot of time to load these individual files to the client which results in delay in response time for the web page. It is the same with images too. Even though external files are downloaded in separate streams, big images hurt the performance of the web page as well: Here you can see that the source of the file contains the call to a number of files that corresponds to each request. When the HTML is processed on the browser, it invokes each of these file requests one by one and replaces the document with the reference of those files. As I have already told you, making more and more resources reduces the performance of a page. This huge number of requests makes the website very slow. The screenshot depicts the actual number of requests, the bytes sent and received, and the performance in seconds. If you look at some big applications, the performance of the page is reduced by a lot more than this. To address this problem we take the following two approaches: Minimizing the size of JavaScript and CSS by removing the whitespaces, newlines, tab spaces, and so on, or omitting out the unnecessary content Bundling all the files into one file of the same MIME type to reduce the requests made by the browser ASP.NET addresses both of these problems and introduces a new feature that can both minimize the content of the JavaScript and CSS files as well as bundle all the JavaScript or CSS files together to produce one single file request from the site. To use this feature you need to first install the package. Open Visual Studio 2012, select View | Other Windows | Package Manager Console as shown in the following screenshot. Package Manager Console will open the PowerShell window for package management inside Visual Studio: Once the package manager is loaded, type the following command. Install-Package Microsoft.Web.Optimization This will load the optimizations inside Visual Studio. On the other hand, rather than opening Package Manager Console, you can also open Nuget package manager by right-clicking on the references folder of the project and select Add Library Package Reference. This will produce a nice dialog box to select and install the appropriate package. In this recipe, we are going to cover how to take the benefits of bundling and minification of website contents in .NET 4.5. How to do it... In order to move ahead with the recipe, we will use a blank web solution instead of the template web solution that I have created just now. To do this, start Visual Studio and select the ASP.NET Empty Web Application template. The project will be created without any pages but with a web.config file (a web.config file is similar to app.config but works on web environments). Add a new page to the project and name it as home.aspx. 
Leave it as it is, and go ahead by adding a folder to the solution and naming it as Resources. Inside resources, create two folders, one for JavaScript named js and another for stylesheets name css. Create a few JavaScript files inside js folder and a few CSS files inside the css folder. Once you finish the folder structure will look like below: Now let us add the files on the home page. Just drag-and-drop the js files one by one into the head section of the page. The page IDE will produce the appropriate tags for scripts and CSS automatically. Now run the project. You will see that the CSS and the JavaScript are appropriately loaded. To check, try using Fiddler. When you select the source of a page, you will see links that points to the JavaScript, CSS, or other resource files. These files are directly linked to the source and hence if we navigate to these files, it will show the raw content of the JavaScript. Open Fiddler and refresh the page keeping it open in Internet Explorer in the debug mode. You will see that the browser invokes four requests. Three of them are for external files and one for the actual HTML file. The Fiddler shows how the timeframe of the request is maintained. The first request being for the home.aspx file while the others are automatically invoked by the browser to get the js and css files. You can also take a note at the total size of the whole combined request for the page. Let's close the browser and remove references to the js and css files from the head tag, where you have dragged and added the following code to reference folders rather than individual files: <script src = "Resources/js"></script> <link rel="stylesheet" href="Resources/css" /> Open the Global.asax file (add if not already added) and write the following line in Application_Start: void Application_Start(object sender, EventArgs e) { //Adds the default behavior BundleTable.Bundles.EnableDefaultBundles(); } Once the line has been added, you can now run the project and see the output. If you now see the result in Fiddler, you will see all the files inside the scripts are clubbed into a single file and the whole file gets downloaded in a single request. If you have a large number of files, the bundling will show considerable performance gain for the web page. Bundling is not the only performance gain that we have achieved using Optimization. Press F5 to run the application and try to look into the actual file that has been downloaded as js and css. You will see that the bundle has been minified already by disregarding comments, blank spaces, new lines, and so on. Hence, the size of the bundles has also been reduced physically. You can also add your custom BundleTable entries. Generally, we add them inside the Application_Start section of the Global.asax file, like so: Bundle mybundle = new Bundle("~/mycustombundle", typeof(JsMinify)); mybundle.AddFile("~/Resources/Main.js"); mybundle.AddFile("~/Resources/Sub1.js"); mybundle.AddDirectory("/Resources/Files", "*.js", false); BundleTable.Bundles.Add(mybundle); The preceding code creates a new bundle for the application that can be referenced later on. We can use AddFile to add individual files to the Bundle or we can also use AddDirectory to specify a whole directory for a particular search pattern. The last argument for AddDirectory specifies whether it needs to search for a subdirectory or not. JsMinify is the default Rule processor for the JavaScript files. 
Similar to JsMinify is a class called CssMinfy that acts as a default rule for CSS minification. You can reference your custom bundle directly inside your page using the following directive: <script src = "mycustombundle" type="text/javascript" /> You will notice that the directive appropriately points to the custom bundle that has been created. How it works... Bundling and minification works with the introduction of the System.Web.Optimization namespace. BundleTable is a new class inside this namespace that keeps track of all the bundles that have been created in the solution. It maintains a list of all the Bundle objects, that is, list of JavaScript or CSS files, in a key-value pair collection. Once the request for a bundle is made, HttpRuntime dynamically combines the files and/or directories associated with the bundle into a single file response. Let us consider some other types that helps in transformation. BundleResponse: This class represents the response after the resources are bundled and minified. So BundleResponse keeps track of the actual response of the combined file. IBundleTransform: This type specifies the contract for transformation. Its main idea is to provide the transformation for a particular resource. JsMinfy or CssMinify are the default implementations of IBundleTransform. Bundle: The class represents a resource bundle with a list of files or directories. The IBundleTransform type specifies the rule for producing the BundleResponse class. To implement custom rules for a bundle, we need to implement this interface. public class MyCustomTransform : IBundleTransform { public void Process(BundleResponse bundleresponse) { // write logic to Bundle and minfy… } } Here, the BundleResponse class is the actual response stream where we need to write the minified output to. Basically, the application uses the default BundleHandler class to initiate the transform. BundleHandler is an IHttpHandler that uses ProcessRequest to get the response for the request from the browser. The process is summarized as follows: HttpRuntime calls the default BundleHandler.ProcessRequest method to handle the bundling and minification request initiated by the browser. ProcessRequest gets the appropriate bundle from the BundleTable class and calls Bundle.ProcessRequest. The Bundle.ProcessRequest method first retrieves the bundle's Url and invokes Bundle.GetBundleResponse. GetBundleResponse first performs a cache lookup. If there is no cache available, it calls GenerateBundleResponse. The GenerateBundleResponse method creates an instance of BundleResponse, sets the files to be processed in correct order, and finally invokes IBundleTransform.Process. The response is then written to BundleResponse from this method and the output is thrown back to the client. The preceding flow diagram summarizes how the transformation is handled by ASP.NET. The final call to IBundleTransform returns the response back to the browser. There's more... Now let's talk about some other options, or possibly some pieces of general information that are relevant to this task. How to configure the compilation of pages in ASP.NET websites Compilation also plays a very vital role in the performance of websites. As we have already mentioned, we have background GC available to the servers with .NET 4.5 releases, which means when GC starts collecting unreferenced objects, there will be no suspension of executing threads on the server. The GC can start collecting in the background. 
The support of multicore JIT will increase the performance of non-JITed files as well. By default, .NET 4.5 supports multicore JIT. If you want to disable this option, you can use the following code. <system.web> <compilation profileGuidedOptimizations="None" /> </system.web> This configuration will disable the support of spreading the JIT into multiple cores. The server enables a Prefetcher technology, similar to what Windows uses, to reduce the disk read cost of paging during application startup. The Prefetcher is enabled by default, you can also disable this using the following code: <system.web> <compilation enablePrefetchOptimization ="false" /> </system.web> This settings will disable the Prefetcher technology on the ASP.NET site. You can also configure your server to directly manipulate the amount of GC: <runtime> <performanceScenario value="HighDensityWebHosting" /> The preceding configuration will make the website a high density website. This will reduce the amount of memory consumed per session. What is unobtrusive validation Validation plays a very vital role for any application that employs user input. We generally use ASP.NET data validators to specify validation for a particular control. The validation forms the basis of any input. People use validator controls available in ASP.NET (which include RequiredFieldValidator, RangeValidator, and so on) to validate the controls when either a page is submitted or when the control loses its focus or on any event the validator is associated with. Validators being most popular server-side controls that handles client-side validation by producing an inline JavaScript block inside the actual page that specifies each validator. Let us take an instance: <asp:TextBox ID="Username" runat="server"></asp:TextBox> <asp:RequiredFieldValidator ErrorMessage="Username is required!" ControlToValidate="Username" runat="server"></asp:RequiredFieldValidator> <asp:RegularExpressionValidator ErrorMessage="Username can only contain letters!" ControlToValidate="Username" ValidationExpression="^[A-Za-z]+$" runat="server"></asp:RegularExpressionValidator> The validator handles both the client-side and server-side validations. When the preceding lines are rendered in the browser, it produces a mess of inline JavaScript. .NET 4.5 uses unobtrusive validation. That means the inline JavaScript is replaced by the data attributes in the HTML. This is a normal HTML-only code and hence performs better than the inline HTML and is also very understandable, neat, and clean. You can also turn off the default behavior of the application just by adding the line in Application_Start of the Global.asax file: void Application_Start(object sender, EventArgs e) { //Disable UnobtrusiveValidation application wide ValidationSettings.UnobtrusiveValidationMode = UnobtrusiveValidationMode.None; } The preceding code will disable the feature for the application. Applying appSettings configuration key values Microsoft has implemented the ASP.NET web application engine in such a way that most of its configurations can be overridden by the developers while developing applications. There is a special configuration file named Machine.config that provides the default configuration of each of the config sections present for every application. web.config is specific to an application hosted on IIS. The IIS reads through the configuration of each directory to apply for the pages inside it. 
As configuring a web application is such a basic thing for any application, there is always a need to have a template for a specific set of configuration without rewriting the whole section inside web.config again. There are some specific requirements from the developers perspective that could be easily customizable without changing too much on the config.ASP.NET 4.5 introduces magic strings that could be used as configuration key values in the appSettings element that could give special meaning to the configuration. For instance, if you want the default built-in JavaScript encoding to encode & character, you might use the following: <appSettings> <add key="aspnet:JavaScriptDoNotEncodeAmpersand" value="false" /> </appSettings> This will ensure that & character is encoded as "u0026" which is the JavaScript-escaped form of that character. When the value is true, the default JavaScript string will not encode &. On the other hand, if you need to allow ScriptResource.axd to serve arbitrary static files other than JavaScript, you can use another magic appSettings key to handle this: <appSettings> <add key="aspnet:ScriptResourceAllowNonJsFiles" value="false" /> </appSettings> The configuration will ensure that ScriptResource.axd will not serve any file other than the .js extension even if the web page has a markup . Similar to this, you can also enable UnobtrusiveValidationMode on the website using a separate magic string on appSetting too: <appSettings> <add key="ValidationSettings:UnobtrusiveValidationMode" value="WebForms" /> </appSettings> This configuration will make the application to render HTML5 data-attributes for validators. There are a bunch of these appSettings key magic strings that you can use in your configuration to give special meaning to the web application. Refer to http://bit.ly/ASPNETMagicStrings for more information. DLL intern in ASP.NET servers Just like the reusability of string can be achieved using string intern tables, ASP.NET allows you to specify DLL intern that reduces the use of loading multiple DLLs into memory from different physical locations. The interning functionality introduced with ASP.NET reduces the RAM requirement and load time-even though the same DLL resides on multiple physical locations, they are loaded only once into the memory and the same memory is being shared by multiple processes. ASP.NET maintains symbolic links placed on the bin folder that map to a shared assembly. Sharing assemblies using a symbolic link requires a new tool named aspnet_intern.exe that lets you to create and manage the stored interned assemblies. To make sure that the assemblies are interned, we need to run the following code on the source directory: aspnet_inturn –mode exec –sourcedir "Temporary ASP.NET files" – interndir "c:assemblies" This code will put the shared assemblies placed inside assemblies directory interned to the temporary ASP.NET files. Thus, once a DLL is loaded into memory, it will be shared by other requests. How to work with statically-typed model binding in ASP.NET applications Binding is a concept that attaches the source with the target such that when something is modified on the source, it automatically reflects to the target. The concept of binding is not new in the .NET framework. It was there from the beginning. On the server-side controls, when we set DataSource, we generally don't expect DataSource to automatically produce the output to be rendered onto the actual HTML. We expect to call a DataBind method corresponding to the control. 
Something magical happens in the background that generates the actual HTML from DataSource and produces the output. DataSource expects a collection of items where each of the items produces single entry on the control. For instance, if we pass a collection as the data source of a grid, the data bind will enumerate the collection and each entry in the collection will create a row of DataGrid. To evaluate each property from within the individual element, we use DataBinder.Eval that uses reflection to evaluate the contextual property with actual data. Now we all know, DataBinder actually works on a string equivalent of the actual property, and you cannot get the error before you actually run the page. In case of model binding, the bound object generates the actual object. Model binding does have the information about the actual object for which the collection is made and can give you options such as IntelliSense or other advanced Visual Studio options to work with the item. Getting ready DataSource is a property of the Databound element that takes a collection and provides a mechanism to repeat its output replacing the contextual element of the collection with generated HTML. Each control generates HTML during the render phase of the ASP.NET page life cycle and returns the output to the client. The ASP.NET controls are built so elegantly that you can easily hook into its properties while the actual HTML is being created, and get the contextual controls that make up the HTML with the contextual data element as well. For a template control such as Repeater, each ItemTemplate property or the AlternateItemTemplate property exposes a data item in its callback when it is actually rendered. This is basically the contextual object of DataSource on the nth iteration. DataBinder.Eval is a special API that evaluates a property from any object using Reflection. It is totally a runtime evaluation and hence cannot determine any mistakes on designer during compile time. The contextual object also doesn't have any type-related information inherent inside the control. With ASP.NET 4.5, the DataBound controls expose the contextual object as generic types so that the contextual object is always strongly typed. The control exposes the ItemType property, which can also be used inside the HTML designer to specify the type of the contextual element. The object is determined automatically by the Visual Studio IDE and it produces proper IntelliSense and provides compile-time error checking on the type defined by the control. In this recipe we are going to see step by step how to create a control that is bound to a model and define the HTML using its inherent Item object. How to do it... Open Visual Studio and start an ASP.NET Web Application project. Create a class called Customer to actually implement the model. For simplicity, we are just using a class as our model: public class Customer { public string CustomerId { get; set; } public string Name { get; set; } } The Customer class has two properties, one that holds the identifier of the customer and another the name of the customer. Now let us add an ASPX file and add a Repeater control. The Repeater control has a property called ModelType that needs the actual logical path of the model class. Here we pass the customer. 
Once ModelType is set for the Repeater control, you can directly use the contextual object inside ItemTemplate just by specifying it with the : Item syntax: <asp:Repeater runat="server" ID="rptCustomers" ItemType="SampleBinding.Customer"> <ItemTemplate> <span><%# :Item.Name %></span> </ItemTemplate> </asp:Repeater> Here in this Repeater control we have directly accessed the Name property from the Customer class. So here if we specify a list of Customer values to its data source, it will bind the contextual objects appropriately. The ItemType property is available to all DataBound controls. The Databound controls in ASP.NET 4.5 also support CRUD operations. The controls such as Gridview, FormView, and DetailsView expose properties to specify SelectMethod, InsertMethod, UpdateMethod, or DeleteMethod. These methods allow you to pass proper methods that in turn allow you to specify DML statements. Add a new page called Details.aspx and configure it as follows: <asp:DetailsView SelectMethod="dvDepartments_GetItem" ID="dvDepartments" UpdateMethod="dvDepartments_ UpdateItem" runat="server" InsertMethod="dvDepartments_ InsertItem" DeleteMethod="dvDepartments_DeleteItem" ModelType="ModelBindingSample.ModelDepartment" AutoGenerateInsertB utton="true"> </asp:DetailsView> Here in the preceding code, you can see that I have specified all the DML methods. The code behind will have all the methods and you need to properly specify the methods for each operation. How it works... Every collection control loops through the Datasource control and renders the output. The Bindable controls support collection to be passed to it, so that it can make a template of itself by individually running the same code over and over again with a contextual object passed for an index. The contextual element is present during the phase of rendering the HTML. ASP.NET 4.5 comes with the feature that allows you to define the type of an individual item of the collection such that the template forces this conversion, and the contextual item is made available to the template. In other words, what we have been doing with Eval before, can now be done easily using the Item contextual object, which is of same type as we define in the ItemType property. The designer enumerates properties into an IntelliSense menu just like a C# code window to write code easier. Each databound control in ASP.NET 4.5 allows CRUD operations. For every CRUD operation there is a specific event handler that can be configured to handle operations defined inside the control. You should remember that after each of these operations, the control actually calls DataBind again so that the data gets refreshed. There's more... ModelBinding is not the only thing that is important. Let us discuss some of the other important concepts deemed fit to this category. ModelBinding with filter operations ModelBinding in ASP.NET 4.5 has been enhanced pretty much to support most of the operations that we regularly need with our ASP.NET pages. Among the interesting features is the support of filters in selection of control. 
Let us use DetailsView to introduce this feature: <asp:DropDownList ID="ddlDepartmentNames" runat="server" ItemType="ModelBindingSample.ModelDepartment" AutoPostBack="true" DataValueField="DepartmentId" DataTextField="DepartmentName" SelectMethod="GetDepartments"> </asp:DropDownList> <asp:DetailsView SelectMethod="dvDepartments_GetItems" ID="dvDepartments" UpdateMethod="dvDepartments_UpdateItem" runat="server" InsertMethod="dvDepartments_InsertItem" DeleteMethod="dvDepartments_DeleteItem" ItemType="ModelBindingSample.ModelCustomer" AutoGenerateIn sertButton="true"> </asp:DetailsView> Here you can see the DropDownList control calls Getdepartments to generate the list of departments available. The DetailsView control on the other hand uses the ModelCustomer class to generate the customer list. SelectMethod allows you to bind the control with the data. Now to get the filter out of SelectMethod we use the following code: public IQueryable<ModelCustomer> dvDepartments_GetItems([Control("ddlD epartmentNames")]string deptid) { // get customers for a specific id } This method will be automatically called when the drop-down list changes its value. The departmentid of the selected DropDownItem control is automatically passed into the method and the result is bound to DetailsView automatically. Remember, the dvDepartments_GetItems method always passes a Nullable parameter. So, if departmentid is declared as integer, it would have been passed as int? rather than int. The attribute on the argument specifies the control, which defines the value for the query element. You need to pass IEnumerable (IQueryable in our case) of the items to be bound to the control. You can also specify a filter using Querystring. You can use the following code: public IQueryable<Customer> GetCustomers([QueryString]string departmentid) { return null; }   This code will take departmentid from the query string and load the DataBound control instead of the control specified within the page. Introduction to HTML5 and CSS3 in ASP.NET applications Web is the media that runs over the Internet. It's a service that has already has us in its grasp. Literally, if you think of the Web, the first thing that can come into your mind is everything about HTML, CSS, and JavaScript. The browsers are the user agents that are used to communicate with the Web. The Web has been there for almost a decade and is used mostly to serve information about business, communities, social networks, and virtually everything that you can think of. For such a long period of time, users primarily use websites to see text-based content with minimum UI experiences and texts that can easily be consumed by search engines. In those websites, all that the browsers do is send a request for a page and the server serves the client with the appropriate page which is later rendered on the browser. But with the introduction to modern HTMLs, websites are gradually adopting interactivity in terms of CSS, AJAX, iFrame, or are even using sandboxed applications with the use of Silverlight, Flash, and so on. Silverlight and Adobe AIR (Flash) are specifically likely to be used when the requirement is great interactivity and rich clients. They totally look like desktop applications and interact with the user as much as they can. But the problems with a sandboxed application are that they are very slow and need every browser to install the appropriate plugin before they can actually navigate to the application. They are heavyweight and are not rendered by the browser engine. 
Even though they are so popular these days, most of the development still employs the traditional approach of HTML and CSS. Most businesses cannot afford the long loading waits or even as we move along to the lines of devices, most of these do not support them. The long term user requirements made it important to take the traditional HTML and CSS further, ornamenting it in such a way that these ongoing requirements could easily be solved using traditional code. The popularity of the ASP.NET technology also points to the popularity of HTML. Even though we are dealing with server-side controls (in case of ASP.NET applications), internally everything renders HTML to the browser. HTML5, which was introduced by W3C and drafted in June 2004, is going to be standardized in 2014 making most of the things that need desktop or sandboxed plugins easily carried out using HTML, CSS, and JavaScript. The long term requirement to have offline web, data storage, hardware access, or even working with graphics and multimedia is easily possible with the help of the HTML5 technology. So basically what we had to rely on (the sandbox browser plugins) is now going to be standardized. In this recipe, we are going to cover some of the interesting HTML5 features that need special attention. Getting ready HTML5 does not need the installation of any special SDK to be used. Most of the current browsers already support HTML5 and all the new browsers are going to support most of these features. The official logo of HTML5 has been considered as follows. HTML5 has introduced a lot of new advanced features but yet it also tries to simplify things that we commonly don't need to know but often need to remember in order to write code. For instance, the DocType element of an HTML5 document has been simplified to the following one line: <!DOCTYPE html> So, for an HTML5 document, the document type that specifies the page is simply HTML. Similar to DocType, the character set for the page is also defined in very simple terms. <meta charset="utf-8" /> The character set can be of any type. Here we specified the document to be UTF – 8. You do not need to specify the http-equiv attribute or content to define charset for the page in an HTML5 document according to the specification. Let us now jot down some of the interesting HTML5 items that we are going to take on in this recipe. Semantic tags, better markups, descriptive link relations, micro-data elements, new form types and field types, CSS enhancements and JavaScript enhancements. Not all browsers presently support every feature defined in HTML5. There are Modernizr scripts that can help as cross-browser polyfills for all browsers. You can read more information about it here. How to do it... The HTML5 syntax has been adorned with a lot of important tags which include header, nav, aside, figure, and footer syntaxes that help in defining better semantic meaning of the document: <body> <header> <hgroup> <h1>Page title</h1> <h2>Page subtitle</h2> </hgroup> </header> <nav> <ul> Specify navigation </ul> </nav> <section> <article> <header> <h1>Title</h1> </header> <section> Content for the section </section> </article> <article> <aside> Releated links </aside> <figure> <img src = "logo.jpg"/> <figcaption>Special HTML5 Logo</figcaption> </figure> <footer> Copyright © <time datetime="2010-11-08">2010</time>. </footer> </body> By reading the document, it clearly identifies the semantic meaning of the document. The header tag specifies the header information about the page. 
The nav tag defines the navigation panel. The Section tag is defined by articles and besides them, there are links. The img tag is adorned with the Figure tag and finally, the footer information is defined under the footer tag. A diagrammatic representation of the layout is shown as follows. The vocabulary of the page that has been previously defined by div and CSS classes are now maintained by the HTML itself and the whole document forms a meaning to the reader. HTML5 not only improves the semantic meaning of the document, it also adds new markup. For instance, take a look at the following code: <input list="options" type="text"/> <datalist id="options"> <option value="Abhishek"/> <option value="Abhijit"/> <option value="Abhik"/> </datalist> datalist specifies the autocomplete list for a control. A datalist item automatically pops up a menu while we type inside a textbox. The input tag specifies the list for autocomplete using the list attribute. Now if you start typing on the textbox, it specifies a list of items automatically. <details> <summary>HTML 5</summary> This is a sliding panel that comes when the HTML5 header is clicked </details> The preceding markup specifies a sliding panel container. We used to specify these using JavaScript, but now HTML5 comes with controls that handle these panels automatically. HTML5 comes with a progress bar. It supports the progress and meter tags that define the progress bar inside an HTML document: <meter min="0" max="100" low="40" high="90" optimum="100" value="91">A+</meter> <progress value="75" max="100">3/4 complete</progress> The progress shows 75 percent filled in and the meter shows a value of 91. HTML5 added a whole lot of new attributes to specify aria attributes and microdata for a block. For instance, consider the following code: <div itemscope itemtype="http://example.org/band"> <ul id="tv" role="tree" tabindex="0" aria-labelledby="node1"> <li role="treeitem" tabindex="-1" aria-expanded="true">Inside Node1</li> </li> </ul> Here, Itemscope defines the microdata and ul defines a tree with aria attributes. These data are helpful for different analyzers or even for automated tools or search engines about the document. There are new Form types that have been introduced with HTML5: <input type="email" value="some@email.com" /> <input type="date" min="2010-08-14" max="2011-08-14" value="2010-08-14"/> <input type="range" min="0" max="50" value="10" /> <input type="search" results="10" placeholder="Search..." /> <input type="tel" placeholder="(555) 555-5555" pattern="^(?d{3})?[-s]d{3}[-s]d{4}.*?$" /> <input type="color" placeholder="e.g. #bbbbbb" /> <input type="number" step="1" min="-5" max="10" value="0" /> These inputs types give a special meaning to the form. The preceding figure shows how the new controls are laid out when placed inside a HTML document. The controls are email, date, range, search, tel, color, and number respectively. HTML5 supports vector drawing over the document. We can use a canvas to draw 2D as well as 3D graphics over the HTML document: <script> var canvasContext = document.getElementById("canvas"). getContext("2d"); canvasContext.fillRect(250, 25, 150, 100); canvasContext.beginPath(); canvasContext.arc(450, 110, 100, Math.PI * 1/2, Math.PI * 3/2); canvasContext.lineWidth = 15; canvasContext.lineCap = 'round'; canvasContext.strokeStyle = 'rgba(255, 127, 0, 0.5)'; canvasContext.stroke(); </script> Consider the following diagram. 
The preceding code creates an arc on the canvas and a rectangle filled with the color black as shown in the diagram. The canvas gives us the options to draw any shape within it using simple JavaScript. As the world is moving towards multimedia, HTML5 introduces audio and video tags that allow us to run audio and video inside the browser. We do not need any third-party library or plugin to run audio or video inside a browser: <audio id="audio" src = "sound.mp3" controls></audio> <video id="video" src = "movie.webm" autoplay controls></video> The audio tag runs the audio and the video tag runs the video inside the browser. When controls are specified, the player provides superior browser controls to the user. With CSS3 on the way, CSS has been improved greatly to enhance the HTML document styles. For instance, CSS constructs such as .row:nth-child(even) gives the programmer control to deal with a particular set of items on the document and the programmer gains more granular programmatic approach using CSS. How it works... HTML5 is the standardization to the web environments with W3C standards. The HTML5 specifications are still in the draft stage (a 900-page specification available at http://www.w3.org/html/wg/drafts/html/master/), but most modern browsers have already started supporting the features mentioned in the specifications. The standardization is due in 2014 and by then all browsers need to support HTML5 constructs. Moreover, with the evolution of smart devices, mobile browsers are also getting support for HTML5 syntaxes. Most smart devices such as Android, iPhone, or Windows Phone now support HTML5 browsers and the HTML that runs over big devices can still show the content on those small browsers. HTML5 improves the richness of the web applications and hence most people have already started shifting their websites to the future of the Web. There's more... HTML5 has introduced a lot of new enhancements which cannot be completed using one single recipe. Let us look into some more enhancements of HTML5, which are important to know. How to work with web workers in HTML5 Web workers are one of the most awaited features of the entire HTML5 specification. Generally, if we think of the current environment, it is actually turning towards multicore machines. Today, it's verbal that every computer has at least two cores installed in their machine. Browsers are well capable of producing multiple threads that can run in parallel in different cores. But programmatically, we cannot have the flexibility in JavaScript to run parallel tasks in different cores yet. Previously, developers used setTimeout, setInterval, or XMLHttprequst to actually create non-blocking calls. But these are not truly a concurrency. I mean, if you still put a long loop inside setTimeout, it will still hang the UI. Actually these works asynchronously take some of the UI threads time slices but they do not actually spawn a new thread to run the code. As the world is moving towards client-side, rich user interfaces, we are prone to develop codes that are capable of computation on the client side itself. So going through the line, it is important that the browsers support multiple cores to be used up while executing a JavaScript. Web workers are actually a JavaScript type that enable you to create multiple cores and run your JavaScript in a separate thread altogether, and communicate the UI thread using messages in a similar way as we do for other languages. Let's look into the code to see how it works. 
var worker = new Worker('task.js');
worker.onmessage = function(event) {
  alert(event.data);
};
worker.postMessage('data');

Here we load task.js into a Worker. Worker is a type that runs the JavaScript code it is given in a new thread. The worker is started by calling postMessage on the worker object. We have also added a callback for the onmessage event, so that when the JavaScript inside task.js invokes postMessage, the message is received by this callback. Inside task.js we wrote:

self.onmessage = function(event) {
  // Do some CPU intensive work.
  self.postMessage("recv'd: " + event.data);
};

Here, after some CPU-intensive work, we use self.postMessage to send back the data we received from the UI thread, and the onmessage event handler on the worker object is executed with the received data.

Working with Socket using HTML5

HTML5 supports full-duplex, bidirectional sockets that run over the Web. Browsers are capable of making socket requests directly using HTML5. The important thing to note about sockets is that they send only the data, without the overhead of HTTP headers and other HTTP elements that accompany every request; the bandwidth used by sockets is therefore dramatically reduced. To use sockets from browsers, a new protocol has been specified by the W3C as part of the HTML5 specification. WebSocket is a new protocol that supports two-way communication between the client and the server over a single TCP channel. To start the socket server, we are going to use node.js on the server side. Install node.js using the installer available at http://nodejs.org/dist/v0.6.6/node-v0.6.6.msi. Once you have installed node.js, start a server implementation of the socket:

var io = require('socket.io');
//Creates a HTTP Server
var socket = io.listen(8124);
//Bind the Connection Event
//This is called during connection
socket.sockets.on('connection', function(socket){
  //This will be fired when data is received from client
  socket.on('message', function(msg){
    console.log('Received from client ', msg);
  });
  //Emit a message to client
  socket.emit('greet', {hello: 'world'});
  //This will fire when the client has disconnected
  socket.on('disconnect', function(){
    console.log('Server has disconnected');
  });
});

The preceding code is the server implementation. The require('socket.io') snippet includes the socket.io module, which provides all the node.js APIs that are useful for the socket implementation. The connection event is fired on the server when any client connects to it. We have made the server listen on port 8124. socket.emit sends a message from the server to the connected client; here, with the greet event, we pass a JSON object to the client that has a property hello. Finally, the disconnect event is called when the client disconnects the socket. Now, to run the client implementation, we need to create an HTML file:

<html>
<title>WebSocket Client Demo</title>
<script src="http://localhost:8124/socket.io/socket.io.js"></script>
<script>
//Create a socket and connect to the server
var socket = io.connect('http://localhost:8124/');
socket.on("connect", function(){
  alert("Client has connected to the server");
});
socket.on('greet', function (data) {
  alert(data.hello);
});
</script>
</html>

Here we connect to the server at port 8124. The connect event is invoked first, and we call an alert method when the client connects to the server.
Finally, we also use the greet event to pass data from the server to the client. If we run both the server and the client, we will see two alerts: one when the connection is made, and the other for the greeting. The greet message passes a JSON object that greets with world. The URL for the socket from the browser looks like this:

[scheme] '://' [host] '/' [namespace] '/' [protocol version] '/' [transport id] '/' [session id] '/' ( '?' [query] )

Here:

scheme: This can bear values such as http or https (for web sockets, the browser changes it to ws:// after the connection is established; it's an upgrade request)
host: This is the host name of the socket server
namespace: This is the Socket.IO namespace, the default being socket.io
protocol version: The client-supported default is 1
transport id: This identifies the different supported transports, which include WebSockets, xhr-polling, and so on
session id: This is the web socket session's unique session ID for the client

Getting GeoLocation from the browser using HTML5

As we become more and more inclined towards devices, browsers are doing their best to implement features that suit these needs. HTML5 introduces a GeoLocation API to the browser that enables you to get your current location directly using JavaScript. Although still fairly primitive, browsers are capable of detecting the actual location of the user using Wi-Fi, satellite, or other external sources if available. As a programmer, you just need to call the location API and everything is handled automatically by the browser. As geolocation carries sensitive information, it is important to ask the user for permission. Let's look at the following code:

if (navigator.geolocation) {
  navigator.geolocation.getCurrentPosition(function(position) {
    var latLng = "{" + position.coords.latitude + "," +
      position.coords.longitude + "} with accuracy: " + position.coords.accuracy;
    alert(latLng);
  }, errorHandler);
}

Here in the code, we first detect whether the geolocation API is available in the current browser. If it is available, we can use getCurrentPosition to get the location of the current device, as well as the accuracy of the position. We can also use navigator.geolocation.watchPosition to keep watching the device location at an interval as the device moves from one place to another.

Logging and Reports

Packt
06 Aug 2013
24 min read
(For more resources related to this topic, see here.) Logging and reporting Reporting is the most important part of any test execution, reason being it helps the user to understand the result of the test execution, point of failure, and reasons for the failure. Logging, on the other hand, is important to keep an eye on the execution flow or for debugging in case of any failures. TestNG by default generates a different type of report for its test execution. This includes an HTML and an XML report output. TestNG also allows its users to write their own reporter and use it with TestNG. There is also an option to write your own loggers, which are notified at runtime by TestNG. There are two main ways to generate a report with TestNG: Listeners : For implementing a listener class, the class has to implement the org.testng.ITestListener interface. These classes are notified at runtime by TestNG when the test starts, finishes, fails, skips, or passes. Reporters : For implementing a reporting class, the class has to implement an org.testng.IReporter interface. These classes are called when the whole suite run ends. The object containing the information of the whole test run is passed to this class when called. Each of them should be used depending upon the condition of how and when the reports have to be generated. For example, if you want to generate a custom HTML report at the end of execution then you should implement IReporter interface while writing extension. But in case you want to update a file at runtime and have to print a message as and when the tests are getting executed, then we should use the ITestListener interface. Writing your own logger We had earlier read about the different options that TestNG provides for logging and reporting. Now let's learn how to start using them. To start with, we will write a sample program in which we will use the ITestListener interface for logging purposes. Time for action – writing a custom logger Open Eclipse and create a Java project with the name CustomListener and with the following structure. Please make sure that the TestNG library is added to the build path of the project. Create a new class named SampleTest under the test.sample package and replace the following code in it: package test.sample; import org.testng.Assert; import org.testng.annotations.Test; public class SampleTest { @Test public void testMethodOne(){ Assert.assertTrue(true); } @Test public void testMethodTwo(){ Assert.assertTrue(false); } @Test(dependsOnMethods={"testMethodTwo"}) public void testMethodThree(){ Assert.assertTrue(true); } } The preceding test class contains three test methods out of which testMethodOne and testMethodThree will pass when executed, whereas testMethodTwo is made to fail by passing a false Boolean value to the Assert.assertTrue method which is used for truth conditions in the tests. In the preceding class test method testMethodThree depends on testMethodTwo. 
Create another new class named CustomLogging under the test.logger package and replace its code with the following code: package test.logger; import java.text.DateFormat; import java.text.SimpleDateFormat; import java.util.Date; import org.testng.ITestContext; import org.testng.ITestListener; import org.testng.ITestResult; public class CustomLogging implements ITestListener{ //Called when the test-method execution starts @Override public void onTestStart(ITestResult result) { System.out.println("Test method started: "+ result.getName()+ " and time is: "+getCurrentTime()); } //Called when the test-method execution is a success @Override public void onTestSuccess(ITestResult result) { System.out.println("Test method success: "+ result.getName()+ " and time is: "+getCurrentTime()); } //Called when the test-method execution fails @Override public void onTestFailure(ITestResult result) { System.out.println("Test method failed: "+ result.getName()+ " and time is: "+getCurrentTime()); } //Called when the test-method is skipped @Override public void onTestSkipped(ITestResult result) { System.out.println("Test method skipped: "+ result.getName()+ " and time is: "+getCurrentTime()); } //Called when the test-method fails within success percentage @Override public void onTestFailedButWithinSuccessPercentage(ITestResult result) { // Leaving blank } //Called when the test in xml suite starts @Override public void onStart(ITestContext context) { System.out.println("Test in a suite started: "+ context.getName()+ " and time is: "+getCurrentTime()); } //Called when the test in xml suite finishes @Override public void onFinish(ITestContext context) { System.out.println("Test in a suite finished: "+ context.getName()+ " and time is: "+getCurrentTime()); } //Returns the current time when the method is called public String getCurrentTime(){ DateFormat dateFormat = new SimpleDateFormat("HH:mm:ss:SSS"); Date dt = new Date(); return dateFormat.format(dt); } } The above test class extends the ITestListener interface and defines the overriding methods of the interface. Details of each of the methods are provided as comments inside the previous code. Each method when executed prints the respective test method name or the suite name and the time when it was called. The getCurrentTime method returns the current time in HH:mm:ss:SSS format using the Date and DateFormat class. Create a new testing XML file under the project with name simple-logger-testng.xml and copy the following contents onto it: <suite name="Simple Logger Suite"> <listeners> <listener class-name="test.logger.CustomLogging" /> </listeners> <test name="Simple Logger test"> <classes> <class name="test.sample.SampleTest" /> </classes> </test> </suite> The preceding XML defines a simple test which considers the class test.sample.SampleTest for test execution. The CustomLogging class which implements the ITestListener is added as a listener to the test suite using the listeners tag as shown in the preceding XML. Select the previous testing XML file in Eclipse and run it as TestNG suite. You will see the following test result in the Console window of Eclipse: The following screenshot shows the test methods that were executed, failed, and skipped in the test run: What just happened? We created a custom logger class which implements the ITestListener interface and attached itself to the TestNG test suite as a listener. 
Methods of this listener class are invoked by TestNG as and when certain conditions are met in the execution, for example, test started, test failure, test success, and so on. Multiple listeners can be implemented and added to the test suite execution, TestNG will invoke all the listeners that are attached to the test suite. Logging listeners are mainly used when we need to see the continuous status of the test execution when the tests are getting executed. Writing your own reporter In the earlier section we had seen an example of writing your custom logger and attaching it to TestNG. In this section we will cover, with an example, the method of writing your custom reporter and attaching it to TestNG. To write a custom reporter class, our extension class should implement the IReporter interface. Let's go ahead and create an example with the custom reporter. Time for action – writing a custom reporter Open the previously created project named CustomListener and create a package named reporter under the test package. Create a new class named CustomReporter under the test.reporter package and add the following code to it: package test.reporter; import java.util.List; import java.util.Map; import org.testng.IReporter; import org.testng.ISuite; import org.testng.ISuiteResult; import org.testng.ITestContext; import org.testng.xml.XmlSuite; public class CustomReporter implements IReporter { @Override public void generateReport(List<XmlSuite> xmlSuites, List<ISuite> suites, String outputDirectory) { //Iterating over each suite included in the test for (ISuite suite : suites) { //Following code gets the suite name String suiteName = suite.getName(); //Getting the results for the said suite Map<String, ISuiteResult> suiteResults = suite.getResults(); for (ISuiteResult sr : suiteResults.values()) { ITestContext tc = sr.getTestContext(); System.out.println("Passed tests for suite '" + suiteName + "' is:" + tc.getPassedTests().getAllResults().size()); System.out.println("Failed tests for suite '" + suiteName + "' is:" + tc.getFailedTests().getAllResults().size()); System.out.println("Skipped tests for suite '" + suiteName + "' is:" + tc.getSkippedTests().getAllResults().size()); } } } } The preceding class implements the org.testng.IReporter interface. It implements the definition for the method generateReport of the IReporter interface. The method takes three arguments , the first being xmlSuite, which is the list suites mentioned in the testng XML being executed. The second one being suites which contains the suite information after the test execution; this object contains all the information about the packages, classes, test methods, and their test execution results. The third being the outputDirectory, which contains the information of the output folder path where the reports will be generated. The custom report prints the total number of tests passed, failed, and skipped for each suite included in the particular test execution when added to TestNG as a listener. Create a new file named simple-reporter-testng.xml to the project and add the following code to it: <suite name="Simple Reporter Suite"> <listeners> <listener class-name="test.reporter.CustomReporter" /> </listeners> <test name="Simple Reporter test"> <classes> <class name="test.sample.SampleTest" /> </classes> </test> </suite> The preceding XML is a testng XML configuration file. It contains a single test with the class test.sample.SampleTest to be considered for test execution. 
The CustomReporter class is added as a listener to the test suite using the listeners and listener tag as defined in the previous file. Select the preceding XML file and run it as TestNG test suite in Eclipse. You will see the following test results under the Console window of Eclipse: What just happened? We successfully created an example of writing custom reporter and attaching it to TestNG as a listener. The preceding example shows a simple custom reporter which prints the number of failed, passed, and skipped tests on the console for each suite included the said test execution. Reporter is mainly used to generate the final report for the test execution. The extension can be used to generate XML, HTML, XLS, CSV, or text format files depending upon the report requirement. TestNG HTML and XML report TestNG comes with certain predefined listeners as part of the library. These listeners are by default added to any test execution and generate different HTML and XML reports for any test execution. The report is generated by default under the folder named testoutput and can be changed to any other folder by configuring it. These reports consist of certain HTML and XML reports that are TestNG specific. Let's create a sample project to see how the TestNG report is generated. Time for action – generating TestNG HTML and XML reports Open Eclipse and create a Java project with the name SampleReport having the following structure. Please make sure that the TestNG library is added to the build path of the project. Create a new class named SampleTest under the test package and replace the following code in it: package test; import org.testng.Assert; import org.testng.annotations.Test; public class SampleTest { @Test public void testMethodOne(){ Assert.assertTrue(true); } @Test public void testMethodTwo(){ Assert.assertTrue(false); } @Test(dependsOnMethods={"testMethodTwo"}) public void testMethodThree(){ Assert.assertTrue(true); } } The preceding test class contains three test methods out of which testMethodOne and testMethodThree will pass when executed, whereas testMethodTwo is made to fail by passing a false Boolean value to Assert.assertTrue method. In the preceding class test method testMethodThree depends on testMethodTwo. Select the previously created test class and run it as a TestNG test through Eclipse. Now refresh the Java project in Eclipse by selecting the project and pressing the F5 button or right-clicking and selecting Refresh , and this will refresh the project. You will see a new folder named test-output under the project. Expand the folder in Eclipse and you will see the following files as shown in the screenshot: Open index.html as shown in the preceding screenshot on your default web browser. You will see the following HTML report: Now open the file testing-results.xml in the default XML editor on your system, and you will see the following results in the XML file: What just happened? We successfully created a test project and generated a TestNG HTML and XML report for the test project. TestNG by default generates multiple reports as part of its test execution. These reports mainly include TestNG HTML report, TestNG emailable report, TestNG report XML, and JUnit report XML files. These files can be found under the output report folder (in this case test-output). These default report generation can be disabled while running the tests by setting the value of the property useDefaultListeners to false. 
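If you launch TestNG programmatically rather than through a testng XML file or a build tool, the same switch is exposed on the org.testng.TestNG class. The following is a minimal sketch, not taken from the chapter's examples: it reuses the SampleTest and CustomLogging classes created above, the ProgrammaticRunner class name is purely illustrative, and the exact addListener signature can vary slightly between TestNG versions.

import org.testng.TestNG;
import test.logger.CustomLogging;
import test.sample.SampleTest;

public class ProgrammaticRunner {
    public static void main(String[] args) {
        TestNG testng = new TestNG();
        // Run the same sample test class used earlier in this chapter
        testng.setTestClasses(new Class[] { SampleTest.class });
        // Switch off the default HTML/XML reporters
        testng.setUseDefaultListeners(false);
        // Attach only the listeners we actually want
        testng.addListener(new CustomLogging());
        testng.setOutputDirectory("test-output");
        testng.run();
    }
}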
This property can be set while using the build tools like Ant or Maven as explained in the previous chapter. Generating a JUnit HTML report JUnit is one of those unit frameworks which were initially used by many Java applications as a Unit test framework. By default, JUnit tests generate a simple report XML files for its test execution. These XML files can then be used to generate any custom reports as per the testing requirement. We can also generate HTML reports using the XML files. Ant has such a utility task which takes these JUnit XML files as input and generates an HTML report from it. We had earlier learnt that TestNG by default generates the JUnit XML reports for any test execution. We can use these XML report files as input for generation of a JUnit HTML report. Assuming we already have JUnit XML reports available from the earlier execution let's create a simple Ant build configuration XML file to generate an HTML report for the test execution. Time for action – generating a JUnit report Go to the previously created Java project in Eclipse. Create a new file named junit-report-build.xml by selecting the project. Add the following code to the newly created file and save it: <project name="Sample Report" default="junit-report" basedir="."> <!-- Sets the property variables to point to respective directories --> <property name="junit-xml-dir" value="${basedir}/test-output/junitreports"/> <property name="report-dir" value="${basedir}/html-report" /> <!-- Ant target to generate html report --> <target name="junit-report"> <!-- Delete and recreate the html report directories --> <delete dir="${report-dir}" failonerror="false"/> <mkdir dir="${report-dir}" /> <mkdir dir="${report-dir}/Junit" /> <!-- Ant task to generate the html report. todir - Directory to generate the output reports fileset - Directory to look for the junit xml reports. report - defines the type of format to be generated. Here we are using "noframes" which generates a single html report. --> <junitreport todir="${report-dir}/Junit"> <fileset dir="${junit-xml-dir}"> <include name="**/*.xml" /> </fileset> <report format="noframes" todir="${report-dir}/Junit" /> </junitreport> </target> </project> The preceding XML defines a simple Ant build.xml file having a specific Ant target named junit-report that generates a JUnit report when executed. The target looks for the JUnit report XML files under the directory test-output/junitreports. For the Ant configuration file the default target to execute is configured as junit-report. Open the terminal window and go to the Java project directory in the terminal. Run the command ant –buildfile junit-report-build.xml and press Enter . Once executed a JUnit HTML report will be generated in the configured directory /html-report/Junit. Open the file named junit-noframes.html on your default web browser. You will see the following HTML report: What just happened? In this section we have seen how to use the JUnit XML report generated by TestNG and generate HTML report using Ant. There are two kinds of reports that can be generated using this method: frames and no-frames. If the report generation is configured with frames there will multiple files generated for each class and the main report will connect to them through links. A no-frames report consists of a single file with all the results of the test execution. This can be configured by providing the respective value to the format attribute of the report task in Ant. 
Generating a ReportNG report We had earlier seen that TestNG provides options to add custom listeners for logging and reporting. These listeners can easily be added to the TestNG execution and will be called during the execution or at the end of the execution depending upon the type of listener. ReportNG is a reporter add-on for TestNG that implements the report listener of TestNG. ReportNG reports are better looking reports compared to the original HTML reports. To generate a ReportNG report we have to add the reporting class to the list of listeners of TestNG while executing the tests. Let's see how to add ReportNG listener to TestNG and generate a ReportNG HTML report. In the following example we will use an Ant build XML file used to run our tests. Time for action – generating a ReportNG report Go to the earlier created Java project SampleProject in Eclipse. Download ReportNG from http://reportng.uncommons.org/ download site. Unzip and copy the reportng-<version>.jar and velocity-dep-<version>.jar from the unzipped folder to the lib folder under the project. Download guice from guice site https://code.google.com/p/google-guice/downloads/list. Unzip the downloaded guice zip file and copy the guice-<version>.jar to the lib folder under the project. Create a new file named testng.xml under the folder and add the following content to it: <suite name="Sample Suite"> <test name="Sample test"> <classes> <class name="test.SampleTest" /> </classes> </test> </suite> Create a new file named reportng-build.xml by selecting the project. Add the following code to the newly created file and save it: <project name="Testng Ant build" basedir="."> <!-- Sets the property varaibles to point to respective directories --> <property name="report-dir" value="${basedir}/html-report" /> <property name="testng-report-dir" value="${report-dir}/TestNG-report" /> <property name="lib-dir" value="${basedir}/lib" /> <property name="bin-dir" value="${basedir}/bin-dir" /> <property name="src-dir" value="${basedir}/src" /> <!-- Sets the classpath including the bin directory and all thejars under the lib folder --> <path id="test.classpath"> <pathelement location="${bin-dir}" /> <fileset dir="${lib-dir}"> <include name="*.jar" /> </fileset> </path> <!-- Deletes and recreate the bin and report directory --> <target name="init"> <delete dir="${bin-dir}" /> <mkdir dir="${bin-dir}" /> <delete dir="${report-dir}" /> <mkdir dir="${report-dir}" /> </target> <!-- Compiles the source code present under the "srcdir" and place class files under bin-dir --> <target name="compile" depends="init"> <javac srcdir="${src-dir}" classpathref="test.classpath" includeAntRuntime="No" destdir="${bin-dir}" /> </target> <!-- Defines a TestNG task with name "testng" --> <taskdef name="testng" classname="org.testng.TestNGAntTask" classpathref="test.classpath" /> <!--Executes the testng tests configured in the testng.xml file--> <target name="testng-execution" depends="compile"> <mkdir dir="${testng-report-dir}" /> <testng outputdir="${testng-report-dir}" classpathref="test.classpath" useDefaultListeners="false" listeners="org.uncommons.reportng.HTMLReporter"> <!-- Configures the testng xml file to use as test-suite --> <xmlfileset dir="${basedir}" includes="testng.xml" /> <sysproperty key="org.uncommons.reportng.title" value="ReportNG Report" /> </testng> </target> </project> The preceding XML defines a simple Ant build XML file that generates a ReportNG report when executed. The said XML compiles and runs the TestNG tests. 
ReportNG is added as a listener, and the default listeners of TestNG are disabled by setting the useDefaultListeners attribute to false on the testng Ant task. Open the terminal window and go to the Java project directory in the terminal. Run the command ant -buildfile reportng-build.xml testng-execution and then press Enter. Once executed, a ReportNG HTML report will be generated in the configured directory html-report/TestNG-report/html under the said project directory. Go to that directory and open the index.html file in your default web browser. You will see the following HTML report: By clicking on the Sample test link, you will see details of the test report as shown in the following screenshot: What just happened? In the previous section we learned how to generate a ReportNG HTML report for our test execution. We disabled the default TestNG reports in the previous Ant XML file but, if required, we can generate both the default as well as the ReportNG reports by enabling the default report listeners. In the previous example the title of the report is configured by setting the property org.uncommons.reportng.title. There are other configuration options that we can use while generating the report, and we will cover these in the next section.

ReportNG configuration options

ReportNG provides different configuration options that control how the HTML report is generated. Following is a list of the supported configurations:

org.uncommons.reportng.coverage-report: This is configured as the link to the test coverage report.
org.uncommons.reportng.escape-output: This property is used to turn off the escaping of log output in reports. By default it's turned off, and it is not recommended to switch it on, as enabling it may require certain hacks to be implemented for proper report generation.
org.uncommons.reportng.frames: This property is used to generate an HTML report with or without a frameset. The default value is set to true and hence it generates HTML reports with a frameset by default.
org.uncommons.reportng.locale: Used to override the localized messages in the generated HTML report.
org.uncommons.reportng.stylesheet: This property can be used to customize the CSS styling of the generated HTML report.
org.uncommons.reportng.title: Used to define a report title for the generated HTML report.

Have a go hero

Write an Ant file to configure ReportNG to generate an HTML report without any frames for the TestNG execution.

Generating a Reporty-ng (formerly TestNG-xslt) report

While looking at the test report, senior managers might like to see a graphical representation so they can know the status of the execution at a glance. Reporty-ng (formerly called TestNG-xslt) is one such add-on report that generates a pie chart for your test execution with all the passed, failed, and skipped tests. This plugin uses an XSL file to convert the TestNG XML report into a custom HTML report with a pie chart. To use this plugin we will write an Ant target which will use the TestNG results XML file to generate the report. Let's go ahead and write an Ant target to generate the report.

Time for action – generating a Reporty-ng report

Open the previously created SampleReport in Eclipse. Download Reporty-ng from the URL: https://github.com/cosminaru/reporty-ng At the time of writing this book the latest version available was Reporty-ng 1.2. You can download a newer version if available. Changes in the installation process should be minor if there are any at all.
Unzip the downloaded zip and copy the file named testng-results.xsl from src/main/resources into the resources folder under the said project. Copy the JARs saxon-8.7.jar and SaxonLiason.jar from the unzipped Reporty-ng lib folder to the project lib folder. Create a new Ant XML configuration file named reporty-ng-report.xml and paste the following code into it:

<project name="Reporty-ng Report" default="reporty-ng-report" basedir=".">
  <!-- Sets the property variables to point to respective directories -->
  <property name="xslt-report-dir" value="${basedir}/reporty-ng/" />
  <property name="report-dir" value="${basedir}/html-report" />
  <property name="lib-dir" value="${basedir}/lib" />
  <path id="test.classpath">
    <fileset dir="${lib-dir}">
      <include name="**/*.jar" />
    </fileset>
  </path>
  <target name="reporty-ng-report">
    <delete dir="${xslt-report-dir}" />
    <mkdir dir="${xslt-report-dir}" />
    <xslt in="${basedir}/test-output/testng-results.xml"
          style="${basedir}/resources/testng-results.xsl"
          out="${xslt-report-dir}/index.html">
      <param name="testNgXslt.outputDir" expression="${xslt-report-dir}" />
      <param name="testNgXslt.sortTestCaseLinks" expression="true" />
      <param name="testNgXslt.testDetailsFilter" expression="FAIL,SKIP,PASS,CONF,BY_CLASS" />
      <param name="testNgXslt.showRuntimeTotals" expression="true" />
      <classpath refid="test.classpath" />
    </xslt>
  </target>
</project>

The preceding XML defines an Ant build configuration; it contains an Ant target which generates the Reporty-ng report. The input path for the TestNG results XML is configured through the in attribute of the xslt task in Ant. The transformation from XML to HTML is done using the testng-results.xsl of Reporty-ng, the location of which is configured using the style attribute of the xslt task. The output HTML name is configured using the out attribute. The different configuration parameters for Reporty-ng are set using the param task of Ant as shown in the preceding code. Now go to the said project folder through the command terminal, type the command ant -buildfile reporty-ng-report.xml, and press Enter. You will see the following console output on the terminal: Now go to the configured report output folder, reporty-ng (in this case), and open the file index.html in your default browser. You will see the following test report: On clicking the Default suite link on the left-hand side, a detailed report of the executed test cases will be displayed as shown in the following screenshot: What just happened? In the previous example we learned how to generate a Reporty-ng report using Ant. The report gives a clear picture of the test execution through the pie chart. The output report can be configured using different options, which we will cover in the next section.

Configuration options for Reporty-ng report

As said earlier, there are different configuration options that the Reporty-ng report supports while generating the report. Following is the list of supported configuration options and how they affect the report generation:

testNgXslt.outputDir: Sets the target output directory for the HTML content. This is mandatory and must be an absolute path. If you are using the Maven plugin, this is set automatically so you don't have to provide it.
testNgXslt.cssFile: Specifies an alternative style sheet file overriding the default settings. This parameter is not required.
testNgXslt.showRuntimeTotals: Boolean flag indicating if the report should display the aggregated information about the method durations. The information is displayed for each test case and aggregated for the whole suite. Non-mandatory parameter, defaults to false. testNgXslt.reportTitle: Use this setting to specify a title for your HTML reports. This is not a mandatory parameter and defaults to TestNG Results. testNgXslt.sortTestCaseLinks: Indicates whether the test case links (buttons) in the left frame should be sorted alphabetically. By default they are rendered in the order they are generated by TestNG so you should set this to true to change this behavior. testNgXslt.chartScaleFactor: A scale factor for the SVG pie chart in case you want it larger or smaller. Defaults to 1. testNgXslt.testDetailsFilter: Specifies the default settings for the checkbox filters at the top of the test details page. Can be any combination (comma-separated) of: FAIL,PASS,SKIP,CONF,BY_CLASS. Have a go hero Write an Ant target in Ant to generate a Reporty-ng report with only Fail and Pass filter options. Pop quiz – logging and reports Q1. Which interface should the custom class implement for tracking the execution status as and when the test is executed? org.testng.ITestListener org.testng.IReporter Q2. Can we disable the default reports generated by the TestNG? Yes No Summary In this chapter we have covered different sections related to logging and reporting with TestNG. We have learned about different logging and reporting options provided by TestNG, writing our custom loggers and reporters and methods to generate different reports for our TestNG execution. Each of the reports have certain characteristics and any or all of the reports can be generated and used for test execution depending upon the requirement. Till now we have been using the XML configuration methodology of defining and writing our TestNG test suites. In the next chapter we will learn how to define/configure the TestNG test suite through code. This is helpful for defining the test suite at runtime. Resources for Article : Further resources on this subject: Developing Seam Applications [Article] The Aliens Have Landed! [Article] Visual Studio 2010 Test Types [Article]

Drools JBoss Rules 5.0: Complex Event Processing

Packt
08 Sep 2010
15 min read
(For more resources on JBoss see here.) CEP and ESP CEP and ESP are styles of processing in an Event Driven Architecture (General introduction to Event Driven Architecture can be found at: http://elementallinks.typepad.com/bmichelson/2006/02/eventdriven_arc.html). One of the core benefits of such an architecture is that it provides loose coupling of its components. A component simply publishes events about actions that it is executing and other components can subscribe/listen to these events. The producer and the subscriber are completely unaware of each other. A subscriber listens for events and doesn't care where they come from. Similarly, a publisher generates events and doesn't know anything about who is listening to those events. Some orchestration layer then deals with the actual wiring of subscribers to publishers. An event represents a significant change of state. It usually consists of a header and a body. The header contains meta information such as its name, time of occurrence, duration, and so on. The body describes what happened. For example, if a transaction has been processed, the event body would contain the transaction ID, the amount transferred, source account number, destination account number, and so on. CEP deals with complex events. A complex event is a set of simple events. For example, a sequence of large withdrawals may raise a suspicious transaction event. The simple events are considered to infer that a complex event has occurred. ESP is more about real-time processing of huge volume of events. For example, calculating the real-time average transaction volume over time. More information about CEP and ESP can be found on the web site, http://complexevents.com/ or in a book written by Prof. David Luckham, The Power of Events. This book is considered the milestone for the modern research and development of CEP. There are many existing pure CEP/ESP engines, both commercial and open source. Drools Fusion enhances the rule based programming with event support. It makes use of its Rete algorithm and provides an alternative to existing engines. Drools Fusion Drools Fusion is a Drools module that is a part of the Business Logic Integration Platform. It is the Drools event processing engine covering both CEP and ESP. Each event has a type, a time of occurrence, and possibly, a duration. Both point in time (zero duration) and interval-based events are supported. Events can also contain other data like any other facts—properties with a name and type. All events are facts but not all facts are events. An event's state should not be changed. However, it is valid to populate the unpopulated values. Events have clear life cycle windows and may be transparently garbage collected after the life cycle window expires (for example, we may be interested only in transactions that happened in the last 24 hours). Rules can deal with time relationships between events. Fraud detection It will be easier to explain these concepts by using an example—a fraud detection system. Fraud in banking systems is becoming a major concern. The amount of online transactions is increasing every day. An automatic system for fraud detection is needed. The system should analyze various events happening in a bank and, based on a set of rules, raise an appropriate alarm. This problem cannot be solved by the standard Drools rule engine. The volume of events is huge and it happens asynchronously. If we simply inserted them into the knowledge session, we would soon run out of memory. 
While the Rete algorithm behind Drools doesn't have any theoretical limitation on number of objects in the session, we could use the processing power more wisely. Drools Fusion is the right candidate for this kind of task. Problem description Let's consider the following set of business requirements for the fraud detection system: If a notification is received from a customer about a stolen card, block this account and any withdrawals from this account. Check each transaction against a blacklist of account numbers. If the transaction is transferring money from/to such an account, then flag this transaction as suspicious with the maximum severity. If there are two large debit transactions from the same account within a ninety second period and each transaction is withdrawing more than 300% of the average monthly (30 days) withdrawal amount, flag these transactions as suspicious with minor severity. If there is a sequence of three consecutive, increasing, debit transactions originating from a same account within a three minute period and these transactions are together withdrawing more than 90% of the account's average balance over 30 days, then flag those transactions as suspicious with major severity and suspend the account. If the number of withdrawals over a day is 500% higher than the average number of withdrawals over a 30 day period and the account is left with less than 10% of the average balance over a month (30 days), then flag the account as suspicious with minor severity. Duplicate transactions check—if two transactions occur in a time window of 15 seconds that have the same source/destination account number, are of the same amount, and just differ in the transaction ID, then flag those transactions as duplicates. Monitoring: Monitor the average withdrawal amount over all of the accounts for 30 days. Monitor the average balance across all of the accounts. Design and modeling Looking at the requirements, we'll need a way of flagging a transaction as suspicious. This state can be added to an existing Transaction type, or we can externalize this state to a new event type. We'll do the latter. The following new events will be defined: TransactionCreatedEvent—An event that is triggered when a new transaction is created. It contains a transaction identifier, source account number, destination account number, and the actual amount transferred. TransactionCompletedEvent—An event that is triggered when an existing transaction has been processed. It contains the same fields as the TransactionCreatedEvent class. AccountUpdatedEvent—An event that is triggered when an account has been updated. It contains the account number, current balance, and the transaction identifier of a transaction that initiated this update. SuspiciousAccount—An event triggered when there is some sort of a suspicion around the account. It contains the account number and severity of the suspicion. It is an enumeration that can have two values MINOR and MAJOR. This event's implementation is shown in the following code. SuspiciousTransaction—Similar to SuspiciousAccount, this is an event that flags a transaction as suspicious. It contains a transaction identifier and severity level. LostCardEvent—An event indicating that a card was lost. It contains an account number. One of events described—SuspiciousAccount—is shown in the following code. It also defines SuspiciousAccountSeverity enumeration that encapsulates various severity levels that the event can represent. The event will define two properties. 
One of them is already mentioned severity and the other one will identify the account—accountNumber. /*** marks an account as suspicious*/public class SuspiciousAccount implements Serializable { public enum SuspiciousAccountSeverity { MINOR, MAJOR}private final Long accountNumber;private final SuspiciousAccountSeverity severity;public SuspiciousAccount(Long accountNumber, SuspiciousAccountSeverity severity) { this.accountNumber = accountNumber; this.severity = severity; } private transient String toString; @Override public String toString() { if (toString == null) { toString = new ToStringBuilder(this).appendSuper( super.toString()).append("accountNumber", accountNumber).append("severity", severity) .toString(); } return toString;} Code listing 1: Implementation of SuspiciousAccount event. Please note that the equals and hashCode methods in SuspiciousAccount from the preceding code listing are not overridden. This is because an event represents an active entity, which means that each instance is unique. The toString method is implemented using org.apache.commons.lang.builder.ToStringBuilder. All of these event classes are lightweight, and they have no references to other domain classes (no object reference; only a number—accountNumber—in this case). They are also implementing the Serializable interface. This makes them easier to transfer between JVMs. As best practice, this event is immutable. The two properties above (accountNumber and severity) are marked as final. They can be set only through a constructor (there are no set methods—only two get methods). The get methods were excluded from this code listing. The events themselves don't carry a time of occurrence—a time stamp (they easily could, if we needed it; we'll see how in the next set of code listings). When the event is inserted into the knowledge session, the rule engine assigns such a time stamp to FactHandle that is returned. (Remember? session.insert(..) returns a FactHandle). In fact, there is a special implementation of FactHandle called EventFactHandle. It extends the DefaultFactHandle (which is used for normal facts) and adds few additional fields, for example—startTimestamp and duration. Both contain millisecond values and are of type long. Ok, we now have the event classes and we know that there is a special FactHandle for events. However, we still don't know how to tell Drools that our class represents an event. Drools type declarations provide this missing link. It can define new types and enhance existing types. For example, to specify that the class TransactionCreatedEvent is an event, we have to write: declare TransactionCreatedEvent @role( event )end Code listing 2: Event role declaration (cep.drl file). This code can reside inside a normal .drl file. If our event had a time stamp property or a duration property, we could map it into startTimestamp or duration properties of EventFactHandle by using the following mapping: @duration( durationProperty ) Code listing 3: Duration property mapping. The name in brackets is the actual name of the property of our event that will be mapped to the duration property of EventFactHandle. This can be done similarly for startTimestamp property. As an event's state should not be changed (only unpopulated values can be populated), think twice before declaring existing beans as events. Modification to a property may result in an unpredictable behavior. 
In the following examples, we'll also see how to define a new type declaration Fraud detection rules Let's imagine that the system processes thousands of transactions at any given time. It is clear that this is challenging in terms of time and memory consumption. It is simply not possible to keep all of the data (transactions, accounts, and so on) in memory. A possible solution would be to keep all of the accounts in memory as there won't be that many of them (in comparison to transactions) and keep only transactions for a certain period. With Drools Fusion, we can do this very easily by declaring that a Transaction is an event. The transaction will then be inserted into the knowledge session through an entry-point. Each entry point defines a partition in the input data storage, reducing the match space and allowing patterns to target specific partitions. Matching data from a partition requires explicit reference at the pattern declaration. This makes sense, especially if there are large quantities of data and only some rules are interested in them. We'll look at entry points in the following example. If you are still concerned about the volume of objects in memory, this solution can be easily partitioned, for example, by account number. There might be more servers, each processing only a subset of accounts (simple routing strategy might be: accountNumber module totalNumberOfServersInCluster). Then each server would receive only appropriate events. Notification The requirement we're going to implement here is essentially to block an account whenever a LostCardEvent is received. This rule will match two facts: one of type Account and one of type LostCardEvent. The rule will then set the the status of this account to blocked. The implementation of the rule is as follows: rule notification when $account : Account( status != Account.Status.BLOCKED ) LostCardEvent( accountNumber == $account.number ) from entry-point LostCardStream then modify($account) { setStatus(Account.Status.BLOCKED) };end Code listing 4: Notification rule that blocks an account (cep.drl file). As we already know, Account is an ordinary fact from the knowledge session. The second fact—LostCardEvent—is an event from an entry point called LostCardStream. Whenever a new event is created and goes through the entry point, LostCardStream, this rule tries to match (checks if its conditions can be satisfied). If there is an Account in the knowledge session that didn't match with this event yet, and all conditions are met, the rule is activated. The consequence sets the status of the account to blocked in a modify block. As we're updating the account in the consequence and also matching on it in the condition, we have to add a constraint that matches only the non-blocked accounts to prevent looping (see above: status != Account.Status.BLOCKED). Test configuration setup Following the best practice that every code/rule needs to be tested, we'll now set up a class for writing unit tests. All of the rules will be written in a file called cep.drl. When creating this file, just make sure it is on the classpath. The creation of knowledgeBase won't be shown. It is similar to the previous tests that we've written. We just need to change the default knowledge base configuration slightly: KnowledgeBaseConfiguration config = KnowledgeBaseFactory .newKnowledgeBaseConfiguration();config.setOption( EventProcessingOption.STREAM ); Code listing 5: Enabling STREAM event processing mode on knowledge base configuration. 
This will enable the STREAM event processing mode. KnowledgeBaseConfiguration from the preceding code is then used when creating the knowledge base—KnowledgeBaseFactory.newKnowledgeBase(config). Part of the setup is also clock initialization. We already know that every event has a time stamp. This time stamp comes from a clock which is inside the knowledge session. Drools supports several clock types, for example, a real-time clock or a pseudo clock. The real-time clock is the default and should be used in normal circumstances. The pseudo clock is especially useful for testing as we have complete control over the time. The following initialize method sets up a pseudo clock. This is done by setting the clock type on KnowledgeSessionConfiguration and passing this object to the newStatefulKnowledgeSession method of knowledgeBase. The initialize method then makes this clock available as a test instance variable called clock when calling session.getSessionClock(), as we can see in the following code: public class CepTest { static KnowledgeBase knowledgeBase; StatefulKnowledgeSession session; Account account; FactHandle accountHandle; SessionPseudoClock clock; TrackingAgendaEventListener trackingAgendaEventListener; WorkingMemoryEntryPoint entry; @Before public void initialize() throws Exception { KnowledgeSessionConfiguration conf = KnowledgeBaseFactory.newKnowledgeSessionConfiguration(); conf.setOption( ClockTypeOption.get( "pseudo" ) ); session = knowledgeBase.newStatefulKnowledgeSession(conf, null); clock = (SessionPseudoClock) session.getSessionClock(); trackingAgendaEventListener = new TrackingAgendaEventListener(); session.addEventListener(trackingAgendaEventListener); account = new Account(); account.setNumber(123456l); account.setBalance(BigDecimal.valueOf(1000.00)); accountHandle = session.insert(account); Code listing 6: Unit tests setup (CepTest.java file). The preceding initialize method also creates an event listener and passes it into the session. The event listener is called TrackingAgendaEventListener. It simply tracks all of the rule executions. It is useful for unit testing to verify whether a rule fired or not. Its implementation is as follows: public class TrackingAgendaEventListener extends DefaultAgendaEventListener { List<String> rulesFiredList = new ArrayList<String>(); @Override public void afterActivationFired( AfterActivationFiredEvent event) { rulesFiredList.add(event.getActivation().getRule() .getName()); } public boolean isRuleFired(String ruleName) { for (String firedRuleName : rulesFiredList) { if (firedRuleName.equals(ruleName)) { return true; } } return false; } public void reset() { rulesFiredList.clear(); } } Code listing 7: Agenda event listener that tracks all rules that have been fired. DefaultAgendaEventListener comes from the org.drools.event.rule package that is part of drools-api.jar file as opposed to the org.drools.event package that is part of the old API in drools-core.jar. All of the Drools agenda event listeners must implement the AgendaEventListener interface. TrackingAgendaEventListener in the preceding code extends DefaultAgendaEventListener so that we don't have to implement all of the methods defined in the AgendaEventListener interface. Our listener just overrides the afterActivationFired method that will be called by Drools every time a rule's consequence has been executed. Our implementation of this method adds the fired rule name into a list of fired rules—rulesFiredList. 
Then the convenience method isRuleFired takes a ruleName as a parameter and checks if this rule has been executed/fired. The reset method is useful for clearing out the state of this listener, for example, after session.fireAllRules is called. Now, back to the test configuration setup. The last part of the initialize method from code listing 6 is account object creation (account = new Account(); ...). This is for convenience purposes so that every test does not have to create one. The account balance is set to 1000. The account is inserted into the knowledge session and its FactHandle is stored so that the account object can be easily updated. Testing the notification rule The test infrastructure is now fully set up and we can write a test for the notification rule from code listing 4 as follows: @Test public void notification() throws Exception { session.fireAllRules(); assertNotSame(Account.Status.BLOCKED,account.getStatus()); entry = session .getWorkingMemoryEntryPoint("LostCardStream"); entry.insert(new LostCardEvent(account.getNumber())); session.fireAllRules(); assertSame(Account.Status.BLOCKED, account.getStatus()); } Code listing 8: Notification rule's unit test (CepTest.java file). The test verifies that the account is not blocked by accident first. Then it gets the LostCardStream entry point from the session by calling: session.getWorkingMemoryEntryPoint("LostCardStream"). Then the code listing demonstrates how an event can be inserted into the knowledge session through an entry-point—entry.insert(new LostCardEvent(...)). In a real application, you'll probably want to use Drools Pipeline for inserting events into the knowledge session. It can be easily connected to an existing ESB (Enterprise Service Bus) or a JMS topic or queue. Drools entry points are thread-safe. Each entry point can be used by a different thread; however, no two threads can concurrently use the same entry point. In this case, it makes sense to start the engine in fireUntilHalt mode in a separate thread like this: new Thread(new Runnable() { public void run() { session.fireUntilHalt(); } }).start(); Code listing 9: Continuous execution of rules. The engine will then continuously execute activations until the session.halt() method is called. The test then verifies that the status of the account is blocked. If we simply executed session.insert(new LostCardEvent(..)); the test would fail because the rule wouldn't see the event.
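To tie the last two points together, the following is a minimal sketch of how a continuously running engine might receive events through the entry point and then be stopped. It only combines the API calls already shown in this chapter (fireUntilHalt, getWorkingMemoryEntryPoint, insert, and halt) and assumes the same session, account, and LostCardEvent types from the listings above; the method name is purely illustrative and the import follows the Drools 5 API used here.

import org.drools.runtime.rule.WorkingMemoryEntryPoint;

// Inside the CepTest class from the listings above, reusing its session and account fields
public void feedLostCardEventWhileRunning() throws Exception {
    // Start the engine in a background thread so it keeps executing activations
    new Thread(new Runnable() {
        public void run() {
            session.fireUntilHalt();
        }
    }).start();

    // Feed an event into the engine through its entry point
    WorkingMemoryEntryPoint lostCards =
        session.getWorkingMemoryEntryPoint("LostCardStream");
    lostCards.insert(new LostCardEvent(account.getNumber()));

    // Later, when no more events are expected, stop the engine
    session.halt();
}

In a real deployment the insert would typically happen on a different thread from the one that called fireUntilHalt, which is exactly the situation covered by the thread-safety guarantee of entry points mentioned above.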

Planning Your CRM Implementation Using CiviCRM

Packt
04 Mar 2011
10 min read
Using CiviCRM This article speaks to people who have the responsibility for initiating, scoping, and managing development and implementation for the CRM strategy. If you already have CiviCRM or another CRM operating in-house without a CRM strategy, it's not too late to be more methodical in your approach. The steps below can be used to plan to re-invigorate and re-focus the use of CRM within your organization as much as to plan the first implementation. Barriers to success Constituent Relationship Management initiatives can be difficult. At their best, they involve changing external relationships and internal processes and tools. Externally, the experiences that the constituents have of your organization need to change, so that they provide more value and fewer barriers to involvement. Internally, business processes and supporting technological systems need to change in order to break down departmental operations' silos, increase efficiencies, and enable more effective targeting, improved responsiveness, and new initiatives. The success of the CRM projects often depends on changing the behavior and attitudes of individuals across the organization and replacing, changing, and/or integrating many IT systems used across the organization. Succeeding with the management of organizational culture change may involve getting staff members to take on tasks and responsibilities that may not directly benefit them or the department managed by their supervisor, but only provide value to the organization by what it enables others in the organization to accomplish or avoid. As a result, it is more challenging to align the interests of the staff and organizational units with the organization's interest in improved constituent relations, as promised by the CRM strategy. This is why an Executive Sponsor, such as the Executive Director of a small or a medium-sized non-profit organization, is so important. On the technical side, CRM projects for reasonably-sized organizations typically involve replacing or integrating many systems. Configuring and customizing a single new software system, migrating data to it, testing and deploying it, and training the staff members can be a challenge at the best of times. Doing it for multiple systems and more users multiplies the challenge. Since a CRM initiative involves integrating separate systems, the complexity of such endeavors must be faced, such as disparate data schemas requiring transformations for interoperability, and keeping middleware in sync with changes in multiple independent software packages. Unfortunately, these challenges to the CRM implementation initiative may lead to a project failure if they are not realized and addressed. The common causes for failure are as follows: Lack of executive-level sponsorship resulting in improperly resolved turf wars. IT-led initiatives have a greater tendency to focus on cost efficiency. This focus will generally not result in better constituent relations that are oriented toward achieving the organization's mission. An IT approach, particularly where users and usability experts are not on the project team, may also lead to poor user adoption if the system is not adapted to their needs, or even if the users are poorly trained. No customer data integration approach resulting in not overcoming the data silos problem, no consolidated view of constituents, poorer targeting, and an inability to realize enterprise-level benefits. 
Lack of buy-in, leading to a lack of use of the new CRM system and continued use of the old processes and systems it was meant to supplant.

Lack of training and follow-up training, causing staff anxiety and opposition. This may cause non-use or misuse of the system, resulting in poor data handling and mix-ups in the way in which constituents are treated.

Not customizing enough to actually meet the requirements of the organization in the areas of:
Data integration
Business processes
User experiences

Over-customizing, causing:
The costs of the system to escalate
The best practices incorporated in the base functionality to be overridden in some cases
User forms to become overly complex
User experiences to become off-putting

No strategy for dealing with the technical challenges associated with developing, extending, and/or integrating the CRM software system, leading to:
Cost overruns
Poorly designed and built software
Poor user experiences
Incomplete or invalid data

However, this does not mean that project failure is inevitable or common. These clearly identifiable causes of failure can be overcome through effective project planning.

Perfection is the enemy of the good

CRM systems and their functional components, such as fundraising, ticket sales, communication with subscribers and other stakeholders, membership management, and case management, are essential for the core operations of most non-profits. This can lead to a legitimate fear of project failure when changing them. However, this fear can easily create a perfectionist mentality, where the project team attempts to overcompensate by creating too much oversight, too much contingency planning, and too much project discovery time in an effort to avoid missing any potentially useful feature that could be integrated into the project. While planning is good, perfectionism may not be, since perfection is often the enemy of the good.

CRM implementations risk erring on the side of what is known, somewhat tongue-in-cheek, as the MIT approach. The MIT approach believes in, and attempts to design, construct, and deploy, the "Right Thing" right from the start. Its big-brain approach to problem-solving leads to correctness, completeness, and consistency in the design. It values simplicity in the user interface over simplicity in the implementation design.

The other end of the spectrum is captured by aphorisms like "Less is More," "KISS" (Keep it simple, stupid), and "Worse is Better". This alternate view willingly accepts deviations from correctness, completeness, and consistency in design in favor of general simplicity, or simplicity of implementation over simplicity of the user interface. The reason that such counter-intuitive approaches to developing solutions have become respected and popular is the problems and failures that can result from trying to do it all perfectly from the start.

Neither end of the spectrum is healthy. Handcuffing the project to an unattainable standard of perfection, or over-simplifying in order to artificially reduce complexity, will both lead to project failure. There is no perfect antidote to these two extremes. As a project manager, it will be your responsibility to set the tone, determine priorities, and plan the implementation and development process. Although it is not a project management methodology in itself, one rule that will help achieve balance and move the project forward is "Release early, release often." This is commonly embraced in the open source community, where collaboration is essential to success.
This motto:
Captures the intent of catching errors earlier
Allows users to capture value from the system sooner
Allows users to better imagine and articulate what the software should do through ongoing use of, and interaction with, a working system early in the process

Development methodologies

Whatever approach your organization decides to take for developing and implementing its CRM strategy, it's usually good to have an agreed-upon process and methodology. Your process defines the steps to be taken as you implement the project. Your methodology defines the rules for the process, that is, the methods to be used throughout the course of the project.

The spirit of the problem-solving approaches just reviewed can be seen in the traditional Waterfall development model and in the contrasting iterative and incremental development model. Projects naturally change and evolve over time. You may find that you embrace one of these methodologies for the initial implementation, and then migrate to a different method or mixed method for maintenance and future development work. By no means should you feel restricted by the definitions provided; rather, adjust the principles to meet your changing needs throughout the course of the project. That being said, it's important that your team understands the project rules at a given point in time, so that the project management principles are respected.

The conventional Waterfall development methodology

The traditional Waterfall method of software development is sometimes thought of as "big design up front". It employs a sequential approach to development, moving from needs analysis and requirements, to architectural and user experience design, detailed design, implementation, integration, testing, deployment, and maintenance. The output of each step or phase flows downward, like water, to the next step in the process, as illustrated by the arrows in the following figure:

The Waterfall model tends to be more formal, more planned, includes more documentation, and often has a stronger division of labor. This methodology benefits from having clear, linear, and progressive development steps in which each phase builds upon the previous one. However, it can suffer from inflexibility if used too rigidly. For example, if during the verification and quality assurance phase you realize there is a significant functionality gap resulting from incorrect (or changing) specification requirements, then it may be difficult to interject those new needs into the process. The "release early, release often" iterative principle mentioned earlier can help overcome that inflexibility. If the overall process is kept tight and the development window short, you can justify delaying the new functionality or corrective specifications to the next release.

Iterative development methodology

Iterative development models depart from this structure by breaking the work up into chunks that can be developed and delivered separately. The Waterfall process is used in each phase or segment of the project, but the overall project structure is not necessarily held to the same rigid process.
As one moves farther away from the Waterfall approach, there is a greater emphasis on evaluating incrementally delivered pieces of the solution, and on incorporating feedback on what has already been developed into the planning of future work, as illustrated by the loop in the following figure:

This methodology seeks to take what is good in the traditional Waterfall approach (structure, clearly defined linear steps, and a strong development/quality assurance/roll-out process) and improve it through shorter development cycles that are centered on smaller segments of the overall project. Perhaps the biggest challenge in this model is the project management role, as it may result in many moving pieces that must be tightly coordinated in order to release the final working product.

Agile development methodology

Agile development methodologies are an effective derivative of the iterative development model that moves one step further away from the Waterfall model. They are characterized by requirements and solutions evolving together, requiring work teams to be drawn from all the relevant parts of the organization. They organize themselves to work in rapid one- to four-week iteration cycles. Agile centers on time-based release cycles, and in this way differs from the other methodologies discussed, which are oriented more toward functionality-based releases.

The following figure illustrates the implementation of an Agile methodology that highlights short daily Scrum status meetings, a product backlog containing features or user stories for each iteration, and a sprint backlog containing revisable and reprioritizable work items for the team during the current iteration.

A deliberate effort is usually made to keep the sprint backlog long enough that the lowest-priority items will not be dealt with before the end of the iteration. Although they can be put onto the list of work items that may or may not be selected for the next iteration, the idea is that the client or the product owner should, at some point, decide that it is not worth investing more resources in the "nice to have, but not really necessary" items. As one might expect, this methodology relies heavily on effective prioritization. Since software releases and development cycles adhere to rigid timeframes, only high-priority issues or features are actively addressed at a given point in time; the remaining issues falling lower on the list are subject to reassignment in the next cycle.


Program structure, execution flow, and runtime objects

Packt
30 Jul 2014
8 min read
(For more resources related to this topic, see here.)

Unfortunately, C++ isn't an easy language at all. Sometimes, it can seem like a limitless language that cannot be learned and understood entirely, but you don't need to worry about that. It is not important to know everything; it is important to use the parts that are required in specific situations correctly. Practice is the best teacher, so it's better to understand how to use as many of the features as needed.

In the examples to come, we will use Charles Simonyi's Hungarian notation. In his 1977 PhD thesis, Meta-Programming: A Software Production Method, he proposed a standard for notation in programming which says that the first letter of a type or variable name should represent its data type. For example, a class named Test should be written as CTest, where the first letter says that Test is a class. This is good practice because a programmer who is not familiar with the Test data type will immediately know that Test is a class. The same standard applies to primitive types, such as int or double. For example, iCount stands for an integer variable Count, while dValue stands for a double variable Value. Using the given prefixes, it is easy to read code even if you are not so familiar with it.

Getting ready

Make sure Visual Studio is up and running.

How to do it...

Now, let's create our first program by performing the following steps and explaining its structure:

Create a new C++ console application named TestDemo.
Open TestDemo.cpp.
Add the following code:

#include "stdafx.h"
#include <iostream>

using namespace std;

int _tmain(int argc, _TCHAR* argv[])
{
   cout << "Hello world" << endl;
   return 0;
}

How it works...

The structure of a C++ program varies due to different programming techniques. What most programs must have is the #include or preprocessor directives. The #include <iostream> directive tells the compiler to include the iostream header file, where the available function prototypes reside. It also means that libraries with the functions' implementations should be built into the executable. So, if we want to use some API or function, we need to include an appropriate header file, and maybe we will have to add an additional import library that contains the function/API implementation.

One more important difference when including files is <header> versus "header". The first form (<>) searches the include paths configured for the solution/project, while the other ("") searches folders relative to the C++ project.

The using command instructs the compiler to use the std namespace. Namespaces are packages with object declarations and function implementations. Namespaces have a very important usage: they are used to minimize ambiguity when including third-party libraries in which the same function name is used in two different packages.

We need to implement a program entry point: the main function. As we said before, we can use main for an ANSI signature, wmain for a Unicode signature, or _tmain, where the compiler will resolve its signature depending on the preprocessor definitions in the project property pages. For a console application, the main function can have the following four different prototypes:

int _tmain(int argc, TCHAR* argv[])
void _tmain(int argc, TCHAR* argv[])
int _tmain(void)
void _tmain(void)

The first prototype has two arguments, argc and argv. The first argument, argc, or the argument count, says how many arguments are present in the second argument, argv, or the argument values.
The argv parameter is an array of strings, where each string represents one command-line argument. The first string in argv is always the current program name. The second prototype is the same as the first one, except for the return type. This means that the main function may or may not return a value. This value is returned to the OS. The third prototype has no arguments and returns an integer value, while the fourth prototype neither has arguments nor returns a value. It is good practice to use the first form.

The next command uses the cout object. The cout object is the name of the standard output stream in C++, and the meaning of the entire statement is to insert a sequence of characters (in this case, the Hello world sequence of characters) into the standard output stream, which usually corresponds to the screen. The cout object is declared in the iostream standard file within the std namespace. This is why we needed to include that specific file and declare that we will use this specific namespace earlier in our code.

In our usual selection of the int _tmain(int, _TCHAR**) prototype, _tmain returns an integer. We must specify some int value after the return command, in this case 0. When returning a value to the operating system, 0 usually means success, but this is operating system dependent.

This simple program is very easy to create. We use this simple example to demonstrate the basic program structure and the usage of the main routine as the entry point for every C++ program.

Programs with one thread are executed sequentially, line by line. This is why our program is not user friendly if we place all the code into one thread: after the user gives input, control is returned to the application, and only then can the application continue its execution. In order to overcome such an issue, we can create concurrent threads that will handle the user input. In this way, the application does not stall and is responsive all the time. After a thread handles its task, it can signal to the application that the user has performed the requested operation.

There's more...

Every time we have an operation that needs to be executed separately from the main execution flow, we have to think about a separate thread. The simplest example is when we have some calculation, and we want to have a progress bar where the calculation progress will be shown. If the same thread were responsible for the calculation as well as for updating the progress bar, it probably wouldn't work properly. This occurs because, if both the work and the UI update are performed from a single thread, such a thread can't interact adequately with the OS painting; so almost always, the UI thread is kept separate from the working threads.

Let's review the following example. Assume that we have created a function where we will calculate something, for example, sines or cosines of some angle, and we want to display progress in every step of the calculation:

void CalculateSomething(int iCount)
{
   int iCounter = 0;
   while (iCounter++ < iCount)
   {
      //make calculation
      //update progress bar
   }
}

As commands are executed one after another inside each iteration of the while loop, the operating system doesn't have the required time to properly update the user interface (in this case, the progress bar), so we would see an empty progress bar. After the function returns, a fully filled progress bar appears. The solution for this is to create the progress bar on the main thread and perform the calculation on a separate one.
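The following is only a rough sketch of that arrangement, not code from the original recipe: it assumes a C++11 compiler with std::thread and std::atomic available, and it simulates the progress bar with console output because the recipe does not prescribe a particular GUI framework. The worker thread only publishes a counter, while the main thread reads that counter and displays the progress.

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

std::atomic<int> g_iProgress(0);   // shared progress counter, written by the worker thread

void CalculateSomething(int iCount)
{
    for (int iCounter = 0; iCounter < iCount; ++iCounter)
    {
        // make calculation (simulated here with a short sleep)
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
        g_iProgress = iCounter + 1;   // publish progress instead of touching the UI
    }
}

int main()
{
    const int iCount = 20;
    std::thread worker(CalculateSomething, iCount);   // calculation runs on its own thread

    // The "UI" thread stays responsive and only displays the published progress.
    while (g_iProgress < iCount)
    {
        std::cout << "\rProgress: " << g_iProgress << "/" << iCount << std::flush;
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
    std::cout << "\rProgress: " << iCount << "/" << iCount << std::endl;

    worker.join();   // wait for the worker to finish before exiting
    return 0;
}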
A separate thread executes the CalculateSomething function; in each step of the iteration, it signals the main thread in some way so that the progress bar can be updated step by step. As we said before, threads are switched extremely fast on the CPU, so we get the impression that the progress bar is updated at the same time as the calculation is performed.

To conclude, each time we have to perform a parallel task, wait for some kind of user input, or wait for an external dependency such as a response from a remote server, we will create a separate thread so that our program won't hang and become unresponsive.

In our future examples, we will discuss static and dynamic libraries, so let's say a few words about both. A static library (*.lib) is usually some code placed in separate files and already compiled for future use. We add it to the project when we want to use some of its features. When we wrote #include <iostream> earlier, we instructed the compiler to include a static library where implementations of the input-output stream functions reside. A static library is built into the executable at compile time, before we actually run the program.

A dynamic library (*.dll) is similar to a static library, but the difference is that it is not resolved during compilation; it is linked later, when we start the program, in other words, at runtime. Dynamic libraries are very useful when we have functions that a lot of programs will use. So, we don't need to include these functions in every program; we simply link every program at runtime with one dynamic library. A good example is User32.dll, where the Windows OS has placed a majority of its GUI functions. So, if we create two programs where both have a window (GUI form), we do not need to include CreateWindow in both programs. We will simply link User32.dll at runtime, and the CreateWindow API will be available.

Summary

Thus the article covers the four main paradigms: imperative, declarative, functional (or structural), and object-oriented.

An Overview of Tomcat 6 Servlet Container: Part 2

Packt
18 Jan 2010
8 min read
Nested components

These components are specific to the Tomcat implementation, and their primary purpose is to enable the various Tomcat containers to perform their tasks.

Valve

A valve is a processing element that can be placed within the processing path of each of Tomcat's containers: engine, host, context, or servlet wrapper. A Valve is added to a container using the <Valve> element in server.xml. Valves are executed in the order in which they are encountered within the server.xml file.

The Tomcat distribution comes with a number of pre-rolled valves. These include:

A valve that logs specific elements of a request (such as the remote client's IP address) to a log file or database
A valve that lets you control access to a particular web application based on the remote client's IP address or host name
A valve that lets you log every request and response header
A valve that lets you configure single sign-on access across multiple web applications on a specific virtual host

If these don't meet your needs, you can write your own implementations of org.apache.catalina.Valve and place them into service.

A container does not hold references to individual valves. Instead, it holds a reference to a single entity known as the Pipeline, which represents a chain of valves associated with that container. When a container is invoked to process a request, it delegates the processing to its associated pipeline.

The valves in a pipeline are arranged as a sequence, based on how they are defined within the server.xml file. The final valve in this sequence is known as the pipeline's basic valve. This valve performs the task that embodies the core purpose of a given container. Unlike individual valves, the pipeline is not an explicit element in server.xml; instead, it is implicitly defined in terms of the sequence of valves that are associated with a given container.

Each Valve is aware of the next valve in the pipeline. After it performs its pre-processing, it invokes the next Valve in the chain, and when the call returns, it performs its own post-processing before returning. This is very similar to what happens in filter chains within the servlet specification.

In this image, the engine's configured valve(s) fire when an incoming request is received. The engine's basic valve determines the destination host and delegates processing to that host. The destination host's (www.host1.com) valves now fire in sequence. The host's basic valve then determines the destination context (here, Context1) and delegates processing to it. The valves configured for Context1 now fire, and processing is then delegated by the context's basic valve to the appropriate wrapper, whose basic valve hands off processing to its wrapped servlet. The response then returns over the same path in reverse.

A Valve becomes part of the Tomcat server's implementation and provides a way for developers to inject custom code into the servlet container's processing of a request. As a result, the class files for custom valves must be deployed to CATALINA_HOME/lib, rather than to the WEB-INF/classes of a deployed application. As they are not part of the servlet specification, valves are non-portable elements of your enterprise application. Therefore, if you rely on a particular valve, you will need to find equivalent alternatives in a different application server. It is important to note that valves are required to be very efficient in order not to introduce inordinate delays into the processing of a request.
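To make this concrete, here is a rough skeleton of such a custom valve. It is a hedged sketch rather than code from the article: it assumes Tomcat 6's org.apache.catalina.valves.ValveBase base class and the connector Request/Response types, and the package and class names are invented placeholders.

package com.example.valves;

import java.io.IOException;
import javax.servlet.ServletException;
import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.valves.ValveBase;

public class RequestLoggingValve extends ValveBase {

    @Override
    public void invoke(Request request, Response response)
            throws IOException, ServletException {
        // Pre-processing: note where the request came from and what was asked for
        System.out.println("Request from " + request.getRemoteAddr()
                + " for " + request.getRequestURI());

        // Delegate to the next valve in this container's pipeline
        getNext().invoke(request, response);

        // Post-processing: the response is available for inspection here
        System.out.println("Responded with status " + response.getStatus());
    }
}

The compiled class would then be placed under CATALINA_HOME/lib, as noted above, and attached to a container with an element such as <Valve className="com.example.valves.RequestLoggingValve"/> in server.xml.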
Realm

Container-managed security works by having the container handle the authentication and authorization aspects of an application. Authentication is defined as the task of ensuring that the user is who she says she is, and authorization is the task of determining whether the user may perform some specific action within an application.

The advantage of container-managed security is that security can be configured declaratively by the application's deployer. That is, the assignment of passwords to users and the mapping of users to roles can all be done through configuration, which can then be applied across multiple web applications without any coding changes being required to those web applications.

Application-managed security

The alternative is having the application manage security. In this case, your web application code is the sole arbiter of whether a user may access some specific functionality or resource within your application.

For container-managed security to work, you need to assemble the following components:

Security constraints: Within your web application's deployment descriptor, web.xml, you must identify the URL patterns for restricted resources, as well as the user roles that would be permitted to access these resources.
Credential input mechanism: In the web.xml deployment descriptor, you specify how the container should prompt the user for authentication credentials. This is usually accomplished by showing the user a dialog that prompts for a user name and password, but it can also be configured to use other mechanisms, such as a custom login form.
Realm: This is a data store that holds user names, passwords, and roles, against which the user-supplied credentials are checked. It can be a simple XML file, a table in a relational database that is accessed using the JDBC API, or a Lightweight Directory Access Protocol (LDAP) server that can be accessed through the JNDI API. A realm provides Tomcat with a consistent mechanism for accessing these disparate data sources.

All three of the above components are technically independent of each other. The power of container-based security is that you can assemble your own security solution by mixing and matching selections from each of these groups.

Now, when a user requests a resource, Tomcat will check whether a security constraint exists for this resource. For a restricted resource, Tomcat will then automatically request the user's credentials and will check these credentials against the configured realm. Access to the resource will be allowed only if the user's credentials are valid and if the user is a member of the role that is configured to access that resource.

Executor

This is a new element, available only since 6.0.11. It allows you to configure a shared thread pool that is available to all your connectors. This places an upper limit on the number of concurrent threads that may be started by your connectors. Note that this limit applies even if a particular connector has not used up all the threads configured for it.

Listener

Every major Tomcat component implements the org.apache.catalina.Lifecycle interface. This interface lets interested listeners register with a component to be notified of lifecycle events, such as the starting or stopping of that component. A listener implements the org.apache.catalina.LifecycleListener interface and implements its lifecycleEvent() method, which takes a LifecycleEvent that represents the event that has occurred.
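For illustration, a minimal listener might look like the following sketch. It assumes the Tomcat 6 Lifecycle API (the START_EVENT and STOP_EVENT constants and the LifecycleEvent accessors); the package and class names are invented placeholders.

package com.example.listeners;

import org.apache.catalina.Lifecycle;
import org.apache.catalina.LifecycleEvent;
import org.apache.catalina.LifecycleListener;

public class StartupLoggingListener implements LifecycleListener {

    public void lifecycleEvent(LifecycleEvent event) {
        // Log the lifecycle transitions of the component this listener is attached to
        if (Lifecycle.START_EVENT.equals(event.getType())) {
            System.out.println("Component starting: " + event.getLifecycle());
        } else if (Lifecycle.STOP_EVENT.equals(event.getType())) {
            System.out.println("Component stopping: " + event.getLifecycle());
        }
    }
}

Such a listener is typically registered by nesting an element such as <Listener className="com.example.listeners.StartupLoggingListener"/> inside the corresponding container element in server.xml.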
This gives you an opportunity to inject your own custom processing into Tomcat's lifecycle.

Manager

Sessions make 'applications' possible over the stateless HTTP protocol. A session represents a conversation between a client and a server and is implemented by a javax.servlet.http.HttpSession instance that is stored on the server and is associated with a unique identifier that is passed back by the client on each interaction. A new session is created on request and remains alive on the server either until it times out after a period of inactivity by its associated client, or until it is explicitly invalidated, for instance, by the client choosing to log out.

The above image shows a very simplistic view of the session mechanism within Tomcat. An org.apache.catalina.Manager component is used by the Catalina engine to create, find, or invalidate sessions. This component is responsible for the sessions that are created for a context and for their life cycles. The default Manager implementation simply retains sessions in memory, but supports session survival across server restarts: it writes out all active sessions to disk when the server is stopped and reloads them into memory when the server is started up again.

A <Manager> must be a child of a <Context> element and is responsible for managing the sessions associated with that web application context. The default Manager takes attributes such as the algorithm that is used to generate its session identifiers, the frequency in seconds with which the manager should check for expired sessions, the maximum number of active sessions supported, and the file in which the sessions should be stored. Other implementations of Manager are provided that let you persist sessions to a durable data store such as a file or a JDBC database.
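As a rough illustration of how this might be configured, the snippet below caps the number of concurrent sessions and names the persistence file for the default manager. Treat it as a sketch: the attribute names maxActiveSessions and pathname follow the description above, but the exact names and defaults should be verified against the Tomcat 6 configuration reference, and the context path and docBase are placeholders.

<Context path="/myapp" docBase="myapp">
  <!-- Default in-memory manager with a limit on concurrent sessions
       and a file used to persist sessions across restarts -->
  <Manager className="org.apache.catalina.session.StandardManager"
           maxActiveSessions="500"
           pathname="SESSIONS.ser" />
</Context>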


Python Multimedia: Working with Audios

Packt
30 Aug 2010
14 min read
(For more resources on Python, see here.)

So let's get on with it!

Installation prerequisites

Since we are going to use an external multimedia framework, it is necessary to install the packages mentioned in this section.

GStreamer

GStreamer is a popular open source multimedia framework that supports audio/video manipulation of a wide range of multimedia formats. It is written in the C programming language and provides bindings for other programming languages, including Python. Several open source projects use the GStreamer framework to develop their own multimedia applications. Throughout this article, we will make use of the GStreamer framework for audio handling. In order to get this working with Python, we need to install both GStreamer and the Python bindings for GStreamer.

Windows platform

The binary distribution of GStreamer is not provided on the project website http://www.gstreamer.net/. Installing it from source may require considerable effort on the part of Windows users. Fortunately, the GStreamer WinBuilds project provides pre-compiled binary distributions. Here is the URL to the project website: http://www.gstreamer-winbuild.ylatuya.es

The binary distributions for GStreamer as well as its Python bindings (Python 2.6) are available in the Download area of the website: http://www.gstreamer-winbuild.ylatuya.es/doku.php?id=download

You need to install two packages: first GStreamer, and then the Python bindings to GStreamer.

Download and install the GPL distribution of GStreamer available on the GStreamer WinBuilds project website. The name of the GStreamer executable is GStreamerWinBuild-0.10.5.1.exe. The version should be 0.10.5 or higher. By default, this installation will create a folder C:\gstreamer on your machine. The bin directory within this folder contains the runtime libraries needed while using GStreamer.

Next, install the Python bindings for GStreamer. The binary distribution is available on the same website. Use the executable Pygst-0.10.15.1-Python2.6.exe pertaining to Python 2.6. The version should be 0.10.15 or higher.

GStreamer WinBuilds appears to be an independent project. It is based on the OSSBuild developing suite. Visit http://code.google.com/p/ossbuild/ for more information. It could happen that the GStreamer binary built with Python 2.6 is no longer available on the mentioned website at the time you are reading this book. Therefore, it is advised that you contact the developer community of OSSBuild. Perhaps they might help you out!

Alternatively, you can build GStreamer from source on the Windows platform, using a Linux-like environment for Windows, such as Cygwin (http://www.cygwin.com/). Under this environment, you can first install dependent software packages such as Python 2.6, the gcc compiler, and others. Download the gst-python-0.10.17.2.tar.gz package from the GStreamer website http://www.gstreamer.net/. Then extract this package and install it from source using the Cygwin environment. The INSTALL file within this package will have installation instructions.

Other platforms

Many of the Linux distributions provide a GStreamer package. You can search for the appropriate gst-python distribution (for Python 2.6) in the package repository. If such a package is not available, install gst-python from source as discussed earlier in the Windows platform section. If you are a Mac OS X user, visit http://py26-gst-python.darwinports.com/.
It has detailed instructions on how to download and install the package Py26-gst-python version 0.10.17 (or higher).

Mac OS X 10.5.x (Leopard) comes with the Python 2.5 distribution. If you are using this default version of Python, GStreamer Python bindings built against Python 2.5 are available on the darwinports website: http://gst-python.darwinports.com/

PyGobject

There is a free multiplatform software utility library called 'GLib'. It provides data structures such as hash maps, linked lists, and so on. It also supports the creation of threads. The 'object system' of GLib is called GObject. Here, we need to install the Python bindings for GObject. The Python bindings are available on the PyGTK website at: http://www.pygtk.org/downloads.html.

Windows platform

The binary installer is available on the PyGTK website. The complete URL is: http://ftp.acc.umu.se/pub/GNOME/binaries/win32/pygobject/2.20/. Download and install version 2.20 for Python 2.6.

Other platforms

For Linux, the source tarball is available on the PyGTK website. There could even be a binary distribution in the package repository of your Linux operating system. The direct link to version 2.21 of PyGObject (source tarball) is: http://ftp.gnome.org/pub/GNOME/sources/pygobject/2.21/

If you are a Mac user and you have Python 2.6 installed, a distribution of PyGObject is available at http://py26-gobject.darwinports.com/. Install version 2.14 or later.

Summary of installation prerequisites

The following summarizes the packages needed for this article.

GStreamer
Download location: http://www.gstreamer.net/
Version: 0.10.5 or later
Windows platform: Install using the binary distribution available on the GStreamer WinBuild website: http://www.gstreamer-winbuild.ylatuya.es/doku.php?id=download. Use GStreamerWinBuild-0.10.5.1.exe (or a later version if available).
Linux/Unix/OS X platforms: Linux: use the GStreamer distribution in the package repository. Mac OS X: download and install by following the instructions on the website: http://gstreamer.darwinports.com/.

Python bindings for GStreamer
Download location: http://www.gstreamer.net/
Version: 0.10.15 or later for Python 2.6
Windows platform: Use the binary provided by the GStreamer WinBuild project. See http://www.gstreamer-winbuild.ylatuya.es for details pertaining to Python 2.6.
Linux/Unix/OS X platforms: Linux: use the gst-python distribution in the package repository. Mac OS X: use this package (if you are using Python 2.6): http://py26-gst-python.darwinports.com/. Linux/Mac: build and install from the source tarball.

Python bindings for GObject ("PyGObject")
Download location: source distribution at http://www.pygtk.org/downloads.html
Version: 2.14 or later for Python 2.6
Windows platform: Use the binary package pygobject-2.20.0.win32-py2.6.exe.
Linux/Unix/OS X platforms: Linux: install from source if pygobject is not available in the package repository. Mac: use the package on darwinports (if you are using Python 2.6); see http://py26-gobject.darwinports.com/ for details.

Testing the installation

Ensure that GStreamer and its Python bindings are properly installed. It is simple to test this. Just start Python from the command line and type the following:

>>> import pygst

If there is no error, it means the Python bindings are installed properly. Next, type the following:

>>> pygst.require("0.10")
>>> import gst

If this import is successful, we are all set to use GStreamer for processing audio and video! If import gst fails, it will probably complain that it is unable to load some required DLL/shared object.
In this case, check your environment variables and make sure that the PATH variable has the correct path to the gstreamer/bin directory. The following lines of code in a Python interpreter show the typical location of the pygst and gst modules on the Windows platform.

>>> import pygst
>>> pygst
<module 'pygst' from 'C:\Python26\lib\site-packages\pygst.pyc'>
>>> pygst.require('0.10')
>>> import gst
>>> gst
<module 'gst' from 'C:\Python26\lib\site-packages\gst-0.10\gst\__init__.pyc'>

Next, test whether PyGObject is successfully installed. Start the Python interpreter and try importing the gobject module.

>>> import gobject

If this works, we are all set to proceed!

A primer on GStreamer

In this article, we will be using the GStreamer multimedia framework extensively. Before we move on to the topics that teach us various audio processing techniques, a primer on GStreamer is necessary. So what is GStreamer? It is a framework on top of which one can develop multimedia applications. The rich set of libraries it provides makes it easier to develop applications with complex audio/video processing capabilities. Fundamental components of GStreamer are briefly explained in the coming sub-sections. Comprehensive documentation is available on the GStreamer project website; the GStreamer Application Development Manual is a very good starting point. In this section, we will briefly cover some of the important aspects of GStreamer. For further reading, you are recommended to visit the GStreamer project website: http://www.gstreamer.net/documentation/

gst-inspect and gst-launch

We will start by learning two important GStreamer commands. GStreamer can be run from the command line by calling gst-launch-0.10.exe (on Windows) or gst-launch-0.10 (on other platforms). The following command shows a typical execution of GStreamer on Linux. We will see what a pipeline means in the next sub-section.

$ gst-launch-0.10 pipeline_description

GStreamer has a plugin architecture and supports a huge number of plugins. To see more details about any plugin in your GStreamer installation, use the command gst-inspect-0.10 (gst-inspect-0.10.exe on Windows). We will use this command quite often. Its use is illustrated here.

$ gst-inspect-0.10 decodebin

Here, decodebin is a plugin. Upon execution of the preceding command, detailed information about the plugin decodebin is printed.

Elements and pipeline

In GStreamer, the data flows in a pipeline. Various elements are connected together forming a pipeline, such that the output of the previous element is the input to the next one. A pipeline can be logically represented as follows:

Element1 ! Element2 ! Element3 ! Element4 ! Element5

Here, Element1 through Element5 are the element objects chained together by the symbol !. Each of the elements performs a specific task. One of the element objects performs the task of reading input data such as an audio or a video. Another element decodes the file read by the first element, whereas another element performs the job of converting this data into some other format and saving the output. As stated earlier, linking these element objects in a proper manner creates a pipeline.

The concept of a pipeline is similar to the one used in Unix. Following is a Unix example of a pipeline, where the vertical separator | defines the pipe.

$ ls -la | more

Here, ls -la lists all the files in a directory. However, sometimes this list is too long to be displayed in the shell window. So, adding | more allows a user to navigate the data.
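Before looking at a full command-line example, here is how the same chaining idea can be expressed through the Python bindings. This is only a rough sketch: the file path is a placeholder, and it assumes the gst-python 0.10 API (gst.parse_launch, which builds a pipeline from the same textual description used on the command line, and the gst.Bus poll method).

import pygst
pygst.require("0.10")
import gst

# Build a pipeline from a textual description; the location below is just a placeholder.
pipeline = gst.parse_launch(
    "filesrc location=/path/to/file.ogg ! decodebin ! audioconvert ! fakesink")

# Start the data flow and block until end-of-stream or an error is posted on the bus.
pipeline.set_state(gst.STATE_PLAYING)
bus = pipeline.get_bus()
bus.poll(gst.MESSAGE_EOS | gst.MESSAGE_ERROR, -1)

# Always return the pipeline to the NULL state to release its resources.
pipeline.set_state(gst.STATE_NULL)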
Now let's see a realistic example of running GStreamer from the command prompt.

$ gst-launch-0.10 -v filesrc location=path/to/file.ogg ! decodebin ! audioconvert ! fakesink

For a Windows user, the command name would be gst-launch-0.10.exe. The pipeline is constructed by specifying different elements. The ! symbol links the adjacent elements, thereby forming the whole pipeline for the data to flow through. For the Python bindings of GStreamer, the abstract base class for pipeline elements is gst.Element, whereas the gst.Pipeline class can be used to create a pipeline instance. In a pipeline, the data is sent to a separate thread where it is processed until it reaches the end or a termination signal is sent.

Plugins

GStreamer is a plugin-based framework, and several plugins are available. A plugin is used to encapsulate the functionality of one or more GStreamer elements. Thus we can have a plugin where multiple elements work together to create the desired output. The plugin itself can then be used as an abstract element in the GStreamer pipeline. An example is decodebin. We will learn about it in the upcoming sections. A comprehensive list of available plugins is available at the GStreamer website http://gstreamer.freedesktop.org. In almost all applications to be developed, the decodebin plugin will be used. For audio processing, the functionality provided by plugins such as gnonlin, audioecho, monoscope, interleave, and so on will be used.

Bins

In GStreamer, a bin is a container that manages the element objects added to it. A bin instance can be created using the gst.Bin class. It is inherited from gst.Element and can act as an abstract element representing a bunch of elements within it. The GStreamer plugin decodebin is a good example representing a bin. The decodebin contains decoder elements; it auto-plugs the decoders to create the decoding pipeline.

Pads

Each element has some sort of connection points to handle data input and output. GStreamer refers to them as pads. Thus an element object can have one or more "receiver pads", termed sink pads, that accept data from the previous element in the pipeline. Similarly, there are "source pads" that take the data out of the element as input to the next element (if any) in the pipeline. The following is a very simple example that shows how source and sink pads are specified.

> gst-launch-0.10.exe fakesrc num-buffers=1 ! fakesink

The fakesrc is the first element in the pipeline. Therefore, it only has a source pad. It transmits the data to the next linked element, that is, fakesink, which only has a sink pad to accept elements. Note that, in this case, since these are fakesrc and fakesink, just empty buffers are exchanged. A pad is defined by the class gst.Pad. A pad can be attached to an element object using the gst.Element.add_pad() method. The following is a diagrammatic representation of a GStreamer element with a pad. It illustrates two GStreamer elements within a pipeline, having a single source and sink pad.

Now that we know how the pads operate, let's discuss some special types of pads. In the example, we assumed that the pads for the element are always 'out there'. However, there are some situations where the element doesn't have the pads available all the time. Such elements request the pads they need at runtime. Such a pad is called a dynamic pad. Another type of pad is called a ghost pad. Both types are discussed in this section.

Dynamic pads

Some objects such as decodebin do not have pads defined when they are created.
Such elements determine the type of pad to be used at runtime. For example, depending on the media file input being processed, the decodebin will create a pad. This is often referred to as a dynamic pad, or sometimes an available pad, as it is not always available in elements such as decodebin.

Ghost pads

As stated in the Bins section, a bin object can act as an abstract element. How is this achieved? For that, the bin uses 'ghost pads' or 'pseudo link pads'. The ghost pads of a bin are used to connect to an appropriate element inside it. A ghost pad can be created using the gst.GhostPad class.

Caps

The element objects send and receive data by using the pads. The type of media data that the element objects will handle is determined by the caps (short for capabilities). It is a structure that describes the media formats supported by the element. The caps are defined by the class gst.Caps.

Bus

A bus refers to the object that delivers the messages generated by GStreamer. A message is a gst.Message object that informs the application about an event within the pipeline. A message is put on the bus using the gst.Bus.gst_bus_post() method. The following code shows an example usage of the bus.

1 bus = pipeline.get_bus()
2 bus.add_signal_watch()
3 bus.connect("message", message_handler)

The first line in the code creates a gst.Bus instance. Here, pipeline is an instance of gst.Pipeline. On the next line, we add a signal watch so that the bus gives out all the messages posted on that bus. Line 3 connects the signal to a Python method. In this example, message is the signal string and the method it calls is message_handler.

Playbin/Playbin2

Playbin is a GStreamer plugin that provides a high-level audio/video player. It can handle a number of things, such as automatic detection of the input media file format, auto-determination of decoders, audio visualization and volume control, and so on. The following line of code creates a playbin element.

playbin = gst.element_factory_make("playbin")

It defines a property called uri. The URI (Uniform Resource Identifier) should be an absolute path to a file on your computer or on the Web. According to the GStreamer documentation, Playbin2 is just the latest unstable version, but once stable, it will replace Playbin. A Playbin2 instance can be created the same way as a Playbin instance.

gst-inspect-0.10 playbin2

With this basic understanding, let us learn about various audio processing techniques using GStreamer and Python.
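To tie these pieces together, the following short sketch plays an audio file with playbin, watches the bus for end-of-stream or error messages, and drives everything with a gobject main loop. It is an illustrative sketch only: the file URI is a placeholder, and it assumes the gst-python 0.10 and PyGObject APIs described above (Python 2.6 syntax, to match the installation instructions).

import pygst
pygst.require("0.10")
import gst
import gobject

def on_message(bus, message, loop):
    # Stop the main loop when playback finishes or an error occurs
    if message.type == gst.MESSAGE_EOS:
        loop.quit()
    elif message.type == gst.MESSAGE_ERROR:
        error, debug = message.parse_error()
        print "Error:", error
        loop.quit()

gobject.threads_init()

player = gst.element_factory_make("playbin", "player")
player.set_property("uri", "file:///path/to/file.ogg")   # placeholder URI

loop = gobject.MainLoop()
bus = player.get_bus()
bus.add_signal_watch()
bus.connect("message", on_message, loop)

player.set_state(gst.STATE_PLAYING)
try:
    loop.run()
finally:
    player.set_state(gst.STATE_NULL)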