
How-To Tutorials - Programming


Securing Your trixbox Server

Packt
12 Oct 2009
6 min read
Start with a good firewall

Never have your trixbox system exposed completely on the open Internet; always make sure it is behind a good firewall. Many people assume that because trixbox runs on Linux it is totally secure, but Linux, like anything else, has its share of vulnerabilities, and if things are not configured properly it is fairly simple for hackers to get in. There are very good open-source firewalls available, such as pfSense, Vyatta, and m0n0wall. Any access to system services, such as HTTP or SSH, should only be done via a VPN or a pseudo-VPN such as Hamachi. The best-designed security starts with exposing as little as possible to the outside world. If we have remote extensions that cannot use VPNs, we will be forced to leave SIP ports open, and the next step will be to secure those as well.

Stopping unneeded services

Since trixbox CE is basically a stock installation of CentOS Linux, very little hardening has been done to secure the system. This lack of hardening is intentional, as the first level of defence should always be a good firewall. Since there will be people who still insist on putting the system in a data center with no firewall, some care will need to be taken to ensure that the system is as secure as possible. The first step is to disable any running services that could be potential security vulnerabilities. We can see the list of services with the chkconfig --list command:

[trixbox1.localdomain rules]# chkconfig --list
anacron        0:off 1:off 2:on  3:on  4:on  5:on  6:off
asterisk       0:off 1:off 2:off 3:off 4:off 5:off 6:off
avahi-daemon   0:off 1:off 2:off 3:off 4:off 5:off 6:off
avahi-dnsconfd 0:off 1:off 2:off 3:off 4:off 5:off 6:off
bgpd           0:off 1:off 2:off 3:off 4:off 5:off 6:off
capi           0:off 1:off 2:off 3:off 4:off 5:off 6:off
crond          0:off 1:off 2:on  3:on  4:on  5:on  6:off
dc_client      0:off 1:off 2:off 3:off 4:off 5:off 6:off
dc_server      0:off 1:off 2:off 3:off 4:off 5:off 6:off
dhcpd          0:off 1:off 2:off 3:off 4:off 5:off 6:off
dhcrelay       0:off 1:off 2:off 3:off 4:off 5:off 6:off
ez-ipupdate    0:off 1:off 2:off 3:off 4:off 5:off 6:off
haldaemon      0:off 1:off 2:off 3:on  4:on  5:on  6:off
httpd          0:off 1:off 2:off 3:on  4:on  5:on  6:off
ip6tables      0:off 1:off 2:off 3:off 4:off 5:off 6:off
iptables       0:off 1:off 2:off 3:off 4:off 5:off 6:off
isdn           0:off 1:off 2:off 3:off 4:off 5:off 6:off
kudzu          0:off 1:off 2:off 3:on  4:on  5:on  6:off
lm_sensors     0:off 1:off 2:on  3:on  4:on  5:on  6:off
lvm2-monitor   0:off 1:on  2:on  3:on  4:on  5:on  6:off
mDNSResponder  0:off 1:off 2:off 3:on  4:on  5:on  6:off
mcstrans       0:off 1:off 2:off 3:off 4:off 5:off 6:off
mdmonitor      0:off 1:off 2:on  3:on  4:on  5:on  6:off
mdmpd          0:off 1:off 2:off 3:off 4:off 5:off 6:off
memcached      0:off 1:off 2:on  3:on  4:on  5:on  6:off
messagebus     0:off 1:off 2:off 3:on  4:on  5:on  6:off
multipathd     0:off 1:off 2:off 3:off 4:off 5:off 6:off
mysqld         0:off 1:off 2:off 3:on  4:on  5:on  6:off
named          0:off 1:off 2:off 3:off 4:off 5:off 6:off
netconsole     0:off 1:off 2:off 3:off 4:off 5:off 6:off
netfs          0:off 1:off 2:off 3:on  4:on  5:on  6:off
netplugd       0:off 1:off 2:off 3:off 4:off 5:off 6:off
network        0:off 1:off 2:on  3:on  4:on  5:on  6:off
nfs            0:off 1:off 2:off 3:off 4:off 5:off 6:off
nfslock        0:off 1:off 2:off 3:on  4:on  5:on  6:off
ntpd           0:off 1:off 2:off 3:on  4:on  5:on  6:off
ospf6d         0:off 1:off 2:off 3:off 4:off 5:off 6:off
ospfd          0:off 1:off 2:off 3:off 4:off 5:off 6:off
portmap        0:off 1:off 2:off 3:on  4:on  5:on  6:off
postfix        0:off 1:off 2:on  3:on  4:on  5:on  6:off
rdisc          0:off 1:off 2:off 3:off 4:off 5:off 6:off
restorecond    0:off 1:off 2:on  3:on  4:on  5:on  6:off
ripd           0:off 1:off 2:off 3:off 4:off 5:off 6:off
ripngd         0:off 1:off 2:off 3:off 4:off 5:off 6:off
rpcgssd        0:off 1:off 2:off 3:on  4:on  5:on  6:off
rpcidmapd      0:off 1:off 2:off 3:on  4:on  5:on  6:off
rpcsvcgssd     0:off 1:off 2:off 3:off 4:off 5:off 6:off
saslauthd      0:off 1:off 2:off 3:off 4:off 5:off 6:off
snmpd          0:off 1:off 2:off 3:off 4:off 5:off 6:off
snmptrapd      0:off 1:off 2:off 3:off 4:off 5:off 6:off
sshd           0:off 1:off 2:on  3:on  4:on  5:on  6:off
syslog         0:off 1:off 2:on  3:on  4:on  5:on  6:off
vsftpd         0:off 1:off 2:off 3:off 4:off 5:off 6:off
xinetd         0:off 1:off 2:off 3:on  4:on  5:on  6:off
zaptel         0:off 1:off 2:on  3:on  4:on  5:on  6:off
zebra          0:off 1:off 2:off 3:off 4:off 5:off 6:off

The entries with runlevels set to on are services that start automatically at system startup. The following services are required by trixbox CE and should not be disabled: anacron, crond, haldaemon, httpd, kudzu, lm_sensors, lvm2-monitor, mDNSResponder, mdmonitor, memcached, messagebus, mysqld, network, ntpd, postfix, sshd, syslog, xinetd, and zaptel.

To disable a service, we use the command chkconfig <servicename> off. We can now turn off some of the services that are not needed:

chkconfig ircd off
chkconfig netfs off
chkconfig nfslock off
chkconfig openibd off
chkconfig portmap off
chkconfig restorecond off
chkconfig rpcgssd off
chkconfig rpcidmapd off
chkconfig vsftpd off

We can also stop the services immediately, without having to reboot:

service ircd stop
service netfs stop
service nfslock stop
service openibd stop
service portmap stop
service restorecond stop
service rpcgssd stop
service rpcidmapd stop
service vsftpd stop

Securing SSH

A very common misconception is that by using SSH to access your system, you are safe from outside attacks. SSH access is only as secure as the credentials that protect it. Far too often, we see systems that have been hacked because the root password was easy to guess (passwords like password or trixbox are not safe). No dictionary word is safe, and substituting numbers for letters is poor practice as well. As long as SSH is exposed to the outside, it is vulnerable. If you absolutely must run SSH on the open Internet, the best thing to do is change the port number used to access SSH. This section details the best methods of securing your SSH connections.

Create a remote login account

First, we should create a user on the system and only allow SSH connections from that account. The username should be something that only you know and that is not easily guessed. Here, we will create a user called trixuser and assign a password to it. The password should contain letters, numbers, and symbols, and not be based on a dictionary word. Try stringing it into a sentence, making sure to use letters, numbers, and symbols. Spaces in passwords work well too, and are hard to include in scripts that try to break into your server. A simple tool for creating hard-to-guess passwords can be found at http://www.pctools.com/guides/password/.

[trixbox1.localdomain init.d]# useradd trixuser
[trixbox1.localdomain init.d]# passwd trixuser

Now ensure that the new account works by using SSH to log in to the trixbox CE server with it. If it does not let you in, make sure the password is correct or reset it. If it works, continue on. Allowing only one account SSH access to the system is a great way to lock out most brute-force attacks. To do this, we edit /etc/ssh/sshd_config and add the following line:

AllowUsers trixuser

The PermitRootLogin setting can also be edited so that root cannot log in over SSH. Remove the # from in front of the setting and change yes to no:

PermitRootLogin no
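Putting these settings together, the relevant part of /etc/ssh/sshd_config might look like the following sketch. The port number is an arbitrary example (the article only says to move SSH off its default port), so pick any unused port and make sure your firewall permits it:

# /etc/ssh/sshd_config (excerpt)
Port 2222              # example value; anything other than the default 22
Protocol 2             # allow only the SSH-2 protocol
PermitRootLogin no     # root may no longer log in directly
AllowUsers trixuser    # only this account may connect over SSH

After saving the file, restart the daemon with service sshd restart, and keep your current session open until you have verified that a new login with trixuser works.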


Customizing and Extending the ASP.NET MVC Framework

Packt
12 Oct 2009
5 min read
(For more resources on .NET, see here.)

Creating a control

When building applications, you probably also build controls. Controls are re-usable components that contain functionality which can be re-used in different locations. In ASP.NET Webforms, a control is much like an ASP.NET web page: you can add existing web server controls and markup to a custom control and define properties and methods for it. When, for example, a button on the control is clicked, the page is posted back to the server, which performs the actions required by the control. The ASP.NET MVC framework does not support ViewState and postbacks and, therefore, cannot handle events that occur in the control. In ASP.NET MVC, controls are mainly re-usable portions of a view, called partial views, which can be used to display static HTML and generated content based on ViewData received from a controller.

In this topic, we will create a control to display employee details. We will start by creating a new ASP.NET MVC application using File | New | Project... in Visual Studio, and selecting ASP.NET MVC Application under Visual C# - Web. First of all, we will create a new Employee class inside the Models folder. The code for this Employee class is:

public class Employee
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
    public string Department { get; set; }
}

On the home page of our web application, we will list all of our employees. In order to do this, modify the Index action method of the HomeController to pass a list of employees to the view in the ViewData dictionary. Here's an example that creates a list of two employees and passes it to the view:

public ActionResult Index()
{
    ViewData["Title"] = "Home Page";
    ViewData["Message"] = "Our employees welcome you to our site!";

    List<Employee> employees = new List<Employee>
    {
        new Employee
        {
            FirstName = "Maarten", LastName = "Balliauw",
            Email = "maarten@maartenballiauw.be", Department = "Development"
        },
        new Employee
        {
            FirstName = "John", LastName = "Kimble",
            Email = "john@example.com", Department = "Development"
        }
    };

    return View(employees);
}

The corresponding view, Index.aspx in the Views | Home folder of our ASP.NET MVC application, should be modified to accept a List<Employee> as a model. To do this, edit the code-behind file Index.aspx.cs and modify its contents as follows:

using System.Collections.Generic;
using System.Web.Mvc;
using ControlExample.Models;

namespace ControlExample.Views.Home
{
    public partial class Index : ViewPage<List<Employee>>
    {
    }
}

In the Index.aspx view, we can now use this list of employees. Because we will display details of more than one employee somewhere else in our ASP.NET MVC web application, let's make this a partial view. Right-click the Views | Shared folder, click on Add | New Item... and select the MVC View User Control item template under Visual C# | Web | MVC. Name the partial view DisplayEmployee.ascx.

The ASP.NET MVC framework provides the flexibility to use a strong-typed version of the ViewUserControl class, just as the ViewPage class does. The key difference between ViewUserControl and ViewUserControl<T> is that with the latter, the type of view data is explicitly passed in, whereas the non-generic version contains only a dictionary of objects. Because the DisplayEmployee.ascx partial view will be used to render items of the type Employee, we can modify its code-behind file, DisplayEmployee.ascx.cs, and make it strong-typed:

using ControlExample.Models;

namespace ControlExample.Views.Shared
{
    public partial class DisplayEmployee : System.Web.Mvc.ViewUserControl<Employee>
    {
    }
}

In the view markup of our partial view, the model can now easily be referenced. Just as with a regular ViewPage, the ViewUserControl has a ViewData property containing a Model property of the type Employee. Add the following code to DisplayEmployee.ascx:

<%@ Control Language="C#" AutoEventWireup="true"
    CodeBehind="DisplayEmployee.ascx.cs"
    Inherits="ControlExample.Views.Shared.DisplayEmployee" %>
<%= Html.Encode(Model.LastName) %>, <%= Html.Encode(Model.FirstName) %><br/>
<em><%= Html.Encode(Model.Department) %></em>

The control can now be used on any view or control in the application. In the Views | Home | Index.aspx view, use the Model property (which is a List<Employee>) and render the control that we have just created for each employee:

<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
    AutoEventWireup="true" CodeBehind="Index.aspx.cs"
    Inherits="ControlExample.Views.Home.Index" %>
<asp:Content ID="indexContent" ContentPlaceHolderID="MainContent" runat="server">
    <h2><%= Html.Encode(ViewData["Message"]) %></h2>
    <p>Here are our employees:</p>
    <ul>
    <% foreach (var employee in Model) { %>
        <li>
            <% Html.RenderPartial("DisplayEmployee", employee); %>
        </li>
    <% } %>
    </ul>
</asp:Content>

In case the control's ViewData type is equal to the view page's ViewData type, another method of rendering can also be used. This method is similar to ASP.NET Webforms controls, and allows you to specify a control as a tag. Optionally, a ViewDataKey can be specified; the control will then fetch its data from the ViewData dictionary entry having this key.

<uc1:EmployeeDetails ID="EmployeeDetails1" runat="server" ViewDataKey="...." />

For example, if the ViewData contains a key emp that is filled with an Employee instance, the user control could be rendered using the following markup:

<uc1:EmployeeDetails ID="EmployeeDetails1" runat="server" ViewDataKey="emp" />

After running the ASP.NET MVC web application, the employee list is rendered by the partial view for each item.
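One detail the excerpt does not show: for the <uc1:EmployeeDetails> tag to compile, the user control normally has to be registered on the page first. The following directive is a sketch; the Src path and tag names are assumptions based on the folder layout used above:

<%@ Register Src="~/Views/Shared/DisplayEmployee.ascx"
    TagPrefix="uc1" TagName="EmployeeDetails" %>

Placed at the top of Index.aspx, this maps the partial view to the uc1:EmployeeDetails tag used in the markup above.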


Documenting Your Python Project - Part 1

Packt
12 Oct 2009
7 min read
Documenting Your Project

Documentation is work that is often neglected by developers, and sometimes by managers. This is often due to a lack of time towards the end of development cycles, and to the fact that people think they are bad at writing. Some of them are, but the majority are able to produce fine documentation. In any case, the result is disorganized documentation made of documents written in a rush. Developers hate doing this kind of work most of the time. Things get even worse when existing documents need to be updated. Many projects out there provide poor, out-of-date documentation because the manager does not know how to deal with it.

But setting up a documentation process at the beginning of the project, and treating documents as if they were modules of code, makes documenting easier. Writing can even be fun when a few rules are followed. This article provides a few tips to start documenting your project through:

- The seven rules of technical writing that summarize the best practices
- A reStructuredText primer, which is a plain text markup syntax used in most Python projects
- A guide for building good project documentation

The Seven Rules of Technical Writing

Writing good documentation is easier in many aspects than writing code. Most developers think it is very hard, but by following a simple set of rules it becomes really easy. We are not talking here about writing a book of poems, but a comprehensive piece of text that can be used to understand a design, an API, or anything that makes up the code base. Every developer is able to produce such material, and this section provides seven rules that can be applied in all cases:

- Write in two steps: Focus on ideas, and then on reviewing and shaping your text.
- Target the readership: Who is going to read it?
- Use a simple style: Keep it straight and simple. Use good grammar.
- Limit the scope of the information: Introduce one concept at a time.
- Use realistic code examples: Foos and bars should be dropped.
- Use a light but sufficient approach: You are not writing a book!
- Use templates: Help the readers to get habits.

These rules are mostly inspired by and adapted from Agile Documentation, a book by Andreas Rüping that focuses on producing the best documentation in software projects.

Write in Two Steps

Peter Elbow, in Writing with Power, explains that it is almost impossible for any human being to produce a perfect text in one shot. The problem is that many developers write documentation and try to come up with a perfect text directly. The only way they succeed in this exercise is by stopping the writing after every two sentences to read them back and make corrections. This means that they are focusing on both the content and the style of the text. This is too hard for the brain, and the result is often not as good as it could be. A lot of time and energy is spent polishing the style and shape of the text before its meaning is completely thought through.

Another approach is to drop the style and organization of the text and focus on its content. All ideas are laid down on paper, no matter how they are written. The developer starts to write a continuous stream and does not pause when he or she makes grammatical mistakes, or for anything that is not about the content. For instance, it does not matter if the sentences are barely understandable, as long as the ideas are written down. He or she just writes down what he or she wants to say, with a rough organization.

By doing this, the developer focuses on what he or she wants to say, and will probably get more content out of his or her brain than initially expected. Another side-effect of free writing is that ideas not directly related to the topic will easily come to mind. A good practice is to write them down on a second sheet or screen when they appear, so they are not lost, and then get back to the main writing.

The second step consists of reading back the whole text and polishing it so that it is comprehensible to everyone. Polishing a text means enhancing its style, correcting its faults, reorganizing it a bit, and removing any redundant information. When the time dedicated to writing documentation is limited, a good practice is to cut this time into two equal halves: one for writing the content, and one for cleaning and organizing the text.

Focus on the content, and then on style and cleanliness.

Target the Readership

When starting a text, there is a simple question the writer should consider: Who is going to read it? This is not always obvious, as a technical text explains how a piece of software works, and is often written for every person who might get and use the code. The reader can be a manager who is looking for an appropriate technical solution to a problem, or a developer who needs to implement a feature with it. A designer might also read it to know whether the package fits his or her needs from an architectural point of view.

Let's apply a simple rule: each text should have only one kind of reader. This philosophy makes the writing easier. The writer knows precisely what kind of reader he or she is dealing with, and can provide concise and precise documentation that is not vaguely intended for all kinds of readers. A good practice is to provide a small introductory text that explains in one sentence what the documentation is about, and guides the reader to the appropriate part:

Atomisator is a product that fetches RSS feeds and saves them in a
database, with a filtering process.

If you are a developer, you might want to look at the API description
(api.txt).
If you are a manager, you can read the features list and the FAQ
(features.txt).
If you are a designer, you can read the architecture and
infrastructure notes (arch.txt).

By taking care of directing your readers in this way, you will probably produce better documentation.

Know your readership before you start to write.

Use a Simple Style

Seth Godin is one of the best-selling writers on marketing topics. You might want to read Unleashing the Ideavirus, which is available for free on the Internet (http://en.wikipedia.org/wiki/Unleashing_the_Ideavirus). Lately, he made an analysis on his blog to try to understand why his books sold so well. He made a list of all the best sellers in the marketing area and compared the average number of words per sentence in each of them. He realized that his books had the lowest number of words per sentence (thirteen words). This simple fact, Seth explained, proved that readers prefer short and simple sentences, rather than long and stylish ones. By keeping sentences short and simple, your writing will consume less brain power for its content to be extracted, processed, and then understood. Writing technical documentation aims to provide a software guide to readers. It is not a fiction story, and should be closer to your microwave manual than to the latest Stephen King novel.

A few tips to keep in mind are:

- Use simple sentences; they should not be longer than two lines.
- Each paragraph should be composed of three or four sentences at the most, expressing one main idea. Let your text breathe.
- Don't repeat yourself too much: avoid journalistic styles where ideas are repeated again and again to make sure they are understood.
- Don't use several tenses. The present tense is enough most of the time.
- Do not make jokes in the text if you are not a really fine writer. Being funny in a technical book is really hard, and few writers master it. If you really want to distill some humor, keep it in code examples and you will be fine.

You are not writing fiction, so keep the style as simple as possible.


Documenting Your Python Project - Part 2

Packt
12 Oct 2009
11 min read
Building the Documentation

An easier way to guide your readers and your writers is to provide each of them with helpers and guidelines, as we have learned in the previous section of this article. From a writer's point of view, this is done by having a set of reusable templates, together with a guide that describes how and when to use them in a project. It is called a documentation portfolio. From a reader's point of view, being able to browse the documentation painlessly, and getting used to finding information efficiently, is done by building a document landscape.

Building the Portfolio

There are many kinds of documents a software project can have, from low-level documents that refer directly to the code, to design papers that provide a high-level overview of the application. For instance, Scott Ambler defines an extensive list of document types in his book Agile Modeling (http://www.agilemodeling.com/essays/agileArchitecture.htm). He builds a portfolio from early specifications to operations documents. Even the project management documents are covered, so the whole documentation needs are built with a standardized set of templates.

Since a complete portfolio is tightly related to the methodologies used to build the software, this article will only focus on a common subset that you can complete with your specific needs. Building an efficient portfolio takes a long time, as it captures your working habits. A common set of documents in software projects can be classified in three categories:

- Design: All documents that provide architectural information and low-level design information, such as class diagrams or database diagrams
- Usage: Documents on how to use the software; this can be in the shape of a cookbook and tutorials, or module-level help
- Operations: Guidelines on how to deploy, upgrade, or operate the software

Design

The purpose of design documentation is to describe how the software works and how the code is organized. It is used by developers to understand the system, but is also a good entry point for people who are trying to understand how the application works. The different kinds of design documents a software can have are:

- Architecture overview
- Database models
- Class diagrams with dependencies and hierarchy relations
- User interface wireframes
- Infrastructure description

Mostly, these documents are composed of some diagrams and a minimum amount of text. The conventions used for the diagrams are very specific to the team and the project, and this is perfectly fine as long as it is consistent. UML provides thirteen diagrams that cover most aspects of a software design. The class diagram is probably the most used one, but it is possible to describe every aspect of software with it. See http://en.wikipedia.org/wiki/Unified_Modeling_Language#Diagrams.

Following a specific modeling language such as UML is not often fully done, and teams just make up their own way through their common experience. They pick up good practices from UML or other modeling languages, and create their own recipes. For instance, for architecture overview diagrams, some designers just draw boxes and arrows on a whiteboard, without following any particular design rules, and take a picture of it. Others work with simple drawing programs such as Dia (http://www.gnome.org/projects/dia) or Microsoft Visio (not open source, so not free), since that is enough to understand the design.

Database model diagrams depend on the kind of database you are using. There are complete data modeling software applications that provide drawing tools to automatically generate tables and their relations. But this is overkill in Python most of the time. If you are using an ORM such as SQLAlchemy (for instance), simple boxes with lists of fields, together with table relations, are enough to describe your mappings before you start to write them.

Class diagrams are often simplified UML class diagrams: there is no need in Python to specify the protected members of a class, for instance. So the tools used for an architecture overview diagram fit this need too.

User interface diagrams depend on whether you are writing a web or a desktop application. Web applications often describe the center of the screen, since the header, footer, left, and right panels are common. Many web developers just hand-draw those screens and capture them with a camera or a scanner. Others create prototypes in HTML and take screen snapshots. For desktop applications, snapshots of prototype screens, or annotated mock-ups made with tools such as Gimp or Photoshop, are the most common way.

Infrastructure overview diagrams are like architecture diagrams, but they focus on how the software interacts with third-party elements, such as mail servers, databases, or any kind of data stream.

Common Template

The important point when creating such documents is to make sure the target readership is perfectly known, and the content scope is limited. So a generic template for design documents can provide a light structure with a little advice for the writer. Such a structure can include:

- Title
- Author
- Tags (keywords)
- Description (abstract)
- Target (Who should read this?)
- Content (with diagrams)
- References to other documents

The content should be three or four screens (a 1024x768 average screen) at the most, to be sure to limit the scope. If it gets bigger, it should be split into several documents or summarized. The template also provides the author's name and a list of tags to manage its evolution and ease its classification. This will be covered later in the article.

Paster is the right tool to use to provide templates for documentation. pbp.skels implements the design template described, and can be used exactly like code generation. A target folder is provided and a few questions are answered:

$ paster create -t pbp_design_doc design
Selected and implied templates:
  pbp.skels#pbp_design_doc  A Design document

Variables:
  egg:     design
  package: design
  project: design
Enter title ['Title']: Database specifications for atomisator.db
Enter short_name ['recipe']: mappers
Enter author (Author name) ['John Doe']: Tarek
Enter keywords ['tag1 tag2']: database mapping sql

Creating template pbp_design_doc
Creating directory ./design
  Copying +short_name+.txt_tmpl to ./design/mappers.txt

The result can then be completed:

=========================================
Database specifications for atomisator.db
=========================================

:Author: Tarek

:Tags: database mapping sql

:abstract:
    Write here a small abstract about your design document.

.. contents ::

Who should read this ?
::::::::::::::::::::::

Explain here who is the target readership.

Content
:::::::

Write your document here. Do not hesitate to split it in several
sections.

References
::::::::::

Put here references, and links to other documents.

Usage

Usage documentation describes how a particular part of the software works. This documentation can describe low-level parts, such as how a function works, but also high-level parts, such as command-line arguments for calling the program. This is the most important part of documentation in framework applications, since the target readership is mainly the developers that are going to reuse the code. The three main kinds of documents are:

- Recipe: A short document that explains how to do something. This kind of document targets one readership and focuses on one specific topic.
- Tutorial: A step-by-step document that explains how to use a feature of the software. This document can refer to recipes, and each instance is intended for one readership.
- Module helper: A low-level document that explains what a module contains. This document could be shown (for instance) when you call the help built-in over a module.

Recipe

A recipe answers a very specific problem and provides a solution to resolve it. For example, ActiveState provides a Python Cookbook online (a cookbook is a collection of recipes), where developers can describe how to do something in Python (http://aspn.activestate.com/ASPN/Python/Cookbook). These recipes must be short and are structured like this:

- Title
- Submitter
- Last updated
- Version
- Category
- Description
- Source (the source code)
- Discussion (the text explaining the code)
- Comments (from the web)

Often, they are one screen long and do not go into great detail. This structure perfectly fits a software's needs and can be adapted into a generic structure, where the target readership is added and the category is replaced by tags:

- Title (short sentence)
- Author
- Tags (keywords)
- Who should read this?
- Prerequisites (other documents to read, for example)
- Problem (a short description)
- Solution (the main text, one or two screens)
- References (links to other documents)

The date and version are not useful here, since we will see later that the documentation is managed like source code in the project. Like the design template, pbp.skels provides a pbp_recipe_doc template that can be used to generate this structure:

$ paster create -t pbp_recipe_doc recipes
Selected and implied templates:
  pbp.skels#pbp_recipe_doc  A recipe

Variables:
  egg:     recipes
  package: recipes
  project: recipes
Enter title (use a short question): How to use atomisator.db
Enter short_name ['recipe']: atomisator-db
Enter author (Author name) ['John Doe']: Tarek
Enter keywords ['tag1 tag2']: atomisator db

Creating template pbp_recipe_doc
Creating directory ./recipes
  Copying +short_name+.txt_tmpl to ./recipes/atomisator-db.txt

The result can then be completed by the writer:

========================
How to use atomisator.db
========================

:Author: Tarek

:Tags: atomisator db

.. contents ::

Who should read this ?
::::::::::::::::::::::

Explain here who is the target readership.

Prerequisites
:::::::::::::

Put here the prerequisites for people to follow this recipe.

Problem
:::::::

Explain here the problem resolved in a few sentences.

Solution
::::::::

Put here the solution.

References
::::::::::

Put here references, and links to other recipes.

Tutorial

A tutorial differs from a recipe in its purpose. It is not intended to resolve an isolated problem, but rather describes how to use a feature of the application, step by step. This can be longer than a recipe and can concern many parts of the application. For example, Django provides a list of tutorials on its website. Writing your first Django App, part 1 (http://www.djangoproject.com/documentation/tutorial01) explains in ten screens how to build an application with Django. A structure for such a document can be:

- Title (short sentence)
- Author
- Tags (words)
- Description (abstract)
- Who should read this?
- Prerequisites (other documents to read, for example)
- Tutorial (the main text)
- References (links to other documents)

The pbp_tutorial_doc template is provided in pbp.skels as well, with this structure, which is similar to the design template.

Module Helper

The last template that can be added to our collection is the module helper template. A module helper refers to a single module and provides a description of its contents, together with usage examples. Some tools can automatically build such documents by extracting the docstrings and computing module help using pydoc, such as Epydoc (http://epydoc.sourceforge.net). So it is possible to generate extensive documentation based on API introspection. This kind of documentation is often provided in Python frameworks. For instance, Plone provides an http://api.plone.org server that keeps an up-to-date collection of module helpers. The main problems with this approach are:

- There is no smart selection performed over the modules that are really interesting to document.
- The code can be obfuscated by the documentation.

Furthermore, module documentation provides examples that sometimes refer to several parts of the module, and are hard to split between the functions' and classes' docstrings. The module docstring could be used for that purpose, by writing a text at the top of the module. But this results in a hybrid file composed of a block of text followed by a block of code, which is rather obfuscating when the code represents less than 50% of the total length. If you are the author, this is perfectly fine. But when people try to read the code (not the documentation), they will have to skip over the docstring part.

Another approach is to separate the text into its own file. A manual selection can then be made to decide which Python modules will have a module helper file. The documents can then be separated from the code base and allowed to live their own lives, as we will see in the next part. This is how Python is documented.

Many developers will disagree that doc and code separation is better than docstrings. This approach means that the documentation process is fully integrated in the development cycle; otherwise, it will quickly become obsolete. The docstrings approach solves this problem by providing proximity between the code and its usage example, but doesn't bring it to a higher level: a document that can be used as part of plain documentation.

The template for the module helper is really simple, as it contains just a little metadata before the content is written, as sketched below. The target is not defined, since it is developers who wish to use the module:

- Title (module name)
- Author
- Tags (words)
- Content
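Following this structure, a minimal module helper written in reStructuredText might look like the sketch below. The placeholder sentence is an illustrative assumption, not output from pbp.skels:

=============
atomisator.db
=============

:Author: Tarek

:Tags: atomisator db

Describe here what the module provides, and give short, realistic
usage examples for its main functions and classes.

As usual in reStructuredText, the title underline must be at least as long as the module name above it.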


Support for Developers of Spring Web Flow 2

Packt
12 Oct 2009
9 min read
Build systems

Build systems are not necessary for building web applications with Spring Web Flow, but they greatly assist a developer by resolving dependencies between packages and automating the build process. In this article, we will show you how to build your projects with Apache Ant and Apache Maven.

Ant

Ant is a powerful and very flexible build tool. You write Extensible Markup Language (XML) files that tell Ant how to build your application, where to find your dependencies, and where to copy the compiled files. Often, you won't need to download Ant, as it is already built into popular IDEs such as Eclipse and NetBeans. Ant does not provide an automatic dependency-resolution mechanism, so you will have to download all the libraries your application needs manually. Alternatively, you can use a third-party dependency-resolution system such as Apache Ivy, which we will describe later in this article. When you have obtained a copy of Ant, you can write a build.xml file as shown in the following code:

<?xml version="1.0" encoding="UTF-8"?>
<project name="login.flow" default="compile">
    <description>
        login.flow
    </description>

    <property file="loginflow.properties"/>

    <path id="classpath">
        <fileset dir="lib/">
            <include name="*.jar" />
        </fileset>
    </path>

    <target name="init">
        <mkdir dir="${build}" />
        <mkdir dir="${build}/WEB-INF/classes" />
    </target>

    <target name="assemble-webapp" depends="init">
        <copy todir="${build}" overwrite="y">
            <fileset dir="${webapp-src}">
                <include name="**/*/" />
            </fileset>
        </copy>
    </target>

    <target name="compile" depends="assemble-webapp">
        <javac srcdir="${src}" destdir="${build}/WEB-INF/classes">
            <classpath refid="classpath" />
        </javac>
        <echo>Copying resources</echo>
        <copy todir="${build}/WEB-INF/classes" overwrite="y">
            <fileset dir="${resources}">
                <include name="**/*/" />
            </fileset>
        </copy>
        <echo>Copying libs</echo>
        <copy todir="${build}/WEB-INF/lib" overwrite="y">
            <fileset dir="lib/">
                <include name="*.jar" />
            </fileset>
        </copy>
    </target>
</project>

First of all, we specify that we have defined a few required folders in an external PROPERTIES file. The loginflow.properties file, stored in your project's root folder, looks like this:

src = src/main/java
webapp-src = src/main/webapp
resources = src/main/resources
build = target/chapter02

These properties define the folders where your source code lies, where your libraries are located, and where to copy the compiled files and your resources. You do not have to declare them in a PROPERTIES file, but doing so makes re-use easier; otherwise, you would have to write the folder names everywhere, which would make the build script hard to maintain if the folder layout changes.

In the init target, we create the folders for the finished web application. Next is the assemble-webapp target, which depends on the init target. This means that if you execute the assemble-webapp target, the init target is executed as well. This target copies all the files belonging to your web application (such as the flow definition file and your JSP files) to the output folder. If you want to build your application, you execute the compile target. It initializes the output folder, copies everything your application needs into it, compiles your Java source code, and copies the compiled files along with the dependent libraries.

If you want to use Apache Ivy for automatic dependency resolution, you first have to download the distribution from http://ant.apache.org/ivy. This article refers to Version 2.0.0 Release Candidate 1 of Ivy. Unpack the ZIP file and put the ivy-2.0.0-rc1.jar file in your %ANT_HOME%\lib folder. If you are using the Eclipse IDE, Ant is already built into the IDE. You can add the JAR file to its classpath by right-clicking on the task you want to execute and choosing Run As | Ant Build… In the dialog that appears, you can add the JAR file on the Classpath tab, either by clicking on Add JARs… and selecting a file from your workspace, or by selecting Add External JARs… and looking for the file in your file system.

Afterwards, you just have to tell Ant to load the required libraries automatically by modifying your build script. We have highlighted the important changes (to be made in the XML file) in the following source code:

<project name="login.flow" default="compile">
    ...
    <target name="resolve" description="--> retrieve dependencies with ivy">
        <ivy:retrieve />
    </target>
    ...
</project>

The last step before we can actually build the project involves specifying which libraries you want Ivy to download automatically. We now have to compose an ivy.xml file, stored in your project's root folder, which looks like this:

<ivy-module version="2.0">
    <info organisation="com.webflow2book" module="login.flow"/>
    <dependencies>
        <dependency org="org.springframework.webflow"
                    name="org.springframework.binding" rev="2.0.5.RELEASE" />
        <dependency org="org.springframework.webflow"
                    name="org.springframework.js" rev="2.0.5.RELEASE" />
        <dependency org="org.springframework.webflow"
                    name="org.springframework.webflow" rev="2.0.5.RELEASE" />
    </dependencies>
    ...
</ivy-module>

To keep the example simple, we have only shown the Spring Web Flow entries of the file. In order to be able to build your whole project with Apache Ivy, you will have to add all the other required libraries to the file. The org attribute corresponds to the groupId tag from Maven, as does the name attribute to the artifactId tag. The rev attribute matches the version tag in your pom.xml.

Maven

Maven is a popular application build system published by the Apache Software Foundation. You can get a binary distribution and plenty of information from the project's web site at http://maven.apache.org. After you have downloaded and unpacked the binary distribution, you have to set the M2_HOME environment variable to point to the folder where you unpacked the files. Additionally, we recommend adding the folder %M2_HOME%\bin (on Microsoft® Windows systems) or $M2_HOME/bin (on Unix or Linux systems) to your PATH variable.

Maven has a configuration file called settings.xml, which lies in the %M2_HOME%\conf folder. Usually, you do not edit this file, unless you want to define proxy settings (for example, when you are in a corporate network where you have to specify a proxy server to access the Internet) or want to add additional package repositories. There are several plug-ins for the most popular IDEs around, which make working with Maven a lot easier than just using the command line. If you do not want to use a plug-in, you have to at least know that Maven requires your projects to have a specific folder layout. In the default layout, the root folder, directly below your project's folder, is the src folder. In the main folder, you have all your source files (src/main/java), additional configuration files and other resources you need (src/main/resources), and all JSP and other files you need for your web application (src/main/webapp). The test folder can have the same layout, but is used for all your test cases. Please see the project's website for more information on the folder layout.

To actually build a project with Maven, you need a configuration file for your project. This file is always saved as pom.xml, and lies in the root folder of your project. The pom.xml for our example is too long to be included in this article; nevertheless, we want to show you the basic layout. You can get the complete file from the code bundle uploaded on http://www.packtpub.com/files/code/5425_Code.zip.

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.webflow2book</groupId>
    <artifactId>chapter02</artifactId>
    <packaging>war</packaging>
    <version>0.0.1-SNAPSHOT</version>
    <name>chapter02 Maven Webapp</name>
    <url>http://maven.apache.org</url>

This is a standard file header where you can define the name and version of your project. You can also specify how your project is supposed to be packaged; as we want to build a web application, we used the war option. Next, we can define all the dependencies our project has on external libraries:

    <dependencies>
        <dependency>
            <groupId>org.springframework.webflow</groupId>
            <artifactId>org.springframework.binding</artifactId>
            <version>2.0.5.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.webflow</groupId>
            <artifactId>org.springframework.js</artifactId>
            <version>2.0.5.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.webflow</groupId>
            <artifactId>org.springframework.webflow</artifactId>
            <version>2.0.5.RELEASE</version>
        </dependency>
        ...
    </dependencies>

As you can see, defining a dependency is pretty straightforward. If you are using an IDE plug-in, the IDE can do most of this for you. To build the application, you can either use an IDE or open a command-line window and type commands that trigger the build. To build our example, we enter the project's folder and type:

mvn clean compile war:exploded

This cleans up the target folder, compiles the source files, and assembles all the necessary files for our web application in the target folder. If you use Tomcat, you can point your context docBase to the target folder. The application will be automatically deployed on the startup of Tomcat, and you can test your application.
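For the Tomcat deployment just mentioned, a small context fragment is enough. This sketch is not from the original text; the file location and docBase path are assumptions, so adjust them to where your project actually lives:

<!-- conf/Catalina/localhost/chapter02.xml (hypothetical path) -->
<Context docBase="C:/projects/chapter02/target/chapter02" reloadable="true" />

With this file in place, Tomcat deploys the application under /chapter02 on startup, and the reloadable flag makes it reload the application when the classes in the target folder change.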


Getting Your APEX Components Logic Right

Packt
09 Oct 2009
10 min read
Pre-generation editing

After reading this article, we will understand our project a lot better and, to a certain level, we will be able to control the way our application is generated. Generation is often performed more than once, as you refine the definitions and settings between iterations. In this article we will learn many ways to edit the project in order to generate optimally, but we will not cover every exception in the generation process. If we want to do a real Forms to APEX conversion project, it is very wise to carefully read the help texts in the Migration Documentation provided by Oracle in every APEX instance, especially the appendix called Oracle Forms Generation Capabilities and Workarounds, which will help you to understand the choices that can be made in the generation process. The information in these migration help texts tells us how the different components in Oracle Forms will be converted in APEX, and how to implement business logic in the APEX application. For example, when we take a look at the Block to Page Region Mappings, we learn how APEX converts certain blocks to APEX regions during conversion.

Investigating

When we take a look at our conversion project, we must understand what will be generated. For generation, the most important parts are the blocks in our Forms modules. These are, quite literally, the building blocks our pages in APEX will be based upon. Of course, we have our program units, triggers, and much more; but the pages defined in the APEX application (which we put into production after the project is finished) will be based on blocks, reports, and menus. This is why we need to adjust them before we generate anything. This might seem like a small part of the project when we look at the count of all the components on our project page, but that doesn't make it less important. We can't adjust reports, as they are defined by the query they are built upon, but we can alter the blocks. That's why we focus on those components first.

Data blocks

The building blocks of our APEX pages are the blocks and, of course, the reports. The blocks we can generate in our project are the ones that are based on a database block. Non-database blocks, such as those that hold menus and buttons, are not generated by default, as they would be generated as blank pages. On the block overview page, we get the basic information about the blocks in our project. The way the blocks will be generated is determined by APEX based on the contents, the number of items on the block, and, most importantly, the number of records displayed. For further details on the generation rules, refer to the Migration Guide, Appendix A: Forms Generation Capabilities and Workarounds.

In the Blocks overview page in our conversion project, we notice that not all the blocks are included; in other words, they aren't checked to be included in the project. This is because they do not originate from a database block. To include or exclude a block during generation, we check or uncheck that specific block. Don't confuse this with the applicability of a block. We might also notice that some of the blocks are already set to complete. In our example, the S_CUSTOMER1 and S_CUSTOMER blocks are set to complete. If we take a look inside these components and check the annotations, they are indeed set to complete, and a note has been left for us. As we see in the following screenshot, it states Incorporating Enhanced Query.

The Enhanced Query is something that we will use later in this article. But beware of the statement that a component is complete, as we will see that we might want to alter the query on which the customer's block is based. If we look at a block that is not yet set to complete in the overview page (such as the Orders block) and we look at the Application Express Page Query region in the details screen, we see that only the Original Query is present. This is the query that is in the original Forms XML file we uploaded earlier. Although we have the Original Query present in our page, we can also alter it and customize the query on which this block is based; this will be done later in the article. In this way, we have better control over the way we generate our application. We can't alter this query as it is to be implemented as a Master-Detail Form.

Block items

Each block contains a number of items. These items define the fields in our application and are derived from our Forms XML files. In the block details pages, we can find the details of the items on a particular block. Here we see the most basic information about the items, namely their Type, Prompt, Column Name, and the Triggers on that particular item. We can also see the Name of the item, whether it is a Database Item, whether the item is complete, and whether or not it is Applicable. When a block is set to complete, it is assumed that we have all the information required about the items, as we see in the example shown here.

But there are also cases where we don't get all the information we want about the items. In our case, we might want to customize the query the block is based on, or define the items further. We will cover this later in the article. In the above screenshot, we notice that the Column Name is not known for any of the items. This is an indication that the items will not be generated properly, and that we need to take a further look at the query and, maybe, some of the triggers.

When we want to alter the completeness and applicability of the items in our block, there's a useful function available at the upper-right of the Block Details page. In the Block Tasks section, we find a link that states: Set All Block Items Completeness and Applicability. This function is used to make bulk changes to the items in the block we are in. It can be useful to change the completeness of all items when we are not sure what more needs to be done. To set the completeness or the applicability with a bulk change on all the items, we click on the link in the Block Tasks region, which takes us to the Set Block Item & Trigger Status page. Here we can select the Attribute (Items, Block Triggers, or Item Triggers), the Set Tracking Attribute (Complete or Applicable), and the Set Value (Yes or No). To make changes, set the correct attribute, tracking attribute, and value, and then click on Apply Changes.

Original versus Enhanced Query

As mentioned earlier, we can encounter both Original and Enhanced Queries in the blocks of our Forms. The Original Query is taken directly from the XML file, as it is stated in the source of the block we are looking at. So where does the Enhanced Query originate from? It is one of the automatically generated parts of the Forms Conversion tool in APEX: if a block contains a POST-QUERY trigger, the Forms Conversion tool generates an Enhanced Query for us.

In the following screenshot, we see both the Enhanced Query and the Original Query in the S_CUSTOMER block. We can clearly notice the additional lines at the bottom of the Enhanced Query. The query in the Enhanced Query section still looks a lot like the one in the Original Query section, but is slightly altered. The code is generated automatically by combining the Original Query with the POST-QUERY trigger on this block. Please note that the query is generated automatically by APEX, so we will still need to check it and, probably, optimize it to work properly. The following screenshot shows us the POST-QUERY trigger. Notice that it's set to both applicable and complete. This is because the code is now embedded in the Enhanced Query, and so the trigger is taken care of for our project.

Triggers

Besides items, blocks also contain triggers. These define the actions in our blocks and are, therefore, equally important. Most of the triggers are very Forms-specific, but it's nice to be the judge of that ourselves. In the Orders block, we have the Block Triggers region that contains the triggers in our orders block. The region tells us the name, applicability, and completeness of each trigger, gives us a snippet of the code inside it, and tells us the level it is set to (ITEM or BLOCK). Many of the triggers in our project need to be implemented post-generation, which will be discussed later in this article. But as mentioned above, there is one trigger that we need in the pre-generation stage of our project: the POST-QUERY trigger.

In this example, the applicability of the POST-QUERY trigger in the orders block is set to No. This is also the reason why we have no Enhanced Query to choose from in this block. The reasons behind setting the trigger to not applicable can be many, and you can learn more about them if you read the migration help texts carefully. We probably want to change the applicability of the trigger ourselves, because the POST-QUERY trigger contains some necessary information on how we need to define our block. If we click on the edit link (the pencil icon) for the POST-QUERY trigger, we can alter the applicability. Set the value for Applicable to Yes and click on Apply Changes. This takes us back to the Block Details screen. In the Triggers region, we can see that the applicability of the POST-QUERY trigger is now set to Yes. If we now scroll up to the Application Express Page Query region, we can see that the Enhanced Query is in place. As shown in the following screenshot, an extended version of the Original Query has been generated automatically, embedding the logic of the POST-QUERY trigger.

For the developers among us: the query produced by the conversion tool in APEX is rarely optimal. We can rewrite the query in the Custom Query section, which we will describe later in this article. We can set the values for our triggers in the same way we set the applicability and completeness of the items in our blocks. In the upper-right corner of the Block Details screen, we find the Block Tasks region, with links to the tasks for items as well as triggers. Click on Set All Block Triggers Completeness and Applicability to navigate to the screen where we can set the values. In the Attribute section, we can choose from both the block-level triggers and the item-level triggers. We can't adjust them all at once, so we may need to adjust them twice.
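To make the Original versus Enhanced distinction concrete, the following SQL sketch illustrates the kind of transformation the tool performs. The column and lookup names are invented for illustration and do not come from the original article:

-- Original Query: the block's base query as found in the Forms XML
SELECT id, name, sales_rep_id
  FROM s_customer;

-- Enhanced Query: the lookup an old POST-QUERY trigger performed
-- (here, fetching the sales rep's name) is folded into the query
SELECT c.id, c.name, c.sales_rep_id,
       (SELECT e.last_name
          FROM s_emp e
         WHERE e.id = c.sales_rep_id) AS sales_rep_name
  FROM s_customer c;

As the article notes, the generated version is rarely optimal; a hand-written join in the Custom Query section usually performs better.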
 

Preparing Your Forms Conversion Using Oracle Application Express (APEX)

Packt
09 Oct 2009
9 min read
When we are participating in a Forms Conversion project, it means we take the source files of our application, turn them into XML files, and upload them into the Forms Conversion part of APEX. This article describes what we do before uploading the XML files and starting our actual Forms Conversion project. Get your stuff! When we talk about source files, it would come in very handy if we got all the right versions of these files. In order to do the Conversion project, we need the same components that are used in the production environment. For these components, we have to get the source files of the components we want to convert. This means we have no use of the runtime files (Oracle Forms runtime files have the FMX extension). In other words, for Forms components we don't need the FMX files, but the FMB source files. These are a few ground rules we have to take into account: We need to make sure that there's no more development on the components we are about to use in our Conversion project. This is because we are now going to freeze our sources and new developments won't be taken into the Conversion project at all. So there will be no changes in our project. Put all the source files in a safe place. In other words, copy the latest version of your files into a new directory to which only you, and perhaps your teammates, have access. If the development team of your organization is using Oracle Designer for the development of its applications, it would be a good idea to generate all the modules from scratch. You would like to use the source on which the runtime files were created only if there are post-generation adjustments to be made in the modules. We need the following files for our Conversion project: Forms Modules: With the FMB extension Object Libraries: With the OLB extension Forms Menus: With the MMB extension PL/SQL Libraries: With the PLL extension Report Files: With the RDF, REX, or JSP extensions When we take these source files, we will be able to create all the necessary XML files that we need for the Forms Conversion project. Creating XML files To create XML files, we need three parts of the Oracle Developer Suite. All of these parts come with a normal 10g or 9i installation of the Developer Suite. These three parts are the Forms Builder, the Reports Builder, and the Forms2XML conversion tool. The Forms2XML conversion tool is the most extensive to understand and is used to create XML files from Form modules, Object Libraries, and Forms Menus. So, we will first discuss the possibilities of this tool. The Forms2XML conversion tool This tool can be used both from the command line as well as a Java applet. As the command line gives us all the possibilities we need and is as easy as a Java applet, we will only use the command-line possibilities. The frmf2xml command comes with some options. The following syntax is used while converting the Forms Modules, the Object Libraries, and the Forms Menus to an XML structure: frmf2xml [option] file [file] In other words, we follow these steps: We first type frmf2xml. Alternatively, we give one of the options with it. We tell the command which file we want to convert, and we have the option to address more than one file for the conversion to XML. We probably want to give the OVERWRITE=YES option with our command. This property ensures that the newly created XML file will overwrite the one with the same name in the directory where we are working. 
If another file with the same name already exists in this directory and we don't give the OVERWRITE option the value YES (the default is NO), the file will not be generated, as we see in the following screenshot:

If any images are used in modules (Forms or Object Libraries), the Forms2XML tool will refer to the image in the XML file created, and will create a TIF file of the image in the directory. The XML files that are created will be stored in the same directory from which we call the command, using the following naming convention:

- formname.fmb will become formname_fmb.xml
- libraryname.olb will become libraryname_olb.xml
- menuname.mmb will become menuname_mmb.xml

To convert the .FMB, .OLB, and .MMB files to XML, we perform the following steps at the command prompt.

Forms Modules

The following steps are done in order to convert an .FMB file to XML:

1. We change the working directory to the directory that has the FMB file. In my example, I have stored all the files in a directory called summit, directly under the C drive:

    C:\>cd C:\summit

2. Now we can call the frmf2xml command to convert one of our Forms Modules to an XML file. In this example, we convert the orders.fmb module:

    C:\summit>frmf2xml OVERWRITE=YES orders.fmb

As we see in the following screenshot, this command creates an XML file called orders_fmb.xml in the working directory:

Object Libraries

To convert an .OLB file to XML, the following steps are needed:

1. We first change the working directory to the directory that the OLB file is in:

    C:\>cd C:\summit

2. Now we can call the frmf2xml command to convert one of our Object Libraries to an XML file. In this example, we convert the Form_Builder_II.olb library as follows:

    C:\summit>frmf2xml OVERWRITE=YES Form_Builder_II.olb

As we see in the following screenshot, the command creates an XML file called Form_Builder_II_olb.xml and two images as .tif files in the working directory:

Forms Menus

To convert an .MMB file to XML, we follow these steps:

1. We change the working directory to the directory that the .MMB file is in:

    C:\>cd C:\summit

2. Now we can call the frmf2xml command to convert one of our Forms Menus to an XML file. In this example, we convert the customers.mmb menu:

    C:\summit>frmf2xml OVERWRITE=YES customers.mmb

As we can see in the following screenshot, the command creates an XML file called customers_mmb.xml in the working directory:

Report Files

In our example, we will convert a report from an RDF file to an XML file. To do this, we follow the steps given here:

1. We need to open the Employees.rdf file with Reports Builder. Open Reports Builder from your Start menu.
2. Once Reports Builder has opened, we cancel the wizard that asks us if we want to create a new report. After this, we use Ctrl+O (or File | Open in the menu) to open the Report File which we want to convert to XML, as we see in the following screenshot:
3. After this, we use Shift+Ctrl+S (or File | Save As in the menu) to save the report. We choose to save the report as a Reports XML (*.xml) file and we click on the Save button, as shown in the following screenshot:

PL/SQL Libraries

To convert PL/SQL Libraries to an XML format, it's easiest to use the convert command that comes with Reports Builder. With this command, called rwconverter, we define the source type, call the source, and define the destination type and the destination.
In this way, we have control over the way we convert the original .pll file to a .pld flat file that we can upload into the APEX Forms converter. It is possible to convert PL/SQL Libraries with the convert option in Forms Builder but, personally, I think this option works better. The rwconverter command takes a few parameters:

- stype: the type of source file we need to convert. In our situation, this is a .pll file, so the value we need to set is pllfile.
- source: the name of the source file, including the extension. In our case, it is wizard.pll.
- dtype: the file type we want to convert our source file to. In our case, it is a .pld file, so the value becomes pldfile.
- dest: the name, including the extension, of the destination file. In our case, it is wizard.pld.

In our example, we use the wizard.pll file that's in our summit files directory. A .pll file like this is normally used to create a PL/SQL Library in the Oracle Database, but this time we will use it to create a .pld flat file that we will upload to APEX.

First, we change to the working directory that has the original .pll file; in our case, the summit directory directly under the C drive:

    C:\>cd C:\summit

After this, we call rwconverter at the command prompt:

    C:\summit>rwconverter stype=pllfile source=wizard.pll dtype=pldfile dest=wizard.pld

When you press the Enter key, a screen will open that is used to do the conversion. You will see that the types and names of the files are the same as we entered them on the command line. We need to click on the OK button to convert the file from .pll to .pld. The conversion may take a few seconds, but when the file has been converted we will see a confirmation that the conversion was successful. After this, we can look in the C:\summit directory and we will see that a file wizard.pld has been created.
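If there are many modules to convert, these commands are easy to script. A minimal Windows batch sketch might look like the following (the directory name is an assumption; adjust it to wherever your frozen sources live):

    @echo off
    rem Convert every Forms source file in C:\summit to XML, overwriting old output
    cd /d C:\summit
    for %%f in (*.fmb *.olb *.mmb) do frmf2xml OVERWRITE=YES %%f

Run this from a Developer Suite command prompt so that frmf2xml is on the PATH.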

Development of Ajax Web Widget

Packt
08 Oct 2009
10 min read
We've seen this kind of thing done by the popular bookmark-sharing website digg.com. We don't need to visit digg.com to digg a URL; we can do it using a small object called a widget, provided by digg.com. This is just one simple example of providing remote functionality; you can find lots of widgets providing different functionality across various websites.

What is a web widget?

A web widget is a small piece of code provided by a third-party website, which can be installed and executed in any kind of web page created using HTML (or XHTML). The common technologies used in web widgets are JavaScript, Ajax, and Adobe Flash, but in some places other technologies, such as HTML combined with CSS, and Java applets, are also used. A web widget can be used for:

- Serving useful information from a website.
- Providing some functionality (like voting or polling).
- Endorsing the products and services of a website.

Web widgets are also known as badges, modules, and snippets, and they are commonly used by bloggers, social network users, and advertisers.

Popular Web Widgets

You can find many examples of web widgets on the World Wide Web. These include:

- Google AdSense: probably the most popular web widget you can see on the World Wide Web. All you have to do is get a few lines of code from Google and place it in your website. Different kinds of contextual advertisements get displayed on your website according to the content displayed on your website.
- Digg this widget: provided by the social bookmarking website digg.com; if you want your news or articles dugg, you're most probably going to do it with this widget. Your website will get lots of visitors if your article gets more and more diggs and gets featured on the first page.
- Del.icio.us widget: you might have seen this widget previously, showing text like "Save to del.icio.us", "Add to del.icio.us", or "saved by XXX users". Del.icio.us is another popular social bookmarking website.
- Snap preview: provided by snap.com, this widget shows a thumbnail preview of a web page.
- Survey widget from Polldaddy.com: if you need to run some kind of poll, you can use this widget to create and manage the poll on your website.
- Google Maps: a rather well-known widget, which adds different map functionality from Google Maps; you must get an API key from Google to use this widget.

Benefits of Providing Web Widgets

I came to know about dzone.com, a social bookmarking website for web and software developers, from a widget I found in a blog. Let's look at the benefits of web widgets:

- A widget can be an excellent publicity tool. Most of the people who come to know about digg.com, del.icio.us, or reddit.com do so from their web widgets on various news portals and blogs.
- Providing web services: this is useful since the user doesn't have to visit that particular website to get its services. For example, a visitor to a website which has a Polldaddy web widget can participate in a survey without visiting polldaddy.com.
- Web widgets can be one way of bringing in income for your website, by utilizing advertising widgets such as Google AdSense (discussed above).
- Widgets can generate more traffic to your website. For example, you can make a widget that evaluates the price of a product on a website. Widgets across multiple websites mean visitors from multiple sources, hence more traffic to your website.
How do Web Widgets work

As can be seen in the above figure, the web widget is used by a widget client, which is a web page. The web page just uses a chunk of code provided by the functionality-providing website. All the services and functionality are provided by the code residing on the remote server, which the widget client doesn't have to care about. In a web widget system, there is an optional component called a Widget Management System which can manage the output of the widget.
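For illustration, a typical embed works along these lines; here is a minimal sketch with a hypothetical provider URL. The page owner pastes a one-line script tag into the widget client page:

    <script type="text/javascript"
            src="http://widgets.example.com/poll.js?id=12345"></script>

The provider's script then injects the actual widget, often as an IFrame pointing back at the remote server (the IFrame technique is covered later in this article):

    // poll.js, served by the provider: everything interesting
    // happens on the remote server, not in the host page
    document.write(
        '<iframe src="http://widgets.example.com/poll?id=12345"' +
        ' width="200" height="300" frameborder="0" scrolling="no"></iframe>'
    );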
Widget Management System

A Widget Management System is used for controlling the various operations of the widget, including things like its height, width, or color. Most widget-providing websites allow you to customize the way the widget looks in blogs or websites. For example, you can customize the size and color of Google AdSense by logging into the Google AdSense account system. Some Widget Management Systems allow you to manage the content of the widget as well. In the survey widget from vizu.com, you can manage parameters like the topic of the poll and the number of answers in the poll.

Concept of Technologies Used

To make the most out of Ajax web widgets, let's get familiar with some technologies.

HTML and XHTML

Hypertext Markup Language (HTML) is one of the best-known markup languages for constructing web pages. As the name implies, we use it to mark up a text document so that web browsers know how to display it. XHTML, on the other hand, is a way to standardize HTML by making HTML a dialect of XML. XHTML comes in different flavors, like transitional, strict, or frameset, with each flavor offering either different capabilities or different degrees of conformance to the XML standard.

Difference between HTML and XHTML

The single biggest difference between HTML and XHTML is that XHTML must be well formed according to XML conventions. Because an XHTML document is essentially XML in nature, simply following traditional HTML practices is not enough; an XHTML document must be well formed. For example, let's take the "img" and "input" elements of HTML:

    <img src="logo.gif" alt="My Site" border="0" >
    <input type="text" name="only_html" id="user_name" value="Not XHTML" >

Both of the above statements are perfect HTML statements, but they are not valid constructions in XHTML, as they are not well formed according to XML rules (the tags are not closed properly). To see the difference, first look at the same tags in valid XHTML syntax:

    <img src="logo.gif" alt="My Site" border="0" />
    <input type="text" name="only_html" id="user_name" value="Not XHTML" />

Did you find the difference? The latter tags are well formed and the previous ones are not. If you look closely, you can see that the tags in the second example are closed like XML. The point is that you should have a well-formed document in order to allow JavaScript to access the DOM. You can also add comments in an HTML document; any information beginning with the character string <!-- and ending with the string --> will be ignored:

    <!-- This is a comment within HTML -->

You can add comments in the same fashion in an XML document as well.

Concept of IFRAME

IFrame (Inline Frame) is a useful HTML element for creating web widgets. Using an IFrame, we can embed an HTML document from a remote server into our document. This feature is very commonly used in web widgets. We can define the width and height of an IFrame using the height and width properties of the IFrame element, and an IFrame can be defined in any part of a web page.

IFrame behaves like an inline object, and the user can scroll through the IFrame to view content which is out of view. We can also remove the scroll bar from the IFrame by setting the scrolling property to no. The embedded document can be reloaded asynchronously using an IFrame, which can be an alternative to the XMLHttpRequest object. For this reason, IFrames are widely used by Ajax applications. You can use an IFrame in an HTML document in the following way:

    <html>
    <body>
    <iframe src="http://www.example.com" width="200">
      If your browser doesn't support IFrame, this text will be displayed in the browser
    </iframe>
    </body>
    </html>

CSS (Cascading Style Sheets)

A Cascading Style Sheet is a style sheet language which is used to define the presentation of documents written in HTML or XHTML. CSS is mainly used for defining the layout, colors, and fonts of the elements of a web page, along with other aspects of the document. While the markup language is used to define the content of the web page, the primary goal of CSS is to separate the document's content from the document's presentation. The Internet media type "text/css" is reserved for CSS.

Using Cascading Style Sheets

CSS can be used in different ways in a web page. It can be written within the <style></style> tag of the web page, it can be defined in the style attribute of an element, or it can be defined in a separate file with the extension .css that is linked to the web page. What is the benefit of using CSS, then? You can define the values for a set of attributes for all HTML elements of a similar type, then change the presentation with one change in the CSS:

    <style type="text/css">all the css declarations go here</style>

But what about defining the CSS properties for different web pages? The above method can be cumbersome, and you would need to replicate the same code across different web pages. To overcome this situation, you can define all the properties in a single file, say style.css, and link that CSS file to the web pages:

    <link href="style.css" type="text/css" rel="stylesheet" />

Furthermore, you can define styles within elements using the style property of HTML elements, which is commonly known as inline styles:

    <p style="text-align:justify">This is content inside the paragraph</p>

As you've seen, we've quickly set the text-align attribute to justify.

Defining Style Sheet rules

A style sheet allows you to change the different attributes of HTML elements by defining styling rules for them. A rule has two components: a selector, which defines which HTML elements the rule applies to, and a declaration, which contains the styling rules themselves.

    p { background-color:#999999 }

In the above CSS rule, p is the selector and background-color:#999999 is the declaration. The above style defines the rule for the paragraph element <p></p>.

    .box { width:300px }

In the above rule, the selector is defined as a class, and it can be used in any element via the class property of the element:

    <p class="box">..content..</p>
    <div class="box">content</div>

A class can be used with any HTML element.

    #container { width:1000px }

"#" is known as an id selector in CSS.
The above rule applies to an element which has the id "container", such as below:

    <div id="container">..content..</div>

We can also define a single set of rules for multiple elements by using a comma (,) between the selectors:

    h1, h2, h3 { font-size:12px; color:#FF0000 }

There are many more ways of defining selectors and CSS properties.
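To tie these selector forms together, here is a small combined sketch (the class and id names are just examples):

    <style type="text/css">
      p { background-color:#999999 }                 /* element selector  */
      .box { width:300px }                           /* class selector    */
      #container { width:1000px }                    /* id selector       */
      h1, h2, h3 { font-size:12px; color:#FF0000 }   /* grouped selectors */
    </style>

    <div id="container">
      <p class="box">This paragraph picks up both the p rule and the .box rule.</p>
    </div>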

Liferay Mail and SMS Text Messenger Portlet

Packt
08 Oct 2009
5 min read
Working with the Mail Portlet

For the purpose of this article, we will use an intranet website called book.com, which is created for a fictitious company named "Palm Tree Publications". In order to let employees manage their emails, we can use the Liferay Mail portlet. As an administrator of "Palm Tree Publications", you need to create a Page called "Mail" under the Page "Community" in the Book Lovers community Public Pages, and add the Mail portlet to the "Mail" Page.

Experiencing Mail Management

First of all, log in as "Palm Tree". Then let's do the above as follows:

1. Add a Page called "Mail" under the Page "Community" in the Book Lovers community Public Pages, if the Page is not already present.
2. If the Mail portlet is not already present, add it to the "Mail" Page of the Book Lovers community where you want to manage mails. You will see the Mail portlet as shown in the following figure.

Let's assume that we set up the mail domain as "cignex.com" in the Enterprise Admin portlet for testing purposes, as we already have a mail engine with this mail domain. Of course, you could set up the mail domain as "book.com", or something else, if you had a mail engine with that mail domain at hand.

As an editor of the editorial department, "Lotti Stein", you may want to manage your mails in the mail domain "cignex.com". You can first choose a user name for your personal company email address, say "admin1234", and register. Let's do it as follows:

1. Log in as "Lotti Stein".
2. Go to the Page "Mail" under the Page "Community" in the Book Lovers community Public Pages.
3. Locate the Mail portlet.
4. Input the value for User name as "admin1234".
5. Click the Register button.

Your new email address is "admin1234@cignex.com". This email address will also serve as your login, as shown in the following figure. You can now check for new messages in your inbox by clicking the Inbox link first. Then you can view the Unread messages, and either Check Mail or create a New mail, as shown in the following figure.

You can go to the mail management Page by clicking on any link of Unread Messages, the Check Mail button, or the New button. Further, you can manage emails through the Mail portlet of your current account. Email management includes the following features (as shown in the following figure): create a New email, Check Mail, Reply to an email, Reply All, Forward emails, Delete emails, Print emails, and Search. Note that the first email, with the subject "Users in Staging server", was sent through the SMS Text Messenger portlet. For more details, refer to the forthcoming section.

How to Set up the Mail Server?

In order to make the Mail portlet work, we have to set up a mail server with the IMAP, POP, and SMTP protocols. Suppose that the enterprise "Palm Tree Publications" has a mail server with the domain "exg3.exghost.com", an account "admin@cignex.com/admin1234", and the IMAP, POP, and SMTP protocols. As an administrator, you need to integrate this mail server in Liferay. Let's do it as follows:

1. Find the file ROOT.xml in $TOMCAT_DIR/conf/Catalina/localhost.
2. Find the mail configuration first.
3. Then configure it as follows:

    <!-- Mail -->
    <Resource
        name="mail/MailSession"
        auth="Container"
        type="javax.mail.Session"
        mail.imap.host="exg3.exghost.com"
        mail.imap.port="143"
        mail.pop.host="exg3.exghost.com"
        mail.pop.port="110"
        mail.store.protocol="imap"
        mail.transport.protocol="smtp"
        mail.smtp.host="exg3.exghost.com"
        mail.smtp.port="2525"
        mail.smtp.auth="true"
        mail.smtp.starttls.enable="true"
        mail.smtp.user="admin@cignex.com"
        password="admin1234"
        mail.smtp.socketFactory.class="javax.net.ssl.SSLSocketFactory" />

In short, the Mail portlet is an Ajax web-mail client. We can configure it to work with any mail server. It reduces page refreshes, since it displays message previews and message lists in a dual-pane window.

How to Set up the Mail Portlet?

If you have the proper permissions, you can change the preferences of the Mail portlet. To change the preferences, simply click the Preferences icon at the upper right of the Mail portlet.

With the Recipients tab selected, you can find potential recipients from the Directory (Enabled or Disabled) and the Organization (My Organization or All Available). Click the Save button after making any changes.

Using the Filters tab, you can set values to filter emails associated with an email address into a folder. Click the Save button after making any changes. Note that the maximum number of email addresses is ten; this number is also configurable in the portal-ext.properties file.

Similarly, the Forward Address tab allows all emails to be forwarded to the email address you want. Enter one email address per line; remove all entries to disable email forwarding. Select Yes to leave, or No to not leave, a copy of the forwarded message. Click the Save button after making any changes.

Further, the Signature tab allows you to set up your signature using the HTML text editor. The signature you have set up will be added to each outgoing message. Click the Save button after making any changes.

The Vacation Message tab allows you to set up vacation messages using the HTML text editor. The vacation message notifies others of your absence (as shown in the following figure). Click the Save button after making any changes.

Getting a Jump-Start with IronPython

Packt
07 Oct 2009
9 min read
Where do you get it?

Before getting started, you'll need to download IronPython 2.0.1 (though the contents of this article could just as easily be applied to past or even future versions). The official IronPython site is www.codeplex.com/ironpython. Here you'll find not only the IronPython bits, but also samples, source code, documentation, and many other resources. After clicking on the "Downloads" tab at the top, you will be presented with three download options: IronPython.msi, IronPython-2.0.1-Bin.zip (the binaries), or IronPython-2.0.1-Src.zip (the source code). If you already have CPython installed (the standard Python implementation), the binaries are probably your best bet: you simply unzip the files to your preferred installation directory and you're done. If you don't have CPython installed, I recommend the IronPython.msi file, since it comes prepackaged with portions of the CPython Standard Library.

Figure 1. IronPython installation directory

There are a few items I would like to highlight in the IronPython installation directory displayed in Figure 1. The first is the FAQ.html file. This covers all of your basic IronPython questions, from licensing to implementation details. Periodically reviewing this while you're learning IronPython will probably save you a lot of frustration. The second item of importance is the two executables, ipy.exe and ipyw.exe. As you probably guessed, these are what you use to launch IronPython; ipy.exe is used for scripts and console applications, while ipyw.exe is reserved for other types of applications (Windows Forms, WPF, and so on). Lastly, I'd like to draw your attention to the Tutorial folder. Inside the Tutorial folder you'll find a Tutorial.html file, in addition to a number of other files. The Tutorial.html file is a comprehensive review of what you need to know to get started with IronPython. If you want to be productive quickly, be sure to at least review the tutorial; it will answer many of your questions.

Visual Studio or a Text Editor?

One thing that neither the ReadMe nor the Tutorial really covers is the tooling story. While Visual Studio 2008 is a viable Python development tool, you may want to consider other options. Personally, I bounce between VS and SciTE, but I'm always watching for new tools that might improve my development experience. There are a number of IDEs and debuggers out there, and you owe it to yourself to investigate them. Sometimes, however, Visual Studio IS the right tool for the job. If that's the case, then you'll need to install the Visual Studio SDK from http://www.microsoft.com/downloads/details.aspx?FamilyID=30402623-93ca-479a-867c-04dc45164f5b&displaylang=en.

Let's Write Some Code!

To get started, let's create a simple Python script and execute it with ipy. In a new file called "sample.py" (Python files are indicated by a ".py" extension), type "print 'Hello, world'". Open a command window, navigate to the directory where you saved sample.py, and then call ipy.exe, passing "sample.py" as an argument. Figure 2 displays what you might expect to see in the console window.

Figure 2. Executing a script using the command line

Not that executing from the command line isn't effective, but I prefer a more efficient approach. Therefore I'm going to use SciTE, the editor I briefly mentioned earlier, to duplicate the example in Figure 2. Why SciTE? I get syntax highlighting, I can run my code simply by hitting F5, and stdout is redirected to SciTE's output window. In short, I never have to leave my coding environment.
If you performed the above "hello, world" example in SciTE, the result would look like Figure 3.

Figure 3. Executing a script using SciTE

Congratulations! You've written your first bit of Python code! The problem is it doesn't really touch any .NET namespaces. Fortunately, this is not a difficult thing to do. Figure 4 shows all the code you need to start working with the System namespace.

Figure 4. You only need a single line of code to gain access to the System namespace

With that simple import statement, we now enjoy access to the entirety of the System namespace. For example, to access the String class we would simply type System.String. That's great for getting started, but what happens when we want to use something like the Regex class? Do we have to type System.Text.RegularExpressions.Regex?

Figure 5. Using .NET regular expressions from IronPython

No! The first line of Figure 5 introduces a new form of the import statement that only imports the specific items you want. In our case, we only want the Regex class. The code in line 3 demonstrates creating a new instance of the Regex class. Note the lack of a "new" keyword; Python considers "new" redundant since you have to include parentheses anyway. Another interesting note is the syntax (or rather, the lack of syntax) for creating a variable. There's no declaration statement or type required. We simply create a name that we set equal to a new instance of the Regex class. If you've ever written any PHP or classic ASP, this should feel pretty familiar to you. Finally, the print statement on line 6 produces the output shown in Figure 6.

Figure 6. Output from the Regex example
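The figures themselves aren't reproduced here; in outline, the code they show is something like the following minimal sketch (the exact pattern and test string are assumptions):

    import System
    from System.Text.RegularExpressions import Regex

    # No 'new' keyword: calling the class creates an instance
    regex = Regex(r"^\d{2}/\d{2}/\d{4}$")

    # Prints 'True' if the input matches the date pattern
    print regex.IsMatch("02/28/2009")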
The last example was easy because IronPython already holds a reference to System and mscorlib. Let's push our limits and create a simple Windows form. This requires a bit more work.

Figure 7. Using the clr module to add a reference

A Quick Review of Python Classes

Figure 7 introduces the clr module as a way of adding references to other libraries in the Global Assembly Cache (GAC). Once we have a reference, we can import the Form, TextBox, and Button classes and start constructing our GUI. Before we do that, though, there are a couple of concepts we need to cover.

Figure 8. Introducing classes and methods

Up until this point, we really haven't needed to create any classes or methods. But now that we need to create a form, we're going to need both. Figure 8 demonstrates a very simple class and an equally simple method. I think it's clear that the "class" keyword defines a class and the "def" keyword defines a method. You probably correctly assumed that "(object)" after "MyClass" is Python's way of expressing inheritance. The "pass" keyword, however, may not be immediately obvious. In Python, classes and methods cannot be empty. Therefore, if you have a class or method that you aren't quite ready to code yet, you can use the "pass" statement until you are ready.

A more subtle characteristic of Figure 8 is the whitespace. In Python, we indent the contents of control structures with four spaces. Tabs will also work, but by convention four spaces are used. In our example above, since "my_method" has no preceding spaces, it's clear that "my_method" is not part of "MyClass". So how would we make "my_method" a class method? Logically, you would think that simply deleting the "pass" statement under "MyClass" and indenting "my_method" would be enough, but that isn't the case. There's one more addition we need to make.

Figure 9. Creating a class method

As Figure 9 demonstrates, we need to pass "self" as a parameter to "my_method". The first (and sometimes the only) parameter in a class method's parameter list must always be an instance of the containing class. By convention, this parameter should be named "self", though you could call it anything you'd like. Why the extra step? Because Python values the explicit over the implicit; hiding this detail from the developer is at odds with Python's philosophy.

Creating a Windows Form

Now that we have an understanding of classes, methods, and whitespace, Figure 10 continues our example from Figure 7 by creating a blank form.

Figure 10. Creating a blank form

The code in Figure 10 should be fairly understandable. We create the "MyForm" class by inheriting from "System.Windows.Forms.Form". We create a new instance of "MyForm" and pass the resulting object to the "Application.Run()" method. The only thing that may give you pause is the "__init__()" method. The "__init__()" method is what's called a magic method. Magic methods are designated with double underscores on either end of the method name and are rarely called directly. For instance, when the code in line 10 of Figure 10 executes, the "__init__()" method defined in "MyForm" is actually called behind the scenes.

Figure 11. Populating the form with controls and adding an event handler

Figure 11 adds a lot of code to our application, most of which isn't very interesting. The exception here is the Click event of the goButton. In C#, the method would get passed as an argument in the constructor of a new EventHandler. In IronPython, we simply add a function with the proper signature to the Click event. Now that we have a button that will respond to a click, Figure 12 shows a modified version of our regular expression sample code from earlier inserted into the click method. Note that the "__str__()" magic method is the equivalent of ToString().

Figure 12. Populating the click method with our regular expression example

When you run the code, you should see the form displayed in Figure 13. You can enter dates into the top textbox, press the button, and either True or False will appear in the lower textbox, indicating the result of the IsMatch() function.

Figure 13. Completed form
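Since the screenshots aren't reproduced here either, a hedged reconstruction of the complete example might look like the following (the control positions, window title, and date pattern are assumptions; the original figures may differ in detail):

    import clr
    clr.AddReference("System.Windows.Forms")

    from System.Windows.Forms import Application, Form, TextBox, Button
    from System.Text.RegularExpressions import Regex

    class MyForm(Form):
        def __init__(self):
            self.Text = "Date Checker"

            # Top textbox: the user enters a date here
            self.inputBox = TextBox()
            self.inputBox.Left = 10
            self.inputBox.Top = 10

            self.goButton = Button()
            self.goButton.Text = "Go"
            self.goButton.Left = 10
            self.goButton.Top = 40
            # Just add a function with the proper signature to the event
            self.goButton.Click += self.go_clicked

            # Bottom textbox: displays True or False
            self.outputBox = TextBox()
            self.outputBox.Left = 10
            self.outputBox.Top = 70

            self.Controls.Add(self.inputBox)
            self.Controls.Add(self.goButton)
            self.Controls.Add(self.outputBox)

        def go_clicked(self, sender, args):
            regex = Regex(r"^\d{2}/\d{2}/\d{4}$")
            # __str__() is the equivalent of ToString()
            self.outputBox.Text = regex.IsMatch(self.inputBox.Text).__str__()

    Application.Run(MyForm())

Save this as a .py file and run it with ipy.exe (or ipyw.exe to suppress the console window).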
Conclusion

In the course of one brief article, you went from knowing little of IronPython to using it to build Windows Forms. We were able to move so quickly because we leveraged your existing .NET knowledge, and we spent most of our time talking about the very intuitive Python syntax. Go through sample or even production code you've written in the past and duplicate it in IronPython. You'll find that working with familiar .NET libraries speeds up the learning process and makes it more fun. Before you know it, Python will become second nature!

Java Server Faces (JSF) Tools

Packt
07 Oct 2009
5 min read
Throughout this article, we will follow the "learning by example" technique, and we will develop a completely functional JSF application representing a JSF registration form, as you can see in the following screenshot. These kinds of forms can be seen on many sites, so it will be very useful to know how to develop them with JSF Tools. The example consists of a simple JSP page, named register.jsp, containing a JSF form with four fields (these fields will be mapped into a managed bean), as follows:

- personName: a text field for the user's name
- personAge: a text field for the user's age
- personPhone: a text field for the user's phone number
- personBirthDate: a text field for the user's birth date

The information provided by the user will be properly converted and validated using JSF capabilities. Afterwards, the information will be displayed on a second JSP page, named success.jsp.

Overview of JSF

JavaServer Faces (JSF) is a popular framework used to develop user interfaces (UI) for server-based applications (it is also known as JSR 127 and is a part of Java EE 5). It contains a set of UI components totally managed through JSF support, covering event handling, validation, navigation rules, internationalization, accessibility, customizability, and so on. In addition, it contains a tag library for expressing UI components within a JSP page. Among its features, JSF provides:

- A set of base UI components
- Extension of the base UI components to create additional UI component libraries
- Custom UI components
- Reusable UI components
- Reading and writing application data to and from UI components
- Managing UI state across server requests
- Wiring client-generated events to server-side application code
- Multiple rendering capabilities that enable JSF UI components to render themselves differently depending on the client type

JSF is also tool friendly, implementation agnostic, and abstracts away from HTTP, HTML, and JSP.

Speaking of the JSF life cycle, you should know that every JSF request that renders a JSP involves a JSF component tree, also called a view. Each request is made up of phases. By standard, we have the following phases (shown in the following figure):

1. The view is restored
2. Request values are applied
3. Validations are processed
4. Model values are updated
5. The application is invoked
6. A response is rendered

During these phases, events are notified to event listeners.

We end this overview with a few bullets regarding JSF UI components:

- A UI component can be anything from a simple to a complex user interface (for example, from a simple text field to an entire page)
- A UI component can be associated with model data objects through value binding
- A UI component can use helper objects, like converters and validators
- A UI component can be rendered in different ways, based on invocation
- A UI component can be invoked directly or from JSP pages using JSF tag libraries

Creating a JSF project stub

In this section you will see how to create a JSF project stub with JSF Tools. This is a straightforward task that is based on the following steps:

1. From the File menu, select the New | Project option.
2. In the New Project window, expand the JBoss Tools Web node and select the JSF Project option (as shown in the following screenshot). After that, click the Next button.
3. In the next window, it is mandatory to specify a name for the new project (Project Name field), a location (Location field, only if you don't use the default path), a JSF implementation (JSF Environment field), and a template (Template field). As you can see, JSF Tools offers a set of predefined templates, as follows:

- JSFBlankWithLibs: a blank JSF project with complete JSF support.
- JSFKickStartWithLibs: a demo JSF project with complete JSF support.
- JSFKickStartWithoutLibs: a demo JSF project without the JSF libraries. In this case, the JSF libraries are omitted to avoid potential conflicts with servers that already offer JSF support.
- JSFBlankWithoutLibs: a blank JSF project without the JSF libraries, omitted for the same reason (for example, JBoss AS includes JSF support).

The next screenshot is an example of how to configure our JSF project at this step. At the end, just click the Next button.

4. This step allows you to set the servlet version (Servlet Version field), the runtime (Runtime field) used for building and compiling the application, and the server where the application will be deployed (Target Server field). Note that this server is in a direct relationship with the selected runtime. The following screenshot shows an example of how to complete this step (click on the Finish button).

After a few seconds, the new JSF project stub is created and ready to take shape! You can see the new project in the Package Explorer view.
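The wizard generates the project skeleton, not the registration form itself. As a rough sketch of where the example is heading, a register.jsp along the lines described at the start of this article might look like the following (the managed bean name, navigation outcome, and validation range are assumptions):

    <%@ taglib uri="http://java.sun.com/jsf/html" prefix="h" %>
    <%@ taglib uri="http://java.sun.com/jsf/core" prefix="f" %>
    <html>
    <body>
    <f:view>
      <h:form id="registerForm">
        <h:outputLabel for="personName" value="Name:" />
        <h:inputText id="personName" value="#{person.personName}" required="true" />
        <h:message for="personName" />

        <h:outputLabel for="personAge" value="Age:" />
        <h:inputText id="personAge" value="#{person.personAge}">
          <f:validateLongRange minimum="1" maximum="130" />
        </h:inputText>
        <h:message for="personAge" />

        <h:outputLabel for="personPhone" value="Phone:" />
        <h:inputText id="personPhone" value="#{person.personPhone}" />

        <h:outputLabel for="personBirthDate" value="Birth date:" />
        <h:inputText id="personBirthDate" value="#{person.personBirthDate}">
          <f:convertDateTime pattern="MM/dd/yyyy" />
        </h:inputText>
        <h:message for="personBirthDate" />

        <h:commandButton value="Register" action="register" />
      </h:form>
    </f:view>
    </body>
    </html>

The "register" outcome would then be wired to success.jsp through a navigation rule in faces-config.xml.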

Data Access Methods in Flex 3

Packt
06 Oct 2009
4 min read
Flex provides a range of data access components to work with server-side remote data. These components are also commonly called Remote Procedure Call, or RPC, services. A Remote Procedure Call allows you to execute a server-side procedure or method in another address space without having to write the details of that remote interaction explicitly. (For more information on RPC, visit http://en.wikipedia.org/wiki/Remote_procedure_call.) The Flex data access components are based on Service-Oriented Architecture (SOA), and they allow you to call server-side business logic built in Java, PHP, ColdFusion, .NET, or any other server-side technology to send and receive remote data.

In this article by Satish Kore, we will learn how to interact with a server environment (specifically one built with Java). We will look at the various data access components, which include the HTTPService and WebService classes. This article focuses on providing in-depth information on the various data access methods available in Flex.

Flex data access components

Flex data access components provide a call-and-response model to access remote data. There are three methods for accessing remote data in Flex: via the HTTPService class, the WebService class (compliant with the Simple Object Access Protocol, or SOAP), or the RemoteObject class.

The HTTPService class

The HTTPService class is a basic method to send and receive remote data over the HTTP protocol. It allows you to perform traditional HTTP GET and POST requests to a URL. The HTTPService class can be used with any kind of server-side technology, such as JSP, PHP, ColdFusion, ASP, and so on.

The HTTP service is also known as a REST-style web service. REST stands for Representational State Transfer; it is an approach for getting content from a web server via web pages. Web pages written in many languages, such as JSP, PHP, ColdFusion, and ASP, receive POST or GET requests, and their output can be formatted as XML instead of HTML, which can then be used in Flex applications for manipulating and displaying content. For example, RSS feeds output standard XML content when accessed using a URL. For more information about REST, see www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm.

Using the HTTPService tag in MXML

The HTTPService class can be used to load both static and dynamic content from remote URLs. For example, you can use HTTPService to load an XML file that is located on a server, or you can use HTTPService in conjunction with server-side technologies that return dynamic results to the Flex application. You can use both the HTTPS and the HTTP protocols to access secure and non-secure remote content.
The following is the basic syntax of the HTTPService tag:

    <mx:HTTPService
        id="instanceName"
        method="GET|POST|HEAD|OPTIONS|PUT|TRACE|DELETE"
        resultFormat="object|array|xml|e4x|flashvars|text"
        url="http://www.mydomain.com/myFile.jsp"
        fault="faultHandler(event);"
        result="resultHandler(event)" />

The following are the properties and events:

- id: specifies the instance identifier name
- method: specifies the HTTP method; the default value is GET
- resultFormat: specifies the format of the result data
- url: specifies the complete remote URL which will be called
- fault: specifies the event handler method that'll be called when a fault or error occurs while connecting to the URL
- result: specifies the event handler method to call when the HTTPService call completes successfully and returns results

When you call the HTTPService.send() method, Flex will make the HTTP request to the specified URL. The HTTP response will be returned in the result property of the ResultEvent class in the result handler of the HTTPService class. You can also pass an optional parameter to the send() method by passing in an object. Any attributes of the object will be passed in as a URL-encoded query string along with the URL request. For example:

    var param:Object = {title:"Book Title", isbn:"1234567890"};
    myHS.send(param);

In the above code example, we have created an object called param and added two string attributes to it: title and isbn. When you pass the param object to the send() method, these two string attributes are converted to URL-encoded HTTP parameters, and your HTTP request URL will look like this: http://www.domain.com/myjsp.jsp?title=Book%20Title&isbn=1234567890. This way, you can pass any number of HTTP parameters to the URL, and these parameters can be accessed in your server-side code. For example, in a JSP or servlet, you could use request.getParameter("title") to get the HTTP request parameter.
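Putting the pieces together, a minimal working sketch of an HTTPService call with result and fault handlers might look like the following (the URL and parameter values are assumptions):

    <?xml version="1.0" encoding="utf-8"?>
    <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
        <mx:Script>
            <![CDATA[
                import mx.controls.Alert;
                import mx.rpc.events.FaultEvent;
                import mx.rpc.events.ResultEvent;

                private function resultHandler(event:ResultEvent):void {
                    // event.result holds the decoded response
                    Alert.show(String(event.result));
                }

                private function faultHandler(event:FaultEvent):void {
                    Alert.show(event.fault.faultString, "Error");
                }
            ]]>
        </mx:Script>

        <mx:HTTPService id="bookService"
            url="http://www.domain.com/myjsp.jsp"
            method="GET"
            resultFormat="object"
            result="resultHandler(event)"
            fault="faultHandler(event)" />

        <mx:Button label="Send request"
            click="bookService.send({title:'Book Title', isbn:'1234567890'})" />
    </mx:Application>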

Create a Local Ubuntu Repository using Apt-Mirror and Apt-Cacher

Packt
05 Oct 2009
7 min read
How can a company or organization minimize the bandwidth costs of maintaining multiple Ubuntu installations? With bandwidth becoming the currency of the new millennium, being responsible with the bandwidth you have can be a real concern. In this article by Christer Edwards, we will learn how to create, maintain, and make available a local Ubuntu repository mirror, allowing you to save bandwidth and improve network efficiency with each machine you add to your network.

Background

Before I begin, let me give you some background into what prompted me to come up with these solutions. I used to volunteer at local university user groups whenever they held "installfests". I always enjoyed offering my support and expertise to new Linux users. I have to say, the distribution that we helped deploy the most was Ubuntu. The fact that Ubuntu's parent company, Canonical, provided us with pressed CDs didn't hurt! Despite having more CDs than we could handle, we still regularly ran into the same problem: bandwidth. Fetching updates and security errata was always our bottleneck, consistently slowing down our deployments. Imagine you are in a conference room with dozens of other Linux users, each of them trying to use the internet to fetch updates; imagine how slow and bogged down that internet connection would be. If you've ever been to a large tech conference, you've probably experienced this first hand!

After a few of these installfests, I started looking for a solution to our bandwidth issue. I knew that all of these machines were downloading the same updates and security errata, duplicating work that another user had already done. It just seemed so inefficient. There had to be a better way. Eventually I came across two solutions, which I will outline below.

Apt-Mirror

The first method makes use of a tool called apt-mirror. It is a Perl-based utility for downloading and mirroring the entire contents of a public repository. This will likely include packages that you don't use and will not use, but anything stored in the public repository will also be stored in your mirror. To configure apt-mirror you will need the following:

- the apt-mirror package (sudo aptitude install apt-mirror)
- the apache2 package (sudo aptitude install apache2)
- roughly 15G of storage per release, per architecture

Once these requirements are met, you can begin configuring the apt-mirror tool. The main items that you'll need to define are:

- the storage location (base_path)
- the number of download threads (nthreads)
- the release(s) and architecture(s) you want

The configuration is done in the /etc/apt/mirror.list file. A default config was put in place when you installed the package, but it is generic. You'll need to update the values mentioned above. Below I have included a complete configuration which will mirror Ubuntu 8.04 LTS for both 32- and 64-bit installations. It will require nearly 30G of storage space, which I will be putting on a removable drive mounted at /media/STORAGE/.
    # apt-mirror configuration file
    ##
    ## The default configuration options (uncomment and change to override)
    ##
    set base_path /media/STORAGE/
    # set mirror_path $base_path/mirror
    # set skel_path $base_path/skel
    # set var_path $base_path/var
    #
    # set defaultarch <running host architecture>
    set nthreads 20
    #
    # 8.04 "hardy" i386 mirror
    deb-i386 http://us.archive.ubuntu.com/ubuntu hardy main restricted universe multiverse
    deb-i386 http://us.archive.ubuntu.com/ubuntu hardy-updates main restricted universe multiverse
    deb-i386 http://us.archive.ubuntu.com/ubuntu hardy-security main restricted universe multiverse
    deb-i386 http://us.archive.ubuntu.com/ubuntu hardy-backports main restricted universe multiverse
    deb-i386 http://us.archive.ubuntu.com/ubuntu hardy-proposed main restricted universe multiverse
    deb-i386 http://us.archive.ubuntu.com/ubuntu hardy main/debian-installer restricted/debian-installer universe/debian-installer multiverse/debian-installer
    deb-i386 http://packages.medibuntu.org/ hardy free non-free
    
    # 8.04 "hardy" amd64 mirror
    deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy main restricted universe multiverse
    deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy-updates main restricted universe multiverse
    deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy-security main restricted universe multiverse
    deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy-backports main restricted universe multiverse
    deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy-proposed main restricted universe multiverse
    deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy main/debian-installer restricted/debian-installer universe/debian-installer multiverse/debian-installer
    deb-amd64 http://packages.medibuntu.org/ hardy free non-free
    
    # Cleaning section
    clean http://us.archive.ubuntu.com/
    clean http://packages.medibuntu.org/

It should be noted that each of the repository lines within the file must begin with deb-i386 or deb-amd64, on a single line. Once your configuration is saved, you can begin to populate your mirror by running the command:

    apt-mirror

Be warned that, based on your internet connection speed, this could take quite a long time: hours, if not more than a day, for the initial 30G to download. Each run after the initial download will be much faster, as only incremental updates are downloaded. You may be wondering if it is possible to cancel the transfer before it is finished and begin again where it left off. Yes; I have done this countless times and I have not run into any issues.

You should now have a copy of the repository stored locally on your machine. In order for this to be available to other clients, you'll need to share the contents over HTTP. This is why we listed Apache as a requirement above. In my example configuration above, I mirrored the repository onto a removable drive mounted at /media/STORAGE. What I need to do now is make that location available over the web. This can be done by way of a symbolic link:

    cd /var/www/
    sudo ln -s /media/STORAGE/mirror/us.archive.ubuntu.com/ubuntu/ ubuntu

The commands above tell the filesystem to follow any requests for "ubuntu" back up to the mounted external drive, where it will find your mirrored contents. If you have any problems with this linking, double-check your paths (as compared to those suggested here) and make sure your link points to the ubuntu directory, which is a subdirectory of the mirror you pulled from.
If you point anywhere below this point, your clients will not properly be able to find the contents.

The additional task of keeping this newly downloaded mirror updated with the latest packages can be automated by way of a cron job. By activating the cron job, your machine will automatically run the apt-mirror command on a regular daily basis, keeping your mirror up to date without any additional effort on your part. To activate the automated cron job, edit the file /etc/cron.d/apt-mirror. There will be a sample entry in the file. Simply uncomment the line and (optionally) change the "4" to an hour more convenient for you. If left at its defaults, it will run the apt-mirror command, updating and synchronizing your mirror, every morning at 4:00am.
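For reference, the uncommented entry looks something like the following sketch; the exact log path can differ between package versions, so treat it as an assumption and check the file shipped on your system:

    # /etc/cron.d/apt-mirror
    0 4 * * *  apt-mirror  /usr/bin/apt-mirror > /var/spool/apt-mirror/var/cron.log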
The final step, now that you have your repository created, shared over HTTP, and configured to automatically update, is to configure your clients to use it. Ubuntu clients define where they should grab errata and security updates in the /etc/apt/sources.list file. This will usually point to the default or regional mirror. To update your clients to use your local mirror instead, you'll need to edit /etc/apt/sources.list and comment the existing entries (you may want to revert to them later!). Once you've commented the entries, you can create new entries, this time pointing to your mirror. A sample entry pointing to a local mirror might look something like this:

    deb http://192.168.0.10/ubuntu hardy main restricted universe multiverse
    deb http://192.168.0.10/ubuntu hardy-updates main restricted universe multiverse
    deb http://192.168.0.10/ubuntu hardy-security main restricted universe multiverse

Basically what you are doing is recreating your existing entries, but replacing archive.ubuntu.com with the IP of your local mirror. As long as the mirrored contents are made available over HTTP on the mirror server itself, you should have no problems. If you do run into problems, check your Apache logs for details. By following these simple steps, you'll have your own private (even portable!) Ubuntu repository available within your LAN, saving you future bandwidth and avoiding redundant downloads.

Creating Dynamic Reports from Databases Using JasperReports 3.5

Packt
05 Oct 2009
10 min read
Datasource definition

A datasource is what JasperReports uses to obtain data for generating a report. Data can be obtained from databases, XML files, arrays of objects, and collections of objects. In this article, we will focus on using databases as a datasource.

Database for our reports

We will use a MySQL database to obtain data for our reports. The database is a subset of public domain data that can be downloaded from http://dl.flightstats.us. The original download is 1.3 GB, so we deleted most of the tables and a lot of data to trim the download size considerably. A MySQL dump of the modified database can be found as part of the code download at http://www.packtpub.com/files/code/8082_Code.zip. The flightstats database contains the following tables:

- aircraft
- aircraft_models
- aircraft_types
- aircraft_engines
- aircraft_engine_types

The database structure can be seen in the following diagram. The flightstats database uses the default MyISAM storage engine for the MySQL RDBMS, which does not support referential integrity (foreign keys); that is why we don't see any arrows in the diagram indicating dependencies between the tables.

Let's create a report that will show the most powerful aircraft in the database; let's say, those with a horsepower of 1000 or above. The report will show the aircraft tail number and serial number, the aircraft model, and the aircraft's engine model. The following query will give us the required results:

    SELECT a.tail_num, a.aircraft_serial, am.model AS aircraft_model,
           ae.model AS engine_model
    FROM aircraft a, aircraft_models am, aircraft_engines ae
    WHERE a.aircraft_engine_code IN (SELECT aircraft_engine_code
                                     FROM aircraft_engines
                                     WHERE horsepower >= 1000)
      AND am.aircraft_model_code = a.aircraft_model_code
      AND ae.aircraft_engine_code = a.aircraft_engine_code

The above query retrieves the following data from the database:

Generating database reports

There are two ways to generate database reports: either by embedding SQL queries into the JRXML report template, or by passing data from the database to the compiled report through a datasource. We will discuss both of these techniques. We will first create the report by embedding the query into the JRXML template. Then we will generate the same report by passing it a datasource containing the database data.

Embedding SQL queries into a report template

JasperReports allows us to embed database queries into a report template. This can be achieved by using the <queryString> element of the JRXML file.
The following example demonstrates this technique:

    <?xml version="1.0" encoding="UTF-8" ?>
    <jasperReport xmlns="http://jasperreports.sourceforge.net/jasperreports"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  xsi:schemaLocation="http://jasperreports.sourceforge.net/jasperreports
                                      http://jasperreports.sourceforge.net/xsd/jasperreport.xsd"
                  name="DbReport">
      <queryString>
        <![CDATA[select a.tail_num, a.aircraft_serial,
                 am.model as aircraft_model, ae.model as engine_model
                 from aircraft a, aircraft_models am, aircraft_engines ae
                 where a.aircraft_engine_code in (
                   select aircraft_engine_code from aircraft_engines
                   where horsepower >= 1000)
                 and am.aircraft_model_code = a.aircraft_model_code
                 and ae.aircraft_engine_code = a.aircraft_engine_code]]>
      </queryString>
      <field name="tail_num" class="java.lang.String" />
      <field name="aircraft_serial" class="java.lang.String" />
      <field name="aircraft_model" class="java.lang.String" />
      <field name="engine_model" class="java.lang.String" />
      <pageHeader>
        <band height="30">
          <staticText>
            <reportElement x="0" y="0" width="69" height="24" />
            <textElement verticalAlignment="Bottom" />
            <text><![CDATA[Tail Number: ]]></text>
          </staticText>
          <staticText>
            <reportElement x="140" y="0" width="79" height="24" />
            <text><![CDATA[Serial Number: ]]></text>
          </staticText>
          <staticText>
            <reportElement x="280" y="0" width="69" height="24" />
            <text><![CDATA[Model: ]]></text>
          </staticText>
          <staticText>
            <reportElement x="420" y="0" width="69" height="24" />
            <text><![CDATA[Engine: ]]></text>
          </staticText>
        </band>
      </pageHeader>
      <detail>
        <band height="30">
          <textField>
            <reportElement x="0" y="0" width="69" height="24" />
            <textFieldExpression class="java.lang.String">
              <![CDATA[$F{tail_num}]]>
            </textFieldExpression>
          </textField>
          <textField>
            <reportElement x="140" y="0" width="69" height="24" />
            <textFieldExpression class="java.lang.String">
              <![CDATA[$F{aircraft_serial}]]>
            </textFieldExpression>
          </textField>
          <textField>
            <reportElement x="280" y="0" width="69" height="24" />
            <textFieldExpression class="java.lang.String">
              <![CDATA[$F{aircraft_model}]]>
            </textFieldExpression>
          </textField>
          <textField>
            <reportElement x="420" y="0" width="69" height="24" />
            <textFieldExpression class="java.lang.String">
              <![CDATA[$F{engine_model}]]>
            </textFieldExpression>
          </textField>
        </band>
      </detail>
    </jasperReport>

The <queryString> element is used to embed a database query into the report template. In the given example, it contains the query wrapped in a CDATA block for execution. The <queryString> element has no attributes or subelements other than the CDATA block containing the query. Text wrapped inside an XML CDATA block is ignored by the XML parser. As seen in the given example, our query contains the > character, which would invalidate the XML block if it wasn't inside a CDATA block. A CDATA block is optional if the data inside it does not break the XML structure; however, for consistency and maintainability, we chose to use it wherever it is allowed in the example.

The <field> element defines fields that are populated at runtime when the report is filled. Field names must match the column names or aliases of the corresponding columns in the SQL query. The class attribute of the <field> element is optional; its default value is java.lang.String. Even though all of our fields are strings, we still added the class attribute for clarity.

In the last example, the syntax to obtain the value of a report field is $F{field_name}, where field_name is the name of the field as defined. The next element that we'll discuss is the <textField> element.
Text fields are used to display dynamic textual data in reports. In this case, we are using them to display the values of the fields. Like all subelements of <band>, text fields must contain a <reportElement> subelement indicating the text field's height, width, and x, y coordinates within the band. The data displayed in a text field is defined by the <textFieldExpression> subelement of <textField>. The <textFieldExpression> element has a single subelement, which is the report expression that will be displayed by the text field, wrapped in an XML CDATA block. In this example, each text field displays the value of a field; therefore, the expression inside the <textFieldExpression> element uses the field syntax $F{field_name}, as explained before.

Compiling a report containing a query is no different from compiling a report without a query. It can be done programmatically or by using the custom JasperReports jrc ANT task.

Generating the report

As we have mentioned previously, in JasperReports terminology, the action of generating a report from a binary report template is called filling the report. To fill a report containing an embedded database query, we must pass a database connection object to the report. The following example illustrates this process:

    package net.ensode.jasperbook;

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.HashMap;

    import net.sf.jasperreports.engine.JRException;
    import net.sf.jasperreports.engine.JasperFillManager;

    public class DbReportFill
    {
      Connection connection;

      public void generateReport()
      {
        try
        {
          Class.forName("com.mysql.jdbc.Driver");
          connection = DriverManager.getConnection(
              "jdbc:mysql://localhost:3306/flightstats?user=user&password=secret");
          System.out.println("Filling report...");
          JasperFillManager.fillReportToFile("reports/DbReport.jasper",
              new HashMap(), connection);
          System.out.println("Done!");
          connection.close();
        }
        catch (JRException e)
        {
          e.printStackTrace();
        }
        catch (ClassNotFoundException e)
        {
          e.printStackTrace();
        }
        catch (SQLException e)
        {
          e.printStackTrace();
        }
      }

      public static void main(String[] args)
      {
        new DbReportFill().generateReport();
      }
    }

As seen in this example, a database connection is passed to the report in the form of a java.sql.Connection object as the last parameter of the static JasperFillManager.fillReportToFile() method. The first two parameters are a string (used to indicate the location of the binary report template, or jasper file) and an instance of a class implementing the java.util.Map interface (used for passing additional parameters to the report). As we don't need to pass any additional parameters to this report, we used an empty HashMap. There are six overloaded versions of the JasperFillManager.fillReportToFile() method, three of which take a connection object as a parameter.

For simplicity, our examples open and close database connections every time they are executed. It is usually a better idea to use a connection pool, as connection pools increase performance considerably. Most Java EE application servers come with connection pooling functionality, and the commons-dbcp component of Apache Commons includes utility classes for adding connection pooling capabilities to applications that do not make use of an application server.

After executing the above example, a new report, or JRPRINT file, is saved to disk. We can view it by using the JasperViewer utility included with JasperReports.
In the DbReportFill example, we created the report and immediately saved it to disk. The JasperFillManager class also contains methods to send a filled report to an output stream, or to keep it in memory in the form of a JasperPrint object. Keeping the filled report in a JasperPrint object allows us to manipulate it further in our code; we could, for example, export it to PDF or another format. The method that fills a report into a JasperPrint object is JasperFillManager.fillReport(), and the method that sends the report to an output stream is JasperFillManager.fillReportToStream(). These two methods accept the same parameters as JasperFillManager.fillReportToFile() and are trivial to use once we are familiar with that method; refer to the JasperReports API for details.

In the next example, we fill our report and immediately export it to PDF by taking advantage of the net.sf.jasperreports.engine.JasperRunManager.runReportToPdfStream() method.

package net.ensode.jasperbook;

import java.io.IOException;
import java.io.InputStream;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.HashMap;

import javax.servlet.ServletException;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import net.sf.jasperreports.engine.JasperRunManager;

public class DbReportServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        Connection connection;
        response.setContentType("application/pdf");
        ServletOutputStream servletOutputStream = response.getOutputStream();
        InputStream reportStream = getServletConfig().getServletContext()
            .getResourceAsStream("/reports/DbReport.jasper");
        try {
            Class.forName("com.mysql.jdbc.Driver");
            connection = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/flightstats?user=dbUser&password=secret");
            JasperRunManager.runReportToPdfStream(reportStream,
                servletOutputStream, new HashMap(), connection);
            connection.close();
            servletOutputStream.flush();
            servletOutputStream.close();
        } catch (Exception e) {
            // display the stack trace in the browser
            StringWriter stringWriter = new StringWriter();
            PrintWriter printWriter = new PrintWriter(stringWriter);
            e.printStackTrace(printWriter);
            response.setContentType("text/plain");
            response.getOutputStream().print(stringWriter.toString());
        }
    }
}

The only difference from generating a static report is that we pass a live database connection to the report. After deploying this servlet and pointing the browser to its URL, we should see a screen similar to the following screenshot:
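Along the same lines, if the goal is a PDF file on disk rather than a browser download, the JasperPrint route mentioned above can be combined with JasperExportManager. The following is a minimal sketch; the class name DbReportPdf and the output path are our own, not part of the original example code:

package net.ensode.jasperbook;

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.HashMap;

import net.sf.jasperreports.engine.JasperExportManager;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;

public class DbReportPdf {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver");
        Connection connection = DriverManager.getConnection(
            "jdbc:mysql://localhost:3306/flightstats?user=user&password=secret");

        // Fill the report into memory instead of writing a .jrprint file.
        JasperPrint jasperPrint = JasperFillManager.fillReport(
            "reports/DbReport.jasper", new HashMap(), connection);
        connection.close();

        // Export the in-memory report to a PDF file.
        JasperExportManager.exportReportToPdfFile(jasperPrint,
            "reports/DbReport.pdf");
    }
}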
Load Testing Using Visual Studio 2008: Part 2

Packt
01 Oct 2009
9 min read
Editing load tests

A load test can contain one or more scenarios, and these scenarios can be edited at any time during the design. To edit a scenario, select it and right-click to edit the test mix, browser mix, or network mix in the existing scenario, or to add a new scenario to the load test. The context menu has different options for editing, as shown here:

The Add Scenario... option opens the same wizard we used earlier to add the first scenario to the load test. We can keep adding as many scenarios as we need. The scenario Properties window also lets us modify properties such as the Think Profile, the Think Time Between Test Iterations, and the scenario Name.

The Add Tests... option adds more tests to the test mix from the list of tests in the project. We can add as many tests as required to the test mix.

The Edit Test Mix... option edits the test mix in the selected scenario. It opens a dialog showing the selected tests and their distribution. Using the Edit Test Mix window, we can:

- Change the test mix model using the drop-down list.
- Add new tests to the list and modify the distribution percentages.
- Select an initial test that executes before the other tests for each virtual user. The browse button next to it opens a dialog showing all the tests in the project, from which we can select the initial test.
- Choose, in the same way, a final test that runs last during test execution.

The Edit Browser Mix... option opens the Edit Browser Mix dialog, where you can add a new browser to the browser mix and delete or change the browsers already selected in the mix.

The Edit Network Mix... option opens the Edit Network Mix dialog, where you can add new network types to the list and modify the distribution percentages. We can also change or delete the entries in the existing network mix.

To change the existing load pattern, select Load Pattern under Scenarios and open the Properties window, which shows the current pattern's properties. You can change or choose any pattern from the available patterns in the list, as shown in the following screenshots:

A load test can have multiple run settings, but only one can be active at any time. To make a run setting active, select it, right-click, and choose Set as Active. The properties of the run settings can be modified directly in the Properties window; they include results storage, SQL tracing, test iterations, timings, and web test connections.

Adding context parameters

Web tests can have context parameters added to them. A context parameter is used in place of a value that is common to multiple requests in the web test. For example, every request contains the same web server name, which can be replaced by a context parameter; whenever the web server changes, we only change the context parameter value, and every request picks up the new server name. A load test can contain both web tests and unit tests in its mix. If the web server for the load test changes from the one used in the web tests, we would end up modifying the context parameter value in every web test used by the load test. Instead, we can include another context parameter in the load test with the same name as the one used in the web tests.
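To see why this helps, consider a coded web test that builds every request URL from the context parameter. The following is a minimal sketch of our own; the class name, page, and port are illustrative and not part of the project:

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;

public class HomePageWebTest : WebTest
{
    public HomePageWebTest()
    {
        // Default server; a load test can override this value by defining
        // a context parameter with the same name.
        this.Context.Add("WebServer1", "http://localhost:49459");
    }

    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        // Every request is built from the context parameter instead of a
        // hard-coded host name.
        WebTestRequest request1 = new WebTestRequest(
            this.Context["WebServer1"].ToString() + "/Default.aspx");
        yield return request1;
    }
}

Because the URLs are assembled from WebServer1 at run time, redefining that one parameter redirects every request without touching the web test itself.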
The context parameter added to the load test will override the context parameter of the same name used in the web tests. To add a new context parameter to the load test, select Run Settings and right-click to choose the Add Context Parameter option, which adds a new context parameter. For example, the web test sets the web server value with:

this.Context.Add("WebServer1", "http://localhost:49459");

Now, to override this in the load test, add a new context parameter with the same name, as shown below:

Results store

All the information collected during a load test run is stored in the central results store. The load test results store contains the data collected by the performance counters, along with the violation information and errors that occurred during the load test. The results store is a SQL Server database created using the script loadtestresultsrepository.sql, which contains all the SQL statements required to create the objects for the store.

If there is no controller involved and the test is local, we can create the results store database using SQL Express. Running the script once on the local machine is enough; it creates a global, central store for all the load tests on that machine. To create the store, open the Visual Studio command prompt and change to the IDE folder on the drive where you have installed Visual Studio:

cd C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE

From the same folder, run the following command, which creates the database store:

SQLCMD /S localhost\SQLExpress -i loadtestresultsrepository.sql

If you have another SQL Server instance and want it to hold the results store, run the script against that server and use that server in the connection parameters for the load test. For example, if the SQL Server name is SQLServer1 and the results store has to be created there, run the command as below:

SQLCMD /S SQLServer1 -U <user name> -P <password> -i loadtestresultsrepository.sql

All these commands create the results store database in SQL Server. The tables created in the store look like this:

If you are using a controller for the load tests, installing the controller itself takes care of creating the results store on the controller machine. The controller can be installed using the Visual Studio 2008 Team Test Load Agent product. To connect to the SQL Server results store database, select the Test option in the Visual Studio IDE and then open the Administer Test Controller window; this option is available only on the controller machine. If the results store is on a different machine or on the controller machine, select the controller from the list, or select <Local-No controller> if the store is on the local machine without any controller. Then select the Load Test Results store using the browse button and close the Administer Test Controller window.

The controllers are used for administering the agent computers, and together the controller and agents form the rig. Multiple agents are required to simulate a large load from different locations. All the performance data collected from these agents is saved to the central results store at the controller, or to any global store configured at the controller.
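As a quick sanity check, you can list the tables the script generated. This assumes the script created its default database, named LoadTest:

SQLCMD /S localhost\SQLExpress -Q "SELECT name FROM LoadTest.sys.tables ORDER BY name"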
Running the load test

Load tests are run like any other test in Visual Studio, and Visual Studio provides multiple options for running them.

The first option is through the Test View window, where all the tests are listed. Select the load test, right-click, and choose Run Selection, which starts the load test.

The second option is to use the Test List Editor. Select the load test from a test list in the Test List Editor and choose the option to run the selected tests from the Test List Editor toolbar.

The third option is the built-in run option in the load test editor toolbar. Select the load test in the project and open it; the load test opens in the load test editor, whose toolbar has an option to run the currently opened load test.

The fourth option is the MSTest command-line utility, which is installed along with Visual Studio Team System Test Edition. Open the Visual Studio command prompt and, from the folder where the load test resides, run the following command to start the load test:

mstest /testcontainer:LoadTest1.loadtest

In the first three cases, the load test editor shows the progress during the test run. The command-line option does not show progress; instead, it stores the result in the results store repository, from which it can be loaded later for viewing and analysis. You can follow these steps to open the last test runs:

- Open the menu option Test | Windows | Test Runs.
- From the Connect drop-down, select the location of the test results store. On selecting this, the trace files of the last test runs are loaded into the window.
- Select the test run name from the list and double-click to open the test results for the selected run.

Double-clicking the test result in the Results window connects to the store repository, fetches the data for the selected test result, and presents it in the load test window. The end result in the load test editor window will look like the following screenshot, with all the performance counter values and the violation points. More details about the graph are given under the Graphical View subsection.
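As a final note on the command-line option, MSTest can also write the run to a named results file that can be opened later in Visual Studio; the file name here is our own choice:

mstest /testcontainer:LoadTest1.loadtest /resultsfile:LoadTest1Results.trx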