
How-To Tutorials - Programming

1083 Articles

Mixing ASP.NET Webforms and ASP.NET MVC

Packt | 12 Oct 2009 | 6 min read
Ever since Microsoft started working on the ASP.NET MVC framework, one of the primary concerns was the framework's ability to re-use as many features as possible from ASP.NET Webforms. In this article by Maarten Balliauw, we will see how we can mix ASP.NET Webforms and ASP.NET MVC in one application and how data is shared between the two technologies. (For more resources on .NET, see here.)

Not every ASP.NET MVC web application will be built from scratch. Several projects will probably end up migrating from classic ASP.NET to ASP.NET MVC. The question of how to combine both technologies in one application arises: is it possible to combine ASP.NET Webforms and ASP.NET MVC in one web application? Luckily, the answer is yes. Combining ASP.NET Webforms and ASP.NET MVC in one application is possible; in fact, it is quite easy. The reason for this is that the ASP.NET MVC framework has been built on top of ASP.NET. There's actually only one crucial difference: ASP.NET lives in System.Web, whereas ASP.NET MVC lives in System.Web, System.Web.Routing, System.Web.Abstractions, and System.Web.Mvc. This means that adding these assemblies as references to an existing ASP.NET application should give you a good start on combining the two technologies.

Another advantage of ASP.NET MVC being built on top of ASP.NET is that data can easily be shared between the two technologies. For example, the Session state object is available in both, effectively enabling data to be shared via the Session state.

Plugging ASP.NET MVC into an existing ASP.NET application

An ASP.NET Webforms application can become ASP.NET MVC enabled by following some simple steps. First of all, add references to the following three assemblies to your existing ASP.NET application:

- System.Web.Routing
- System.Web.Abstractions
- System.Web.Mvc

After adding these assembly references, the ASP.NET MVC folder structure should be created. Because the ASP.NET MVC framework is based on some conventions (for example, controllers are located in Controllers), these conventions should be respected. Add the folders Controllers, Views, and Views | Shared to your existing ASP.NET application.

The next step in enabling ASP.NET MVC in an ASP.NET Webforms application is to update the web.config file with the following code:

```xml
<?xml version="1.0"?>
<configuration>
  <system.web>
    <compilation debug="false">
      <assemblies>
        <add assembly="System.Core, Version=3.5.0.0, Culture=neutral, PublicKeyToken=B77A5C561934E089"/>
        <add assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        <add assembly="System.Web.Abstractions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
        <add assembly="System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
      </assemblies>
    </compilation>
    <pages>
      <namespaces>
        <add namespace="System.Web.Mvc"/>
        <add namespace="System.Web.Mvc.Ajax"/>
        <add namespace="System.Web.Mvc.Html"/>
        <add namespace="System.Web.Routing"/>
        <add namespace="System.Linq"/>
        <add namespace="System.Collections.Generic"/>
      </namespaces>
    </pages>
    <httpModules>
      <add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule, System.Web.Routing, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
    </httpModules>
  </system.web>
</configuration>
```

Note that your existing ASP.NET Webforms web.config should not be replaced by the above web.config!
The configured sections should be inserted into an existing web.config file in order to enable ASP.NET MVC.

There's one thing left to do: configure routing. This can easily be done by adding the default ASP.NET MVC global application class contents into an existing (or new) global application class, Global.asax:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

namespace MixingBothWorldsExample
{
    public class Global : System.Web.HttpApplication
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
            routes.IgnoreRoute("{resource}.aspx/{*pathInfo}");

            routes.MapRoute(
                "Default",                                              // Route name
                "{controller}/{action}/{id}",                           // URL with parameters
                new { controller = "Home", action = "Index", id = "" } // Parameter defaults
            );
        }

        protected void Application_Start()
        {
            RegisterRoutes(RouteTable.Routes);
        }
    }
}
```

This code registers a default ASP.NET MVC route, which will map any URL of the form /Controller/Action/Id into a controller instance and action method. There's one difference from a regular ASP.NET MVC application that needs to be noted: a catch-all route is defined in order to prevent a request meant for ASP.NET Webforms from being routed into ASP.NET MVC. This catch-all route looks like this:

```csharp
routes.IgnoreRoute("{resource}.aspx/{*pathInfo}");
```

It is triggered on every request ending in .aspx, and tells the routing engine to ignore the request and leave it to ASP.NET Webforms to handle.

With the ASP.NET MVC assemblies referenced, the folder structure created, and the necessary configuration in place, we can now start adding controllers and views. Add a new controller in the Controllers folder, for example, the following simple HomeController:

```csharp
using System.Web.Mvc;

namespace MixingBothWorldsExample.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            ViewData["Message"] = "This is ASP.NET MVC!";
            return View();
        }
    }
}
```

The above controller will simply render a view and pass it a message through the ViewData dictionary. This view, located in Views | Home | Index.aspx, would look like this:

```aspx
<%@ Page Language="C#" AutoEventWireup="true" CodeBehind="Index.aspx.cs"
    Inherits="MixingBothWorldsExample.Views.Home.Index" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
<head id="Head1" runat="server">
    <title></title>
</head>
<body>
    <div>
        <h1><%=Html.Encode(ViewData["Message"]) %></h1>
    </div>
</body>
</html>
```

The above view renders a simple HTML page and displays the ViewData dictionary's message in a heading.
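The Session hand-off mentioned above can be illustrated with a minimal sketch; the page name, button handler, and the "CustomerName" key below are assumptions for the example, not part of the original article:

```csharp
// In an existing Webforms code-behind (hypothetical Legacy.aspx.cs):
// store a value in Session state before redirecting into MVC territory.
protected void ContinueButton_Click(object sender, EventArgs e)
{
    Session["CustomerName"] = NameTextBox.Text; // "CustomerName" is an assumed key
    Response.Redirect("~/Home/Index");
}

// In an MVC controller, the same Session object is available:
public ActionResult Index()
{
    string name = Session["CustomerName"] as string ?? "guest";
    ViewData["Message"] = "Welcome, " + name;
    return View();
}
```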


The Elements of the Spring Web Flow Configuration File

Packt | 12 Oct 2009 | 9 min read
Let's take a look at the XML Schema Definition (XSD) file of the Spring Web Flow configuration file. To make the file more compact and easier to read, we have removed documentation and similar additions from the file. The complete file is downloadable from http://www.springframework.org/schema/webflow/spring-webflow-2.0.xsd.

An XSD file is a file that is used to describe an XML Schema, which in itself is a W3C recommendation to describe the structure of XML files. In this kind of definition file, you can describe which elements can or must be present in an XML file. XSD files are primarily used to check the validity of XML files that are supposed to follow a certain structure. The schema can also be used to automatically create code, using code generation tools.

The elements of the Spring Web Flow configuration file are:

flow

The root element of the Spring Web Flow definition file is the flow element. It has to be present in all configurations because all other elements are sub-elements of the flow tag, which defines exactly one flow. If you want to define more than one flow, you will have to use the same number of flow tags. As every configuration file allows only one root element, you will have to define a new configuration file for every flow you want to define.

attribute

Using the attribute tag, you can define metadata for a flow. You can use this metadata to change the behavior of the flow.

secured

The secured tag is used to secure your flow using Spring Security. The element is defined like this:

```xml
<xsd:complexType name="secured">
  <xsd:attribute name="attributes" type="xsd:string" use="required" />
  <xsd:attribute name="match" use="optional">
    <xsd:simpleType>
      <xsd:restriction base="xsd:string">
        <xsd:enumeration value="any" />
        <xsd:enumeration value="all" />
      </xsd:restriction>
    </xsd:simpleType>
  </xsd:attribute>
</xsd:complexType>
```

If you want to use the secured element, you will have to at least define the attributes attribute. You can use the attributes attribute to define roles that are allowed to access the flow, for example. The match attribute defines whether all the elements specified in the attributes attribute have to match successfully (all), or whether one of them is sufficient (any).

persistence-context

The persistence-context element enables you to use a persistence provider in your flow definition. This lets you persist database objects in action-states. To use the persistence-context tag, you also have to define a data source and configure the persistence provider of your choice (for example, Hibernate or the Java Persistence API) in your Spring application context configuration file. The persistence-context element is empty, which means that you just have to add <persistence-context /> to your flow definition to enable its features.

var

The var element can be used to define instance variables, which are accessible in your entire flow. These variables are quite important, so make sure that you are familiar with them. Using var elements, you can define model objects. Later, you can bind these model objects to the forms in your JSP pages and store them in a database using the persistence features enabled by the persistence-context element. Nevertheless, using instance variables is not mandatory, so you do not have to define any unless your flow requires them.
The element is defined as shown in the following snippet from the XSD file:

```xml
<xsd:element name="var" minOccurs="0" maxOccurs="unbounded">
  <xsd:complexType>
    <xsd:attribute name="name" type="xsd:string" use="required" />
    <xsd:attribute name="class" type="type" use="required" />
  </xsd:complexType>
</xsd:element>
```

The element has two attributes, both of which are required. The instance variable you want to define needs a name, so that you can reference it later in your JSP files. Spring Web Flow also needs to know the type of the variable, so you have to set the class attribute to the class of your variable.

input

You can use the input element to pass information into a flow. When you call a flow, you can (or will have to, depending on whether the input element is required or not) pass objects into the flow. You can then use these objects to process your flow. The XML Schema definition of this element looks like this:

```xml
<xsd:complexType name="input">
  <xsd:attribute name="name" type="expression" use="required" />
  <xsd:attribute name="value" type="expression" />
  <xsd:attribute name="type" type="type" />
  <xsd:attribute name="required" type="xsd:boolean" />
</xsd:complexType>
```

The input element possesses several attributes, of which only the name attribute is required. You have to specify a name for the input argument, which you can use to reference the variable in your flow. The value attribute is used to specify the value of the attribute, for example, if you want to define a default value for the variable. You can also define a type (for example, int or long) for the variable if you want a specific type of information. A type conversion will be attempted if the argument passed to the flow does not match the type you expect. With the required attribute, you can control whether the caller of your flow has to pass in the variable, or whether the input attribute is optional.

output

While you can define input parameters with the input element, you can specify return values with the output element. These are variables that will be passed to your end-state as the result value of your flow. Here's the XML Schema definition of the output element:

```xml
<xsd:complexType name="output">
  <xsd:attribute name="name" type="expression" use="required" />
  <xsd:attribute name="value" type="expression" />
  <xsd:attribute name="type" type="type" />
  <xsd:attribute name="required" type="xsd:boolean" />
</xsd:complexType>
```

The definition is quite similar to that of the input element. You can also see that the name of the output element is required; otherwise, you would have no means of referencing the variable from your end-state. The value attribute is the value of your variable, for example, the result of a computation or a user returned from a database. Of course, you can also specify the type you expect your output variable to be. As with the input element, a type conversion will be attempted if the type of the variable does not match the type specified here. The required attribute will check whether nothing was specified, or whether the result of a computation is null. If it is null, an error will be thrown.

actionTypes

Per se, this is not an element that you can use in your flow definition, but it is a very important part of the XML Schema definition, referenced by many other elements.
The definition is quite complex and looks like this:

```xml
<xsd:group name="actionTypes">
  <xsd:choice>
    <xsd:element name="evaluate">
      <xsd:complexType>
        <xsd:sequence>
          <xsd:element name="attribute" type="attribute" minOccurs="0" maxOccurs="unbounded" />
        </xsd:sequence>
        <xsd:attribute name="expression" type="expression" use="required" />
        <xsd:attribute name="result" type="expression" use="optional" />
        <xsd:attribute name="result-type" type="type" use="optional" />
      </xsd:complexType>
    </xsd:element>
    <xsd:element name="render">
      <xsd:complexType>
        <xsd:sequence>
          <xsd:element name="attribute" type="attribute" minOccurs="0" maxOccurs="unbounded" />
        </xsd:sequence>
        <xsd:attribute name="fragments" type="xsd:string" use="required" />
      </xsd:complexType>
    </xsd:element>
    <xsd:element name="set">
      <xsd:complexType>
        <xsd:sequence>
          <xsd:element name="attribute" type="attribute" minOccurs="0" maxOccurs="unbounded" />
        </xsd:sequence>
        <xsd:attribute name="name" type="expression" use="required" />
        <xsd:attribute name="value" type="expression" use="required" />
        <xsd:attribute name="type" type="type" />
      </xsd:complexType>
    </xsd:element>
  </xsd:choice>
</xsd:group>
```

actionTypes is a group of sub-elements, namely evaluate, render, and set. Let's go through the individual elements to understand how the whole definition works.

evaluate

With the evaluate element, you can execute code based on expression languages. As described by the above source code, the element has three attributes, one of which is required. The required expression attribute holds the code that you want to run. The result value of the expression, if present, can be stored in the variable named by the result attribute. Using the result-type attribute, you can convert the result value to the specified type, if needed. Additionally, you can define attributes using attribute sub-elements.

render

Use the render element to partially render the content of a web page. Using the required fragments attribute, you can define which fragments should be rendered. When a link on the page is clicked, the entire page is not reloaded; Spring JavaScript allows you to update only the chosen parts of your web page. The rest of the page is left untouched, which can greatly enhance the performance of your web application. As with the evaluate element, you can also specify additional attributes using the attribute sub-element.

set

The set element can be used to set attributes in one of the Spring Web Flow scopes. With the name attribute, you define where (in which scope) you want to define the attribute and what it should be called. The following short example illustrates how the set element works:

```xml
<set name="flowScope.myVariable" value="myValue" type="long" />
```

As you can see, the name consists of the name of the scope and the name of your variable, delimited by a dot (.). Both the name attribute and the value attribute are required. The value is the actual value of your variable. The type attribute is optional and describes the type of your variable. As before, you can also define additional attributes using the attribute sub-element.
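To see how several of these elements fit together, here is a minimal, hypothetical flow definition; the flow states, the com.example.Order class, and the variable names are invented for illustration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<flow xmlns="http://www.springframework.org/schema/webflow"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://www.springframework.org/schema/webflow
        http://www.springframework.org/schema/webflow/spring-webflow-2.0.xsd">

  <!-- a required input argument; "customerId" is an assumed name -->
  <input name="customerId" type="long" required="true" />

  <!-- an instance variable backed by a hypothetical model class -->
  <var name="order" class="com.example.Order" />

  <view-state id="enterOrder" view="enterOrder">
    <on-entry>
      <!-- the set action stores a value in flow scope -->
      <set name="flowScope.discount" value="0.1" type="double" />
    </on-entry>
    <transition on="submit" to="finish" />
  </view-state>

  <end-state id="finish">
    <!-- the output element returns the order to the caller -->
    <output name="order" value="order" />
  </end-state>
</flow>
```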


Securing Your trixbox Server

Packt | 12 Oct 2009 | 6 min read
Start with a good firewall

Never have your trixbox system exposed completely on the open Internet; always make sure it is behind a good firewall. While many people think that because trixbox is running on Linux, it is totally secure, Linux, like anything else, has its share of vulnerabilities, and if things are not configured properly, it is fairly simple for hackers to get into. There are really good open-source firewalls available, such as pfSense, Vyatta, and M0n0Wall. Any access to system services, such as HTTP or SSH, should only be done via a VPN or using a pseudo-VPN such as Hamachi. Well-designed security starts with exposing as little as possible to the outside world. If we have remote extensions that cannot use VPNs, then we will be forced to leave SIP ports open, and the next step will be to secure those as well.

Stopping unneeded services

Since trixbox CE is basically a stock installation of CentOS Linux, very little hardening has been done to the system to secure it. This lack of security is intentional, as the first level of defence should always be a good firewall. Since there will be people who still insist on putting the system in a data center with no firewall, some care will need to be taken to ensure that the system is as secure as possible. The first step is to disable any running services that could be potential security vulnerabilities. We can see the list of services with the chkconfig --list command:

```
[trixbox1.localdomain rules]# chkconfig --list
anacron         0:off 1:off 2:on  3:on  4:on  5:on  6:off
asterisk        0:off 1:off 2:off 3:off 4:off 5:off 6:off
avahi-daemon    0:off 1:off 2:off 3:off 4:off 5:off 6:off
avahi-dnsconfd  0:off 1:off 2:off 3:off 4:off 5:off 6:off
bgpd            0:off 1:off 2:off 3:off 4:off 5:off 6:off
capi            0:off 1:off 2:off 3:off 4:off 5:off 6:off
crond           0:off 1:off 2:on  3:on  4:on  5:on  6:off
dc_client       0:off 1:off 2:off 3:off 4:off 5:off 6:off
dc_server       0:off 1:off 2:off 3:off 4:off 5:off 6:off
dhcpd           0:off 1:off 2:off 3:off 4:off 5:off 6:off
dhcrelay        0:off 1:off 2:off 3:off 4:off 5:off 6:off
ez-ipupdate     0:off 1:off 2:off 3:off 4:off 5:off 6:off
haldaemon       0:off 1:off 2:off 3:on  4:on  5:on  6:off
httpd           0:off 1:off 2:off 3:on  4:on  5:on  6:off
ip6tables       0:off 1:off 2:off 3:off 4:off 5:off 6:off
iptables        0:off 1:off 2:off 3:off 4:off 5:off 6:off
isdn            0:off 1:off 2:off 3:off 4:off 5:off 6:off
kudzu           0:off 1:off 2:off 3:on  4:on  5:on  6:off
lm_sensors      0:off 1:off 2:on  3:on  4:on  5:on  6:off
lvm2-monitor    0:off 1:on  2:on  3:on  4:on  5:on  6:off
mDNSResponder   0:off 1:off 2:off 3:on  4:on  5:on  6:off
mcstrans        0:off 1:off 2:off 3:off 4:off 5:off 6:off
mdmonitor       0:off 1:off 2:on  3:on  4:on  5:on  6:off
mdmpd           0:off 1:off 2:off 3:off 4:off 5:off 6:off
memcached       0:off 1:off 2:on  3:on  4:on  5:on  6:off
messagebus      0:off 1:off 2:off 3:on  4:on  5:on  6:off
multipathd      0:off 1:off 2:off 3:off 4:off 5:off 6:off
mysqld          0:off 1:off 2:off 3:on  4:on  5:on  6:off
named           0:off 1:off 2:off 3:off 4:off 5:off 6:off
netconsole      0:off 1:off 2:off 3:off 4:off 5:off 6:off
netfs           0:off 1:off 2:off 3:on  4:on  5:on  6:off
netplugd        0:off 1:off 2:off 3:off 4:off 5:off 6:off
network         0:off 1:off 2:on  3:on  4:on  5:on  6:off
nfs             0:off 1:off 2:off 3:off 4:off 5:off 6:off
nfslock         0:off 1:off 2:off 3:on  4:on  5:on  6:off
ntpd            0:off 1:off 2:off 3:on  4:on  5:on  6:off
ospf6d          0:off 1:off 2:off 3:off 4:off 5:off 6:off
ospfd           0:off 1:off 2:off 3:off 4:off 5:off 6:off
portmap         0:off 1:off 2:off 3:on  4:on  5:on  6:off
postfix         0:off 1:off 2:on  3:on  4:on  5:on  6:off
rdisc           0:off 1:off 2:off 3:off 4:off 5:off 6:off
restorecond     0:off 1:off 2:on  3:on  4:on  5:on  6:off
ripd            0:off 1:off 2:off 3:off 4:off 5:off 6:off
ripngd          0:off 1:off 2:off 3:off 4:off 5:off 6:off
rpcgssd         0:off 1:off 2:off 3:on  4:on  5:on  6:off
rpcidmapd       0:off 1:off 2:off 3:on  4:on  5:on  6:off
rpcsvcgssd      0:off 1:off 2:off 3:off 4:off 5:off 6:off
saslauthd       0:off 1:off 2:off 3:off 4:off 5:off 6:off
snmpd           0:off 1:off 2:off 3:off 4:off 5:off 6:off
snmptrapd       0:off 1:off 2:off 3:off 4:off 5:off 6:off
sshd            0:off 1:off 2:on  3:on  4:on  5:on  6:off
syslog          0:off 1:off 2:on  3:on  4:on  5:on  6:off
vsftpd          0:off 1:off 2:off 3:off 4:off 5:off 6:off
xinetd          0:off 1:off 2:off 3:on  4:on  5:on  6:off
zaptel          0:off 1:off 2:on  3:on  4:on  5:on  6:off
zebra           0:off 1:off 2:off 3:off 4:off 5:off 6:off
```

The services marked on for runlevels 3 through 5 are started automatically on system startup. The following services are required by trixbox CE and should not be disabled: anacron, crond, haldaemon, httpd, kudzu, lm_sensors, lvm2-monitor, mDNSResponder, mdmonitor, memcached, messagebus, mysqld, network, ntpd, postfix, sshd, syslog, xinetd, and zaptel.

To disable a service, we use the command chkconfig <servicename> off. We can now turn off some of the services that are not needed:

```
chkconfig ircd off
chkconfig netfs off
chkconfig nfslock off
chkconfig openibd off
chkconfig portmap off
chkconfig restorecond off
chkconfig rpcgssd off
chkconfig rpcidmapd off
chkconfig vsftpd off
```

We can also stop the services immediately without having to reboot:

```
service ircd stop
service netfs stop
service nfslock stop
service openibd stop
service portmap stop
service restorecond stop
service rpcgssd stop
service rpcidmapd stop
service vsftpd stop
```

Securing SSH

A very large misconception is that by using SSH to access your system, you are safe from outside attacks. SSH access is only as good as the security you have placed around it. Far too often, we see systems that have been hacked because their root password is very simple to guess (things like password or trixbox are not safe passwords). Any dictionary word is not safe at all, and substituting numbers for letters is very poor practice as well. As long as SSH is exposed to the outside, it is vulnerable. The best thing to do, if you absolutely have to have SSH running on the open Internet, is to change the port number used to access SSH. This section will detail the best methods of securing your SSH connections.

Create a remote login account

First off, we should create a user on the system and only allow SSH connections from it. The username should be something that only you know and is not easily guessed. Here, we will create a user called trixuser and assign a password to it. The password should contain letters, numbers, and symbols, and not be based on a dictionary word. Also, try to string it into a sentence, making sure to use the letters, numbers, and symbols. Spaces in passwords work well too, and are hard to add in scripts that might try to break into your server. A nice and simple tool for creating hard-to-guess passwords can be found at http://www.pctools.com/guides/password/.

```
[trixbox1.localdomain init.d]# useradd trixuser
[trixbox1.localdomain init.d]# passwd trixuser
```

Now, ensure that the new account works by using SSH to log in to the trixbox CE server with this new account. If it does not let you in, make sure the password is correct or try to reset it. If it works, continue on. Allowing only one account access to the system over SSH is a great way to lock out most brute-force attacks. To do this, we need to edit the file /etc/ssh/sshd_config and add the following line:

```
AllowUsers trixuser
```

The PermitRootLogin setting can be edited so that root can't log in over SSH. Remove the # from in front of the setting and change the yes to no:

```
PermitRootLogin no
```
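If SSH must stay reachable from the Internet, the port change recommended above is a one-line edit in the same /etc/ssh/sshd_config file; the port number here is an arbitrary example:

```
# /etc/ssh/sshd_config -- move SSH off the default port 22 (2222 is arbitrary)
Port 2222
PermitRootLogin no
AllowUsers trixuser
```

After saving the file, restart the daemon with service sshd restart so the changes take effect, and remember to point your SSH client at the new port from then on.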


Customizing and Extending the ASP.NET MVC Framework

Packt | 12 Oct 2009 | 5 min read
(For more resources on .NET, see here.)

Creating a control

When building applications, you probably also build controls. Controls are re-usable components that contain functionality which can be re-used in different locations. In ASP.NET Webforms, a control is much like an ASP.NET web page. You can add existing web server controls and markup to a custom control and define properties and methods for it. When, for example, a button on the control is clicked, the page is posted back to the server, which performs the actions required by the control.

The ASP.NET MVC framework does not support ViewState and postbacks and, therefore, cannot handle events that occur in the control. In ASP.NET MVC, controls are mainly re-usable portions of a view, called partial views, which can be used to display static HTML and generated content based on ViewData received from a controller.

In this topic, we will create a control to display employee details. We will start by creating a new ASP.NET MVC application using File | New | Project... in Visual Studio, and selecting ASP.NET MVC Application under Visual C# - Web.

First of all, we will create a new Employee class inside the Models folder. The code for this Employee class is:

```csharp
public class Employee
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
    public string Department { get; set; }
}
```

On the home page of our web application, we will list all of our employees. In order to do this, modify the Index action method of the HomeController to pass a list of employees to the view in the ViewData dictionary. Here's an example that creates a list of two employees and passes it to the view:

```csharp
public ActionResult Index()
{
    ViewData["Title"] = "Home Page";
    ViewData["Message"] = "Our employees welcome you to our site!";

    List<Employee> employees = new List<Employee>
    {
        new Employee
        {
            FirstName = "Maarten",
            LastName = "Balliauw",
            Email = "maarten@maartenballiauw.be",
            Department = "Development"
        },
        new Employee
        {
            FirstName = "John",
            LastName = "Kimble",
            Email = "john@example.com",
            Department = "Development"
        }
    };

    return View(employees);
}
```

The corresponding view, Index.aspx in the Views | Home folder of our ASP.NET MVC application, should be modified to accept a List<Employee> as a model. To do this, edit the code-behind file Index.aspx.cs and modify its contents as follows:

```csharp
using System.Collections.Generic;
using System.Web.Mvc;
using ControlExample.Models;

namespace ControlExample.Views.Home
{
    public partial class Index : ViewPage<List<Employee>>
    {
    }
}
```

In the Index.aspx view, we can now use this list of employees. Because we will display details of more than one employee somewhere else in our ASP.NET MVC web application, let's make this a partial view. Right-click the Views | Shared folder, click on Add | New Item... and select the MVC View User Control item template under Visual C# | Web | MVC. Name the partial view DisplayEmployee.ascx.

The ASP.NET MVC framework provides the flexibility to use a strongly-typed version of the ViewUserControl class, just as the ViewPage class does. The key difference between ViewUserControl and ViewUserControl<T> is that with the latter, the type of view data is explicitly passed in, whereas the non-generic version contains only a dictionary of objects. Because the DisplayEmployee.ascx partial view will be used to render items of the type Employee, we can modify the code-behind file DisplayEmployee.ascx.cs and make it strongly-typed:

```csharp
using ControlExample.Models;

namespace ControlExample.Views.Shared
{
    public partial class DisplayEmployee : System.Web.Mvc.ViewUserControl<Employee>
    {
    }
}
```

In the view markup of our partial view, the model can now be easily referenced. Just as with a regular ViewPage, the ViewUserControl has a ViewData property containing a Model property of the type Employee. Add the following code to DisplayEmployee.ascx:

```aspx
<%@ Control Language="C#" AutoEventWireup="true"
    CodeBehind="DisplayEmployee.ascx.cs"
    Inherits="ControlExample.Views.Shared.DisplayEmployee" %>
<%=Html.Encode(Model.LastName)%>, <%=Html.Encode(Model.FirstName)%><br/>
<em><%=Html.Encode(Model.Department)%></em>
```

The control can now be used on any view or control in the application. In the Views | Home | Index.aspx view, use the Model property (which is a List<Employee>) and render the control that we have just created for each employee:

```aspx
<%@ Page Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
    AutoEventWireup="true" CodeBehind="Index.aspx.cs"
    Inherits="ControlExample.Views.Home.Index" %>
<asp:Content ID="indexContent" ContentPlaceHolderID="MainContent" runat="server">
    <h2><%= Html.Encode(ViewData["Message"]) %></h2>
    <p>Here are our employees:</p>
    <ul>
    <% foreach (var employee in Model) { %>
        <li>
            <% Html.RenderPartial("DisplayEmployee", employee); %>
        </li>
    <% } %>
    </ul>
</asp:Content>
```

In case the control's ViewData type is equal to the view page's ViewData type, another method of rendering can also be used. This method is similar to ASP.NET Webforms controls, and allows you to specify a control as a tag. Optionally, a ViewDataKey can be specified; the control will then fetch its data from the ViewData dictionary entry having this key:

```aspx
<uc1:EmployeeDetails ID="EmployeeDetails1" runat="server" ViewDataKey="...." />
```

For example, if the ViewData contains a key emp that is filled with an Employee instance, the user control could be rendered using the following markup:

```aspx
<uc1:EmployeeDetails ID="EmployeeDetails1" runat="server" ViewDataKey="emp" />
```

After running the ASP.NET MVC web application, the result will appear as shown in the following screenshot:
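As a sketch of the re-use described above, a second, hypothetical action could feed the same partial a single employee; the Details action and its hard-coded data are invented for this example:

```csharp
// Hypothetical action on HomeController: shows one employee,
// re-using the DisplayEmployee partial created above.
public ActionResult Details()
{
    // In a real application, this employee would come from a data store.
    Employee employee = new Employee
    {
        FirstName = "Maarten",
        LastName = "Balliauw",
        Email = "maarten@maartenballiauw.be",
        Department = "Development"
    };
    return View(employee);
}
```

The corresponding Details.aspx view, typed as ViewPage<Employee>, would then contain a single <% Html.RenderPartial("DisplayEmployee", Model); %> call.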


Documenting Your Python Project - Part 1

Packt | 12 Oct 2009 | 7 min read
Documenting Your Project

Documentation is work that is often neglected by developers and sometimes by managers. This is often due to a lack of time towards the end of development cycles, and the fact that people think they are bad at writing. Some of them are bad, but the majority of them are able to produce fine documentation. In any case, the result is disorganized documentation made of documents that are written in a rush. Developers hate doing this kind of work most of the time. Things get even worse when existing documents need to be updated. Many projects out there provide poor, out-of-date documentation because the manager does not know how to deal with it.

But setting up a documentation process at the beginning of the project and treating documents as if they were modules of code makes documenting easier. Writing can even be fun when a few rules are followed. This article provides a few tips to start documenting your project through:

- The seven rules of technical writing that summarize the best practices
- A reStructuredText primer, which is a plain text markup syntax used in most Python projects
- A guide for building good project documentation

The Seven Rules of Technical Writing

Writing good documentation is easier in many respects than writing code. Most developers think it is very hard, but by following a simple set of rules it becomes really easy. We are not talking here about writing a book of poems, but a comprehensive piece of text that can be used to understand a design, an API, or anything that makes up the code base. Every developer is able to produce such material, and this section provides seven rules that can be applied in all cases:

1. Write in two steps: Focus on ideas, and then on reviewing and shaping your text.
2. Target the readership: Who is going to read it?
3. Use a simple style: Keep it straight and simple. Use good grammar.
4. Limit the scope of the information: Introduce one concept at a time.
5. Use realistic code examples: Foos and bars should be dropped.
6. Use a light but sufficient approach: You are not writing a book!
7. Use templates: Help the readers to get habits.

These rules are mostly inspired and adapted from Agile Documentation, a book by Andreas Rüping that focuses on producing the best documentation in software projects.

Write in Two Steps

Peter Elbow, in Writing with Power, explains that it is almost impossible for any human being to produce a perfect text in one shot. The problem is that many developers write documentation and try to directly come up with a perfect text. The only way they succeed in this exercise is by stopping the writing after every two sentences to read them back and make corrections. This means that they are focusing on both the content and the style of the text. This is too hard for the brain, and the result is often not as good as it could be. A lot of time and energy is spent polishing the style and shape of the text before its meaning is completely thought through.

Another approach is to drop the style and organization of the text and focus on its content. All ideas are laid down on paper, no matter how they are written. The developer starts to write a continuous stream and does not pause when he or she makes grammatical mistakes, or for anything that is not about the content. For instance, it does not matter if the sentences are barely understandable, as long as the ideas are written down. He or she just writes down what he or she wants to say, with a rough organization.
By doing this, the developer focuses on what he or she wants to say and will probably get more content out of his or her brain than initially expected. Another side-effect of free writing is that other ideas, not directly related to the topic, will easily come to mind. A good practice is to write them down on a second sheet of paper or screen when they appear, so that they are not lost, and then get back to the main writing.

The second step consists of reading back the whole text and polishing it so that it is comprehensible to everyone. Polishing a text means enhancing its style, correcting its faults, reorganizing it a bit, and removing any redundant information. When the time dedicated to writing documentation is limited, a good practice is to cut this time into two equal halves: one for writing the content, and one for cleaning and organizing the text.

Focus on the content, and then on style and cleanliness.

Target the Readership

When starting a text, there is a simple question the writer should consider: Who is going to read it? This is not always obvious, as a technical text explains how a piece of software works, and is often written for every person who might get and use the code. The reader can be a manager who is looking for an appropriate technical solution to a problem, or a developer who needs to implement a feature with it. A designer might also read it to know if the package fits his or her needs from an architectural point of view.

Let's apply a simple rule: each text should have only one kind of reader. This philosophy makes the writing easier. The writer knows precisely what kind of reader he or she is dealing with, and can provide concise and precise documentation that is not vaguely intended for all kinds of readers.

A good practice is to provide a small introductory text that explains in one sentence what the documentation is about, and guides the reader to the appropriate part:

    Atomisator is a product that fetches RSS feeds and saves them in a
    database, with a filtering process.

    If you are a developer, you might want to look at the API description
    (api.txt).

    If you are a manager, you can read the features list and the FAQ
    (features.txt).

    If you are a designer, you can read the architecture and
    infrastructure notes (arch.txt).

By taking care to direct your readers in this way, you will probably produce better documentation.

Know your readership before you start to write.

Use a Simple Style

Seth Godin is one of the best-selling writers on marketing topics. You might want to read Unleashing the Ideavirus, which is available for free on the Internet (http://en.wikipedia.org/wiki/Unleashing_the_Ideavirus). Some time ago, he made an analysis on his blog to try to understand why his books sold so well. He made a list of all the best sellers in the marketing area and compared the average number of words per sentence in each of them. He realized that his books had the lowest number of words per sentence (thirteen words). This simple fact, Seth explained, proved that readers prefer short and simple sentences, rather than long and stylish ones.

By keeping sentences short and simple, your writing will consume less brain power for its content to be extracted, processed, and then understood. Writing technical documentation aims to provide a software guide to readers. It is not a fiction story, and should be closer to your microwave manual than to the latest Stephen King novel.
A few tips to keep in mind are:

- Use simple sentences; they should not be longer than two lines.
- Each paragraph should be composed of three or four sentences at the most, expressing one main idea. Let your text breathe.
- Don't repeat yourself too much: avoid journalistic styles where ideas are repeated again and again to make sure they are understood.
- Don't use several tenses. The present tense is enough most of the time.
- Do not make jokes in the text if you are not a really fine writer. Being funny in a technical book is really hard, and few writers master it. If you really want to distill some humor, keep it in code examples and you will be fine.

You are not writing fiction, so keep the style as simple as possible.


Preparing Your Forms Conversion Using Oracle Application Express (APEX)

Packt | 09 Oct 2009 | 9 min read
When we are participating in a Forms Conversion project, it means we take the source files of our application, turn them into XML files, and upload them into the Forms Conversion part of APEX. This article describes what we do before uploading the XML files and starting our actual Forms Conversion project.

Get your stuff!

When we talk about source files, it comes in very handy if we have the right versions of these files. In order to do the conversion project, we need the same components that are used in the production environment. For these components, we have to get the source files of the components we want to convert. This means we have no use for the runtime files (Oracle Forms runtime files have the FMX extension). In other words, for Forms components we don't need the FMX files, but the FMB source files.

These are a few ground rules we have to take into account:

- We need to make sure that there's no more development on the components we are about to use in our conversion project. This is because we are now going to freeze our sources, and new developments won't be taken into the conversion project at all. So there will be no changes in our project.
- Put all the source files in a safe place. In other words, copy the latest version of your files into a new directory to which only you, and perhaps your teammates, have access.
- If the development team of your organization is using Oracle Designer for the development of its applications, it would be a good idea to generate all the modules from scratch. You would use the source on which the runtime files were created only if there are post-generation adjustments to be made in the modules.

We need the following files for our conversion project:

- Forms Modules: with the FMB extension
- Object Libraries: with the OLB extension
- Forms Menus: with the MMB extension
- PL/SQL Libraries: with the PLL extension
- Report Files: with the RDF, REX, or JSP extensions

When we take these source files, we will be able to create all the necessary XML files that we need for the Forms Conversion project.

Creating XML files

To create XML files, we need three parts of the Oracle Developer Suite. All of these parts come with a normal 10g or 9i installation of the Developer Suite. These three parts are the Forms Builder, the Reports Builder, and the Forms2XML conversion tool. The Forms2XML conversion tool is the most extensive to understand, and is used to create XML files from Forms Modules, Object Libraries, and Forms Menus. So, we will first discuss the possibilities of this tool.

The Forms2XML conversion tool

This tool can be used both from the command line and as a Java applet. As the command line gives us all the possibilities we need and is as easy as the Java applet, we will only use the command-line possibilities. The frmf2xml command comes with some options. The following syntax is used while converting the Forms Modules, the Object Libraries, and the Forms Menus to an XML structure:

```
frmf2xml [option] file [file]
```

In other words, we follow these steps:

1. We first type frmf2xml.
2. Optionally, we give one of the options with it.
3. We tell the command which file we want to convert; we have the option to address more than one file for the conversion to XML.

We probably want to give the OVERWRITE=YES option with our command. This property ensures that the newly created XML file will overwrite the one with the same name in the directory in which we are working.
If another file with the same name already exists in this directory and we don't give the OVERWRITE option the value YES (the default is NO), the file will not be generated, as we see in the following screenshot:

If there are any images used in modules (Forms or Object Libraries), the Forms2XML tool will refer to the image in the XML file created, and will create a TIF file of the image in the directory.

The XML files that are created will be stored in the same directory from which we call the command. The following naming convention is used for the XML files:

- formname.fmb becomes formname_fmb.xml
- libraryname.olb becomes libraryname_olb.xml
- menuname.mmb becomes menuname_mmb.xml

To convert the FMB, OLB, and MMB files to XML, we perform the following steps at the command prompt.

Forms Modules

The following steps are done in order to convert an FMB file to XML:

1. We change the working directory to the directory that contains the FMB file. In this example, all the files are stored in a directory called summit, directly under the C drive:

   C:\>cd C:\summit

2. Now we can call the frmf2xml command to convert one of our Forms Modules to an XML file. In this example, we convert the orders.fmb module:

   C:\summit>frmf2xml OVERWRITE=YES orders.fmb

As we see in the following screenshot, this command creates an XML file called orders_fmb.xml in the working directory.

Object Libraries

To convert an OLB file to XML, the following steps are needed:

1. We first change the working directory to the directory that contains the OLB file:

   C:\>cd C:\summit

2. Now we can call the frmf2xml command to convert one of our Object Libraries to an XML file. In this example, we convert the Form_Builder_II.olb library:

   C:\summit>frmf2xml OVERWRITE=YES Form_Builder_II.olb

As we see in the following screenshot, the command creates an XML file called Form_Builder_II_olb.xml and two images as .tif files in the working directory.

Forms Menus

To convert an MMB file to XML, we follow these steps:

1. We change the working directory to the directory that contains the MMB file:

   C:\>cd C:\summit

2. Now we can call the frmf2xml command to convert one of our Forms Menus to an XML file. In this example, we convert the customers.mmb menu:

   C:\summit>frmf2xml OVERWRITE=YES customers.mmb

As we can see in the following screenshot, the command creates an XML file called customers_mmb.xml in the working directory.

Report Files

In our example, we will convert the Customers Report from an RDF file to an XML file. To do this, we follow the steps given here:

1. We need to open the Employees.rdf file with Reports Builder. Open Reports Builder from your Start menu.
2. If Reports Builder has opened, we cancel the wizard that asks us whether we want to create a new report.
3. We use Ctrl+O (or File | Open in the menu) to open the report file which we want to convert to XML, as we see in the following screenshot.
4. We then use Shift+Ctrl+S (or the File | Save As menu) to save the report. We choose to save the report as a Reports XML (*.xml) file and click on the Save button, as shown in the following screenshot.

PL/SQL Libraries

To convert PL/SQL Libraries to an XML format, it's easiest to use the convert command that comes with Reports Builder. With this command, called rwconverter, we define the source type, call the source, and define the destination type and the destination.
In this way, we have control over the way we convert the original .pll file to a .pld flat file that we can upload into the APEX Forms converter. It is possible to convert the PL/SQL Libraries with the convert option in Forms Builder but, personally, I think this option works better.

The rwconverter command has a few parameters that we pass to it when we execute it. They are as follows:

- stype: This is the type of source file we need to convert. In our situation, this will be a .pll file, so the value we need to set is pllfile.
- source: This is the name of the source file, including the extension. In our case, it is wizard.pll.
- dtype: This is the file type we want to convert our source file to. In our case, it is a .pld file, so the value becomes pldfile.
- dest: This is the name, including the extension, of the destination file. In our case, it is wizard.pld.

In our example, we use the wizard.pll file that's in our summit files directory. This PL/SQL Library is normally used to create a PL/SQL Library in the Oracle Database, but this time we will use it to create a .pld flat file that we will upload to APEX.

First, we change to the working directory that contains the original .pll file. In our case, this is the summit directory directly under the C drive:

   C:\>cd C:\summit

After this, we call rwconverter at the command prompt:

   C:\summit>rwconverter stype=pllfile source=wizard.pll dtype=pldfile dest=wizard.pld

When you press the Enter key, a screen will open that is used to do the conversion. We will see that the types and names of the files are the same as we entered them on the command line. We need to click on the OK button to convert the file from .pll to .pld. The conversion may take a few seconds, but when the file has been converted, we will see a confirmation that the conversion was successful. After this, we can look in the C:\summit directory and see that a file wizard.pld has been created.
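When a project contains many modules, these conversions can be scripted instead of typed one by one; the following Windows batch sketch assumes the C:\summit directory and file names used above, and the loop itself is our own addition:

```bat
@echo off
rem Convert every Forms Module, Object Library, and Menu in C:\summit to XML.
cd /d C:\summit
for %%f in (*.fmb *.olb *.mmb) do frmf2xml OVERWRITE=YES %%f

rem Convert the PL/SQL Library to a flat .pld file for upload into APEX.
rwconverter stype=pllfile source=wizard.pll dtype=pldfile dest=wizard.pld
```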

Getting Your APEX Components Logic Right

Packt | 09 Oct 2009 | 10 min read
Pre-generation editing

After reading this article, we will understand our project a lot better. Also, to a certain level, we will be able to control the way our application will be generated. Generation is often performed more than once, as you refine the definitions and settings between iterations. In this article, we will learn many ways to edit the project in order to generate optimally. But we must understand that we will not cover all the exceptions in the generation process.

If we want to do a real Forms to APEX conversion project, it is very wise to carefully read the help texts in the Migration Documentation provided by Oracle in every APEX instance, especially the appendix called Oracle Forms Generation Capabilities and Workarounds, which will help you to understand the choices that can be made in the generation process. The information in these migration help texts tells us how the different components in Oracle Forms will be converted in APEX and how to implement business logic in the APEX application. For example, when we take a look at the Block to Page Region Mappings, we learn how APEX converts certain blocks to APEX regions during conversion.

Investigating

When we take a look at our conversion project, we must understand what will be generated. In the case of generation, the most important parts are the blocks in our Forms modules. These are, quite literally, the building blocks that our pages in APEX will be based upon. Of course, we have our program units, triggers, and much more; but the pages that are defined in the APEX application (which we put into production after the project is finished) will be based on Blocks, Reports, and Menus. This is why we need to adjust them before we generate anything. This might seem like a small part of the project when we look at the count of all the components on our project page, but that doesn't make it less important. We can't adjust reports, as they are defined by the query that they are built upon, but we can alter the blocks. That's why we focus on those components first.

Data blocks

The building blocks of our APEX pages are the blocks and, of course, the reports. The blocks we can generate in our project are the ones that are based on a database block. Non-database blocks, such as those that hold menus and buttons, are not generated by default, as they would be generated as blank pages.

In the block overview page, we get the basic information about the blocks in our project. The way the blocks will be generated is determined by APEX based on the contents, the number of items on the block and, most importantly, the number of records displayed. For further details on the generation rules, refer to the Migration Guide, Appendix A: Forms Generation Capabilities and Workarounds.

In the Blocks overview page in our conversion project, we notice that not all the blocks are included. In other words, they aren't checked to be included in the project. This is because they do not originate from a database block. To include or exclude a block during generation, we need to check or uncheck the specific block. Don't confuse this with the applicability of a block.

We might also notice that some of the blocks are already set to complete. In our example, we see that the S_CUSTOMER1 and S_CUSTOMER blocks are set to complete. If we take a look inside these components and check the annotations, they are indeed set to complete. There's also a note set for us.
As we see in the following screenshot, it states Incorporating Enhanced Query. The Enhanced Query is something that we will use later in this article. But beware of the statement that a component is Complete, as we will see that we might want to alter the query on which the customer's block is based.

If we look at a block that is not yet set to complete in the overview page (such as the Orders block) and we look at the Application Express Page Query region in the details screen, we see that only the Original Query is present. This is the query that is in the original Forms XML file we uploaded earlier. Although we have the Original Query present on our page, we can also alter it and customize the query on which this block is based; this will be done later in the article. In this way, we have better control over the way we will generate our application. We can't alter this query as it is, because it is to be implemented as a Master-Detail Form.

Block items

Each block contains a number of items. These items define the fields in our application and are derived from our Forms XML files. In the block details pages, we can find the details of the items on a particular block as well. Here we can see the most basic information about the items, namely their Type, Prompt, Column Name, and the Triggers on that particular item. We can also see the Name of the item, whether it is a Database Item, whether the item is complete or not, and whether or not it is Applicable.

When a block is set to complete, it is assumed that we have all the information required about the items, as we see in the example shown here. But there are also cases where we don't get all the information about the items we want. In our case, we might want to customize the query the block is based on, or define the items further. We will cover this later in the article.

In the above screenshot, we notice that the Column Name is not known for any of the items. This is an indication that the items will not be generated properly, and that we need to take a further look into the query and, maybe, some of the triggers.

When we want to alter the completeness and applicability of the items in our block, there's a useful function available at the upper right of the Block Details page. In the Block Tasks section, we find a link that states: Set All Block Items Completeness and Applicability. This function is used to make bulk changes to the items in the block we are in. It can be useful to change the completeness of all items when we are not sure what more needs to be done.

To set the completeness or the applicability with a bulk change on all the items, we click on the link in the Block Tasks region, which takes us to the following screen. In the Set Block Item & Trigger Status page, we can select the Attribute (Items, Block Triggers, or Item Triggers), the Set Tracking Attribute (Complete or Applicable), and the Set Value (Yes or No). To make changes, set the correct attribute, tracking attribute, and value, and then click on Apply Changes.

Original versus Enhanced Query

As mentioned earlier, we can encounter both Original and Enhanced Queries in the blocks of our Forms. The Original Query is taken directly from the XML file, as it is stated in the source of the block we are looking at. So where does the Enhanced Query originate from? It is one of the automatically generated parts of the Forms Conversion tool in APEX. If a block contains a POST-QUERY trigger, the Forms Conversion tool generates an Enhanced Query for us.
In the following screenshot, we see both the Enhanced Query and the Original Query in the S_CUSTOMER block. We can clearly notice the additional lines at the bottom of the Enhanced Query. The query in the Enhanced Query section still looks a lot like the one in the Original Query section, but is slightly altered. The code is generated automatically by taking the code from both the Original Query and the POST-QUERY trigger on this block. Please note that the query is generated automatically by APEX by adding a WHERE clause to the SQL query. This means that we will still need to check it and, probably, optimize it to work properly.

The following screenshot shows us the POST-QUERY trigger. Notice that it's set to both applicable and complete. This is because the code is now embedded in the Enhanced Query, and so the trigger is taken care of for our project.

Triggers

Besides items, blocks also contain triggers. These define the actions in our blocks and are, therefore, equally important. Most of the triggers are very Forms-specific, but it's nice to be the judge of that ourselves. In the Orders block, we have the Block Triggers region that contains the triggers in our orders block. The region tells us the name, applicability, and completeness. It gives us a snippet of the code inside the trigger and tells us the level it is set to (ITEM or BLOCK).

A lot of the triggers in our project need to be implemented post-generation, which will be discussed later in this article. But as mentioned above, there is one trigger that we need in the pre-generation stage of our project: the POST-QUERY trigger. In this example, the applicability in the orders block is set to No. This is also the reason why we have no Enhanced Query to choose from in this block. The reasons behind setting the trigger to not applicable can be many, and you can learn more about them if you read the migration help texts carefully.

We probably want to change the applicability of the trigger ourselves, because the POST-QUERY trigger contains some necessary information on how we need to define our block. If we click on the edit link (the pencil icon) for the POST-QUERY trigger, we can alter the applicability. Set the value for Applicable to Yes and click on Apply Changes. This will take us back to the Block Details screen. In the Triggers region, we can see that the applicability of the POST-QUERY trigger is now set to Yes.

Now, if we scroll up to the Application Express Page Query region, we can also see that the Enhanced Query is now in place. As shown in the following screenshot, we automatically generated an extended version of the Original Query, embedding the logic of the POST-QUERY trigger. The developers among us will notice that the query produced by the conversion tool in APEX is not very optimal. We can rewrite the query in the Custom Query section, which we will describe later in this article.

We are able to set the values for our triggers in the same way we set the applicability and completeness of the items in our blocks. In the upper-right corner of our Block Details screen, we find the Block Tasks region. Here we find the link to the tasks for items as well as triggers. Click on Set All Block Triggers Completeness and Applicability to navigate to the screen where we can set the values. In the Attribute section, we can choose from both the block-level triggers and the item-level triggers. We can't adjust them all at once, so we may need to adjust them twice.
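As an illustration of what the conversion tool does with a POST-QUERY trigger, here is a hypothetical sketch; the tables, columns, and lookup are invented rather than taken from the summit schema:

```sql
-- Original Query, as read from the Forms XML: a plain block query.
SELECT o.order_id, o.customer_id, o.order_total
FROM   orders o;

-- Hypothetical POST-QUERY trigger logic, run once per fetched row:
--   SELECT name INTO :ORDERS.CUSTOMER_NAME
--   FROM customers WHERE id = :ORDERS.CUSTOMER_ID;

-- Enhanced Query: the tool folds that lookup into the query itself.
-- A hand-optimized rewrite would typically use a join, as below.
SELECT o.order_id, o.customer_id, o.order_total,
       c.name AS customer_name
FROM   orders o, customers c
WHERE  c.id = o.customer_id;
```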
 

Development of Ajax Web Widget

Packt
08 Oct 2009
10 min read
We've seen this kind of thing done by the popular bookmark sharing website, digg.com. We don't need to visit digg.com to Digg a URL; we can do it using a small object called a widget, provided by digg.com. This is just one simple example of providing remote functionality; you can find lots of widgets providing different functionality across various websites.

What is a web widget?

A web widget is a small piece of code provided by a third-party website, which can be installed and executed in any kind of web page created using HTML (or XHTML). The common technologies used in web widgets are JavaScript, Ajax, and Adobe Flash, but in some places other technologies, such as HTML combined with CSS, and Java applets, are also used. A web widget can be used for:

- Serving useful information from a website.
- Providing some functionality (such as voting or polling).
- Endorsing the products and services of a website.

Web widgets are also known as badges, modules, and snippets, and they are commonly used by bloggers, social network users, and advertisers.

Popular Web Widgets

You can find many examples of web widgets on the World Wide Web. These include:

- Google AdSense – probably the most popular web widget you can see on the World Wide Web. All you have to do is get a few lines of code from Google and place it in your website. Different kinds of contextual advertisements get displayed on your website according to the content displayed on your website.
- Digg this widget – if you want to display news or articles, then you're most probably going to do so with this widget. It is provided by digg.com, one of the popular social bookmarking websites. Your website will get lots of visitors if your article gets more and more Diggs and gets featured on the first page.
- Del.icio.us widget – you might have seen this widget previously, or text showing "Save to del.icio.us", "Add to del.icio.us", and "saved by XXX users". This is another popular social bookmarking website.
- Snap preview – provided by snap.com, this widget shows a thumbnail preview of a web page.
- Survey widget from Polldaddy.com – if you need to run some kind of poll, you can use this widget to create and manage the poll on your website.
- Google Maps – a rather well-known widget, which adds Google Maps functionality to your website. You must get an API key from Google to use this widget.

Benefits of Providing Web Widgets

I came to know about dzone.com, a social bookmarking website for web and software developers, from a widget I found in a blog. Let's look at the benefits of web widgets:

- A widget can be an excellent publicity tool. Most of the people who come to know about digg.com, del.icio.us, or reddit.com do so from their web widgets in various news portals and blogs.
- Providing web services – this is useful since the user doesn't have to visit that particular website to get its services. For example, a visitor to a website which has a Polldaddy web widget can participate in a survey without visiting polldaddy.com.
- Web widgets can be one of the ways of bringing in income for your website, by utilizing advertising widgets such as Google AdSense (discussed above).
- Widgets can generate more traffic to your website. For example, you can make a widget that evaluates the price of a product on a website. Widgets across multiple websites mean visitors from multiple sources, hence more traffic to your website.
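In practice, installing a widget usually just means pasting a small snippet into your page. The following is an illustrative sketch (the URL and file name are invented; the actual snippet is supplied by the widget provider):

<!-- widget embed snippet: the remote script builds the widget in place -->
<script type="text/javascript" src="http://widgets.example.com/digg-button.js"></script>
<noscript>
  <a href="http://www.example.com/">View this content on the provider's site</a>
</noscript>

The script tag pulls the widget code from the provider's server, which is why the provider can update the widget's behavior without the host page changing anything.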
How do Web Widgets work?

As can be seen in the figure above, the web widget is used by a widget client, which is a web page. The web page just uses a chunk of code provided by the functionality-providing website. Furthermore, all the services and functionality are provided by the code residing on the remote server, which the widget client doesn't have to care about. In a web widget system, there is an optional system called a Widget Management System, which can manage the output of the widget.

Widget Management System

A Widget Management System is used for controlling the various operations of the widget, including things like height, width, or color. Most widget-providing websites allow you to customize the way the widget looks in blogs or websites. For example, you can customize the size and color of Google AdSense by logging into your Google AdSense account. Some Widget Management Systems allow you to manage the content of the widget as well. In the survey widget from vizu.com, you can manage parameters like the topic of the poll and the number of answers in the poll.

Concept of Technologies Used

To make the most out of Ajax web widgets, let's get familiar with some technologies.

HTML and XHTML

Hypertext Markup Language (HTML) is the best-known markup language for constructing web pages. As the name implies, we use it to mark up a text document so that web browsers know how to display it. XHTML, on the other hand, is a way to standardize HTML by making it a dialect of XML. XHTML comes in different flavors, like transitional, strict, or frameset, with each flavor offering either different capabilities or different degrees of conformance to the XML standard.

Difference between HTML and XHTML

The single biggest difference between HTML and XHTML is that XHTML must be well-formed according to XML conventions. Because an XHTML document is essentially XML in nature, simply following traditional HTML practices is not enough: an XHTML document must be well formed. For example, let's take the "img" and "input" elements of HTML.

<img src="logo.gif" alt="My Site" border="0" >
<input type="text" name="only_html" id="user_name" value="Not XHTML" >

Both of the above statements are perfectly good HTML, but they are not valid constructions in XHTML, as they are not well formed according to XML rules (the tags are not closed properly). To see the difference, look at the same tags in XHTML-valid syntax:

<img src="logo.gif" alt="My Site" border="0" />
<input type="text" name="only_html" id="user_name" value="Not XHTML" />

Did you spot the difference? The latter tags are well formed and the previous ones are not. If you look closely, you can see that the tags in the second example are closed like XML. The point is that you should have a well-formed document in order to allow JavaScript to access the DOM. You can also add comments to an HTML document. Any information beginning with the character string <!-- and ending with the string --> will be ignored:

<!-- This is a comment within HTML-->

You can add comments in the same fashion in an XML document as well.

Concept of IFRAME

IFrame (Inline Frame) is a useful HTML element for creating web widgets. Using an IFrame, we can embed an HTML document from a remote server into our document. This feature is very commonly used in web widgets. We can define the width and height of an IFrame using the width and height properties of the IFrame element, and an IFrame can be placed in any part of a web page.
An IFrame behaves like an inline object, and the user can scroll through the IFrame to view content which is out of view. We can also remove the scroll bar from the IFrame by setting the scrolling property to no. The embedded document can be reloaded asynchronously using an IFrame, which can be an alternative to an XMLHttpRequest object. For this reason, IFrames are widely used by Ajax applications. You can use an IFrame in an HTML document in the following way:

<html>
<body>
<iframe src="http://www.example.com" width="200">
If your browser doesn't support IFrame, this text will be displayed in the browser
</iframe>
</body>
</html>

CSS (Cascading Style Sheets)

Cascading Style Sheets (CSS) is a style sheet language used to define the presentation of documents written in HTML or XHTML. CSS is mainly used for defining the layout, colors, and fonts of the elements of a web page, along with other aspects of the document. While the markup language defines the content of the web page, the primary goal of CSS is to separate the document's content from the document's presentation. The Internet media type "text/css" is reserved for CSS.

Using Cascading Style Sheets

CSS can be used in different ways in a web page:

- It can be written within the <style></style> tag of the web page.
- It can be defined in the style attribute of an element.
- It can be defined in a separate file with the extension .css that is linked to the web page.

What is the benefit of using CSS then? You can define the values for a set of attributes for all HTML elements of a similar type, and then change the presentation with one change to the CSS property:

<style type="text/css">all the css declarations go here</style>

But what about defining the CSS properties for different web pages? The above method can be cumbersome, as you would need to replicate the same code in different web pages. To overcome this, you can define all the properties in a single file, say style.css, and link that CSS file to the web pages:

<link href="style.css" type="text/css" rel="stylesheet" />

Furthermore, you can define styles within elements using the style attribute of the HTML element, which is commonly known as inline style:

<p style="text-align:justify">This is content inside paragraph</p>

As you've seen, we've quickly set the text-align attribute to justify.

Defining Style Sheet rules

A style sheet allows you to change different attributes of HTML elements by defining styling rules for them. A rule has two components: a selector, which defines which HTML elements the rule applies to, and a declaration, which contains the styling rules for the selector.

p { background-color:#999999 }

In the above CSS rule, p is the selector and background-color:#999999 is the declaration. The rule applies to the paragraph element <p></p>.

.box { width:300px }

In the above rule, the selector is defined as a class, and it can be used in any element via the class attribute of the element:

<p class="box">..content..</p>
<div class="box">content</div>

A class can be used with any HTML element.

#container { width:1000px }

"#" is known as an id selector in CSS.
The above rule applies to an element which has the id "container", such as the one below:

<div id="container">..content..</div>

We can also define a single set of rules for multiple elements by placing a comma (,) between the selectors:

h1, h2, h3 {font-size:12px; color:#FF0000 }

There are many more styles of defining selectors and CSS properties; a few are sketched below.
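The following short sketch illustrates a few more common selector styles (the rules themselves are arbitrary examples):

/* descendant selector: paragraphs inside a div */
div p { margin: 0; }

/* pseudo-class: links when hovered */
a:hover { color: #FF0000; }

/* element restricted by class: only paragraphs with class "box" */
p.box { border: 1px solid #999999; }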

Liferay Mail and SMS Text Messenger Portlet

Packt
08 Oct 2009
5 min read
Working with the Mail Portlet

For the purpose of this article, we will use an intranet website called book.com, created for a fictitious company named "Palm Tree Publications". In order to let employees manage their emails, we can use the Liferay Mail portlet. As an administrator of "Palm Tree Publications", you need to create a Page called "Mail" under the Page "Community" at the Book Lovers Community Public Pages, and add the Mail portlet to the Page "Mail".

Experiencing Mail Management

First of all, log in as "Palm Tree". Then, let's do the above as follows:

- Add a Page called "Mail" under the Page "Community" at the Book Lovers Community Public Pages, if the Page is not already present.
- If the Mail portlet is not already present, add it to the Page "Mail" of the Book Lovers Community, where you want to manage mails.

You will see the Mail portlet as shown in the following figure. Let's assume that we set up the mail domain as "cignex.com" in the Enterprise Admin portlet for testing purposes, as we already have a mail engine with this mail domain. Of course, you could set up the mail domain as "book.com", or something else, if you had a mail engine with that mail domain at hand.

As "Lotti Stein", an editor in the editorial department, you may want to manage your mails in the mail domain "cignex.com". You can first choose a user name for your personal company email address, say "admin1234", and register. Let's do it as follows:

- Log in as "Lotti Stein".
- Go to the Page "Mail" under the Page "Community" at the Book Lovers Community Public Pages.
- Locate the Mail portlet.
- Enter the value for User name as "admin1234".
- Click the Register button.

Your new email address is "admin1234@cignex.com". This email address will also serve as your login, as shown in the following figure. You can now check for new messages in your inbox by clicking the Inbox link first. Then, you can view the Unread messages, and either Check Mail or create a New mail, as shown in the following figure.

You can go to the mail management Page by clicking any of the Unread Messages links, the Check Mail button, or the New button. Further, you can manage emails through the Mail portlet of your current account. Email management includes the following features (as shown in the following figure):

- Create a New email
- Check Mail
- Reply to an email
- Reply All
- Forward emails
- Delete emails
- Print emails
- Search

Note that the first email, with the subject "Users in Staging server", was sent through the SMS Text Messenger portlet. For more details, refer to the forthcoming section.

How to Set up the Mail Server?

In order to make the Mail portlet work, we have to set up a mail server with the IMAP, POP, and SMTP protocols. Suppose that the enterprise "Palm Tree Publications" has a mail server with the domain "exg3.exghost.com", an account "admin@cignex.com/admin1234", and the IMAP, POP, and SMTP protocols. As an administrator, you need to integrate this mail server with the IMAP, POP, and SMTP protocols in Liferay. Let's do it as follows:

- Find the file ROOT.xml in $TOMCAT_DIR/conf/Catalina/localhost.
- Find the mail configuration first.
Then configure it as follows:

<!-- Mail -->
<Resource
    name="mail/MailSession"
    auth="Container"
    type="javax.mail.Session"
    mail.imap.host="exg3.exghost.com"
    mail.imap.port="143"
    mail.pop.host="exg3.exghost.com"
    mail.pop.port="110"
    mail.store.protocol="imap"
    mail.transport.protocol="smtp"
    mail.smtp.host="exg3.exghost.com"
    mail.smtp.port="2525"
    mail.smtp.auth="true"
    mail.smtp.starttls.enable="true"
    mail.smtp.user="admin@cignex.com"
    password="admin1234"
    mail.smtp.socketFactory.class="javax.net.ssl.SSLSocketFactory"
/>

In short, the Mail portlet is an Ajax web-mail client. We can configure it to work with any mail server. It reduces page refreshes, since it displays message previews and message lists in a dual-pane window.

How to Set up the Mail Portlet?

If you have the proper permissions, you can change the preferences of the Mail portlet. To change the preferences, simply click the Preferences icon in the upper right of the Mail portlet.

With the Recipients tab selected, you can find potential recipients from the Directory (Enabled or Disabled) and the Organization (My Organization or All Available). Click the Save button after making any changes.

Using the Filters tab, you can set values to filter emails associated with an email address into a Folder. Click the Save button after making any changes. Note that the maximum number of email addresses is ten. This number is also configurable in the portal-ext.properties file.

Similarly, the Forward Address tab allows all emails to be forwarded to the email address you want. Enter one email address per line; remove all entries to disable email forwarding. Select Yes to leave, or No to not leave, a copy of the forwarded message. Click the Save button after making any changes.

Further, the Signature tab allows you to set up your signature using the HTML text editor. The signature you have set up will be added to each outgoing message. Click the Save button after making any changes.

The Vacation Message tab allows you to set up vacation messages using the HTML text editor. The vacation message notifies others of your absence (as shown in the following figure). Click the Save button after making any changes.

Getting a Jump-Start with IronPython

Packt
07 Oct 2009
9 min read
Where do you get it?

Before getting started, you'll need to download IronPython 2.0.1 (though the contents of this article could just as easily be applied to past or even future versions). The official IronPython site is www.codeplex.com/ironpython. Here you'll find not only the IronPython bits, but also samples, source code, documentation, and many other resources. After clicking on the "Downloads" tab at the top, you will be presented with three download options: IronPython.msi, IronPython-2.0.1-Bin.zip (the binaries), or IronPython-2.0.1-Src.zip (the source code). If you already have CPython installed—the standard Python implementation—the binaries are probably your best bet. You simply unzip the files to your preferred installation directory and you're done. If you don't have CPython installed, I recommend the IronPython.msi file, since it comes prepackaged with portions of the CPython standard library.

Figure 1. IronPython installation directory.

There are a few items I would like to highlight in the IronPython installation directory displayed in Figure 1. The first is the FAQ.html file. This covers all of your basic IronPython questions, from licensing to implementation details. Periodically reviewing it while you're learning IronPython will probably save you a lot of frustration. The second item of importance is the two executables, ipy.exe and ipyw.exe. As you probably guessed, these are what you use to launch IronPython; ipy.exe is used for scripts and console applications, while ipyw.exe is reserved for other types of applications (Windows Forms, WPF, and so on). Lastly, I'd like to draw your attention to the Tutorial folder. Inside it, you'll find a Tutorial.html file in addition to a number of other files. The Tutorial.html file is a comprehensive review of what you need to know to get started with IronPython. If you want to be quickly productive, be sure to at least review the tutorial. It will answer many of your questions.

Visual Studio or a Text Editor?

One thing that neither the ReadMe nor the Tutorial really covers is the tooling story. While Visual Studio 2008 is a viable Python development tool, you may want to consider other options. Personally, I bounce between VS and SciTE, but I'm always watching for new tools that might improve my development experience. There are a number of IDEs and debuggers out there, and you owe it to yourself to investigate them. Sometimes, however, Visual Studio IS the right tool for the job. If that's the case, then you'll need to install the Visual Studio SDK from http://www.microsoft.com/downloads/details.aspx?FamilyID=30402623-93ca-479a-867c-04dc45164f5b&displaylang=en.

Let's Write Some Code!

To get started, let's create a simple Python script and execute it with ipy. In a new file called "sample.py" (Python files are indicated by a ".py" extension), type "print 'Hello, world'". Open a command window, navigate to the directory where you saved sample.py, and then call ipy.exe, passing "sample.py" as an argument. Figure 2 displays what you might expect to see in the console window.

Figure 2. Executing a script using the command line

Not that executing from the command line isn't effective, but I prefer a more efficient approach. Therefore I'm going to use SciTE, the editor I briefly mentioned earlier, to duplicate the example in Figure 2. Why SciTE? I get syntax highlighting, I can run my code simply by hitting F5, and stdout is redirected to SciTE's output window. In short, I never have to leave my coding environment.
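Since the screenshots do not survive in text form, the command-line session of Figure 2 would look roughly like this (the prompt path is illustrative):

C:\IronPython-2.0.1> ipy.exe sample.py
Hello, world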
If you perform the above "hello, world" example in SciTE, the result looks like Figure 3.

Figure 3. Executing a script using SciTE

Congratulations! You've written your first bit of Python code! The problem is that it doesn't really touch any .NET namespaces. Fortunately, this is not a difficult thing to do. Figure 4 shows all the code you need to start working with the System namespace.

Figure 4. You only need a single line of code to gain access to the System namespace

With that simple import statement, we now enjoy access to the entirety of the System namespace. For example, to access the String class we would simply type System.String. That's great for getting started, but what happens when we want to use something like the Regex class? Do we have to type System.Text.RegularExpressions.Regex?

Figure 5. Using .NET regular expressions from IronPython

No! The first line of Figure 5 introduces a new form of the import statement that only imports the specific items you want. In our case, we only want the Regex class. The code on line 3 demonstrates creating a new instance of the Regex class. Note the lack of a "new" keyword; Python considers "new" redundant, since you have to include parentheses anyway. Another interesting note is the syntax—or is it the lack of syntax—for creating a variable. There's no declaration statement or type required. We simply create a name that we set equal to a new instance of the Regex class. If you've ever written any PHP or classic ASP, this should feel pretty familiar to you. Finally, the print statement on line 6 produces the output shown in Figure 6.

Figure 6. Output from the Regex example

The last example was easy, because IronPython already holds a reference to System and mscorlib. Let's push our limits and create a simple Windows form. This requires a bit more work.

Figure 7. Using the clr module to add a reference

A Quick Review of Python Classes

Figure 7 introduces the clr module as a way of adding references to other libraries in the Global Assembly Cache (GAC). Once we have a reference, we can import the Form, TextBox, and Button classes so we can start constructing our GUI. Before we do that, though, we have a couple of concepts to cover.

Figure 8. Introducing classes and methods

Up until this point, we really haven't needed to create any classes or methods. But now that we need to create a form, we're going to need both. Figure 8 demonstrates a very simple class and an equally simple method. I think it's clear that the "class" keyword defines a class and the "def" keyword defines a method. You probably correctly assumed that "(object)" after "MyClass" is Python's way of expressing inheritance. The "pass" keyword, however, may not be immediately obvious. In Python, classes and methods cannot be empty. Therefore, if you have a class or method that you aren't quite ready to code yet, you can use the "pass" statement until you are ready. A more subtle characteristic of Figure 8 is the whitespace. In Python, we indent the contents of control structures with four spaces. Tabs will also work, but by convention four spaces are used. In our example above, since "my_method" has no preceding spaces, it's clear that "my_method" is not part of "MyClass". So how would we make "my_method" a class method? Logically, you would think that simply deleting the "pass" statement under "MyClass" and indenting "my_method" would be enough, but that isn't the case. There's one more addition we need to make.

Figure 9. Creating a class method
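Because Figure 9 is an image, here is a minimal sketch of what its code amounts to:

class MyClass(object):
    def my_method(self):
        pass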
As Figure 9 demonstrates, we need to pass "self" as a parameter to "my_method". The first—and sometimes the only—parameter in a class method's parameter list must always be an instance of the containing class. By convention, this parameter should be named "self", though you could call it anything you'd like. Why the extra step? Python values the explicit over the implicit, and hiding this detail from the developer is at odds with Python's philosophy.

Creating a Windows Form

Now that we have an understanding of classes, methods, and whitespace, Figure 10 continues our example from Figure 7 by creating a blank form.

Figure 10. Creating a blank form

The code in Figure 10 should be fairly understandable. We create the "MyForm" class by inheriting from "System.Windows.Forms.Form". We create a new instance of "MyForm" and pass the resulting object to the "Application.Run()" method. The only thing that may give you pause is the "__init__()" method, which is what's called a magic method. Magic methods are designated with double underscores on either end of the method name and are rarely called directly. For instance, when the code on line 10 of Figure 10 executes, the "__init__()" method defined in "MyForm" is actually being called behind the scenes.

Figure 11. Populating the form with controls and wiring up an event handler

Figure 11 adds a lot of code to our application, most of which isn't very interesting. The exception is the Click event of the goButton. In C#, the method would get passed as an argument in the constructor of a new EventHandler. In IronPython, we simply add a function with the proper signature to the Click event. Now that we have a button that will respond to a click, Figure 12 shows a modified version of our earlier regular expression sample code inserted into the click method. Note that the "__str__()" magic method is the equivalent of ToString().

Figure 12. Populating click with our regular expression example

When we run the code, you should see the form displayed in Figure 13. You can enter dates into the top textbox, press the button, and either True or False will appear in the lower textbox, indicating the result of the IsMatch() function.

Figure 13. Completed form

Conclusion

During the course of one brief article, you went from knowing little of IronPython to using it to build Windows Forms. We were able to move so quickly because we leveraged your existing .NET knowledge, and we spent most of our time talking about the very intuitive Python syntax. Go through sample or even production code you've written in the past and duplicate it in IronPython. You'll find working with familiar .NET libraries will speed your learning process, making it more fun. Before you know it, Python will become second nature!
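Since Figures 10 through 12 are also images, here is a hedged reconstruction of the complete example they describe. The control layout and the date regular expression are guesses; only the overall shape (a Form subclass, a Click handler, and the __str__() call) is taken from the article:

import clr
clr.AddReference("System.Windows.Forms")

from System.Windows.Forms import Application, Button, Form, TextBox
from System.Text.RegularExpressions import Regex

class MyForm(Form):
    def __init__(self):
        # build the three controls described in Figure 11
        self.dateTextBox = TextBox()
        self.goButton = Button()
        self.goButton.Top = 30
        self.goButton.Text = "Go"
        self.resultTextBox = TextBox()
        self.resultTextBox.Top = 60
        # add a function with the proper signature to the Click event
        self.goButton.Click += self.go_click
        self.Controls.Add(self.dateTextBox)
        self.Controls.Add(self.goButton)
        self.Controls.Add(self.resultTextBox)

    def go_click(self, sender, args):
        # illustrative date pattern; the article's exact pattern is unknown
        r = Regex(r"\d{2}/\d{2}/\d{4}")
        self.resultTextBox.Text = r.IsMatch(self.dateTextBox.Text).__str__()

Application.Run(MyForm())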

Java Server Faces (JSF) Tools

Packt
07 Oct 2009
5 min read
Throughout this article, we will follow the "learning by example" technique, and we will develop a completely functional JSF application that represents a JSF registration form, as you can see in the following screenshot. These kinds of forms can be seen on many sites, so it will be very useful to know how to develop them with JSF Tools. The example consists of a simple JSP page, named register.jsp, containing a JSF form with four fields (these fields will be mapped into a managed bean), as follows:

- personName – a text field for the user's name
- personAge – a text field for the user's age
- personPhone – a text field for the user's phone number
- personBirthDate – a text field for the user's birth date

The information provided by the user will be properly converted and validated using JSF capabilities. Afterwards, the information will be displayed on a second JSP page, named success.jsp.

Overview of JSF

Java Server Faces (JSF) is a popular framework used to develop user interfaces (UIs) for server-based applications (it originated as JSR 127; JSF 1.2 is part of Java EE 5). It contains a set of UI components totally managed through JSF support, covering event handling, validation, navigation rules, internationalization, accessibility, customizability, and so on. In addition, it contains a tag library for expressing UI components within a JSP page. Among its features, JSF provides:

- A set of base UI components
- Extension of the base UI components to create additional UI component libraries
- Custom UI components
- Reusable UI components
- Reading/writing application data to and from UI components
- Managing UI state across server requests
- Wiring client-generated events to server-side application code
- Multiple rendering capabilities that enable JSF UI components to render themselves differently depending on the client type

In addition, JSF is tool friendly, it is implementation agnostic, and it abstracts away from HTTP, HTML, and JSP.

Speaking of the JSF life cycle, you should know that every JSF request that renders a JSP involves a JSF component tree, also called a view. Each request is made up of phases. By standard, we have the following phases (shown in the following figure):

1. The view is restored
2. Request values are applied
3. Validations are processed
4. Model values are updated
5. The application is invoked
6. A response is rendered

During these phases, events are notified to event listeners.

We end this overview with a few bullets regarding JSF UI components:

- A UI component can be anything from a simple to a complex user interface (for example, from a simple text field to an entire page)
- A UI component can be associated with model data objects through value binding
- A UI component can use helper objects, like converters and validators
- A UI component can be rendered in different ways, based on invocation
- A UI component can be invoked directly or from JSP pages using JSF tag libraries

Creating a JSF project stub

In this section you will see how to create a JSF project stub with JSF Tools. This is a straightforward task based on the following steps:

1. From the File menu, select the New | Project option.
2. In the New Project window, expand the JBoss Tools Web node and select the JSF Project option (as shown in the following screenshot). After that, click the Next button.
3. In the next window, it is mandatory to specify a name for the new project (Project Name field), a location (Location field—only if you don't use the default path), a JSF implementation (JSF Environment field), and a template (Template field). As you can see, JSF Tools offers a set of predefined templates, as follows:

- JSFBlankWithLibs—a blank JSF project with complete JSF support.
- JSFKickStartWithLibs—a demo JSF project with complete JSF support.
- JSFKickStartWithoutLibs—a demo JSF project without the JSF libraries. The libraries are omitted to avoid potential conflicts with servers that already offer JSF support.
- JSFBlankWithoutLibs—a blank JSF project without the JSF libraries. The libraries are omitted to avoid potential conflicts with servers that already offer JSF support (for example, JBoss AS includes JSF support).

The next screenshot is an example of how to configure our JSF project at this step. At the end, just click the Next button.

4. This step allows you to set the servlet version (Servlet Version field), the runtime (Runtime field) used for building and compiling the application, and the server where the application will be deployed (Target Server field). Note that this server is in a direct relationship with the selected runtime. The following screenshot shows an example of how to complete this step (click on the Finish button).

After a few seconds, the new JSF project stub is created and ready to take shape! You can see the new project in the Package Explorer view.
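The excerpt stops at the project stub, but to connect it back to the registration form described at the beginning, here is a rough sketch of what register.jsp could look like using the standard JSF tag libraries (the bean name "person" and the converter/validator choices are assumptions, not taken from the article):

<%@ taglib uri="http://java.sun.com/jsf/html" prefix="h" %>
<%@ taglib uri="http://java.sun.com/jsf/core" prefix="f" %>
<f:view>
  <h:form>
    <h:outputLabel for="personName" value="Name:" />
    <h:inputText id="personName" value="#{person.personName}" required="true" />

    <h:outputLabel for="personAge" value="Age:" />
    <h:inputText id="personAge" value="#{person.personAge}">
      <f:validateLongRange minimum="1" maximum="130" />
    </h:inputText>

    <h:outputLabel for="personPhone" value="Phone:" />
    <h:inputText id="personPhone" value="#{person.personPhone}" />

    <h:outputLabel for="personBirthDate" value="Birth date:" />
    <h:inputText id="personBirthDate" value="#{person.personBirthDate}">
      <f:convertDateTime pattern="MM/dd/yyyy" />
    </h:inputText>

    <h:commandButton value="Register" action="register" />
  </h:form>
</f:view>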

Data Access Methods in Flex 3

Packt
06 Oct 2009
4 min read
Flex provides a range of data access components to work with server-side remote data. These components are commonly called Remote Procedure Call, or RPC, services. A Remote Procedure Call allows you to execute a remote or server-side procedure or method in another address space without having to write the details of that call explicitly. (For more information on RPC, visit http://en.wikipedia.org/wiki/Remote_procedure_call.) The Flex data access components are based on Service Oriented Architecture (SOA); they allow you to call server-side business logic built in Java, PHP, ColdFusion, .NET, or any other server-side technology to send and receive remote data.

In this article by Satish Kore, we will learn how to interact with a server environment (specifically one built with Java). We will look at the various data access components, including the HTTPService and WebService classes. This article focuses on providing in-depth information on the various data access methods available in Flex.

Flex data access components

Flex data access components provide a call-and-response model for accessing remote data. There are three methods for accessing remote data in Flex: via the HTTPService class, the WebService class (compliant with the Simple Object Access Protocol, or SOAP), or the RemoteObject class.

The HTTPService class

The HTTPService class is a basic method of sending and receiving remote data over the HTTP protocol. It allows you to perform traditional HTTP GET and POST requests to a URL, and can be used with any kind of server-side technology, such as JSP, PHP, ColdFusion, ASP, and so on. The HTTP service is also known as a REST-style web service. REST stands for Representational State Transfer. It is an approach for getting content from a web server via web pages: pages written in languages such as JSP, PHP, ColdFusion, and ASP receive POST or GET requests, and their output can be formatted as XML instead of HTML, which can then be used in Flex applications for manipulating and displaying content. For example, RSS feeds output standard XML content when accessed via a URL. For more information about REST, see www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm.

Using the HTTPService tag in MXML

The HTTPService class can be used to load both static and dynamic content from remote URLs. For example, you can use HTTPService to load an XML file that is located on a server, or you can use HTTPService in conjunction with server-side technologies that return dynamic results to the Flex application. You can use both the HTTPS and the HTTP protocols to access secure and non-secure remote content.
The following is the basic syntax of the HTTPService tag:

<mx:HTTPService
    id="instanceName"
    method="GET|POST|HEAD|OPTIONS|PUT|TRACE|DELETE"
    resultFormat="object|array|xml|e4x|flashvars|text"
    url="http://www.mydomain.com/myFile.jsp"
    fault="faultHandler(event);"
    result="resultHandler(event)"/>

The following are its properties and events:

- id: Specifies the instance identifier name
- method: Specifies the HTTP method; the default value is GET
- resultFormat: Specifies the format of the result data
- url: Specifies the complete remote URL which will be called
- fault: Specifies the event handler method to call when a fault or error occurs while connecting to the URL
- result: Specifies the event handler method to call when the HTTPService call completes successfully and returns results

When you call the HTTPService.send() method, Flex makes the HTTP request to the specified URL. The HTTP response is returned in the result property of the ResultEvent class in the result handler of the HTTPService class. You can also pass an optional parameter to the send() method by passing in an object. Any attributes of the object will be passed as a URL-encoded query string along with the URL request. For example:

var param:Object = {title:"Book Title", isbn:"1234567890"};
myHS.send(param);

In the above code example, we have created an object called param and added two string attributes to it: title and isbn. When you pass the param object to the send() method, these two string attributes are converted to URL-encoded HTTP parameters, and your HTTP request URL will look like this: http://www.domain.com/myjsp.jsp?title=Book%20Title&isbn=1234567890. This way, you can pass any number of HTTP parameters to the URL, and these parameters can be accessed in your server-side code. For example, in a JSP or servlet, you could use request.getParameter("title") to get the HTTP request parameter.
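Putting the pieces together, a minimal working sketch of an application using HTTPService might look like the following (the URL and the handler bodies are illustrative assumptions):

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
    <mx:Script>
        <![CDATA[
            import mx.rpc.events.FaultEvent;
            import mx.rpc.events.ResultEvent;

            // called when the server responds successfully
            private function resultHandler(event:ResultEvent):void {
                trace(event.result);
            }

            // called when the request fails
            private function faultHandler(event:FaultEvent):void {
                trace(event.fault.faultString);
            }
        ]]>
    </mx:Script>

    <mx:HTTPService id="bookService"
        url="http://www.domain.com/myjsp.jsp"
        method="GET"
        result="resultHandler(event)"
        fault="faultHandler(event)"/>

    <mx:Button label="Load"
        click="bookService.send({title:'Book Title', isbn:'1234567890'})"/>
</mx:Application>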

Create a Local Ubuntu Repository using Apt-Mirror and Apt-Cacher

Packt
05 Oct 2009
7 min read
How can a company or organization minimize bandwidth costs when maintaining multiple Ubuntu installations? With bandwidth becoming the currency of the new millennium, being responsible with the bandwidth you have can be a real concern. In this article by Christer Edwards, we will learn how to create, maintain, and make available a local Ubuntu repository mirror, allowing you to save bandwidth and improve network efficiency with each machine you add to your network. (For more resources on Ubuntu, see here.)

Background

Before I begin, let me give you some background into what prompted me to come up with these solutions. I used to volunteer at local university user groups whenever they held "installfests". I always enjoyed offering my support and expertise to new Linux users. I have to say, the distribution that we helped deploy the most was Ubuntu. The fact that Ubuntu's parent company, Canonical, provided us with pressed CDs didn't hurt! Despite having more CDs than we could handle, we still regularly ran into the same problem: bandwidth. Fetching updates and security errata was always our bottleneck, consistently slowing down our deployments. Imagine you are in a conference room with dozens of other Linux users, each of them trying to use the internet to fetch updates. Imagine how slow and bogged down that internet connection would be. If you've ever been to a large tech conference, you've probably experienced this first hand!

After a few of these "installfests" I started looking for a solution to our bandwidth issue. I knew that all of these machines were downloading the same updates and security errata, duplicating work that another user had already done. It just seemed so inefficient. There had to be a better way. Eventually I came across two solutions, which I will outline below.

Apt-Mirror

The first method makes use of a tool called "apt-mirror". It is a Perl-based utility for downloading and mirroring the entire contents of a public repository. This will likely include packages that you don't use and will not use, but anything stored in a public repository will also be stored in your mirror. To configure apt-mirror you will need the following:

- the apt-mirror package (sudo aptitude install apt-mirror)
- the apache2 package (sudo aptitude install apache2)
- roughly 15G of storage per release, per architecture

Once these requirements are met, you can begin configuring the apt-mirror tool. The main items that you'll need to define are:

- storage location (base_path)
- number of download threads (nthreads)
- release(s) and architecture(s) you want

The configuration is done in the /etc/apt/mirror.list file. A default config was put in place when you installed the package, but it is generic. You'll need to update the values as mentioned above. Below I have included a complete configuration which will mirror Ubuntu 8.04 LTS for both 32- and 64-bit installations. It will require nearly 30G of storage space, which I will be putting on a removable drive. The drive will be mounted at /media/STORAGE/.
# apt-mirror configuration file
##
## The default configuration options (uncomment and change to override)
##
set base_path /media/STORAGE/
# set mirror_path $base_path/mirror
# set skel_path $base_path/skel
# set var_path $base_path/var
#
# set defaultarch <running host architecture>
set nthreads 20
#
# 8.04 "hardy" i386 mirror
deb-i386 http://us.archive.ubuntu.com/ubuntu hardy main restricted universe multiverse
deb-i386 http://us.archive.ubuntu.com/ubuntu hardy-updates main restricted universe multiverse
deb-i386 http://us.archive.ubuntu.com/ubuntu hardy-security main restricted universe multiverse
deb-i386 http://us.archive.ubuntu.com/ubuntu hardy-backports main restricted universe multiverse
deb-i386 http://us.archive.ubuntu.com/ubuntu hardy-proposed main restricted universe multiverse
deb-i386 http://us.archive.ubuntu.com/ubuntu hardy main/debian-installer restricted/debian-installer universe/debian-installer multiverse/debian-installer
deb-i386 http://packages.medibuntu.org/ hardy free non-free

# 8.04 "hardy" amd64 mirror
deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy main restricted universe multiverse
deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy-updates main restricted universe multiverse
deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy-security main restricted universe multiverse
deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy-backports main restricted universe multiverse
deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy-proposed main restricted universe multiverse
deb-amd64 http://us.archive.ubuntu.com/ubuntu hardy main/debian-installer restricted/debian-installer universe/debian-installer multiverse/debian-installer
deb-amd64 http://packages.medibuntu.org/ hardy free non-free

# Cleaning section
clean http://us.archive.ubuntu.com/
clean http://packages.medibuntu.org/

It should be noted that each of the repository lines within the file should begin with deb-i386 or deb-amd64. Once your configuration is saved, you can begin to populate your mirror by running the command:

apt-mirror

Be warned that, based on your internet connection speed, this could take quite a long time: hours, if not more than a day, for the initial 30G to download. Each run of the command after the initial download will be much faster, as only incremental updates are downloaded. You may be wondering if it is possible to cancel the transfer before it is finished and begin again where it left off. Yes, I have done this countless times and I have not run into any issues.

You should now have a copy of the repository stored locally on your machine. In order for this to be available to other clients you'll need to share the contents over http. This is why we listed Apache as a requirement above. In my example configuration above I mirrored the repository on a removable drive, mounted at /media/STORAGE. What I need to do now is make that address available over the web. This can be done by way of a symbolic link:

cd /var/www/
sudo ln -s /media/STORAGE/mirror/us.archive.ubuntu.com/ubuntu/ ubuntu

The commands above will tell the filesystem to follow any requests for "ubuntu" back up to the mounted external drive, where it will find your mirrored contents. If you have any problems with this linking, double-check your paths (as compared to those suggested here) and make sure your link points to the ubuntu directory, which is a subdirectory of the mirror you pulled from.
If you link to any other level, your clients will not be able to find the contents properly. The additional task of keeping this newly downloaded mirror up to date can be automated by way of a cron job. By activating the cron job, your machine will automatically run the apt-mirror command on a daily basis, keeping your mirror synchronized without any additional effort on your part. To activate the automated cron job, edit the file /etc/cron.d/apt-mirror. There will be a sample entry in the file (a sketch of it appears at the end of this article). Simply uncomment the line and (optionally) change the "4" to an hour more convenient for you. If left at its defaults, it will run the apt-mirror command, updating and synchronizing your mirror, every morning at 4:00am.

The final step, now that you have your repository created, shared over http, and configured to update automatically, is to configure your clients to use it. Ubuntu clients define where they should grab errata and security updates in the /etc/apt/sources.list file. This will usually point to the default or regional mirror. To update your clients to use your local mirror instead, you'll need to edit /etc/apt/sources.list and comment out the existing entries (you may want to revert to them later!). Once you've commented out the entries, you can create new ones, this time pointing to your mirror. A sample entry pointing to a local mirror might look something like this:

deb http://192.168.0.10/ubuntu hardy main restricted universe multiverse
deb http://192.168.0.10/ubuntu hardy-updates main restricted universe multiverse
deb http://192.168.0.10/ubuntu hardy-security main restricted universe multiverse

Basically, what you are doing is recreating your existing entries, but replacing archive.ubuntu.com with the IP of your local mirror. As long as the mirrored contents are made available over http on the mirror server itself, you should have no problems. If you do run into problems, check your Apache logs for details. By following these simple steps you'll have your own private (even portable!) Ubuntu repository available within your LAN, saving you future bandwidth and avoiding redundant downloads.
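For reference, the sample entry in /etc/cron.d/apt-mirror ships commented out and looks approximately like this (the exact paths may differ between package versions):

# 0 4 * * * apt-mirror /usr/bin/apt-mirror > /var/spool/apt-mirror/var/cron.log

Uncommenting it runs apt-mirror as the apt-mirror user at 4:00am each day and logs the output.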

Creating Dynamic Reports from Databases Using JasperReports 3.5

Packt
05 Oct 2009
10 min read
Datasource definition

A datasource is what JasperReports uses to obtain data for generating a report. Data can be obtained from databases, XML files, arrays of objects, and collections of objects. In this article, we will focus on using databases as a datasource.

Database for our reports

We will use a MySQL database to obtain data for our reports. The database is a subset of public domain data that can be downloaded from http://dl.flightstats.us. The original download is 1.3 GB, so we deleted most of the tables and a lot of data to trim the download size considerably. A MySQL dump of the modified database can be found as part of the code download at http://www.packtpub.com/files/code/8082_Code.zip. The flightstats database contains the following tables:

- aircraft
- aircraft_models
- aircraft_types
- aircraft_engines
- aircraft_engine_types

The database structure can be seen in the following diagram:

The flightstats database uses the default MyISAM storage engine for the MySQL RDBMS, which does not support referential integrity (foreign keys). That is why we don't see any arrows in the diagram indicating dependencies between the tables.

Let's create a report that will show the most powerful aircraft in the database; say, those with a horsepower of 1000 or above. The report will show the aircraft tail number and serial number, the aircraft model, and the aircraft's engine model. The following query will give us the required results:

SELECT a.tail_num, a.aircraft_serial,
       am.model AS aircraft_model,
       ae.model AS engine_model
FROM aircraft a, aircraft_models am, aircraft_engines ae
WHERE a.aircraft_engine_code IN
      (SELECT aircraft_engine_code
       FROM aircraft_engines
       WHERE horsepower >= 1000)
  AND am.aircraft_model_code = a.aircraft_model_code
  AND ae.aircraft_engine_code = a.aircraft_engine_code

The above query retrieves the following data from the database:

Generating database reports

There are two ways to generate database reports: either by embedding SQL queries into the JRXML report template, or by passing data from the database to the compiled report through a datasource. We will discuss both of these techniques. We will first create the report by embedding the query into the JRXML template. Then, we will generate the same report by passing it through a datasource containing the database data.

Embedding SQL queries into a report template

JasperReports allows us to embed database queries into a report template. This can be achieved by using the <queryString> element of the JRXML file.
The following example demonstrates this technique:

<?xml version="1.0" encoding="UTF-8" ?>
<jasperReport xmlns="http://jasperreports.sourceforge.net/jasperreports"
              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
              xsi:schemaLocation="http://jasperreports.sourceforge.net/jasperreports http://jasperreports.sourceforge.net/xsd/jasperreport.xsd"
              name="DbReport">
  <queryString>
    <![CDATA[select a.tail_num, a.aircraft_serial,
             am.model as aircraft_model, ae.model as engine_model
             from aircraft a, aircraft_models am, aircraft_engines ae
             where a.aircraft_engine_code in (
               select aircraft_engine_code from aircraft_engines
               where horsepower >= 1000)
             and am.aircraft_model_code = a.aircraft_model_code
             and ae.aircraft_engine_code = a.aircraft_engine_code]]>
  </queryString>
  <field name="tail_num" class="java.lang.String" />
  <field name="aircraft_serial" class="java.lang.String" />
  <field name="aircraft_model" class="java.lang.String" />
  <field name="engine_model" class="java.lang.String" />
  <pageHeader>
    <band height="30">
      <staticText>
        <reportElement x="0" y="0" width="69" height="24" />
        <textElement verticalAlignment="Bottom" />
        <text><![CDATA[Tail Number: ]]></text>
      </staticText>
      <staticText>
        <reportElement x="140" y="0" width="79" height="24" />
        <text><![CDATA[Serial Number: ]]></text>
      </staticText>
      <staticText>
        <reportElement x="280" y="0" width="69" height="24" />
        <text><![CDATA[Model: ]]></text>
      </staticText>
      <staticText>
        <reportElement x="420" y="0" width="69" height="24" />
        <text><![CDATA[Engine: ]]></text>
      </staticText>
    </band>
  </pageHeader>
  <detail>
    <band height="30">
      <textField>
        <reportElement x="0" y="0" width="69" height="24" />
        <textFieldExpression class="java.lang.String">
          <![CDATA[$F{tail_num}]]>
        </textFieldExpression>
      </textField>
      <textField>
        <reportElement x="140" y="0" width="69" height="24" />
        <textFieldExpression class="java.lang.String">
          <![CDATA[$F{aircraft_serial}]]>
        </textFieldExpression>
      </textField>
      <textField>
        <reportElement x="280" y="0" width="69" height="24" />
        <textFieldExpression class="java.lang.String">
          <![CDATA[$F{aircraft_model}]]>
        </textFieldExpression>
      </textField>
      <textField>
        <reportElement x="420" y="0" width="69" height="24" />
        <textFieldExpression class="java.lang.String">
          <![CDATA[$F{engine_model}]]>
        </textFieldExpression>
      </textField>
    </band>
  </detail>
</jasperReport>

The <queryString> element is used to embed a database query into the report template. In the given example, it contains the query wrapped in a CDATA block for execution. The <queryString> element has no attributes or subelements other than the CDATA block containing the query. Text wrapped inside an XML CDATA block is ignored by the XML parser. As seen in the example, our query contains the > character, which could invalidate the XML block if it weren't inside a CDATA block. A CDATA block is optional if the data inside it does not break the XML structure; however, for consistency and maintainability, we chose to use it wherever it is allowed in the example.

The <field> element defines fields that are populated at runtime when the report is filled. Field names must match the column names or aliases of the corresponding columns in the SQL query. The class attribute of the <field> element is optional; its default value is java.lang.String. Even though all of our fields are strings, we still added the class attribute for clarity. In this example, the syntax to obtain the value of a report field is $F{field_name}, where field_name is the name of the field as defined. The next element that we'll discuss is the <textField> element.
Text fields are used to display dynamic textual data in reports. In this case, we are using them to display the value of the fields. Like all subelements of <band>, text fields must contain a <reportElement> subelement indicating the text field's height, width, and x, y coordinates within the band. The data displayed in a text field is defined by the <textFieldExpression> subelement of <textField>. The <textFieldExpression> element has a single subelement, which is the report expression that will be displayed by the text field, wrapped in an XML CDATA block. In this example, each text field displays the value of a field, so the expression inside the <textFieldExpression> element uses the field syntax $F{field_name}, as explained before.

Compiling a report containing a query is no different from compiling a report without a query. It can be done programmatically or by using the custom JasperReports jrc ANT task.

Generating the report

As we have mentioned previously, in JasperReports terminology, the action of generating a report from a binary report template is called filling the report. To fill a report containing an embedded database query, we must pass a database connection object to the report. The following example illustrates this process:

package net.ensode.jasperbook;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.HashMap;

import net.sf.jasperreports.engine.JRException;
import net.sf.jasperreports.engine.JasperFillManager;

public class DbReportFill
{
  Connection connection;

  public void generateReport()
  {
    try
    {
      Class.forName("com.mysql.jdbc.Driver");
      connection = DriverManager.getConnection(
          "jdbc:mysql://localhost:3306/flightstats?user=user&password=secret");
      System.out.println("Filling report...");
      JasperFillManager.fillReportToFile("reports/DbReport.jasper",
          new HashMap(), connection);
      System.out.println("Done!");
      connection.close();
    }
    catch (JRException e)
    {
      e.printStackTrace();
    }
    catch (ClassNotFoundException e)
    {
      e.printStackTrace();
    }
    catch (SQLException e)
    {
      e.printStackTrace();
    }
  }

  public static void main(String[] args)
  {
    new DbReportFill().generateReport();
  }
}

As seen in this example, a database connection is passed to the report in the form of a java.sql.Connection object as the last parameter of the static JasperFillManager.fillReportToFile() method. The first two parameters are as follows: a string used to indicate the location of the binary report template (or jasper file), and an instance of a class implementing the java.util.Map interface, used for passing additional parameters to the report. As we don't need to pass any additional parameters for this report, we used an empty HashMap. There are six overloaded versions of the JasperFillManager.fillReportToFile() method, three of which take a connection object as a parameter.

For simplicity, our examples open and close database connections every time they are executed. It is usually a better idea to use a connection pool, as connection pools increase performance considerably. Most Java EE application servers come with connection pooling functionality, and the commons-dbcp component of Apache Commons includes utility classes for adding connection pooling capabilities to applications that do not make use of an application server.

After executing the above example, a new report, or JRPRINT file, is saved to disk. We can view it by using the JasperViewer utility included with JasperReports.
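To tie the whole pipeline together, here is a hedged sketch of the compile-fill-view round trip (the manager classes are standard JasperReports API; the file paths are the ones used in this article's examples):

import java.sql.Connection;
import java.util.HashMap;
import net.sf.jasperreports.engine.JasperCompileManager;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.view.JasperViewer;

// inside a method, with 'connection' obtained as in DbReportFill:
JasperCompileManager.compileReportToFile("reports/DbReport.jrxml",
    "reports/DbReport.jasper");
JasperFillManager.fillReportToFile("reports/DbReport.jasper",
    new HashMap(), connection);
// fillReportToFile() writes reports/DbReport.jrprint next to the jasper file
JasperViewer.viewReport("reports/DbReport.jrprint", false);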
In this example, we created the report and immediately saved it to disk. The JasperFillManager class also contains methods to send a report to an output stream or to store it in memory in the form of a JasperPrint object. Storing the compiled report in a JasperPrint object allows us to manipulate the report further in our code. We could, for example, export it to PDF or another format. The method used to store a report in a JasperPrint object is JasperFillManager.fillReport(), and the method used for sending the report to an output stream is JasperFillManager.fillReportToStream(). These two methods accept the same parameters as JasperFillManager.fillReportToFile() and are trivial to use once we are familiar with that method. Refer to the JasperReports API for details.

In the next example, we will fill our report and immediately export it to PDF by taking advantage of the net.sf.jasperreports.engine.JasperRunManager.runReportToPdfStream() method.

package net.ensode.jasperbook;

import java.io.IOException;
import java.io.InputStream;
import java.io.PrintWriter;
import java.io.StringWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.util.HashMap;

import javax.servlet.ServletException;
import javax.servlet.ServletOutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import net.sf.jasperreports.engine.JasperRunManager;

public class DbReportServlet extends HttpServlet
{
  protected void doGet(HttpServletRequest request,
      HttpServletResponse response) throws ServletException, IOException
  {
    Connection connection;

    response.setContentType("application/pdf");

    ServletOutputStream servletOutputStream = response.getOutputStream();
    InputStream reportStream = getServletConfig().getServletContext()
        .getResourceAsStream("/reports/DbReport.jasper");

    try
    {
      Class.forName("com.mysql.jdbc.Driver");
      connection = DriverManager.getConnection(
          "jdbc:mysql://localhost:3306/flightstats?user=dbUser&password=secret");

      JasperRunManager.runReportToPdfStream(reportStream,
          servletOutputStream, new HashMap(), connection);
      connection.close();
      servletOutputStream.flush();
      servletOutputStream.close();
    }
    catch (Exception e)
    {
      // display stack trace in the browser
      StringWriter stringWriter = new StringWriter();
      PrintWriter printWriter = new PrintWriter(stringWriter);
      e.printStackTrace(printWriter);
      response.setContentType("text/plain");
      response.getOutputStream().print(stringWriter.toString());
    }
  }
}

The only difference between generating static and dynamic reports is that, for dynamic reports, we pass a connection to the report for generating a database report. After deploying this servlet and pointing the browser to its URL, we should see a screen similar to the following screenshot:
Ubuntu User Interface Tweaks

Packt
01 Oct 2009
10 min read
I have spent time on all of the major desktop operating systems, and Linux is by far the most customizable. The GNOME Desktop environment, the default environment of Ubuntu and many other Linux distributions, is a very simple yet very customizable interface. I have spent a lot of time around a lot of Linux users, and rarely do I find two desktops that look the same. Whether it is a simple desktop background customization or a much more complex UI alteration, GNOME allows you to make your desktop your own.

Just like any other environment that you're going to find yourself in for an extended period of time, you're going to want to make it your own. The GNOME Desktop offers this ability in a number of ways. First I'll cover perhaps the more obvious methods, and then I'll move to the more complex. As mentioned in the introduction, by the end of this article you'll know how to automate (script) the customization of your desktop down to the very last detail. This is perfect for those who find themselves reinstalling their machines on a regular basis.

Appearance

GNOME offers a number of basic customizations through the "Appearance Preferences" tool. To use it, simply navigate to:

System > Preferences > Appearance

You'll find that the main screen allows you to change your basic theme. The theme includes the environment color scheme, icon set, and window bordering. This is often one of the very first things that users will change on a new installation. Of the default theme selections I generally prefer "Clearlooks" over the default Ubuntu brown color.

The next tab allows you to set your background. This is the graphic, color, or gradient that you want to appear on your desktop. This is also a very common customization. More often than not, users will find third-party graphics for this section. A great place to find user-generated desktop content is the http://gnome-look.org website, which is dedicated to user-generated artwork for the GNOME and Ubuntu desktop.

On the third tab you'll find Fonts. I have found that fonts play a very important role in the look of your desktop. For the longest time I didn't bother with customizing my fonts, but after being introduced to a few that I like, it is a must-have in my desktop customization list. My personal preference is to use the "Droid Sans" font, at 10pt, for all settings. I think this is a very clean, crisp font design that really changes the desktop look and feel. If you'd like to try out this font set you'll have to install it, which can be done using:

sudo aptitude install ttf-droid

Another noticeable customization on the Fonts tab is the Rendering option. For laptops you'll definitely want to select the bottom-right option, "Subpixel Smoothing (LCDs)". You should notice a change right away when you check the box.

Finally, the Details button on the Fonts tab can make great improvements to the overall look. This is where you can set your font resolution. I highly suggest setting this value to 96 dots per inch (dpi). Recent versions of Ubuntu try to dynamically detect the preferred dpi; unfortunately I haven't had the best of luck with this new feature, so I've continued to set it manually. I think you'll notice a change if your system is on something other than 96. Setting the font to "Droid Sans" 10pt and the resolution to 96 dpi is one of the biggest visual changes that I make to my system!
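If you're curious what dpi your display is actually reporting to X before you override it, you can ask the X server directly. A quick check, assuming the xdpyinfo utility (packaged as x11-utils on Ubuntu) is installed:

# ask the X server what resolution it detected for your screen
xdpyinfo | grep resolution

The output should contain a line such as "resolution: 96x96 dots per inch"; if the reported value is far from 96, forcing it to 96 as suggested above will make a very visible difference.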
The final tab in the Appearance tool is the Interface. This tab allows you to customize simple things like whether your Applications menu should display icons. Personally, I have found that I like the default settings, but I would suggest trying a few customizations and finding out what you like.

If you've followed the suggestions so far, I'm sure your desktop already looks a lot different than it did out of the box. By changing the theme, desktop background, font, and dpi you may have already made drastic changes. I'd also like to share some of the additional changes that I make, which will help demonstrate some of the more advanced, little-known features of the GNOME desktop.

gconf-editor

A default Ubuntu system comes with a behind-the-scenes tool called the "gconf-editor". This is basically a graphical editor for your entire GNOME configuration settings. At first use it can be a bit confusing, but once you figure out where and how to find your preferred settings it becomes much easier. To launch the gconf-editor, press the key combination ALT-F2 and type:

gconf-editor

I have heard people compare this tool to Microsoft's registry editor, but I assure you that it is far less complicated! It simply stores GNOME configuration and application settings. It even includes the changes that you made above! Anytime you make a change to the graphical interface it gets stored, and this tool is a graphical way to view those changes.

Let's change something else, this time using the gconf-editor. Another of my favorite interface customizations involves the panels. By default you have two panels, one at the top and one at the bottom of your screen. I prefer to have both panels at the top of my screen, and I like them to be a bit smaller than they are out of the box. Here is how we would make that change using the gconf-editor.

Navigate to Edit > Search and search for bottom_panel or top_panel. I will start with bottom_panel. You should come up with a few results, the first one being /apps/panel/toplevels/bottom_panel_screen0. You can now customize the color, size, auto-hide feature, and much more of your panel. If you find orientation, double-click the entry, and change the value to "top"; you'll find that your panel instantly moves to the top of the screen. You may want to alter the size entry while you're in there as well. Make a note of the Key name that you see for each item. These will come in handy a little bit later.

A few other settings that you might find interesting are the Nautilus desktop settings, such as:

computer_icon_visible
home_icon_visible
network_icon_visible
trash_icon_visible
volumes_visible

These are simple check-box settings, activating or deactivating an option on click. Basically, these allow you to toggle the computer, home, network, or trash icons on your desktop. I prefer to keep each of these turned off. The only one that I do like to keep on is volumes_visible. Try this out yourself and see what you prefer.
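Every key you can browse in gconf-editor can also be read from a terminal, which is handy for double-checking a value before you change it. A small sketch using the desktop icon keys from above:

# read a single key's current value
gconftool-2 --get /apps/nautilus/desktop/trash_icon_visible

# recursively list every key and value under the Nautilus desktop settings
gconftool-2 -R /apps/nautilus/desktop

This becomes even more useful in the next section, where we use the same tool to write values as well.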
Automation

Earlier I mentioned that you'll want to make note of the Key name for the settings that you're playing with. It is these names that allow us to automate, or script, the customization of our desktop environment. After putting a little bit of time into finding the key names for each of the customizations that I like, I am now able to completely customize every aspect of my desktop by running a simple script! Let me give you a few examples.

Above we found that the key name for the bottom panel was:

/apps/panel/toplevels/bottom_panel_screen0

The key name specifically for the orientation was:

/apps/panel/toplevels/bottom_panel_screen0/orientation

The value we changed was top or bottom. We can now make this change from the command line by typing:

gconftool-2 -s --type string /apps/panel/toplevels/bottom_panel_screen0/orientation top

Let us see a few more examples. These will change the font settings for each entry that we saw in the Appearance tool:

gconftool-2 -s --type string /apps/nautilus/preferences/desktop_font "Droid Sans 10"
gconftool-2 -s --type string /apps/metacity/general/titlebar_font "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/monospace_font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/document_font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/font_name "Droid Sans 10"

You may or may not have made these changes manually, but just think about the time you could save on your next Ubuntu installation by pasting in these five commands instead! I will warn you though: once you start making a list of gconftool-2 commands, it's hard to stop. Considering how simple it is to make environment changes using simple commands, why not list everything? I'd like to share the script that I use to make my preferred changes. You'll likely want to edit the values to match your preferences.

#!/bin/bash
#
# customize GNOME interface
# (christer@rootcertified.com)
#
gconftool-2 -s --type string /apps/nautilus/preferences/desktop_font "Droid Sans 10"
gconftool-2 -s --type string /apps/metacity/general/titlebar_font "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/monospace_font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/document_font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/font_name "Droid Sans 10"
gconftool-2 -s --type string /desktop/gnome/interface/icon_theme "gnome-brave"
gconftool-2 -s --type bool /apps/nautilus/preferences/always_use_browser true
gconftool-2 -s --type bool /apps/nautilus/desktop/computer_icon_visible false
gconftool-2 -s --type bool /apps/nautilus/desktop/home_icon_visible false
gconftool-2 -s --type bool /apps/nautilus/desktop/network_icon_visible false
gconftool-2 -s --type bool /apps/nautilus/desktop/trash_icon_visible false
gconftool-2 -s --type bool /apps/nautilus/desktop/volumes_visible true
gconftool-2 -s --type bool /apps/nautilus-open-terminal/desktop_opens_home_dir true
gconftool-2 -s --type bool /apps/gnome-do/preferences/Do/Platform/Linux/TrayIconPreferences/StatusIconVisible true
gconftool-2 -s --type bool /apps/gnome-do/preferences/Do/CorePreferences/QuietStart true
gconftool-2 -s --type bool /apps/gnome-terminal/profiles/Default/default_show_menubar false
gconftool-2 -s --type string /apps/gnome-terminal/profiles/Default/font "Droid Sans Mono 10"
gconftool-2 -s --type string /apps/gnome-terminal/profiles/Default/scrollbar_position "hidden"
gconftool-2 -s --type string /desktop/gnome/interface/gtk_theme "Shiki-Brave"
gconftool-2 -s --type int /apps/panel/toplevels/bottom_panel_screen0/size 23
gconftool-2 -s --type int /apps/panel/toplevels/top_panel_screen0/size 23
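One possible way to run it, assuming you've saved the script as "gnome-setup" in the current directory:

# make the script executable, then apply every setting in one shot
chmod +x gnome-setup
./gnome-setup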
Summary

By saving the above script into a file called "gnome-setup" and running it after a fresh installation, I'm able to update my theme, fonts, visible and non-visible icons, gnome-do preferences, gnome-terminal preferences, and much more within seconds. My desktop actually feels like my desktop again! I find that maintaining a simple file like this greatly eases the customization of my desktop environment and lets me focus on getting things done. I no longer spend an hour tweaking each little setting to make my machine my home again. I install, run my script, and get to work!

If you have read this article you may be interested to view:

Compiling and Running Handbrake in Ubuntu
Control of File Types in Ubuntu
Ubuntu 9.10: How To Upgrade
Install GNOME-Shell on Ubuntu 9.10 "Karmic Koala"
Five Years of Ubuntu
What's New In Ubuntu 9.10 "Karmic Koala"
Create a Local Ubuntu Repository using Apt-Mirror and Apt-Cacher