
How-To Tutorials - Programming

1083 Articles

Using Business Rules to Define Decision Points in Oracle SOA Suite: Part 1

Packt
28 Oct 2009
11 min read
The advantage of separating out decision points as external rules is that we not only ensure that each rule is used in a consistent fashion, but in addition make it simpler and quicker to modify; that is, we only have to modify a rule once and can do this with almost immediate effect, thus increasing the agility of our solution.

Business Rule concepts

Before we implement our first rule, let's briefly introduce the key components which make up a Business Rule. These are:

- Facts: Represent the data or business objects that rules are applied to.
- Rules: A rule consists of two parts: an IF part, which consists of one or more tests to be applied to fact(s), and a THEN part, which lists the actions to be carried out should the test evaluate to true.
- Rule Set: As the name implies, it is just a set of one or more related rules that are designed to work together.
- Dictionary: A dictionary is the container of all components that make up a business rule; it holds all the facts, rule sets, and rules for a business rule. In addition, a dictionary may also contain functions, variables, and constraints. We will introduce these in more detail later in this article.

To execute a business rule, you submit one or more facts to the rules engine. It will apply the rules to the facts; that is, each fact will be tested against the IF part of the rule and, if it evaluates to true, the specified actions will be performed for that fact. This may result in the creation of new facts or the modification of existing facts (which may result in further rule evaluation).

Leave approval rule

To begin with, we will write a simple rule to automatically approve a leave request that is of type Vacation and only for 1 day's duration. A pretty trivial example, but once we've done this we will look at how to extend this rule to handle more complex examples.

Using the Rule Author

In SOA Suite 10.1.3 you use the Rule Author, which is a browser-based interface for defining your business rules. To launch the Rule Author within your browser go to the following URL: http://<host name>:<port number>/ruleauthor/ This will bring up the Rule Author Log In screen. Here you need to log in as a user that belongs to the rule-administrators role. You can either log in as the user oc4jadmin (default password Welcome1), which automatically belongs to this group, or define your own user.

Creating a Rule Repository

Within Oracle Business Rules, all of our definitions (that is, facts, constraints, variables, and functions) and rule sets are defined within a dictionary. A dictionary is held within a repository. A repository can contain multiple dictionaries and can also contain multiple versions of a dictionary. So, before we can write any rules, we need to either connect to an existing repository, or create a new one. Oracle Business Rules supports two types of repository—File based and WebDAV. For simplicity we will use a File based repository, though typically in production you want to use a WebDAV based repository, as this makes it simpler to share rules between multiple BPEL processes. WebDAV is short for Web-based Distributed Authoring and Versioning. It is an extension to HTTP that allows users to collaboratively edit and manage files (that is, business rules in our case) over the Web.
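Before we move on to creating the repository and dictionary in the Rule Author, here is the decision logic we are aiming for, expressed as a plain-Java sketch. The LeaveFact class is only an illustrative stand-in for the leave request fact defined later; it is neither the generated fact class nor Oracle's rule syntax, just a way to see the IF and THEN parts side by side.

public class LeaveApprovalSketch {

    // Simplified stand-in for a leave request fact -- illustrative only.
    static class LeaveFact {
        String leaveType;
        int durationInDays;
        String requestStatus = "Requested";
    }

    // IF the request is a Vacation of exactly one day THEN approve it.
    static void applyLeaveApprovalRule(LeaveFact fact) {
        if ("Vacation".equals(fact.leaveType) && fact.durationInDays == 1) {
            fact.requestStatus = "Approved";   // the THEN part acts on the fact
        }
    }

    public static void main(String[] args) {
        LeaveFact fact = new LeaveFact();
        fact.leaveType = "Vacation";
        fact.durationInDays = 1;
        applyLeaveApprovalRule(fact);
        System.out.println(fact.requestStatus);   // prints Approved
    }
}

In the rules engine the same test is expressed declaratively against a fact type rather than coded by hand, which is what the remaining sections set up.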
To create a File based repository click on the Repository tab within the Rule Author, this will display the Repository Connect screen as shown in the following screenshot: From here we can either connect to an existing repository (WebDAV or File based) or create and connect to a new file-based repository. For our purposes, select a Repository Type of File, and specify the full path name of where you want to create the repository and then click Create. To use a WebDAV repository, you will first need to create this externally from the Rule Author. Details on how to do this can be found in Appendix B of the Oracle Business Rules User Guide (http://download.oracle.com/docs/cd/B25221_04/web.1013/b15986/toc.htm). From a development perspective it can often be more convenient to develop your initial business rules in a file repository. Once complete, you can then export the rules from the file repository and import them into a WebDAV repository. Creating a dictionary Once we have connected to a repository, the next step is to create a dictionary. Click on the Create tab, circled in the following screenshot, and this will bring up the Create Dictionary screen. Enter a New Dictionary Name (for example LeaveApproval) and click Create. This will create and load the dictionary so it's ready to use. Once you have created a dictionary, then next time you connect to the repository you will select the Load tab (next to the Create tab) to load it. Defining facts Before we can define any rules, we first need to define the facts that the rules will be applied to. Click on the Definitions tab, this will bring up the page which summarizes all the facts defined within the current dictionary. You will see from this that the rule engine supports three types of facts: Java Facts, XML Facts, and RL Facts. The type of fact that you want to use really depends on the context in which you will be using the rules engine. For example, if you are calling the rule engine from Java, then you would work with Java Facts as this provides a more integrated way of combining the two components. As we are using the rule engine with BPEL then it makes sense to use XML Facts. Creating XML Facts The Rule Author uses XML Schemas to generate JAXB 1.0 classes, which are then imported to generate the corresponding XML Facts. For our example we will use the Leave Request schema, shown as follows for convenience: <?xml version="1.0" encoding="windows-1252"?> <xsd:schema targetNamespace="http://schemas.packtpub.com/LeaveRequest" elementFormDefault="qualified" > <xsd:element name="leaveRequest" type="tLeaveRequest"/> <xsd:complexType name="tLeaveRequest"> <xsd:sequence> <xsd:element name="employeeId" type="xsd:string"/> <xsd:element name="fullName" type="xsd:string" /> <xsd:element name="startDate" type="xsd:date" /> <xsd:element name="endDate" type="xsd:date" /> <xsd:element name="leaveType" type="xsd:string" /> <xsd:element name="leaveReason" type="xsd:string"/> <xsd:element name="requestStatus" type="xsd:string"/> </xsd:sequence> </xsd:complexType> </xsd:schema> Using JAXB, particularly when used in conjunction with BPEL, places a number of constraints on how we define our XML Schemas, including: When defining rules, the Rule Author can only work with globally defined types. This is because it's unable to introspect the properties (i.e. attributes and elements) of global elements. Within BPEL you can only define variables based on globally defined elements. 
The net result is that any facts we want to pass from BPEL to the rules engine (or vice versa) must be defined as global elements for BPEL and have a corresponding global type definition so that we can define rules against it. The simplest way to achieve this is to define a global type (for example tLeaveRequest in the above schema) and then define a corresponding global element based on that type (for example, leaveRequest in the above schema). Even though it is perfectly acceptable with XML Schemas to use the same name for both elements and types, it presents problems for JAXB, hence the approach taken above where we have prefixed every type definition with t as in tLeaveRequest. Fortunately this approach corresponds to best practice for XML Schema design. The final point you need to be aware of is that when creating XML facts the JAXB processor maps the type xsd:decimal to java.lang.BigDecimal and xsd:integer to java.lang.BigInteger. This means you can't use the standard operators (for example >, >=, <=, and <) within your rules to compare properties of these types. To simplify your rules, within your XML Schemas use xsd:double in place of xsd:decimal and xsd:int in place of xsd:integer. To generate XML facts, from the XML Fact Summary screen (shown previously), click Create, this will display the XML Schema Selector page as shown: Here we need to specify the location of the XML Schema, this can either be an absolute path to an xsd file containing the schema or can be a URL. Next we need to specify a temporary JAXB Class Directory in which the generated JAXB classes are to be created. Finally, for the Target Package Name we can optionally specify a unique name that will be used as the Java package name for the generated classes. If we leave this blank, the package name will be automatically generated based on the target namespace of the XML Schema using the JAXB XML-to-Java mapping rules. For example, our leave request schema has a target namespace of http://schemas.packtpub.com/LeaveRequest; this will result in a package name of com.packtpub.schemas.leaverequest. Next click on Add Schema; this will cause the Rule Author to generate the JAXB classes for our schema in the specified directory. This will update the XML Fact Summary screen to show details of the generated classes; expand the class navigation tree until you can see the list of all the generated classes, as shown in the following screenshot: Select the top level node (that is com) to specify that we want to import all the generated classes. We need to import the TLeaveRequest class as this is the one we will use to implement rules and the LeaveRequest class as we need this to pass this in as a fact from BPEL to the rules engine. The ObjectFactory class is optional, but we will need this if we need to generate new LeaveRequest facts within our rule sets. Although we don't need to do this at the moment it makes sense to import it now in case we do need it in the future. Once we have selected the classes to be imported, click Import (circled in previous screenshot) to load them into the dictionary. The Rule Author will display a message to confirm that the classes have been successfully imported. If you check the list of generated JAXB classes, you will see that the imported classes are shown in bold. 
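As a rough sketch of why the ObjectFactory import is useful, the following shows how the imported classes might be used to create a new LeaveRequest fact in Java. It assumes the package name derived above (com.packtpub.schemas.leaverequest) and standard JAXB factory and setter naming conventions; the exact signatures of the generated classes may differ depending on the JAXB version used.

import javax.xml.bind.JAXBException;

public class LeaveRequestFactSketch {

    // Builds a new fact instance using the generated classes. Method names
    // follow the usual JAXB conventions and are assumptions, not the verified
    // output of the Rule Author's code generation.
    public static com.packtpub.schemas.leaverequest.LeaveRequest newFact()
            throws JAXBException {
        com.packtpub.schemas.leaverequest.ObjectFactory factory =
                new com.packtpub.schemas.leaverequest.ObjectFactory();

        com.packtpub.schemas.leaverequest.LeaveRequest fact =
                factory.createLeaveRequest();   // factory method for the global element
        fact.setEmployeeId("E100");
        fact.setFullName("Jane Doe");
        fact.setLeaveType("Vacation");
        fact.setRequestStatus("Requested");
        return fact;
    }
}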
In the process of importing your facts, the Rule Author will assign default aliases to each fact and a default alias to all properties that make up a fact, where a property corresponds to either an element or an attribute in the XML Schema. Using aliases Oracle Business Rules allows you to specify your own aliases for facts and properties in order to define more business friendly names which can then be used when writing rules. For XML facts if you have followed standard naming conventions when defining your XML Schemas, we typically find that the default aliases are clear enough and that if you start defining aliases it can actually cause more confusion unless applied consistently across all facts. Hiding facts and properties The Rule Author lets you hide facts and properties so that they don't appear in the drop downs within the Rule Author. For facts which have a large number of properties, hiding some of these can be worth while as it can simplify the creation of rules. Another obvious use of this might be to hide all the facts based on elements, since we won't be implementing any rules directly against these. However, any facts you hide will also be hidden from BPEL, so you won't be able to pass facts of these types from BPEL to the rules engine (or vice versa). In reality, the only fact you will typically want to hide will be the ObjectFactory (as you will have one of these per XML Schema that you import). Saving the rule dictionary As you define your business rules, it makes sense to save your work at regular intervals. To save the dictionary, click on the Save Dictionary link in the top right hand corner of the Rule Author page. This will bring up the Save Dictionary page. Here either click on the Save button to update the current version of the dictionary with your changes or, if you want to save the dictionary as a new version or under a new dictionary name, then click on the Save As link and amend the dictionary name and version as appropriate.


Understanding Business Activity Monitoring in Oracle SOA Suite

Packt
28 Oct 2009
14 min read
How BAM differs from traditional business intelligence

The Oracle SOA Suite stores the state of all processes in a database, in documented schemas, so why do we need yet another reporting tool to provide insight into our processes and services? In other words, how does BAM differ from traditional BI (Business Intelligence)? In traditional BI, reports are generated and delivered either on a scheduled basis or in response to a user request. Any changes to the information will not be reflected until the next scheduled run or until a user requests the report to be rerun. BAM is an event-driven reporting tool that generates alerts and reports in real time, based on a continuously changing data stream, some of whose data may not be in the database. As events occur in the services and processes the business has defined, they are captured by BAM, and reports and views are updated in real time. Where necessary these updated reports are delivered to users. This delivery to users can take several forms. The best known is the dashboard on users' desktops that will automatically update without any need for the user to refresh the screen. There are also other means to deliver reports to the end user, including sending them via a text message or an email. Traditional reporting tools such as Oracle Reports and Oracle Discoverer, as well as Oracle's latest Business Intelligence Suite, can be used to meet some real-time reporting needs, but they do not provide the event-driven reporting that gives the business a continuously updating view of the current business situation.

Event Driven Architecture

Event Driven Architecture (EDA) is about building business solutions around responsiveness to events. Events may be simple triggers, such as a stock out event, or they may be more complex triggers, such as the calculations to realize that a stock out will occur in three days. An Event Driven Architecture will often take a number of simple events and then combine them through a complex event processing sequence to generate complex events that could not have been raised without aggregation of several simpler events.

Oracle BAM scenarios

Oracle Business Activity Monitoring is typically used to monitor two distinct types of real-time data. Firstly, it may be used to monitor the overall state of processes in the business. For example, it may be used to track how many auctions are currently running, how many have bids on them, and how many have completed in the last 24 hours (or other time periods). Secondly, it may be used to track Key Performance Indicators, or KPIs, in real time. For example, it may be used to provide a real-time updating dashboard to a seller to show the current total value of all the seller's auctions and to track this against an expected target. In the first case, we are interested in how business processes are progressing and are using BAM to identify bottlenecks and failure points within those processes. Bottlenecks can be identified by too much time being spent on given steps in the process. BAM allows us to compute the time taken between two points in a process, such as the time between order placement and shipping, and provide real-time feedback on those times. Similarly, BAM can be used to track the percentage drop-out rate between steps in a sales process, allowing the business to take appropriate action. In the second case, our interest is in some aggregate number, such as our total liabilities should we win all the auctions we are bidding on.
This requires us to aggregate results from many events, possibly performing some kind of calculation on them to provide us with a single KPI that gives an indication to the business of how things are going. BAM allows us to continuously update this number in real time on a dashboard without the need for continued polling. It also allows us to trigger alerts, perhaps through email or SMS, to notify an individual when a threshold is breached. In both cases the reports delivered can be customized based on the individual receiving the report.

BAM architecture

It may seem odd to have a section on architecture in the middle of an article about how to effectively use BAM, but key to successful utilization of BAM is an understanding of how the different tiers relate to each other.

Logical view

The following diagram represents a logical view of how BAM operates. Events are acquired from one or more sources through event acquisition and then normalized, correlated, and stored in event storage (generally a memory area in BAM that is backed up to disc). The report cache generates reports based on events in storage and then delivers those reports, together with real-time updates, through the report delivery layer. Event processing is also performed on events in storage, and when defined conditions are met, alerts will be delivered through the alert delivery service.

Physical view

To better understand the physical view of the architecture of BAM, we have divided this section into four parts. Let us discuss these in detail.

Capture

This logical view maps onto the physical BAM components shown in the following diagram. Data acquisition in the SOA Suite is handled by sensors in BPEL and ESB. BAM can also receive events from JMS message queues and access data in databases (useful for historical comparison). For complex data formats, or for other data sources, Oracle Data Integrator (ODI, a separate product from the SOA Suite) is recommended by Oracle. Although potentially less efficient and more work than running ODI, it is also possible to use adapters to acquire data from multiple sources and feed it into BAM through ESB or BPEL. At the data capture level we need to think of the data items that we can provide to feed the reports and alerts that we desire to generate. We must consider the sources of that data and the best way to load it into BAM.

Store

Once the data is captured, it is then stored in a normalized form in the Active Data Cache (ADC). This storage facility has the ability to do simple correlation based on fields within the data, and multiple data items received from the acquisition layer may update just a single object in the data cache. For example, the state of a given BPEL process instance may be represented by a single object in the ADC, and all updates to that process state will just update that single data item rather than creating multiple data items.

Process

Reports are run based on user demand. Once a report is run it will update the user's screen on a real-time basis. Where multiple users are accessing the same report, only one instance of the report is maintained by the report server. As events are captured and stored in real time, the report engine will continuously monitor them for any changes that need to be made to those reports which are currently active. When changes are detected that impact active reports, then the appropriate report will be updated in memory and the updates sent to the user screen.
In addition to the event processing required to correctly insert and update items in the ADC, there is also a requirement to monitor items in the ADC for events that require some sort of action to be taken. This is the job of the event processor. This will monitor data in the ADC to see if registered thresholds on values have been exceeded or if certain time-outs have expired. The event processor will often need to perform calculations across multiple data items to do this. Deliver Delivery of reports takes place in two ways. First, users request reports to be delivered to their desktop by selecting views within BAM. These reports are delivered as HTML pages within a browser and are updated whenever the underlying data used in the report changes. The second approach is that reports are sent out as a result of events being triggered by the Event Processing Engine. In the latter case, the report may be delivered by email, SMS, or voice messaging using the notifications service. A final option available for these event generated reports is to invoke a web service to take some sort of automated action. Closing the loop While monitoring what is happening is all very laudable, it is only of benefit if we actually do something about what we are monitoring. BAM provides the real-time monitoring ability very well but it also provides the facility to invoke other services to respond to undesirable events such as stock outs. The ability to invoke external services is crucial to the concept of a closed loop control environment where as a result of monitoring we are able to reach back into the processes and either alter their execution or start new ones. For example when a stock out or low stock event is raised then the message centre could invoke a web service requesting a supplier to send more stock to replenish inventory. Placing this kind of feedback mechanism in BAM allows us to trigger events across multiple applications and locations in a way that may not be possible within a single application or process. For example, in response to a stock out, instead of requesting our supplier to provide more stock, we may be monitoring stock levels in independent systems and, based on stock levels elsewhere, may redirect stock from one location to another. BAM platform anomaly In 10g SOA Suite, BAM runs only as a Windows application. Unlike the rest of SOA Suite, it does not run on a JEE Application Server and it can only run on the Windows platform. In the next release, 11g, BAM will be provided as a JEE application that can run on a number of application servers and operating systems. User interface Development in Oracle BAM is done through a web-based user interface. This user interface gives access to four different applications that allow you to interact with different parts of BAM. These are: Active Viewer for giving access to reports; this relates to the deliver stage for user requested reports. Active Studio for building reports; this relates to the 'process' stage for creating reports. Architect for setting up both inbound and outbound events. Data elements are defined here as data sources. Alerts are also configured here. This covers setting up, acquire and store stages as well as the deliver stage for alerts. Administrator for managing users and roles as well as defining the types of message sources. We will not examine the applications individually but will take a task-focused look at how to use them as part of providing some specific reports. 
Monitoring process state Now that we have examined how BAM is constructed, let us use this knowledge to construct some simple dashboards that track the state of a business process. We will instrument a simple version of an auction process. The process is shown in the following figure: An auction is started and then bids are placed until the time runs out at which point the auction is completed. This is modelled in BPEL. This process has three distinct states: Started Bid received Completed We are interested in the number of auctions in each state as well as the total value of auctions in progress. One needs to follow these steps to build the dashboard: Define our data within the Active Data Cache Create sensors in BPEL and map to data in the ADC Create suitable reports Run the reports Defining data objects Data in BAM is stored in data objects. Individual data objects contain the information that is reported in BAM dashboards and may be updated by multiple events. Generally BAM will report against aggregations of objects, but there is also the ability for reports to drill down into individual data objects. Before defining our data objects let's group them into an Auction folder so they are easy to find. To do this we use the BAM Architect application and select Data Objects which gives us the following screen: We select Create subfolder to create the folder and give it a name Auction. We then select Create folder to actually create the folder and we get a confirmation message to tell us that the folder was created. Notice that once created, the folder also appears in the Folders window on the left-hand side of the screen. Now we have our folder we can create a data object. Again we select Data Objects from the drop-down menu. To define the data objects that are to be stored in our Active Data Cache, we open the Auction folder if it is not already open and selectCreate Data Object. If we don't select the Auction folder then we pick it later when filling in the details of the data object. We need to give our object a unique name within the folder and optionally provide it with a tip text that helps explain what the object does when the mouse is moved over it in object listings. Having named our object we can now create the data fields by selecting Add a field. When adding fields we need to provide a name and type as well as indicating if they must contain data; the default Nullable does not require a field to be populated. We may also optionally indicate if a field should be public "available for display" and what if any tool tip text it should have. Once all the data fields have been defined then we can click Create Data Object to actually create the object as we have defined it. We are then presented with a confirmation screen that the object has been created. Grouping data into hierarchies When creating a data object it is possible to specify Dimensions for the object. A dimension is based on one or more fields within the object. A given field can only participate in one dimension. This gives the ability to group the object by the fields in the given dimension. If multiple fields are selected for a single dimension then they can be layered into a hierarchy, for example to allow analysis by country, region, and city. In this case all three elements would be selected into a single dimension, perhaps called geography. Within geography a hierarchy could be set up with country at the top, region next, and finally city at the bottom, allowing drill down to occur in views. 
Just as a data object can have multiple dimensions, a dimension can also have multiple hierarchies.

A digression on populating data object fields

In the previous discussion, we mentioned the Nullable attribute that can be attached to fields. This is very important, as we do not expect to populate all or even most of the fields in a data object at one moment in time. Do not confuse data objects with the low-level events that are used to populate them. Data objects in BAM do not have a one-to-one correspondence with the low-level events that populate them. In our auction example there will be just one auction object for every auction. However, there will be at least two and usually more messages for every auction: one message for the auction starting, another for the auction completing, and additional messages for each bid received. These messages will all populate, or in some cases overwrite, different parts of the auction data object. The table below shows how the three messages populate different parts of the data object.

Message          | Auction ID | State    | Highest bid | Reserve  | Expires  | Seller   | Highest bidder
Auction Started  | Inserted   | Inserted | Inserted    | Inserted | Inserted | Inserted |
Bid Received     |            | Updated  | Updated     |          |          |          | Updated
Auction Finished |            | Updated  |             |          |          |          |

Summary

In this article we have explored how Business Activity Monitoring differs from, and is complementary to, more traditional Business Intelligence solutions such as Oracle Reports and Business Objects. We have explored how BAM can allow the business to monitor the state of business targets and Key Performance Indicators, such as the current most popular products in a retail environment or the current time taken to serve customers in a service environment.


Enabling Spring Faces support

Packt
28 Oct 2009
9 min read
The main focus of the Spring Web Flow Framework is to deliver the infrastructure to describe the page flow of a web application. The flow itself is a very important element of a web application, because it describes its structure, particularly the structure of the implemented business use cases. But besides the flow, which is only in the background, the user of your application is interested in the Graphical User Interface (GUI). Therefore, we need a solution for how to provide a rich user interface to the users. One framework which offers components is JavaServer Faces (JSF). With the release of Spring Web Flow 2, an integration module to connect these two technologies, called Spring Faces, has been introduced. This article is not an introduction to the JavaServer Faces technology; it only describes the integration of Spring Web Flow 2 with JSF. If you have never previously worked with JSF, please refer to the JSF reference to gain knowledge about the essential concepts of JavaServer Faces.

JavaServer Faces (JSF)—a brief introduction

The JavaServer Faces (JSF) technology is a web application framework with the goal of making the development of user interfaces for a web application (based on Java EE) easier. JSF uses a component-based approach with its own lifecycle model, instead of the request-driven approach used by traditional MVC web frameworks. Version 1.0 of JSF is specified in JSR (Java Specification Request) 127 (http://jcp.org/en/jsr/detail?id=127).

To use the Spring Faces module, you have to add some configuration to your application. The diagram below depicts the individual configuration blocks. These blocks are described in this article. The first step in the configuration is to configure the JSF framework itself. That is done in the deployment descriptor of the web application—web.xml. The servlet has to be loaded at the startup of the application. This is done with the <load-on-startup>1</load-on-startup> element.

<!-- Initialization of the JSF implementation. The Servlet is not used at runtime -->
<servlet>
  <servlet-name>Faces Servlet</servlet-name>
  <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>Faces Servlet</servlet-name>
  <url-pattern>*.faces</url-pattern>
</servlet-mapping>

When working with JavaServer Faces, there are two important classes: javax.faces.webapp.FacesServlet and javax.faces.context.FacesContext. You can think of FacesServlet as the core base of each JSF application. Sometimes that servlet is called an infrastructure servlet. It is important to mention that each JSF application in one web container has its own instance of the FacesServlet class. This means that an infrastructure servlet cannot be shared between many web applications on the same JEE web container. FacesContext is the data container which encapsulates all information that is necessary around the current request. For the usage of Spring Faces, it is important to know that FacesServlet is only used to instantiate the framework; it is not used further inside Spring Faces.

To be able to use the components from the Spring Faces library, it's required to use Facelets instead of JSP. Therefore, we have to configure that mechanism. If you are interested in reading more about the Facelets technology, visit the Facelets homepage from java.net at the following URL: https://facelets.dev.java.net.
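As a brief aside before the Facelets configuration details: the FacesContext described above is the object that application code most often touches directly. The following is a minimal sketch using only the standard JSF API (FacesContext.getCurrentInstance()); it is not specific to Spring Faces, and the parameter name and message text are made up for illustration.

import javax.faces.application.FacesMessage;
import javax.faces.context.FacesContext;

public class CurrentRequestInfo {

    // Obtains the FacesContext for the request currently being processed,
    // reads a request parameter, and queues a message for the page.
    public static void recordGreeting() {
        FacesContext context = FacesContext.getCurrentInstance();

        // Cast is only needed on JSF 1.1's untyped map; "name" is an example parameter.
        String name = (String) context.getExternalContext()
                                      .getRequestParameterMap()
                                      .get("name");

        // A global message, rendered by an <h:messages> tag on the page.
        context.addMessage(null, new FacesMessage("Hello " + name));
    }
}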
Another good introduction to the Facelets technology is the http://www.ibm.com/developerworks/java/library/j-facelets/ article. The configuration process is done inside the deployment descriptor of your web application—web.xml. The following sample shows the configuration inside the mentioned file.

<context-param>
  <param-name>javax.faces.DEFAULT_SUFFIX</param-name>
  <param-value>.xhtml</param-value>
</context-param>

As you can see in the above code, the configuration is done with a context parameter. The name of the parameter is javax.faces.DEFAULT_SUFFIX. The value for that context parameter is .xhtml.

Inside the Facelets technology

To present the separate views inside a JSF context, you need a specific view handler technology. One of those technologies is the well-known JavaServer Pages (JSP) technology. Facelets are an alternative to JSP inside the JSF context. Instead of defining the views in JSP syntax, you use XML; the pages are created using XHTML. The Facelets technology offers the following features:

- A template mechanism, similar to the mechanism known from the Tiles framework
- The composition of components based on other components
- Custom logic tags
- Expression functions
- The possibility to use HTML for your pages; this makes it easy to create the pages and view them directly in a browser, because you don't need an application server between the steps of designing a page
- The possibility to create libraries of your components

The following sample shows an XHTML page which uses the component aliasing mechanism of the Facelets technology.

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html>
  <body>
    <form jsfc="h:form">
      <span jsfc="h:outputText" value="Welcome to our page: #{user.name}" disabled="#{empty user}" />
      <input type="text" jsfc="h:inputText" value="#{bean.theProperty}" />
      <input type="submit" jsfc="h:commandButton" value="OK" action="#{bean.doIt}" />
    </form>
  </body>
</html>

The sample code snippet above uses the mentioned expression language of the JSF technology to access the data (for example, the #{user.name} expression accesses the name property of the user instance).

What is component aliasing?

One of the mentioned features of the Facelets technology is that it is possible to view a page directly in a browser without the page running inside a JEE container environment. This is possible through the component aliasing feature. With this feature, you can use normal HTML elements, for example an input element. Additionally, you can refer to the component which is used behind the scenes with the jsfc attribute. An example of that is <input type="text" jsfc="h:inputText" value="#{bean.theProperty}" />. If you open this inside a browser, the normal input element is used. If you use it inside your application, the h:inputText element of the component library is used.

The ResourceServlet

One main part of the JSF framework is the components for the GUI. These components often consist of many files besides the class files. If you use many of these components, the problem of handling these files arises. To solve this problem, files such as JavaScript and CSS (Cascading Style Sheets) can be delivered inside the JAR archive of the component. If you deliver the files inside the JAR file, you can organize the components in one file, and therefore it is easier to deploy and maintain your component library.
Regardless of the framework you use, the result is HTML. The resources inside the HTML pages are referenced as URLs. For that, we need a way to access these resources inside the archive over the HTTP protocol. To solve that problem, there is a servlet with the name ResourceServlet (package org.springframework.js.resource). The servlet can deliver the following resources:

- Resources which are available inside the web application (for example, CSS files)
- Resources inside a JAR archive

The configuration of the servlet inside web.xml is shown below:

<servlet>
  <servlet-name>Resource Servlet</servlet-name>
  <servlet-class>org.springframework.js.resource.ResourceServlet</servlet-class>
  <load-on-startup>0</load-on-startup>
</servlet>
<servlet-mapping>
  <servlet-name>Resource Servlet</servlet-name>
  <url-pattern>/resources/*</url-pattern>
</servlet-mapping>

It is important that you use the correct url-pattern inside servlet-mapping. As you can see in the sample above, you have to use /resources/*. If a component (from the Spring Faces components) does not work, first check whether you have the correct mapping for the servlet. All resources in the context of Spring Faces should be retrieved through this servlet. The base URL is /resources.

Internals of the ResourceServlet

The ResourceServlet can only be accessed via a GET request; it implements only the GET method, so it is not possible to serve POST requests. Before we describe the separate steps, we want to show you the complete process, illustrated in the diagram below: For a better understanding, we choose an example to explain the mechanism shown in the previous diagram. Let us assume that we have registered the ResourceServlet as mentioned before, and we request a resource with the following sample URL: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css.

How to request more than one resource with one request

First, you can specify the appended parameter. The value of the parameter is the path to the resource you want to retrieve. An example of that is the following URL: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css?appended=/css/test2.css. If you want to specify more than one resource, you can use the comma delimiter inside the value for the appended parameter, for example: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css?appended=/css/test2.css,/css/test3.css. Additionally, it is possible to use the comma delimiter inside the PathInfo. For example: http://localhost:8080/flowtrac-web-jsf/resources/css/test1.css,/css/test2.css. It is important to mention that if one of the requested resources is not available, none of the requested resources is delivered. This mechanism can be used to deliver more than one CSS file in one request. From a development point of view, it can make sense to modularize your CSS files to get more maintainable CSS files. With that concept, the client gets one CSS file instead of many CSS files. From the point of view of performance optimization, it is better to have as few requests for rendering a page as possible; therefore, it makes sense to combine the CSS files of a page. Internally, the files are written in the same sequence as they are requested. To understand how a resource is addressed, we separate the sample URL into its specific parts. The example URL is a URL on a local servlet container which has an HTTP connector at port 8080.
See the following diagram for the mentioned separation: The table below describes the five sections of the URL that are shown in the previous diagram:


Data Migration Scenarios in SAP Business ONE Application- part 2

Packt
27 Oct 2009
7 min read
Advanced data migration tools: xFusion Studio

For our own projects, we have adopted a tool called xFusion. Using this tool, you gain flexibility and are able to reuse migration settings for specific project environments. The tool provides connectivity to directly extract data from applications (including QuickBooks and Peachtree). In addition, it also supports building rules for data profiling, validation, and conversions. For example, our project team participated in the development of the template for the Peachtree interface. We configured the mappings from Peachtree, and connected the data with the right fields in SAP. This was then saved as a migration template. Therefore, it would be easy and straightforward to migrate data from Peachtree to SAP in any future projects.

xFusion packs save migration knowledge

Based on the concept of establishing templates for migrations, xFusion provides preconfigured templates for the SAP Business ONE application. In xFusion, templates are called xFusion packs. Please note that these preconfigured packs may include master data packs, and also xFusion packs for transaction data. The following xFusion packs are provided for an SAP Business ONE migration:

- Administration
- Banking
- Business partner
- Finance
- HR
- Inventory and production
- Marketing documents and receipts
- MRP
- UDFs
- Services

You can see that the packs are also grouped by business object. For example, you have a group of xFusion packs for inventory and production. You can open the pack and find a group of xFusion files that contain the configuration information. If you open the inventory and production pack, a list of folders will be revealed. Each folder has a set of Excel templates and xFusion files (seen in the following screenshot). An xFusion pack essentially incorporates the configuration and data manipulation procedures required to bring data from a source into SAP. The source settings can be saved in xFusion packs so that you can reuse the knowledge with regards to data manipulation and formatting.

Data "massaging" using SQL

The key for the migration procedure is the capability to do data massaging in order to adjust formats and columns, in a step-by-step manner, based on requirements. Data manipulation is not done programmatically, but rather via a step-by-step process, where each step uses SQL statements to verify and format data. The entire process is represented visually, and thereby documents the steps required. This makes it easy to adjust settings and fine-tune them. The following applications are supported and can, therefore, be used as a source for an SAP migration (they are existing xFusion packs):

- SAP Business ONE
- Sage ACT!
- SAP
- SAP BW
- Peachtree
- QuickBooks
- Microsoft Dynamics CRM

The following is a list of supported databases:

- Oracle
- ODBC
- MySQL
- OLE DB
- SQL Server
- PostgreSQL

Working with xFusion

The workflow in xFusion starts when you open an existing xFusion pack, or create a new one. In this example, an xFusion pack for business partner migration was opened. You can see the graphical representation of the migration process in the main window (in the following screenshot). Each icon in the graphical representation represents a data manipulation and formatting step. If you click on an icon, the complete path from the data source to the icon is highlighted. Therefore, you can select the previous steps to adjust the data. The core concept is that you do not directly change the input data, but define rules to convert data from the source format to the target format.
If you open an xFusion pack for the SAP Business ONE application, the target is obviously SAP Business ONE. Therefore, you need to enter the privileges and database name so that the pack knows how to access the SAP system. In addition, the source parameters need to be provided. xFusion packs come with example Excel files. You need to select the Excel files as the relevant source. However, it is important to note that you don't need to use the Excel files. You can use any database, or other source, as long as you adjust the data format using the step-by-step process to represent the same format as provided in Excel. In xFusion, you can use the sample files that come in Excel format. The connection parameters are presented once you double-click on any of the connections listed in the Connections section as follows: It is recommended to click on Test Connection to verify the proper parameters. If all of the connections are right, you can run a migration from the source to the target by right-clicking on an icon and selecting Run Export as shown here: The progress of the export is visually documented. This way, you can verify the success. There is also a log file in the directory where the currently utilized xFusion pack resides, as shown in the following screenshot:

Tips and recommendations for your own project

Now you know all of the main migration tools and methods. If you want to select the right tool and method for your specific situation, you will see that even though there may be many templates and preconfigured packs out there, your own project potentially comes with some individual aspects. When organizing the data migration project, use the project task skeleton I provided. It is important to subdivide the required migration steps into a group of easy-to-understand steps, where data can be verified at each level. If it gets complicated, it is probably not the right way to move forward, and you need to re-think the methods and tools you are using.

Common issues

The most common issue I found in similar projects is that the data to be migrated is not entirely clean and consistent. Therefore, be sure to use a data verification procedure at each step. Don't just import data, only to find out later that the database is overloaded with data that is not right.

Recommendation

Separate the master data and the transaction data. If you don't want to lose valuable transaction data, you can establish a reporting database which will save all of the historic transactions. For example, sales history can easily be migrated to an SQL database. You can then provide access to this information from the required SAP forms using queries or Crystal Reports.

Case study

During the course of evaluating the data import features available in the SAP Business ONE application, we have already learned how to import business partner information and item data. This can easily be done using the standard SAP data import features based on Excel or text files. Using this method allows the lead, customer, and vendor data to be imported. Let's say that the Lemonade Stand enterprise has salespeople who travel to trade fairs and collect contact information. We can import the address information using the proven BP import method. But after this data is imported, what would the next step be? It would be a good idea to create and manage opportunities based on the address material. Basically, you already know how to use Excel to bring over address information. Let's enhance this concept to bring over opportunity information.
We will use xFusion to import opportunity data into the SAP Business ONE application. The basis will be the xFusion pack for opportunities. Importing sales opportunities for the Lemonade Stand The xFusion pack is open, and you can see that it is a nice and clean example without major complexity. That's how it should be, as you see here:


Developing Web Applications using JavaServer Faces: Part 2

Packt
27 Oct 2009
5 min read
JSF Validation

Earlier in this article, we discussed how the required attribute for JSF input fields allows us to easily make input fields mandatory. If a user attempts to submit a form with one or more required fields missing, an error message is automatically generated. The error message is generated by the <h:message> tag corresponding to the invalid field. The string First Name in the error message corresponds to the value of the label attribute for the field. Had we omitted the label attribute, the value of the field's id attribute would have been shown instead. As we can see, the required attribute makes it very easy to implement mandatory field functionality in our application. Recall that the age field is bound to a property of type Integer in our managed bean. If a user enters a value that is not a valid integer into this field, a validation error is automatically generated. Of course, a negative age wouldn't make much sense; however, our application validates that user input is a valid integer with essentially no effort on our part. The email address input field of our page is bound to a property of type String in our managed bean. As such, there is no built-in validation to make sure that the user enters a valid email address. In cases like this, we need to write our own custom JSF validators.

Custom JSF validators must implement the javax.faces.validator.Validator interface. This interface contains a single method named validate(). This method takes three parameters: an instance of javax.faces.context.FacesContext, an instance of javax.faces.component.UIComponent containing the JSF component we are validating, and an instance of java.lang.Object containing the user-entered value for the component. The following example illustrates a typical custom validator.

package com.ensode.jsf.validators;

import java.util.regex.Matcher;
import java.util.regex.Pattern;
import javax.faces.application.FacesMessage;
import javax.faces.component.UIComponent;
import javax.faces.component.html.HtmlInputText;
import javax.faces.context.FacesContext;
import javax.faces.validator.Validator;
import javax.faces.validator.ValidatorException;

public class EmailValidator implements Validator {

  public void validate(FacesContext facesContext, UIComponent uIComponent,
      Object value) throws ValidatorException {
    Pattern pattern = Pattern.compile("\\w+@\\w+\\.\\w+");
    Matcher matcher = pattern.matcher((CharSequence) value);
    HtmlInputText htmlInputText = (HtmlInputText) uIComponent;
    String label;
    if (htmlInputText.getLabel() == null
        || htmlInputText.getLabel().trim().equals("")) {
      label = htmlInputText.getId();
    } else {
      label = htmlInputText.getLabel();
    }
    if (!matcher.matches()) {
      FacesMessage facesMessage = new FacesMessage(label
          + ": not a valid email address");
      throw new ValidatorException(facesMessage);
    }
  }
}

In our example, the validate() method does a regular expression match against the value of the JSF component we are validating. If the value matches the expression, validation succeeds; otherwise, validation fails and an instance of javax.faces.validator.ValidatorException is thrown. The primary purpose of our custom validator is to illustrate how to write custom JSF validations, and not to create a foolproof email address validator. There may be valid email addresses that don't validate using our validator. The constructor of ValidatorException takes an instance of javax.faces.application.FacesMessage as a parameter. This object is used to display the error message on the page when validation fails.
The message to display is passed as a String to the constructor of FacesMessage. In our example, if the label attribute of the component is neither null nor empty, we use it as part of the error message; otherwise we use the value of the component's id attribute. This behavior follows the pattern established by standard JSF validators. Before we can use our custom validator in our pages, we need to declare it in the application's faces-config.xml configuration file. To do so, we need to add a <validator> element just before the closing </faces-config> element.

<validator>
  <validator-id>emailValidator</validator-id>
  <validator-class>
    com.ensode.jsf.validators.EmailValidator
  </validator-class>
</validator>

The body of the <validator-id> sub-element must contain a unique identifier for our validator. The value of the <validator-class> element must contain the fully qualified name of our validator class. Once we add our validator to the application's faces-config.xml, we are ready to use it in our pages. In our particular case, we need to modify the email field to use our custom validator.

<h:inputText id="email" label="Email Address" required="true" value="#{RegistrationBean.email}">
  <f:validator validatorId="emailValidator"/>
</h:inputText>

All we need to do is nest an <f:validator> tag inside the input field we wish to have validated using our custom validator. The value of the validatorId attribute of <f:validator> must match the value of the body of the <validator-id> element in faces-config.xml. At this point we are ready to test our custom validator. When we enter an invalid email address into the email address input field and submit the form, our custom validator logic is executed, and the String we passed as a parameter to FacesMessage in our validate() method is shown as the error text by the <h:message> tag for the field.
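Because the validator relies on matcher.matches() with the \w+@\w+\.\w+ pattern, a small standalone sketch (plain Java, outside JSF; the sample addresses are purely illustrative) shows what it accepts and, per the caveat above, which legitimate addresses it rejects:

import java.util.regex.Pattern;

public class EmailPatternDemo {

    public static void main(String[] args) {
        // Same pattern the validator compiles; matches() requires the whole string to match.
        Pattern pattern = Pattern.compile("\\w+@\\w+\\.\\w+");

        System.out.println(pattern.matcher("jdoe@example.com").matches());      // true
        System.out.println(pattern.matcher("not-an-email").matches());          // false
        // A legal address the simple pattern rejects, illustrating the article's caveat:
        System.out.println(pattern.matcher("jane.doe@example.com").matches());  // false
    }
}

Running it prints true, false, false, which is why the pattern is described as an illustration rather than a foolproof email check.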


Developing Web Applications using JavaServer Faces: Part 1

Packt
27 Oct 2009
6 min read
Although a lot of applications have been written using these APIs, most modern Java applications are written using some kind of web application framework. As of Java EE 5, the standard framework for building web applications is Java Server Faces (JSF). Introduction to JavaServer Faces Before JSF was developed, Java web applications were typically developed using non-standard web application frameworks such as Apache Struts, Tapestry, Spring Web MVC, or many others. These frameworks are built on top of the Servlet and JSP standards, and automate a lot of functionality that needs to be manually coded when using these APIs directly. Having a wide variety of web application frameworks available (at the time of writing, Wikipedia lists 35 Java web application frameworks, and this list is far from extensive!), often resulted in "analysis paralysis", that is, developers often spend an inordinate amount of time evaluating frameworks for their applications. The introduction of JSF to the Java EE 5 specification resulted in having a standard web application framework available in any Java EE 5 compliant application server. We don't mean to imply that other web application frameworks are obsolete or that they shouldn't be used at all, however, a lot of organizations consider JSF the "safe" choice since it is part of the standard and should be well supported for the foreseeable future. Additionally, NetBeans offers excellent JSF support, making JSF a very attractive choice. Strictly speaking, JSF is not a web application framework as such, but a component framework. In theory, JSF can be used to write applications that are not web-based, however, in practice JSF is almost always used for this purpose. In addition to being the standard Java EE 5 component framework, one benefit of JSF is that it was designed with graphical tools in mind, making it easy for tools and IDEs such as NetBeans to take advantage of the JSF component model with drag-and-drop support for components. NetBeans provides a Visual Web JSF Designer that allow us to visually create JSF applications. Developing Our first JSF Application From an application developer's point of view, a JSF application consists of a series of JSP pages containing custom JSF tags, one or more JSF managed beans, and a configuration file named faces-config.xml. The faces-config.xml file declares the managed beans in the application, as well as the navigation rules to follow when navigating from one JSF page to another. Creating a New JSF Project To create a new JSF project, we need to go to File | New Project, select the Java Web project category, and Web Application as the project type. After clicking Next, we need to enter a Project Name, and optionally change other information for our project, although NetBeans provides sensible defaults. On the next page in the wizard, we can select the Server, Java EE Version, and Context Path of our application. In our example, we will simply pick the default values. On the next page of the new project wizard, we can select what frameworks our web application will use. Unsurprisingly, for JSF applications we need to select the JavaServer Faces framework. The Visual Web JavaServer Faces framework allows us to quickly build web pages by dragging-and-dropping components from the NetBeans palette into our pages. Although it certainly allows us to develop applications a lot quicker than manually coding, it hides a lot of the "ins" and "outs" of JSF. 
Having a background in standard JSF development will help us understand what the NetBeans Visual Web functionality does behind the scenes. When clicking Finish, the wizard generates a skeleton JSF project for us, consisting of a single JSP file called welcomeJSF.jsp, and a few configuration files: web.xml, faces-config.xml and, if we are using the default bundled GlassFish server, the GlassFish specific sun-web.xml file is generated as well. web.xml is the standard configuration file needed for all Java web applications. faces-config.xml is a JSF-specific configuration file used to declare JSF-managed beans and navigation rules. sun-web.xml is a GlassFish-specific configuration file that allows us to override the application's default context root, add security role mappings, and perform several other configuration tasks. The generated JSP looks like this: <%@page contentType="text/html"%> <%@page pageEncoding="UTF-8"%> <%@taglib prefix="f" uri="http://java.sun.com/jsf/core"%> <%@taglib prefix="h" uri="http://java.sun.com/jsf/html"%> <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <%-- This file is an entry point for JavaServer Faces application. --%> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title>JSP Page</title> </head> <body> <f:view> <h1> <h:outputText value="JavaServer Faces"/> </h1> </f:view> </body> </html> As we can see, a JSF enabled JSP file is a standard JSP file using a couple of JSF-specific tag libraries. The first tag library, declared in our JSP by the following line: <%@taglib prefix="f" uri="http://java.sun.com/jsf/core"%> is the core JSF tag library, this library includes a number of tags that are independent of the rendering mechanism of the JSF application (recall that JSF can be used for applications other than web applications). By convention, the prefix f (for faces) is used for this tag library. The second tag library in the generated JSP, declared by the following line: <%@taglib prefix="h" uri="http://java.sun.com/jsf/html"%> is the JSF HTML tag library. This tag library includes a number of tags that are used to implement HTML specific functionality, such as creating HTML forms and input fields. By convention, the prefix h (for HTML) is used for this tag library. The first JSF tag we see in the generated JSP file is the <f:view> tag. When writing a Java web application using JSF, all JSF custom tags must be enclosed inside an <f:view> tag. In addition to JSF-specific tags, this tag can contain standard HTML tags, as well as tags from other tag libraries, such as the JSTL tags. The next JSF-specific tag we see in the above JSP is <h:outputText>. This tag simply displays the value of its value attribute in the rendered page. The application generated by the new project wizard is a simple, but complete, JSF web application. We can see it in action by right-clicking on our project in the project window and selecting Run. At this point the application server is started (if it wasn't already running), the application is deployed and the default system browser opens, displaying our application's welcome page.
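The managed beans mentioned at the start of this section are plain Java classes declared in faces-config.xml and referenced from pages with expressions such as #{RegistrationBean.email}. As a hedged sketch (the package, class, and property names are assumptions that echo the registration form used in Part 2 of this article, not code generated by the wizard), a minimal managed bean looks like this:

package com.ensode.jsf.managedbeans;   // package name is an assumption

// A minimal JSF managed bean: a POJO with properties and getters/setters.
// It would be declared in faces-config.xml with a <managed-bean> entry so
// that pages can refer to it as #{RegistrationBean.firstName} and so on.
public class RegistrationBean {

    private String firstName;
    private String email;
    private Integer age;

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }

    public Integer getAge() { return age; }
    public void setAge(Integer age) { this.age = age; }
}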
Oracle Web RowSet - Part2

Packt
27 Oct 2009
4 min read
Reading a Row

Next, we will read a row from the OracleWebRowSet object. Click on the Modify Web RowSet link in CreateRow.jsp. In the ModifyWebRowSet JSP, click on the Read Row link. The ReadRow.jsp JSP is displayed. In the ReadRow JSP, specify the Database Row to Read and click on Apply. The second row values are retrieved from the Web RowSet.

In the ReadRow JSP, the readRow() method of the WebRowSetQuery.java application is invoked. The WebRowSetQuery object is retrieved from the session object:

WebRowSetQuery query=( webrowset.WebRowSetQuery)session.getAttribute("query");

The String[] values returned by the readRow() method are added to the ReadRow JSP fields. In the readRow() method, the OracleWebRowSet object cursor is moved to the row to be read:

webRowSet.absolute(rowRead);

Retrieve the row values with the getString() method, add them to a String[], and return the String[] object:

String[] resultSet=new String[5];
resultSet[0]=webRowSet.getString(1);
resultSet[1]=webRowSet.getString(2);
resultSet[2]=webRowSet.getString(3);
resultSet[3]=webRowSet.getString(4);
resultSet[4]=webRowSet.getString(5);
return resultSet;

The ReadRow.jsp JSP is listed as follows:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<%@ page contentType="text/html;charset=windows-1252"%>
<%@ page session="true"%>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html;charset=windows-1252">
<title>Read Row with Web RowSet</title>
</head>
<body>
<form>
<h3>Read Row with Web RowSet</h3>
<table>
<tr><td><a href="ModifyWebRowSet.jsp">Modify Web RowSet Page</a></td></tr>
</table>
</form>
<%
webrowset.WebRowSetQuery query=null;
query=( webrowset.WebRowSetQuery)session.getAttribute("query");
String rowRead=request.getParameter("rowRead");
String journalUpdate=request.getParameter("journalUpdate");
String publisherUpdate=request.getParameter("publisherUpdate");
String editionUpdate=request.getParameter("editionUpdate");
String titleUpdate=request.getParameter("titleUpdate");
String authorUpdate=request.getParameter("authorUpdate");
if((rowRead!=null)){
  int row_Read=Integer.parseInt(rowRead);
  String[] resultSet=query.readRow(row_Read);
  journalUpdate=resultSet[0];
  publisherUpdate=resultSet[1];
  editionUpdate=resultSet[2];
  titleUpdate=resultSet[3];
  authorUpdate=resultSet[4];
}
%>
<form name="query" action="ReadRow.jsp" method="post">
<table>
<tr><td>Database Row to Read:</td></tr>
<tr><td><input name="rowRead" type="text" size="25" maxlength="50"/></td></tr>
<tr><td>Journal:</td></tr>
<tr><td><input name="journalUpdate" value='<%=journalUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td>Publisher:</td></tr>
<tr><td><input name="publisherUpdate" value='<%=publisherUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td>Edition:</td></tr>
<tr><td><input name="editionUpdate" value='<%=editionUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td>Title:</td></tr>
<tr><td><input name="titleUpdate" value='<%=titleUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td>Author:</td></tr>
<tr><td><input name="authorUpdate" value='<%=authorUpdate%>' type="text" size="50" maxlength="250"/></td></tr>
<tr><td><input class="Submit" type="submit" value="Apply"/></td></tr>
</table>
</form>
</body>
</html>
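For reference, the individual Java fragments shown above combine into a readRow() method along the following lines. This is only a sketch: the webRowSet field and the enclosing WebRowSetQuery class are assumed from the article, and exception handling is reduced to a throws clause.

// A consolidated sketch of WebRowSetQuery.readRow(), assembled from the
// fragments quoted above; the OracleWebRowSet field (webRowSet) is assumed
// to have been initialized elsewhere in the class.
public String[] readRow(int rowRead) throws SQLException {
    // Move the OracleWebRowSet cursor to the requested row
    webRowSet.absolute(rowRead);

    // Copy the five column values of the current row into a String array
    String[] resultSet = new String[5];
    resultSet[0] = webRowSet.getString(1);
    resultSet[1] = webRowSet.getString(2);
    resultSet[2] = webRowSet.getString(3);
    resultSet[3] = webRowSet.getString(4);
    resultSet[4] = webRowSet.getString(5);
    return resultSet;
}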


Python Data Persistence using MySQL Part II: Moving Data Processing to the Data

Packt
27 Oct 2009
8 min read
To move data processing to the data, you can use stored procedures, stored functions, and triggers. All these components are implemented inside the underlying database, and can significantly improve performance of your application due to reducing network overhead associated with multiple calls to the database. It is important to realize, though, the decision to move any piece of processing logic into the database should be taken with care. In some situations, this may be simply inefficient. For example, if you decide to move some logic dealing with the data stored in a custom Python list into the database, while still keeping that list implemented in your Python code, this can be inefficient in such a case, since it only increases the number of calls to the underlying database, thus causing significant network overhead. To fix this situation, you could move the list from Python into the database as well, implementing it as a table. Starting with version 5.0, MySQL supports stored procedures, stored functions, and triggers, making it possible for you to enjoy programming on the underlying database side. In this article, you will look at triggers in action. Stored procedures and functions can be used similarly. Planning Changes for the Sample Application Assuming you have followed the instructions in Python Data Persistence using MySQL, you should already have the application structure to be reorganized here. To recap, what you should already have is: tags nested list of tags used to describe the posts obtained from the Packt Book Feed page. obtainPost function obtains the information about the most recent post on the Packt Book Feed page. determineTags function determines tags appropriate to the latest post obtained from the Packt Book Feed page. insertPost function inserts the information about the obtained post into the underlying database tables: posts and posttags. execPr function brings together the functionality of the described above functions. That’s what you should already have on the Python side. And on the database side, you should have the following components: posts table contains records representing posts obtained from the Packt Book Feed page. posttags table contains records each of which represents a tag associated with a certain post stored in the posts table. Let’s figure out how we can refactor the above structure, moving some data processing inside the database. The first thing you might want to do is to move the tags list from Python into the database, creating a new table tags for that. Then, you can move the logic implemented with the determineTags function inside the database, defining the AFTER INSERT trigger on the posts table. From within this trigger, you will also insert rows into the posttags table, thus eliminating the need to do it from within the insertPost function. Once you’ve done all that, you can refactor the Python code implemented in the appsample module. To summarize, here are the steps you need to perform in order to refactor the sample application discussed in the earlier article: Create tags table and populate it with the data currently stored in the  tags list implemented in Python. Define the AFTER INSERT trigger on the posts table. Refactor the insertPost function in the appsample.py module. Remove the tags list from the appsample.py module. Remove the determineTags function from the appsample.py module. Refactor the execPr function in the appsample.py module. 
Refactoring the Underlying Database

To keep things simple, the tags table might contain a single column tag with a primary key constraint defined on it. So, you can create the tags table as follows:

CREATE TABLE tags (
  tag VARCHAR(20) PRIMARY KEY
) ENGINE = InnoDB;

Then, you might want to modify the posttags table, adding a foreign key constraint to its tag column. Before you can do that, though, you will need to delete all the rows from this table. This can be done with the following query:

DELETE FROM posttags;

Now you can move on and alter posttags as follows:

ALTER TABLE posttags ADD FOREIGN KEY (tag) REFERENCES tags(tag);

The next step is to populate the tags table. You can automate this process with the help of the following Python script:

>>> import MySQLdb
>>> import appsample
>>> db=MySQLdb.connect(host="localhost",user="usrsample",passwd="pswd",db="dbsample")
>>> c=db.cursor()
>>> c.executemany("""INSERT INTO tags VALUES(%s)""", appsample.tags)
>>> db.commit()
>>> db.close()

As a result, you should have the tags table populated with the data taken from the tags list discussed in Python Data Persistence using MySQL. To make sure it has done so, you can turn back to the mysql prompt and issue the following query against the tags table:

SELECT * FROM tags;

The above should output the list of tags you have in the tags list. Of course, you can always extend this list, adding new tags with the INSERT statement. For example, you could issue the following statement to add the Visual Studio tag:

INSERT INTO tags VALUES('Visual Studio');

Now you can move on and define the AFTER INSERT trigger on the posts table:

delimiter //
CREATE TRIGGER insertPost AFTER INSERT ON posts
FOR EACH ROW
BEGIN
  INSERT INTO posttags(title, tag)
    SELECT NEW.title as title, tag FROM tags
    WHERE LOCATE(tag, NEW.title)>0;
END
//
delimiter ;

As you can see, the posttags table will be automatically populated with appropriate tags just after a new row is inserted into the posts table. Notice the use of the INSERT … SELECT statement in the body of the trigger. Using this syntax lets you insert several rows into the posttags table at once, without having to use an explicit loop. In the WHERE clause of the SELECT, you use the standard MySQL string function LOCATE, which returns the position of the first occurrence of the substring, passed in as the first argument, within the string, passed in as the second argument. In this particular example, though, you are not really interested in obtaining the position of an occurrence of the substring in the string. All you need to find out here is whether the substring appears in the string or not. If it does, it should appear in the posttags table as a separate row associated with the row just inserted into the posts table.

Refactoring the Sample's Python Code

Now that you have moved some data and data processing from Python into the underlying database, it's time to reorganize the appsample custom Python module created as discussed in Python Data Persistence using MySQL. As mentioned earlier, you need to rewrite the insertPost and execPr functions and remove the determineTags function and the tags list.
This is what the appsample module should look like after the revision:

import MySQLdb
import urllib2
import xml.dom.minidom

def obtainPost():
    addr = "http://feeds.feedburner.com/packtpub/sDsa?format=xml"
    xmldoc = xml.dom.minidom.parseString(urllib2.urlopen(addr).read())
    item = xmldoc.getElementsByTagName("item")[0]
    title = item.getElementsByTagName("title")[0].firstChild.data
    guid = item.getElementsByTagName("guid")[0].firstChild.data
    pubDate = item.getElementsByTagName("pubDate")[0].firstChild.data
    post = {"title": title, "guid": guid, "pubDate": pubDate}
    return post

def insertPost(title, guid, pubDate):
    db = MySQLdb.connect(host="localhost", user="usrsample",
                         passwd="pswd", db="dbsample")
    c = db.cursor()
    c.execute("""INSERT INTO posts (title, guid, pubDate)
                 VALUES(%s,%s,%s)""", (title, guid, pubDate))
    db.commit()
    db.close()

def execPr():
    p = obtainPost()
    insertPost(p["title"], p["guid"], p["pubDate"])

If you compare it with the appsample module discussed in Part 1, you should notice that the revision is much shorter. It's important to note, however, that nothing has changed from the user's standpoint. So, if you now start the execPr function in your Python session:

>>> import appsample
>>> appsample.execPr()

This should insert a new record into the posts table, automatically inserting the corresponding tags records into the posttags table, if any. The difference lies in what is going on behind the scenes. Now the Python code is responsible only for obtaining the latest post from the Packt Book Feed page and then inserting a record into the posts table. Dealing with tags is now the responsibility of the logic implemented inside the database. In particular, the AFTER INSERT trigger defined on the posts table takes care of inserting the rows into the posttags table.

To make sure that everything has worked smoothly, you can now check out the content of the posts and posttags tables. To look at the latest post stored in the posts table, you could issue the following query:

SELECT title, str_to_date(pubDate,'%a, %e %b %Y') lastdate
FROM posts
ORDER BY lastdate DESC LIMIT 1;

Then, you might want to look at the related tags stored in the posttags table, by issuing the following query:

SELECT p.title, t.tag, str_to_date(p.pubDate,'%a, %e %b %Y') lastdate
FROM posts p, posttags t
WHERE p.title=t.title
ORDER BY lastdate DESC LIMIT 1;

Conclusion

In this article, you looked at how some business logic of a Python/MySQL application can be moved from Python into MySQL. For that, you continued with the sample application originally discussed in Python Data Persistence using MySQL.


Implementing a Basic HelloWorld WCF (Windows Communication Foundation) Service

Packt
27 Oct 2009
7 min read
We will build a HelloWorld WCF service by carrying out the following steps: Create the solution and project Create the WCF service contract interface Implement the WCF service Host the WCF service in the ASP.NET Development Server Create a client application to consume this WCF service Creating the HelloWorld solution and project Before we can build the WCF service, we need to create a solution for our service projects. We also need a directory in which to save all the files. Throughout this article, we will save our project source codes in the D:SOAwithWCFandLINQProjects directory. We will have a subfolder for each solution we create, and under this solution folder, we will have one subfolder for each project. For this HelloWorld solution, the final directory structure is shown in the following image: You don't need to manually create these directories via Windows Explorer; Visual Studio will create them automatically when you create the solutions and projects. Now, follow these steps to create our first solution and the HelloWorld project: Start Visual Studio 2008. If the Open Project dialog box pops up, click Cancel to close it. Go to menu File | New | Project. The New Project dialog window will appear. From the left-hand side of the window (Project types), expand Other Project Types and then select Visual Studio Solutions as the project type. From the right-hand side of the window (Templates), select Blank Solution as the template. At the bottom of the window, type HelloWorld as the Name, and D:SOAwithWCFandLINQProjects as the Location. Note that you should not enter HelloWorld within the location, because Visual Studio will automatically create a folder for a new solution. Click the OK button to close this window and your screen should look like the following image, with an empty solution. Depending on your settings, the layout may be different. But you should still have an empty solution in your Solution Explorer. If you don't see Solution Explorer, go to menu View | Solution Explorer, or press Ctrl+Alt+L to bring it up. In the Solution Explorer, right-click on the solution, and select Add | New Project… from the context menu. You can also go to menu File | Add | New Project… to get the same result. The following image shows the context menu for adding a new project. The Add New Project window should now appear on your screen. In the left-hand side of this window (Project types), select Visual C# as the project type, and on the right-hand side of the window (Templates), select Class Library as the template. At the bottom of the window, type HelloWorldService as the Name. Leave D:SOAwithWCFandLINQProjectsHelloWorld as the Location. Again, don't add HelloWorldService to the location, as Visual Studio will create a subfolder for this new project (Visual Studio will use the solution folder as the default base folder for all the new projects added to the solution). You may have noticed that there is already a template for WCF Service Application in Visual Studio 2008. For the very first example, we will not use this template. Instead, we will create everything by ourselves so you know what the purpose of each template is. This is an excellent way for you to understand and master this new technology. Now, you can click the OK button to close this window. Once you click the OK button, Visual Studio will create several files for you. The first file is the project file. This is an XML file under the project directory, and it is called HelloWorldService.csproj. 
Visual Studio also creates an empty class file, called Class1.cs. Later, we will change this default name to a more meaningful one, and change its namespace to our own one. Three directories are created automatically under the project folder—one to hold the binary files, another to hold the object files, and a third one for the properties files of the project. The window on your screen should now look like the following image: We now have a new solution and project created. Next, we will develop and build this service. But before we go any further, we need to do two things to this project: Click the Show All Files button on the Solution Explorer toolbar. It is the second button from the left, just above the word Solution inside the Solution Explorer. If you allow your mouse to hover above this button, you will see the hint Show All Files, as shown in above diagram. Clicking this button will show all files and directories in your hard disk under the project folder-rven those items that are not included in the project. Make sure that you don't have the solution item selected. Otherwise, you can't see the Show All Files button. Change the default namespace of the project. From the Solution Explorer, right-click on the HelloWorldService project, select Properties from the context menu, or go to menu item Project | HelloWorldService Properties…. You will see the project properties dialog window. On the Application tab, change the Default namespace to MyWCFServices. Lastly, in order to develop a WCF service, we need to add a reference to the ServiceModel namespace. On the Solution Explorer window, right-click on the HelloWorldService project, and select Add Reference… from the context menu. You can also go to the menu item Project | Add Reference… to do this. The Add Reference dialog window should appear on your screen. Select System.ServiceModel from the .NET tab, and click OK. Now, on the Solution Explorer, if you expand the references of the HelloWorldService project, you will see that System.ServiceModel has been added. Also note that System.Xml.Linq is added by default. We will use this later when we query a database. Creating the HelloWorldService service contract interface In the previous section, we created the solution and the project for the HelloWorld WCF Service. From this section on, we will start building the HelloWorld WCF service. First, we need to create the service contract interface. In the Solution Explorer, right-click on the HelloWorldService project, and select Add | New Item…. from the context menu. The following Add New Item - HelloWorldService dialog window should appear on your screen. On the left-hand side of the window (Categories), select Visual C# Items as the category, and on the right-hand side of the window (Templates), select Interface as the template. At the bottom of the window, change the Name from Interface1.cs to IHelloWorldService.cs. Click the Add button. Now, an empty service interface file has been added to the project. Follow the steps below to customize it. Add a using statement: using System.ServiceModel; Add a ServiceContract attribute to the interface. This will designate the interface as a WCF service contract interface. [ServiceContract] Add a GetMessage method to the interface. This method will take a string as the input, and return another string as the result. It also has an attribute, OperationContract. [OperationContract] String GetMessage(String name); Change the interface to public. 
The final content of the file IHelloWorldService.cs should look like the following:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.ServiceModel;

namespace MyWCFServices
{
    [ServiceContract]
    public interface IHelloWorldService
    {
        [OperationContract]
        String GetMessage(String name);
    }
}
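The next step listed at the start of this article is to implement the service. A minimal sketch of what that class might look like is shown below; the greeting text is a placeholder, not the article's exact code, but it illustrates how the contract above is fulfilled.

using System;

namespace MyWCFServices
{
    // A possible implementation of the IHelloWorldService contract defined
    // above; the article builds its own version in a later step.
    public class HelloWorldService : IHelloWorldService
    {
        public String GetMessage(String name)
        {
            return "Hello world from " + name + "!";
        }
    }
}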


New SOA Capabilities in BizTalk Server 2009: UDDI Services

Packt
27 Oct 2009
6 min read
All truths are easy to understand once they are discovered; the point is to discover them.-Galileo Galilei What is UDDI? Universal Description and Discovery Information (UDDI) is a type of registry whose primary purpose is to represent information about web services. It describes the service providers, the services that provider offers, and in some cases, the specific technical specifications for interacting with those services. While UDDI was originally envisioned as a public, platform independent registry that companies could exploit for listing and consuming services, it seems that many have chosen instead to use UDDI as an internal resource for categorizing and describing their available enterprise services. Besides simply listing available services for others to search and peruse, UDDI is arguably most beneficial for those who wish to perform runtime binding to service endpoints. Instead of hard-coding a service path in a client application, one may query UDDI for a particular service's endpoint and apply it to their active service call. While UDDI is typically used for web services, nothing prevents someone from storing information about any particular transport and allowing service consumers to discover and do runtime resolution to these endpoints. As an example, this is useful if you have an environment with primary, backup, and disaster access points and want your application be able to gracefully look up and failover to the next available service environment. In addition, UDDI can be of assistance if an application is deployed globally but you wish for regional consumers to look up and resolve against the closest geographical endpoint. UDDI has a few core hierarchy concepts that you must grasp to fully comprehend how the registry is organized. The most important ones are included here. Name Purpose Name in Microsoft UDDI services BusinessEntity These are the service providers. May be an organization, business unit or functional area. Provider BusinessService General reference to a business service offered by a provider. May be a logical grouping of actual services. Service BindingTemplate Technical details of an individual service including endpoint Binding tModel (Technical Model) Represents metadata for categorization or description such as transport or protocol tModel As far as relationships between these entities go, a Business Entity may contain many Business Services, which in turn can have multiple Binding Templates. A binding may reference multiple tModels and tModels may be reused across many Binding Templates. What's new in UDDI version three? The latest UDDI specification calls out multiple-registry environments, support for digital signatures applied to UDDI entries, more complex categorization, wildcard searching, and a subscription API. We'll spend a bit of time on that last one in a few moments. Let's take a brief lap around at the Microsoft UDDI Services offering. For practical purposes, consider the UDDI Services to be made up of two parts: an Administration Console and a web site. The website is actually broken up into both a public facing and administrative interface, but we'll talk about them as one unit. The UDDI Configuration Console is the place to set service-wide settings ranging from the extent of logging to permissions and site security. The site node (named UDDI) has settings for permission account groups, security settings (see below), and subscription notification thresholds among others. 
The web node, which resides immediately beneath the parent, controls web site setting such as logging level and target database. Finally, the notification node manages settings related to the new subscription notification feature and identically matches the categories of the web node. The UDDI Services web site, found at http://localhost/uddi/, is the destination or physically listing, managing, and configuring services. The Search page enables querying by a wide variety of criteria including category, services, service providers, bindings, and tModels. The Publish page is where you go to add new services to the registry or edit the settings of existing ones. Finally, the Subscription page is where the new UDDI version three capability of registry notification is configured. We will demonstrate this feature later in this article. How to add services to the UDDI registry? Now we're ready to add new services to our UDDI registry. First, let's go to the Publish page and define our Service Provider and a pair of categorical tModels. To add a new Provider, we right-click the Provider node in the tree and choose Add Provider. Once a provider is created and named, we have the choice of adding all types of context characteristics such as a contact name(s), categories, relationships, and more. I'd like to add two tModel categories to my environment : one to identify which type of environment the service references (development, test, staging, production) and another to flag which type of transport it uses (Basic HTTP, WS HTTP, and so on). To add atModel, simply right-click the tModels node and choose Add tModel. This first one is named biztalksoa:runtimeresolution:environment. After adding one more tModel for biztalksoa:runtimeresolution:transporttype, we're ready to add a service to the registry. Right-click the BizTalkSOA provider and choose Add Service. Set the name of this service toBatchMasterService. Next, we want to add a binding (or access point) for this service, which describes where the service endpoint is physically located. Switch to the Bindings tab of the service definition and choose New Binding. We need a new access point, so I pointed to our proxy service created earlier and identified it as an endPoint. Finally, let's associate the two new tModel categories with our service. Switch to the Categories tab, and choose to Add Custom Category. We're asked to search for atModel, which represents our category, so a wildcard entry such as %biztalksoa%  is a valid search criterion. After selecting the environment category, we're asked for the key name and value. The key "name" is purely a human-friendly representation of the data whereas the tModel identifier and the key value comprise the actual name-value pair. I've entered production as the value on the environment category, and WS-Http as the key value on thetransporttype category. At this point, we have a service sufficiently configured in the UDDI directory so that others can discover and dynamically resolve against it.

New SOA Capabilities in BizTalk Server 2009: WCF SQL Server Adapter

Packt
26 Oct 2009
3 min read
Do not go where the path may lead; go instead where there is no path and leave a trail.
-Ralph Waldo Emerson

Many of the patterns and capabilities shown in this article are compatible with the last few versions of the BizTalk Server product. So what's new in BizTalk Server 2009? BizTalk Server 2009 is the sixth formal release of the BizTalk Server product. This upcoming release has a heavy focus on platform modernization through new support for Windows Server 2008, Visual Studio .NET 2008, SQL Server 2008, and the .NET Framework 3.5. This will surely help developers who have already moved to these platforms in their day-to-day activities but have been forced to maintain separate environments solely for BizTalk development efforts. Let's get started.

What is the WCF SQL Adapter?

The BizTalk Adapter Pack 2.0 now contains five system and data adapters including SAP, Siebel, Oracle databases, Oracle applications, and SQL Server. What are these adapters and how are they different than the adapters available for previous versions of BizTalk? Up until recently, BizTalk adapters were built using a commonly defined BizTalk Adapter Framework. This framework prescribed interfaces and APIs for adapter developers in order to elicit a common look and feel for the users of the adapters. Moving forward, adapter developers are encouraged by Microsoft to use the new WCF LOB Adapter SDK. As you can guess from the name, this new adapter framework, which can be considered an evolution of the BizTalk Adapter Framework, is based on WCF technologies. All of the adapters in the BizTalk Adapter Pack 2.0 are built upon the WCF LOB Adapter SDK. What this means is that all of the adapters are built as reusable, metadata-rich components that are surfaced to users as WCF bindings. So much like you have a wsHttp or netTcp binding, now you have a sqlBinding or sapBinding. As you would expect from a WCF binding, there is a rich set of configuration attributes for these adapters and they are no longer tightly coupled to BizTalk itself. Microsoft has made connection a commodity, and no longer do organizations have to spend tens of thousands of dollars to connect to line of business systems like SAP through expensive, BizTalk-only adapters. This latest version of the BizTalk Adapter Pack now includes a SQL Server adapter, which replaces the legacy BizTalk-only SQL Server adapter. What do we get from this SQL Server adapter that makes it so much better than the old one? The following comparison (Classic SQL Adapter versus WCF SQL Adapter) sums up the differences, feature by feature:

- Execute create-read-update-delete statements on tables and views; execute stored procedures and generic T-SQL statements: partial in the classic adapter (send operations only support stored procedures and updategrams); fully supported by the WCF adapter
- Database polling via FOR XML: supported by both adapters
- Database polling via traditional tabular results: classic adapter No; WCF adapter Yes
- Proactive database push via SQL Query Notification: classic adapter No; WCF adapter Yes
- Expansive adapter configuration which impacts connection management and transaction behavior: classic adapter No; WCF adapter Yes
- Support for composite transactions which allow aggregation of operations across tables or procedures into a single atomic transaction: classic adapter No; WCF adapter Yes
- Rich metadata browsing and retrieval for finding and selecting database operations: classic adapter No; WCF adapter Yes
- Support for the latest data types (e.g. XML) and the SQL Server 2008 platform: classic adapter No; WCF adapter Yes
- Reusable outside of BizTalk applications by WCF or basic HTTP clients: classic adapter No; WCF adapter Yes
- Adapter extension and configuration through out-of-the-box WCF components or custom WCF behaviors: classic adapter No; WCF adapter Yes
- Dynamic WSDL generation which always reflects the current state of the system, instead of a fixed contract which always requires explicit updates: classic adapter No; WCF adapter Yes
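To give a concrete feel for what "surfaced as a WCF binding" means, here is a rough sketch of the client-side configuration a plain WCF consumer might end up with when using the SQL binding. Treat the address, contract name, and binding properties as placeholders; the real values are produced by the adapter's metadata wizard rather than written by hand.

<system.serviceModel>
  <bindings>
    <sqlBinding>
      <!-- Binding properties (timeouts, transaction behavior, polling
           statements, and so on) would be configured on this element -->
      <binding name="SqlAdapterBinding" />
    </sqlBinding>
  </bindings>
  <client>
    <!-- The mssql:// address identifies the server, instance, and database;
         the contract is generated from the adapter's metadata -->
    <endpoint address="mssql://dbserver//AdventureWorks"
              binding="sqlBinding"
              bindingConfiguration="SqlAdapterBinding"
              contract="TableOp_dbo_Customer"
              name="SqlAdapterEndpoint" />
  </client>
</system.serviceModel>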


JBoss Tools Palette

Packt
26 Oct 2009
4 min read
By default, JBoss Tools Palette is available in the Web Development perspective that can be displayed from the Window menu by selecting the Open Perspective | Other option. In the following screenshot, you can see the default look of this palette: Let's dissect this palette to see how it makes our life easier! JBoss Tools Palette Toolbar Note that on the top right corner of the palette, we have a toolbar made of three buttons (as shown in the following screenshot). They are (from left to right): Palette Editor Show/Hide Import Each of these buttons accomplishes different tasks for offering a high level of flexibility and customizability. Next, we will focus our attention on each one of these buttons. Palette Editor Clicking on the Palette Editor icon will display the Palette Editor window (as shown in the following screenshot), which contains groups and subgroups of tags that are currently supported. Also, from this window you can create new groups, subgroups, icons, and of course, tags—as you will see in a few moments. As you can see, this window contains two panels: one for listing groups of tag libraries (left side) and another that displays details about the selected tag and allows us to modify the default values (extreme right). Modifying a tag is a very simple operation that can be done like this: Select from the left panel the tag that you want to modify (for example, the <div> tag from the HTML | Block subgroup, as shown in the previous screenshot). In the right panel, click on the row from the value column that corresponds to the property that you want to modify (the name column). Make the desirable modification(s) and click the OK button for confirming it (them). Creating a set of icons The Icons node from the left panel allows you to create sets of icons and import new icons for your tags. To start, you have to right-click on this node and select the Create | Create Set option from the contextual menu (as shown in the following screenshot). This action will open the Add Icon Set window where you have to specify a name for this new set. Once you're done with the naming, click on the Finish button (as shown in the following screenshot). For example, we have created a set named eHTMLi: Importing an icon You can import a new icon in any set of icons by right-clicking on the corresponding set and selecting the Create | Import Icon option from the contextual menu (as shown in the following screenshot): This action will open the Add Icon window, where you have to specify a name and a path for your icon, and then click on the Finish button (as shown in the following screenshot). Note that the image of the icon should be in GIF format. Creating a group of tag libraries As you can see, the JBoss Tools Palette has a consistent default set of groups of tag libraries, like HTML, JSF, JSTL, Struts, XHTML, etc. If these groups are insufficient, then you can create new ones by right-clicking on the Palette node and selecting the Create | Create Group option from the contextual menu (as shown in the following screenshot). This action will open the Create Group window, where you have to specify a name for the new group, and then click on Finish. For example, we have created a group named mygroup: Note that you can delete (only groups created by the user) or edit groups (any group) by selecting the Delete or Edit options from the contextual menu that appears when you right-click on the chosen group. 
Creating a tag library Now that we have created a group, it's time to create a library (or a subgroup). To do this, you have to right-click on the new group and select the Create Group option from the contextual menu (as shown in the following screenshot). This action will open the Add Palette Group window, where you have to specify a name and an icon for this library, and then click on the Finish button (as shown in the following screenshot). As an example, we have created a library named eHTML with an icon that we had imported in the Importing an icon section discussed earlier in this article: Note that you can delete a tag library (only tag libraries created by the user) by selecting the Delete option from the contextual menu that appears when you right-click on the chosen library.


RSS Web Widget

Packt
24 Oct 2009
8 min read
What is an RSS Feed?

First of all, let us understand what a web feed is. Basically, it is a data format that provides frequently updated content to users. Content distributors syndicate the web feed, allowing users to subscribe to it using a feed aggregator. RSS feeds contain data in an XML format. RSS is the term used for describing Really Simple Syndication, RDF Site Summary, or Rich Site Summary, depending upon the version. RDF (Resource Description Framework), a family of W3C specifications, is a data model format for modelling information such as title, author, modified date, and content through a variety of syntax formats. RDF is basically designed to be read by computers for exchanging information. Since RSS is an XML format for data representation, different authorities defined different formats of RSS across different versions like 0.90, 0.91, 0.92, 0.93, 0.94, 1.0 and 2.0. The following list shows when, and by whom, the different RSS versions were proposed:

- RSS 0.90 (1999): Netscape introduced RSS 0.90.
- RSS 0.91 (1999): Netscape proposed the simpler format of RSS 0.91. In the same year, UserLand Software proposed its RSS specification.
- RSS 1.0 (2000): O'Reilly released RSS 1.0.
- RSS 2.0 (2000): UserLand Software proposed the further RSS specification in this version, and it is the most popular RSS format in use these days.

Meanwhile, Harvard Law School is responsible for the further development of the RSS specification. There had been a competition-like scenario for developing the different versions of RSS between UserLand, Netscape, and O'Reilly before the official RSS 2.0 specification was released. For a detailed history of these different versions of RSS you can check http://www.rss-specifications.com/history-rss.htm

The current version of RSS is 2.0 and it is the common format for publishing RSS feeds these days. Like RSS, there is another format that uses the XML language for publishing web feeds. It is known as the ATOM feed, and is most commonly used in wiki and blogging software. Please refer to http://en.wikipedia.org/wiki/ATOM for detail.

The following is the RSS icon that denotes links with RSS feeds. If you're using Mozilla's Firefox web browser then you're likely to see the above image in the address bar of the browser for subscribing to an RSS feed link available in any given page. Web browsers like Firefox and Safari discover available RSS feeds in web pages by looking at the Internet media type application/rss+xml. The following tag specifies that this web page is linked with the RSS feed URL http://www.example.com/rss.xml:

<link href="http://www.example.com/rss.xml" rel="alternate" type="application/rss+xml" title="Sitewide RSS Feed" />

Example of RSS 2.0 format

First of all, let's look at a simple example of the RSS format.

<?xml version="1.0" encoding="UTF-8" ?>
<rss version="2.0">
<channel>
  <title>Title of the feed</title>
  <link>http://www.examples.com</link>
  <description>Description of feed</description>
  <item>
    <title>News1 heading</title>
    <link>http://www.example.com/news-1</link>
    <description>detail of news1 </description>
  </item>
  <item>
    <title>News2 heading</title>
    <link>http://www.example.com/news-2</link>
    <description>detail of news2 </description>
  </item>
</channel>
</rss>

The first line is the XML declaration, which indicates that the XML version is 1.0 and the character encoding is UTF-8. UTF-8 supports many European and Asian characters, so it is widely used as the character encoding on the web.
The next line is the rss declaration, which declares that this is an RSS document of version 2.0.

The next line contains the <channel> element, which is used for describing the details of the RSS feed. The <channel> element must have three required elements: <title>, <link> and <description>. The title tag contains the title of that particular feed. Similarly, the link element contains the hyperlink of the channel, and the description tag describes or carries the main information of the channel. This tag usually contains the information in detail. Furthermore, each <channel> element may have one or more <item> elements which contain the stories of the feed. Each <item> element must have the three elements <title>, <link> and <description>, whose use is similar to those of the channel elements, but they describe the details of each individual item. Finally, the last two lines are the closing tags for the <channel> and <rss> elements.

Creating RSS Web Widget

The RSS widget we're going to build is a simple one which displays the headlines from the RSS feed, along with the title of the RSS feed. This is another widget which uses some JavaScript, PHP, CSS and HTML. The content of the widget is displayed within an Iframe, so when you set up the widget, you have to adjust the height and width. To parse the RSS feed in XML format, I've used the popular PHP RSS parser – Magpie RSS. The homepage of Magpie RSS is located at http://magpierss.sourceforge.net/.

Introduction to Magpie RSS

Before writing the code, let's understand what the benefits of using the Magpie framework are, and how it works.

- It is easy to use.
- While other RSS parsers are normally limited to parsing certain RSS versions, this parser parses most RSS formats, i.e. RSS 0.90 to 2.0, as well as ATOM feeds.
- Magpie RSS supports an integrated object cache, which means that a second request to parse the same RSS feed is fast: it will be fetched from the cache.

Now, let's quickly understand how Magpie RSS is used to parse an RSS feed. I'm going to pick the example from their homepage for demonstration.

require_once 'rss_fetch.inc';
$url = 'http://www.getacoder.com/rss.xml';
$rss = fetch_rss($url);
echo "Site: ", $rss->channel['title'], "<br>";
foreach ($rss->items as $item ) {
    $title = $item[title];
    $url = $item[link];
    echo "<a href=$url>$title</a></li><br>";
}

If you're more interested in trying other PHP RSS parsers then you might like to check out the SimplePie RSS parser (http://simplepie.org/) and LastRSS (http://lastrss.oslab.net/).

You can see in the first line how the rss_fetch.inc file is included in the working file. After that, the URL of the RSS feed from getacoder.com is assigned to the $url variable. The fetch_rss() function of Magpie is used for fetching data and converting this data into RSS objects. In the next line, the title of the RSS feed is displayed using the code $rss->channel['title']. The other lines are used for displaying each of the RSS feed's items. Each feed item is stored within the $rss->items array, and the foreach() loop is used to loop through each element of the array.

Writing Code for our RSS Widget

As I've already discussed, this widget is going to use an Iframe for displaying the content of the widget, so let's look at the JavaScript code for embedding the Iframe within the HTML code.
var widget_string = '<iframe src="http://www.yourserver.com/rsswidget/rss_parse_handler.php?rss_url=';
widget_string += encodeURIComponent(rss_widget_url);
widget_string += '&maxlinks='+rss_widget_max_links;
widget_string += '" height="'+rss_widget_height+'" width="'+rss_widget_width+'"';
widget_string += ' style="border:1px solid #FF0000;"';
widget_string += ' scrolling="no" frameborder="0"></iframe>';
document.write(widget_string);

In the above code, the widget_string variable contains the string for displaying the widget. The source of the Iframe is assigned to rss_parse_handler.php. The URL of the RSS feed, and the number of headlines from the feed, are passed to rss_parse_handler.php via the GET method, using the rss_url and maxlinks parameters respectively. The values of these parameters are assigned from the JavaScript variables rss_widget_url and rss_widget_max_links. The width and height of the Iframe are also assigned from JavaScript variables, namely rss_widget_width and rss_widget_height.

The red border on the widget is displayed by assigning 1px solid #FF0000 to the border attribute using inline CSS styling. Since inline CSS is used, the frameborder property is set to 0 (i.e. the border of the frame is zero). Displaying borders from CSS has some benefits over employing the frameborder property. With CSS code, 1px dashed #FF0000 (border-width border-style border-color) means you can display a dashed border (you can't using frameborder), and you can use the border-right, border-left, border-top, border-bottom attributes of CSS to display borders at specified positions of the object. The scrolling property is set to no here, which means that the scroll bar will not be displayed in the widget if the widget content overflows. If you want to show a scroll bar, then you can set this property to yes.

The values of JavaScript variables like rss_widget_url, rss_widget_max_links etc. come from the page where we'll be using this widget. You'll see how the values of these variables are assigned in the section at the end, where we'll look at how to use this RSS widget.
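The article does not show rss_parse_handler.php at this point, but based on the parameters above and the Magpie usage shown earlier, a minimal version of it might look like the following sketch; the exact HTML it emits is illustrative.

<?php
// A possible sketch of rss_parse_handler.php: it reads the feed URL and the
// maximum number of headlines from the query string, parses the feed with
// Magpie RSS, and prints the feed title followed by the headline links.
require_once 'rss_fetch.inc';

$rss_url  = $_GET['rss_url'];
$maxlinks = (int) $_GET['maxlinks'];

$rss = fetch_rss($rss_url);
if ($rss) {
    // Feed title at the top of the widget
    echo '<strong>' . htmlspecialchars($rss->channel['title']) . '</strong><br />';

    // Only show up to $maxlinks headlines
    $items = array_slice($rss->items, 0, $maxlinks);
    foreach ($items as $item) {
        echo '<a href="' . $item['link'] . '" target="_blank">'
             . htmlspecialchars($item['title']) . '</a><br />';
    }
}
?>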

Working with Simple Associations using CakePHP

Packt
24 Oct 2009
5 min read
Database relationship is hard to maintain even for a mid-sized PHP/MySQL application, particularly, when multiple levels of relationships are involved because complicated SQL queries are needed. CakePHP offers a simple yet powerful feature called 'object relational mapping' or ORM to handle database relationships with ease.In CakePHP, relations between the database tables are defined through association—a way to represent the database table relationship inside CakePHP. Once the associations are defined in models according to the table relationships, we are ready to use its wonderful functionalities. Using CakePHP's ORM, we can save, retrieve, and delete related data into and from different database tables with simplicity, in a better way—no need to write complex SQL queries with multiple JOINs anymore! In this article by Ahsanul Bari and Anupom Syam, we will have a deep look at various types of associations and their uses. In particular, the purpose of this article is to learn: How to figure out association types from database table relations How to define different types of associations in CakePHP models How to utilize the association for fetching related model data How to relate associated data while saving There are basically 3 types of relationship that can take place between database tables: one-to-one one-to-many many-to-many The first two of them are simple as they don't require any additional table to relate the tables in relationship. In this article, we will first see how to define associations in models for one-to-one and one-to-many relations. Then we will look at how to retrieve and delete related data from, and save data into, database tables using model associations for these simple associations. Defining One-To-Many Relationship in Models To see how to define a one-to-many relationship in models, we will think of a situation where we need to store information about some authors and their books and the relation between authors and books is one-to-many. This means an author can have multiple books but a book belongs to only one author (which is rather absurd, as in real life scenario a book can also have multiple authors). We are now going to define associations in models for this one-to-many relation, so that our models recognize their relations and can deal with them accordingly. Time for Action: Defining One-To-Many Relation Create a new database and put a fresh copy of CakePHP inside the web root. Name the database whatever you like but rename the cake folder to relationship. Configure the database in the new Cake installation. 
Execute the following SQL statements in the database to create a table named authors:

CREATE TABLE `authors` (
  `id` int( 11 ) NOT NULL AUTO_INCREMENT PRIMARY KEY ,
  `name` varchar( 127 ) NOT NULL ,
  `email` varchar( 127 ) NOT NULL ,
  `website` varchar( 127 ) NOT NULL
);

Create a books table in our database by executing the following SQL commands:

CREATE TABLE `books` (
  `id` int( 11 ) NOT NULL AUTO_INCREMENT PRIMARY KEY ,
  `isbn` varchar( 13 ) NOT NULL ,
  `title` varchar( 64 ) NOT NULL ,
  `description` text NOT NULL ,
  `author_id` int( 11 ) NOT NULL
);

Create the Author model using the following code (/app/models/authors.php):

<?php
class Author extends AppModel{
    var $name = 'Author';
    var $hasMany = 'Book';
}
?>

Use the following code to create the Book model (/app/models/books.php):

<?php
class Book extends AppModel{
    var $name = 'Book';
    var $belongsTo = 'Author';
}
?>

Create a controller for the Author model with the following code (/app/controllers/authors_controller.php):

<?php
class AuthorsController extends AppController {
    var $name = 'Authors';
    var $scaffold;
}
?>

Use the following code to create a controller for the Book model (/app/controllers/books_controller.php):

<?php
class BooksController extends AppController {
    var $name = 'Books';
    var $scaffold;
}
?>

Now, go to the following URLs and add some test data: http://localhost/relationship/authors/ and http://localhost/relationship/books/

What Just Happened?

We have created two tables, authors and books, for storing author and book information. A foreign key named author_id is added to the books table to establish the one-to-many relation between authors and books. Through this foreign key, an author is related to multiple books, while a book is related to one single author. By Cake convention, the name of a foreign key should be the underscored, singular name of the target model, suffixed with _id.

Once the database tables are created and relations are established between them, we can define associations in models. In both of the model classes, Author and Book, we defined associations to represent the one-to-many relationship between the corresponding two tables. CakePHP provides two types of association, hasMany and belongsTo, to define one-to-many relations in models. These associations are very appropriately named:

- As an author 'has many' books, the Author model should have a hasMany association to represent its relation with the Book model.
- As a book 'belongs to' one author, the Book model should have a belongsTo association to denote its relation with the Author model.

In the Author model, an association attribute $hasMany is defined with the value Book to inform the model that every author can be related to many books. We also added a $belongsTo attribute in the Book model and set its value to Author to let the Book model know that every book is related to only one author. After defining the associations, two controllers were created for both of these models with scaffolding to see how the associations are working.
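To see what these associations give us in practice, the small sketch below (not part of the original listing) shows how fetching an author from a controller action now pulls in the related books automatically; the action name and array values are illustrative.

<?php
// Hypothetical action in AuthorsController: with the hasMany association in
// place, findById() returns the author together with all associated books.
function view($id = null) {
    $author = $this->Author->findById($id);
    // $author now looks roughly like:
    // array(
    //     'Author' => array('id' => 1, 'name' => '...', 'email' => '...'),
    //     'Book'   => array(
    //         0 => array('id' => 1, 'title' => '...', 'author_id' => 1),
    //         1 => array('id' => 2, 'title' => '...', 'author_id' => 1),
    //     )
    // )
    $this->set('author', $author);
}
?>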


Writing a Package in Python

Packt
23 Oct 2009
18 min read
Writing a Package

Its intents are:

- To shorten the time needed to set up everything before starting the real work, in other words the boiler-plate code
- To provide a standardized way to write packages
- To ease the use of a test-driven development approach
- To facilitate the releasing process

It is organized in the following four parts:

- A common pattern for all packages that describes the similarities between all Python packages, and how distutils and setuptools play a central role
- How generative programming (http://en.wikipedia.org/wiki/Generative_programming) can help this through the template-based approach
- The package template creation, where everything needed to work is set
- Setting up a development cycle

A Common Pattern for All Packages

The easiest way to organize the code of an application is to split it into several packages using eggs. This makes the code simpler, and easier to understand, maintain, and change. It also maximizes the reusability of each package. They act like components. Applications for a given company can have a set of eggs glued together with a master egg. Therefore, all packages can be built using egg structures. This section presents how a namespaced package is organized, released, and distributed to the world through distutils and setuptools.

Writing an egg is done by layering the code in a nested folder that provides a common prefix namespace. For instance, for the Acme company, the common namespace can be acme. The result is a namespaced package. For example, a package whose code relates to SQL can be called acme.sql. The best way to work with such a package is to create an acme.sql folder that contains the acme and then the sql folder.

setup.py, the Script That Controls Everything

The root folder contains a setup.py script, which defines all metadata as described in the distutils module, combined as arguments in a call to the standard setup function. This function was extended by the third-party library setuptools, which provides most of the egg infrastructure. The boundary between distutils and setuptools is getting fuzzy, and they might merge one day. Therefore, the minimum content for this file is:

from setuptools import setup
setup(name='acme.sql')

name gives the full name of the egg. From there, the script provides several commands that can be listed with the --help-commands option.

$ python setup.py --help-commands
Standard commands:
  build             build everything needed to install
  ...
  install           install everything from build directory
  sdist             create a source distribution
  register          register the distribution
  bdist             create a built (binary) distribution

Extra commands:
  develop           install package in 'development mode'
  ...
  test              run unit tests after in-place build
  alias             define a shortcut
  bdist_egg         create an "egg" distribution

The most important commands are the ones left in the preceding listing. Standard commands are the built-in commands provided by distutils, whereas Extra commands are the ones created by third-party packages such as setuptools or any other package that defines and registers a new command.

sdist

The sdist command is the simplest command available. It creates a release tree where everything needed to run the package is copied. This tree is then archived in one or many archived files (often, it just creates one tar ball). The archive is basically a copy of the source tree. This command is the easiest way to distribute a package independently of the target system.
It creates a dist folder with the archives in it that can be distributed. To be able to use it, an extra argument has to be passed to setup to provide a version number. If you don't give it a version value, it will use version = 0.0.0:

from setuptools import setup
setup(name='acme.sql', version='0.1.1')

This number is useful to upgrade an installation. Every time a package is released, the number is raised so that the target system knows it has changed. Let's run the sdist command with this extra argument:

$ python setup.py sdist
running sdist
...
creating dist
tar -cf dist/acme.sql-0.1.1.tar acme.sql-0.1.1
gzip -f9 dist/acme.sql-0.1.1.tar
removing 'acme.sql-0.1.1' (and everything under it)
$ ls dist/
acme.sql-0.1.1.tar.gz

Under Windows, the archive will be a ZIP file. The version is used to mark the name of the archive, which can be distributed and installed on any system having Python. In the sdist distribution, if the package contains C libraries or extensions, the target system is responsible for compiling them. This is very common for Linux-based systems or Mac OS because they commonly provide a compiler. But it is less usual to have it under Windows. That's why a package should always be distributed with a pre-built distribution as well, when it is intended to run under several platforms.

The MANIFEST.in File

When building a distribution with sdist, distutils browses the package directory looking for files to include in the archive. distutils will include:

- All Python source files implied by the py_modules, packages, and scripts options
- All C source files listed in the ext_modules option
- Files that match the glob pattern test/test*.py
- README, README.txt, setup.py, and setup.cfg files

Besides, if your package is under Subversion or CVS, sdist will browse folders such as .svn to look for files to include. sdist builds a MANIFEST file that lists all files and includes them into the archive.

Let's say you are not using these version control systems, and need to include more files. Now, you can define a template called MANIFEST.in in the same directory as that of setup.py for the MANIFEST file, where you indicate to sdist which files to include. This template defines one inclusion or exclusion rule per line, for example:

include HISTORY.txt
include README.txt
include CHANGES.txt
include CONTRIBUTORS.txt
include LICENSE
recursive-include *.txt *.py

The full list of commands is available at http://docs.python.org/dist/sdist-cmd.html#sdist-cmd.

build and bdist

To be able to distribute a pre-built distribution, distutils provides the build command, which compiles the package in four steps:

- build_py: Builds pure Python modules by byte-compiling them and copying them into the build folder.
- build_clib: Builds C libraries, when the package contains any, using the C compiler and creating a static library in the build folder.
- build_ext: Builds C extensions and puts the result in the build folder like build_clib.
- build_scripts: Builds the modules that are marked as scripts. It also changes the interpreter path when the first line was set (#!) and fixes the file mode so that it is executable.

Each of these steps is a command that can be called independently. The result of the compilation process is a build folder that contains everything needed for the package to be installed. There's no cross-compiler option yet in the distutils package. This means that the result of the command is always specific to the system it was built on.
Some people have recently proposed patches in the Python tracker to make distutils able to cross-compile the C parts, so this feature might be available in the future.

When some C extensions have to be created, the build process uses the system compiler and the Python header file (Python.h). This include file is available when Python was built from the sources. For a packaged distribution, an extra package called python-dev often contains it, and has to be installed as well.

The C compiler used is the system compiler. For a Linux-based system or Mac OS X, this would be gcc. For Windows, Microsoft Visual C++ can be used (there's a free command-line version available) and the open-source project MinGW as well. This can be configured in distutils.

The build command is used by the bdist command to build a binary distribution. It calls build and all dependent commands, and then creates an archive in the same way as sdist does.

Let's create a binary distribution for acme.sql under Mac OS X:

$ python setup.py bdist
running bdist
running bdist_dumb
running build
...
running install_scripts
tar -cf dist/acme.sql-0.1.1.macosx-10.3-fat.tar .
gzip -f9 acme.sql-0.1.1.macosx-10.3-fat.tar
removing 'build/bdist.macosx-10.3-fat/dumb' (and everything under it)
$ ls dist/
acme.sql-0.1.1.macosx-10.3-fat.tar.gz    acme.sql-0.1.1.tar.gz

Notice that the newly created archive's name contains the name of the system and the distribution it was built under (Mac OS X 10.3).

The same command called under Windows will create a specific distribution archive:

C:\acme.sql> python.exe setup.py bdist
...
C:\acme.sql> dir dist
25/02/2008  08:18     <DIR>          .
25/02/2008  08:18     <DIR>          ..
25/02/2008  08:24             16 055 acme.sql-0.1.win32.zip
               1 File(s)          16 055 bytes
               2 Dir(s)   22 239 752 192 bytes free

If a package contains C code, apart from a source distribution, it's important to release as many different binary distributions as possible. At the very least, a Windows binary distribution is important for those who don't have a C compiler installed.

A binary release contains a tree that can be copied directly into the Python tree. It mainly contains a folder that is copied into Python's site-packages folder.

bdist_egg

The bdist_egg command is an extra command provided by setuptools. It basically creates a binary distribution like bdist, but with a tree comparable to the one found in the source distribution. In other words, the archive can be downloaded, uncompressed, and used as it is by adding the folder to the Python search path (sys.path). These days, this distribution mode should be used instead of the bdist-generated one.

install

The install command installs the package into Python. It will try to build the package if no previous build was made, and then inject the result into the Python tree. When a source distribution is provided, it can be uncompressed in a temporary folder and then installed with this command. The install command will also install dependencies that are defined in the install_requires metadata. This is done by looking at the packages in the Python Package Index (PyPI). For instance, to install pysqlite and SQLAlchemy together with acme.sql, the setup call can be changed to:

from setuptools import setup
setup(name='acme.sql', version='0.1.1',
      install_requires=['pysqlite', 'SQLAlchemy'])

When we run the command, both dependencies will be installed.
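setuptools also accepts version specifiers inside install_requires, which is worth using so that an installation on the target system does not silently pull an incompatible release. The exact version ranges below are illustrative, not a recommendation from the original text:

from setuptools import setup

setup(name='acme.sql',
      version='0.1.1',
      # each requirement can pin a minimum version, a range, or an exact one
      install_requires=['pysqlite>=2.0',
                        'SQLAlchemy>=0.4,<0.5'])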
How to Uninstall a Package

The command to uninstall a previously installed package is missing in setup.py. Adding one has been proposed in the past, but it is not trivial at all because an installer might change files that are used by other elements of the system. The best way would be to create a snapshot of all elements that are being changed, and a record of all files and directories created.

A record option exists in install to record all files that have been created in a text file:

$ python setup.py install --record installation.txt
running install
...
writing list of installed files to 'installation.txt'

This will not create any backup of any existing file, so removing the files mentioned might break the system. There are platform-specific solutions to deal with this. For example, distutils allows you to distribute the package as an RPM package. But there's no universal way to handle it as yet.

The simplest way to remove a package at this time is to erase the files created, and then remove any reference in the easy-install.pth file that is located in the site-packages folder.

develop

setuptools added a useful command to work with the package. The develop command builds and installs the package in place, and then adds a simple link into the Python site-packages folder. This allows the user to work with a local copy of the code, even though it's available within Python's site-packages folder. All packages that are being worked on can be linked to the interpreter with the develop command.

When a package is installed this way, it can be removed specifically with the -u option, unlike the regular install:

$ sudo python setup.py develop
running develop
...
Adding iw.recipe.fss 0.1.3dev-r7606 to easy-install.pth file
Installed /Users/repos/ingeniweb.sourceforge.net/iw.recipe.fss/trunk
Processing dependencies ...

$ sudo python setup.py develop -u
running develop
Removing
...
Removing iw.recipe.fss 0.1.3dev-r7606 from easy-install.pth file

Notice that a package installed with develop will always prevail over other installed versions of the same package.

test

Another useful command is test. It provides a way to run all tests contained in the package. It scans the folder and aggregates the test suites it finds. The test runner tries to collect tests in the package but is quite limited. A good practice is to hook an extended test runner such as zope.testing or Nose that provides more options.

To hook Nose transparently to the test command, the test_suite metadata can be set to 'nose.collector' and Nose added to the tests_require list:

setup(
    ...
    test_suite='nose.collector',
    tests_require=['Nose'],
    ...
)

register and upload

To distribute a package to the world, two commands are available:

- register: This will upload all metadata to a server.
- upload: This will upload to the server all archives previously built in the dist folder.

The main PyPI server, previously named the Cheeseshop, is located at http://pypi.python.org/pypi and contains over 3000 packages from the community. It is the default server used by the distutils package, and an initial call to the register command will generate a .pypirc file in your home directory.

Since the PyPI server authenticates people, when changes are made to a package, you will be asked to create a user over there. This can also be done at the prompt:

$ python setup.py register
running register
...
We need to know who you are, so please choose either:
 1. use your existing login,
 2. register as a new user,
 3. have the server generate a new password for you (and email it to you), or
 4. quit
Your selection [default 1]:
Now, a .pypirc file will appear in your home directory containing the user and password you have entered. These will be used every time register or upload is called:

[server-index]
username: tarek
password: secret

There is a bug on Windows with Python 2.4 and 2.5: the home directory is not found by distutils unless a HOME environment variable is added. This has been fixed in 2.6. To add it, use the same technique as for the PATH variable, and create a HOME variable for your user that points to the directory returned by os.path.expanduser('~').

When the download_url metadata or the url is specified, and is a valid URL, the PyPI server will make it available to the users on the project web page as well. Using the upload command will make the archive directly available at PyPI, so the download_url can be omitted.

distutils defines a Trove categorization (see PEP 301: http://www.python.org/dev/peps/pep-0301/#distutils-trove-classification) to classify the packages, such as the one defined at Sourceforge. The trove is a static list that can be found at http://pypi.python.org/pypi?%3Aaction=list_classifiers, and that is augmented from time to time with new entries. Each line is composed of levels separated by "::":

...
Topic :: Terminals
Topic :: Terminals :: Serial
Topic :: Terminals :: Telnet
Topic :: Terminals :: Terminal Emulators/X Terminals
Topic :: Text Editors
Topic :: Text Editors :: Documentation
Topic :: Text Editors :: Emacs
...

A package can be classified in several categories, which can be listed in the classifiers metadata. A GPL package that deals with low-level Python code (for instance) can use:

Programming Language :: Python
Topic :: Software Development :: Libraries :: Python Modules
License :: OSI Approved :: GNU General Public License (GPL)

Python 2.6 .pypirc Format

The .pypirc file has evolved under Python 2.6, so several users and their passwords can be managed along with several PyPI-like servers. A Python 2.6 configuration file will look somewhat like this:

[distutils]
index-servers =
    pypi
    alternative-server
    alternative-account-on-pypi

[pypi]
username:tarek
password:secret

[alternative-server]
username:tarek
password:secret
repository:http://example.com/pypi

The register and upload commands can pick a server with the help of the -r option, using the repository full URL or the section name:

# upload to http://example.com/pypi
$ python setup.py sdist upload -r alternative-server

# registers with default account (tarek at pypi)
$ python setup.py register

# registers to http://example.com
$ python setup.py register -r http://example.com/pypi

This feature allows interaction with servers other than PyPI. When dealing with a lot of packages that are not to be published at PyPI, a good practice is to run your own PyPI-like server. The Plone Software Center (see http://plone.org/products/plonesoftwarecenter) can be used, for example, to deploy a web server that can interact with the distutils upload and register commands.

Creating a New Command

distutils allows you to create new commands, as described in http://docs.python.org/dist/node84.html. A new command can be registered with an entry point, which was introduced by setuptools as a simple way to define packages as plug-ins. An entry point is a named link to a class or a function that is made available through some APIs in setuptools. Any application can scan for all registered packages and use the linked code as a plug-in.
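As a rough sketch, the linked code is typically a class deriving from setuptools' Command, implementing the three methods the command machinery expects. The module path my.command.module and the class name Class are placeholders chosen to match the entry point registration shown next:

# my/command/module.py -- hypothetical module holding the plug-in class
from setuptools import Command

class Class(Command):
    description = "example command exposed through an entry point"
    user_options = []            # no command-line options in this sketch

    def initialize_options(self):
        pass                     # set defaults for the options here

    def finalize_options(self):
        pass                     # validate or normalize the options here

    def run(self):
        # the actual work of the command goes here
        print "my_command was called"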
To link the new command, the entry_points metadata can be used in the setup call:

setup(name="my.command",
      entry_points="""
          [distutils.commands]
          my_command = my.command.module.Class
      """)

All named links are gathered in named sections. When distutils is loaded, it scans for links that were registered under distutils.commands. This mechanism is used by numerous Python applications that provide extensibility.

setup.py Usage Summary

There are three main actions to take with setup.py:

- Build a package.
- Install it, possibly in develop mode.
- Register and upload it to PyPI.

Since all the commands can be combined in the same call, some typical usage patterns are:

# register the package with PyPI, create a source and
# an egg distribution, then upload them
$ python setup.py register sdist bdist_egg upload

# install it in-place, for development purposes
$ python setup.py develop

# install it
$ python setup.py install

The alias Command

To make the command line easier to work with, a new command has been introduced by setuptools called alias. In a file called setup.cfg, it creates an alias for a given combination of commands. For instance, a release command can be created to perform all actions needed to upload a source and a binary distribution to PyPI:

$ python setup.py alias release register sdist bdist_egg upload
running alias
Writing setup.cfg

$ python setup.py release
...

Other Important Metadata

Besides the name and the version of the package being distributed, the most important arguments setup can receive are:

- description: A few sentences to describe the package
- long_description: A full description that can be in reStructuredText
- keywords: A list of keywords that define the package
- author: The author's name or organization
- author_email: The contact email address
- url: The URL of the project
- license: The license (GPL, LGPL, and so on)
- packages: A list of all names in the package; setuptools provides a small function called find_packages that calculates this
- namespace_packages: A list of namespaced packages

A completed setup.py file for acme.sql would be:

import os
from setuptools import setup, find_packages

version = '0.1.0'
README = os.path.join(os.path.dirname(__file__), 'README.txt')
long_description = open(README).read() + '\n\n'

setup(name='acme.sql',
      version=version,
      description=("A package that deals with SQL, "
                   "from ACME inc"),
      long_description=long_description,
      classifiers=[
          "Programming Language :: Python",
          ("Topic :: Software Development :: Libraries :: "
           "Python Modules"),
      ],
      keywords='acme sql',
      author='Tarek',
      author_email='tarek@ziade.org',
      url='http://ziade.org',
      license='GPL',
      packages=find_packages(),
      namespace_packages=['acme'],
      install_requires=['pysqlite', 'SQLAlchemy'])

The two comprehensive guides to keep under your pillow are:

- The distutils guide at http://docs.python.org/dist/dist.html
- The setuptools guide at http://peak.telecommunity.com/DevCenter/setuptools
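As a final sanity check before releasing, it can help to confirm what find_packages actually picks up from the project root. The interactive session below is illustrative and assumes the acme.sql layout sketched earlier:

>>> from setuptools import find_packages
>>> find_packages()
['acme', 'acme.sql']

Both entries matter: acme.sql is the code being shipped, while acme is also listed in namespace_packages so that several acme.* eggs can share the same prefix once installed.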