
How-To Tutorials - Web Development

1802 Articles

Database Considerations for PHP 5 CMS

Packt
17 Aug 2010
10 min read
The problem

Building methods that:

- Handle common patterns of data manipulation securely and efficiently
- Help ease the database changes needed as data requirements evolve
- Provide powerful data objects at low cost

Discussion and considerations

Relational databases provide an effective and readily available means to store data. Once established, they normally behave consistently and reliably, making them easier to use than file systems. And clearly a database can do much more than a simple file system!

Efficiency can quickly become an issue, both in how often requests are made to a database and in how long queries take. One way to offset the cost of database queries is to use a cache at some stage in the processing. Whatever the framework does, a major factor will always be the care developers of extensions take over the design of table structures and software; the construction of SQL can also make a big difference. The examples included here have been assiduously optimized so far as the author is capable, although suggestions for further improvement are always welcome!

Web applications are typically much less mature than more traditional data processing systems. This stems from factors such as speed of development and deployment. Also, techniques that are effective for programs that run for a relatively long time do not make sense for the brief processing applied to a single website request. For example, although PHP allows persistent database connections, thereby reducing the cost of making a fresh connection for each request, it is generally considered unwise to use this option because it is liable to create large numbers of dormant processes and slow down database operations excessively. Likewise, prepared statements have advantages for performance and possibly security, but are more laborious to implement, so their advantages are diluted in a situation where a statement cannot be used more than once.

Perhaps even more than performance, security is an issue for web development, and there are well-known routes for attacking databases. They need to be carefully blocked.

The primary goal of a framework is to make further development easy. Writing web software frequently involves the same patterns of database access, and a framework can help a lot by implementing methods at a higher level than the basic PHP database access functions.

In an ideal world, an object-oriented system is developed entirely on the basis of OO principles. But if no attention is paid to how the objects will be stored, problems arise. An object database has obvious appeal, but for a variety of reasons such databases are not widely used. Web applications have to be pragmatic, so the aim pursued here is the creation of database designs that occasionally ignore strict relational principles, and objects that are sometimes simpler than idealized designs might suggest. The benefit of making these compromises is that it becomes practical to achieve a useful correspondence between database rows and PHP objects.

It is possible that PHP Data Objects (PDO) will become very important in this area, but it is a relatively new development. Use of PDO is likely to pick up gradually as it becomes more commonly found in typical web hosting, and as developers get to understand what it can offer. For the time being, the safest approach seems to be for the framework to provide classes on which effective data objects can be built. A great deal can be achieved using this technique.
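As a concrete illustration of the prepared statements and PDO just mentioned, here is a minimal sketch; the DSN, credentials, table, and column names are placeholder assumptions rather than part of any particular framework.

<?php
// A minimal sketch of a prepared statement issued through PDO, as discussed above.
// The DSN, credentials, and table/column names are placeholder assumptions.
$pdo = new PDO('mysql:host=localhost;dbname=cms', 'cms_user', 'secret');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$authorId = 42; // example value, perhaps taken from a validated request parameter

// The statement is parsed once, and the bound value is never interpolated into the SQL,
// which blocks the classic injection route; the benefit is diluted when, as here,
// the statement is executed only once per request.
$stmt = $pdo->prepare('SELECT id, title FROM content WHERE author_id = :author');
$stmt->execute(array(':author' => $authorId));
$rows = $stmt->fetchAll(PDO::FETCH_ASSOC);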
Database dependency

Lest this section create too much disappointment, let me say at the outset that this article does not provide any help with achieving database independence. The best that can be done here is to explain why not, and what can be done to limit dependency.

Nowadays, the most popular kind of database employs the relational model. All relational database systems implement the same theoretical principles, and even use more or less the same structured query language. People use products from different vendors for an immense variety of reasons, some better than others. For web development, MySQL is very widely available, although PostgreSQL is another highly regarded database system that is available without cost. There are a number of well-known proprietary systems, and existing databases often contain valuable information, which motivates attempts to link them into CMS implementations.

In this situation, there are frequent requests for web software to become database independent. There are, sadly, practical obstacles to achieving this. It is conceptually simple to provide the mechanics of access to a variety of different database systems, although the work involved is laborious. The result can be cumbersome, too. But the biggest problem is that SQL statements are inclined to vary across different systems. It is easy in theory to assert that only the common core of SQL that works on all database systems should be used. The serious obstacle here is that very few developers are knowledgeable about what comprises the common core. ANSI SQL might be thought to provide a system-neutral language, but then not all of ANSI SQL is implemented by every system. So the fact is that developers become expert in one particular database system, or at best a handful.

Skilled developers are conscious of the standardization issue, and where there is a choice, they will prefer to write according to standards. For example, it is better to write:

SELECT username, userid, count(userid) AS number
FROM aliro_session AS s
INNER JOIN aliro_session_data AS d ON s.session_id = d.session_id
WHERE isadmin = 0
GROUP BY userid

rather than:

SELECT username, userid, count(userid) AS number
FROM aliro_session AS s, aliro_session_data AS d
WHERE s.session_id = d.session_id AND isadmin = 0
GROUP BY userid

This is because it makes the nature of the query clearer, and also because it is less vulnerable to detailed syntax variations across database systems.

Use of extensions that are only available in some database systems is a major problem for query standardization. Again, it is easy while theorizing to deplore the use of non-standard extensions. In practice, some of them are so tempting that few developers resist them. An older MySQL extension was the REPLACE command, which would either insert or update data depending on whether a matching key was already present in the database. This is now discouraged on the grounds that it achieved its result by deleting any matching data before doing an insertion, which can have adverse effects on linked foreign keys. The newer INSERT ... ON DUPLICATE KEY construction, however, provides a very neat, efficient way to handle the case where data needs to go into the database allowing for what is already there. It is more efficient in every way than trying to read before choosing between INSERT and UPDATE, and it also avoids the need for a transaction.
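To make the INSERT ... ON DUPLICATE KEY construction concrete, here is a hedged sketch reusing the PDO connection from the previous example; the table and column names are invented for illustration, and the construct itself is MySQL-specific.

<?php
// Sketch of MySQL's INSERT ... ON DUPLICATE KEY UPDATE; table and column names are
// invented, and a unique key on (user_id, pref_name) is assumed.
$sql = "INSERT INTO user_preference (user_id, pref_name, pref_value)
        VALUES (:user, :name, :value)
        ON DUPLICATE KEY UPDATE pref_value = VALUES(pref_value)";

// One round trip either inserts a new row or updates the existing one,
// with no read-before-write and no explicit transaction.
$stmt = $pdo->prepare($sql);
$stmt->execute(array(':user' => 42, ':name' => 'theme', ':value' => 'dark'));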
Similarly, there is no standard way to obtain a slice of a result set, for example starting with the eleventh item and comprising the next ten items. Yet this is exactly the operation needed to efficiently populate the second page of a list of items, ten per page. MySQL's LIMIT clause, which accepts a starting offset as well as a row count, is ideal for this purpose. Because of these practical issues, independence of database systems remains a desirable goal that is rarely fully achieved. The most practical policy seems to be to avoid dependencies wherever that is possible at reasonable cost.

The role of the database

We have already noted that a database can be thought of as uncontrolled global data, assuming the database connection is generally available. So there should be policies on database access to prevent this becoming a liability. One policy adopted by Aliro is to use two distinct databases. The "core" database is reserved for tables that are needed by the basic framework of the CMS. Other tables, including those created by extensions to the CMS framework, use the "general" database.

Although it is difficult to enforce restrictions, one policy that is immediately attractive is that the core database should never be accessed by extensions. How data is stored is an implementation matter for the various classes that make up the framework, and a selection of public methods should make up the public interface. Confining access to those public methods that constitute the API for the framework leaves open the possibility of developing the internal mechanisms with little or no change to the API. If the framework does not provide the information needed by extensions, then its API needs further development. The solution should not be direct access to the core database.

Much the same applies to the general database, except that it may contain tables that are intended to be part of an API. By and large, extensions should restrict their database operations to their own tables, and provide object methods to implement interfaces across extensions. This is especially so for write operations, but should usually apply to all database operations.

Level of database abstraction

There have been some clues earlier in this article, but it is worth squarely addressing the question of how far the CMS database classes should go in insulating other classes from the database. All of the discussion here is based on the idea that currently the best available style of development is object oriented. But we have already decided that using a true object database is not usually a practical option for web development.

The next option to consider is building a layer to provide an object-relational transformation, so that outside of the database classes, nobody needs to deal with purely relational concepts or with SQL. An example of a framework that does this is Propel, which can be found at http://propel.phpdb.org/trac/. While developments of this kind are interesting and attractive in principle, I am not convinced that they provide an acceptable level of performance and flexibility for current CMS developments. There can be severe overheads on object-relational operations, and manual intervention is likely to be necessary if high performance is a goal. For that reason, it seems that for some while yet, CMS developments will be based on more direct use of a relational database.
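Returning to the result-set paging mentioned at the start of this section, here is a short sketch of how MySQL's LIMIT clause fetches the second page of ten items; again the table and column names are invented for illustration.

<?php
// Paging with MySQL's LIMIT clause, as described under "Database dependency".
// Table and column names are illustrative only.
$page     = 2;
$pageSize = 10;
$offset   = ($page - 1) * $pageSize;   // 10: skip the first page of rows

// Cast to int before building the statement, since LIMIT arguments should not be
// treated as ordinary quoted string parameters.
$sql = sprintf('SELECT id, title FROM content ORDER BY created DESC LIMIT %d, %d',
               (int) $offset, (int) $pageSize);
$secondPage = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);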
Another complicating factor is the limitations of PHP in respect of static methods, which are obliged to operate within the environment of the class in which they are declared, irrespective of the class that was invoked in the call. This constraint is lifted in PHP 5.3, but at the time of writing, reliance on PHP 5.3 would be premature, as it has not yet found its way into most stable software distributions. With more flexibility in the use of static methods and properties, it would be possible to create a better framework of database-related properties.

Given what is currently practical, and given experience of what is actually useful in the development of applications to run within a CMS framework, the realistic goals are as follows:

- To create a database object that connects, possibly through a choice of different connectors, to a particular database and provides the ability to run SQL queries
- To enable the creation of objects that correspond to database rows and have the ability to load themselves with data or to store themselves in the database

Some operations, such as the update of a single row, are best achieved through the use of a database row object. Others, such as deletion, are often applied to a number of rows, chosen from a list by the user, and are best effected through a SQL query. A minimal sketch of a self-loading, self-storing row object follows this section.

You can obtain powerful code for achieving the automatic creation of HTML by downloading the full Aliro project. Unfortunately, experience in use has been disappointing. Often, so much customization of the automated code is required that the gains are nullified, and the automation becomes just an overhead. This topic is therefore given little emphasis.
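As promised above, here is a minimal sketch of a database row object of the kind the second goal describes: it can load itself by primary key and store itself back. The class design, table handling, and method names are illustrative assumptions and are not Aliro's actual implementation.

<?php
// A minimal sketch of a self-loading, self-storing "database row" object, as described
// above. Class, table, and column names are invented for illustration.
abstract class DatabaseRow
{
    protected $pdo;

    abstract protected function tableName();   // e.g. 'content'
    abstract protected function keyName();     // e.g. 'id'

    public function __construct(PDO $pdo)
    {
        $this->pdo = $pdo;
    }

    // Populate the object's properties from one database row, selected by key.
    public function load($key)
    {
        $sql  = sprintf('SELECT * FROM %s WHERE %s = ?', $this->tableName(), $this->keyName());
        $stmt = $this->pdo->prepare($sql);
        $stmt->execute(array($key));
        $row = $stmt->fetch(PDO::FETCH_ASSOC);
        if ($row === false) {
            return false;
        }
        foreach ($row as $column => $value) {
            $this->$column = $value;
        }
        return true;
    }

    // Write the object's properties back as an UPDATE of the same row.
    public function store()
    {
        $fields = get_object_vars($this);
        unset($fields['pdo']);
        $key = $fields[$this->keyName()];
        unset($fields[$this->keyName()]);

        $assignments = array();
        foreach (array_keys($fields) as $column) {
            $assignments[] = $column . ' = ?';
        }
        $sql  = sprintf('UPDATE %s SET %s WHERE %s = ?',
                        $this->tableName(), implode(', ', $assignments), $this->keyName());
        $stmt = $this->pdo->prepare($sql);
        return $stmt->execute(array_merge(array_values($fields), array($key)));
    }
}

A concrete subclass only has to supply its table and key names; the generic load and store logic is inherited, which is the kind of low-cost data object the goals above call for.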

URL Shorteners – Designing the TinyURL Clone with Ruby

Packt
16 Aug 2010
12 min read
We start off with an easy application, a simple yet very useful Internet application: URL shorteners. We will take a quick tour of URL shorteners before jumping into the design of a simple URL shortener, followed by an in-depth discussion of how we clone our own URL shortener, Tinyclone.

All about URL shorteners

Internet applications don't always need to be full of features or cover all aspects of your Internet life to be successful. Sometimes it's OK to be simple and just focus on providing a single feature. It doesn't even need to be earth-shatteringly important—it should be just useful enough for its target users. The archetypal and probably most extreme example of this is the URL shortening application, or URL shortener. This service offers a very simple but surprisingly useful feature: it provides a shorter URL that represents a normally longer URL. When a user goes to the short URL, he will be redirected to the original URL. For this simple feature, the top three most popular URL shortening services (TinyURL, bit.ly, and is.gd) collectively had about 11 million unique visitors, 110 million page views, and a reach of about one percent of the Internet in June 2009. In 2008, the most popular URL shortener at that time, TinyURL, was made one of Time Magazine's Top 50 Best Websites.

The idea of shortening long and unwieldy URLs into shorter, more manageable ones has been around for some time. One of the earlier attempts to make it a public service is Make A Shorter Link (MASL), which appeared around July 2001. MASL did just that, though the usefulness was debatable, as the domain name was long and the shortened URL could potentially be longer than the original. However, the pioneering site that popularized this concept (and subsequently bought over MASL and a few other similar sites) is TinyURL. TinyURL was launched in January 2002 by Kevin Gilbertson to help him link directly to newsgroup postings, which frequently had long URLs. It rapidly became one of the most popular URL shorteners around. In 2008, an estimated 100 similar services had come into existence in various forms.

URLs, or Uniform Resource Locators, are resource identifiers that specify where identified resources are available and how they can be retrieved. A popular term for a URL is a web address. Every URL is made up of the following:

<resource type>://<username>:<password>@<domain>:<port>/<file path name>?<query string>#<anchor>

Not all parts of the URL are required by a browser. If the resource type is missing, it is normally assumed to be http; if the port is missing, it is normally assumed to be 80 (for http). The username, password, query string, and anchor components are optional.

Initially, TinyURL and similar types of URL shorteners focused on simply providing a short representative URL to their users. Naturally, the competitive breadth for shortening URLs was rather, well, short. Many chose TinyURL over MASL because TinyURL had a shorter and easier to remember domain name (http://tinyurl.com over http://makeashorterlink.com). Subsequent competition over this space intensified and extended to providing various other features, including custom short URLs (TinyURL, bit.ly), analysis of click-through statistics (bit.ly), advertisements (Adjix, Linkbee), preview pages (TinyURL, is.gd), and so on. The explosive growth of Twitter (from June 2008 to June 2009, Twitter grew 1,164%) opened a new chapter for URL shorteners.
Twitter chose a limit of 140 characters for each tweet to accommodate the 160 characters of an SMS message (Twitter was invented as a service for people to use SMS to tell small groups what they are doing). With Twitter's popularity skyrocketing came the need for users to shorten URLs to fit into the 140-character limit. Originally Twitter used TinyURL as its default URL shortener, and this triggered a steep climb in the usage of TinyURL during the early days of Twitter. However, in May 2009, bit.ly replaced TinyURL as Twitter's default URL shortener and the impact was immediate. For the first time in that period, TinyURL recorded a drop in the number of users, falling from 6.1 million to 5.3 million unique users in May 2009, while bit.ly jumped from 1.8 million to 2.9 million almost overnight.

That's not the end of the story though. In April 2010, during Twitter's Chirp conference, Twitter announced its own URL shortener (twt.tl). As of this writing it is still unclear how the market share will pan out, but it is clear that URL shorteners have good value and everyone is jumping into this market. In December 2009, Google came up with its own two URL shorteners, goo.gl and youtu.be. Amazon.com (amzn.to), Facebook (fb.me), and WordPress (wp.me) all have their own URL shorteners as well.

Next, let's do a quick review of why URL shorteners are so popular and why they attract criticism as well. Here's a quick summary of the benefits:

- Create short and easy to remember URLs
- Allow passing of links in character-limited services such as Twitter
- Create vanity URLs for marketing purposes
- Allow URLs to be passed verbally

The most obvious benefit of having a shortened URL is that it's, well, short. A typical example of a URL gone bad is a link to a location in Google Maps:

http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=singapore+flyer&vps=1&jsv=169c&sll=1.352083,103.819836&sspn=0.68645,1.382904&g=singapore&ie=UTF8&latlng=8354962237652576151&ei=Shh3SsSRDpb4vAPsxLS3BQ&cd=1&usq=Singapore+Flyer

Such URLs are meant to be clicked on, as it is virtually impossible to pass them around verbally. It might be justifiable if the URL is cut and pasted into documents, but sometimes certain applications will truncate parts of the URL while processing it. This makes a long URL difficult to click on and even produces erroneous links. In fact, this was the main motivation for creating most of the earlier URL shorteners—older email clients tend to truncate URLs when they are more than 80 characters long.

Short links are of course crucial in character-limited message passing systems like Twitter, Plurk, and SMS. Passing long URLs is impossible without URL shorteners. Short URLs are very useful as vanity URLs, where, for example, the Google Maps link above could be shortened to http://tinyurl.com/singapore-flyer. Such vanity URLs are useful when passing from one person to another, or even when used in mass marketing. Sticking to the maps theme in our examples, if you want to give a Google Maps link to your restaurant and put it up in catalogs and brochures, you will not want to give the long URL. Instead you would want a nice, descriptive, and short URL. Short URLs are also useful for accessibility. For example, reading out the Google Maps link above is almost impossible, but reading out the TinyURL link (vanity or otherwise) is much easier in comparison. Many popular URL shorteners also provide some form of statistics and analytics on the usage of the links.
This feature allows you to track your short URLs to see how many clicks they received and what kind of patterns can be derived from the clicks. Although the metrics are usually not advanced, they do provide basic usefulness.

On the other hand, URL shorteners have their fair share of criticisms as well. Here is a summary of the bad side of URL shorteners:

- They provide an opportunity for spammers, because they hide original URLs
- They can be unreliable if you depend on them for redirection
- Undesirable or vulgar short URLs can be created

URL shorteners have security issues. When a URL shortener creates a short URL, it effectively hides the original link, and this provides an opportunity for spammers or other abusers to redirect users to their sites. One relatively mild form of such attack is 'rickrolling'. Rickrolling uses a classic bait-and-switch trick to redirect users to the Rick Astley music video of "Never Gonna Give You Up". For example, you might believe that the URL http://tinyurl.com/singapore-flyer goes to Google Maps, but when you click on it, you might be rickrolled and redirected to that Rick Astley music video instead. Also, because most short URLs are not customized, it is quite difficult to tell whether the link is genuine just from the URL. Many prominent websites and applications have such concerns, including MySpace, Flickr, and even Microsoft Live Messenger, and have at one time or another banned or restricted usage of TinyURL because of this problem. To combat spammers and fraud, URL shortening services have come up with the idea of link previews, which allows users to preview a short URL before it redirects them to the long URL. For example, TinyURL will show the user the long URL on a preview page and require the user to explicitly go to the long URL.

Another problem is performance and reliability. When you access a website, your browser goes to a few DNS servers to resolve the address, but the URL shortener adds another layer of indirection. While DNS servers have redundancy and failsafe measures, there is no such assurance from URL shorteners. If the traffic to a particular link becomes too high, will the shortening service provider be able to add more servers to improve performance, or even prevent a meltdown altogether? The problem, of course, lies in over-dependency on the shortening service.

Finally, a negative side effect of random or even customized short URLs is that undesirable, vulgar, or embarrassing short URLs can be created. Early on, TinyURL short URLs were predictable, and this was exploited: embarrassing short URLs were made to redirect to the White House websites of then U.S. Vice President Dick Cheney and Second Lady Lynne Cheney.

We have just covered significant ground on URL shorteners. If you are a programmer you might be wondering, "Why do I need to know such information? I am really interested in the programming bits; the rest is just fluff to me." Background information on the application we want to clone is very important. It tells us why that application exists in the first place and gives us an idea of what its main features are (what makes it popular). It also tells us what problems it faces, so that we are aware of them while programming, or can even avoid them altogether. This is important when we come to the design of the application. Finally, it gives us a better appreciation of the application and of the motivations and issues faced by the product and technical people behind the application we wish to clone.
Main features

Next, let's list the features of a URL shortener. The intention in this section is to distill the basic features of the application—features that define the service and make the application what it is. However, as much as possible we also want to explore some additional features that extend the application and are provided by many of its competitors. Most importantly, the features here are mostly features of the most popular and definitive web application in the category; in this article, this will be TinyURL. These are the main features of a URL shortener:

- Users can create a short URL that represents a long URL
- Users who visit the short URL will be redirected to the long URL
- Users can preview a short URL to enable them to see what the long URL is
- Users can provide a custom URL to represent the long URL
- Undesirable words are not allowed in the short URL
- Users are able to view various statistics involving the short URL, including the number of clicks and where the clicks come from (optional, not in TinyURL)

URL shorteners are simple web applications, and the one that we will design and build will also be simple.

Designing the clone

Cloning TinyURL is relatively simple, but there is some thought behind the design of the application. We will be building a clone of TinyURL called Tinyclone, which will be hosted at the domain http://tinyclone.saush.com.

Creating a short URL for each long URL

The domain of the short URL is fixed. What's left is the file path name. We need to represent the long URL with a unique file path name (a key), one for each long URL. This means we need to persist the relationship between the key and the URL. One way to associate the long URL with a unique key is to hash the long URL and use the resulting hash as the unique key. However, the resulting hash might be long, and hashing functions can be slow. The faster and easier way is to use a relational database's auto-incremented row ID as the unique key; the database will help ensure the uniqueness of the ID. However, the running row ID number is base 10. Representing a million URLs already requires 7 characters, and representing 1 billion takes 10 characters. In order to keep the number of characters smaller, we need a larger base numbering system. In this clone we will use base 36, which uses the 26 letters of the alphabet (case insensitive) and the 10 digits. Using this system, we need only 4 characters to represent 1 million URLs:

1,000,000 base 36 = lfls

And 1 billion URLs can be represented in just six characters:

1,000,000,000 base 36 = gjdgxs
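The clone itself is written in Ruby, but the base conversion is language-neutral; as a quick illustration, PHP's built-in base_convert reproduces the figures quoted above.

<?php
// Converting a decimal row ID into the base-36 key discussed above, and back.
// base_convert handles bases 2 through 36 (digits 0-9 plus letters a-z).
echo base_convert('1000000', 10, 36), "\n";     // lfls
echo base_convert('1000000000', 10, 36), "\n";  // gjdgxs

// Mapping a short key back to the row ID it represents:
echo base_convert('gjdgxs', 36, 10), "\n";      // 1000000000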

Q Replication Components in IBM Replication Server

Packt
16 Aug 2010
8 min read
The individual stages for the different layers are shown in the following diagram:

The DB2 database layer

The first layer is the DB2 database layer, which involves the following tasks:

- For unidirectional replication, and all replication scenarios that use unidirectional replication as the base, we need to enable the source database for archive logging (but not the target database).
- For multi-directional replication, all the source and target databases need to be enabled for archive logging.
- We need to identify which tables we want to replicate. One of the steps is to set the DATA CAPTURE CHANGES flag for each source table, which is done automatically when the Q subscription is created. Setting this flag affects the minimum point-in-time recovery value for the table space containing the table, which should be carefully noted if table space recoveries are performed.

Before moving on to the WebSphere MQ layer, let's quickly look at the compatibility requirements for the database name, the table name, and the column names. We will also discuss whether or not we need unique indexes on the source and target tables.

Database/table/column name compatibility

In Q replication, the source and target database names and table names do not have to match on all systems. The database name is specified when the control tables are created. The source and target table names are specified in the Q subscription definition.

Now let's move on to whether or not we need unique indexes on the source and target tables. We do not need to be able to identify unique rows on the source table, but we do need to be able to do this on the target table. Therefore, the target table should have one of:

- Primary key
- Unique constraint
- Unique index

If none of these exist, then Q Apply will apply the updates using all columns. However, the source table must have the same constraints as the target table, so any constraints that exist at the target must also exist at the source, as shown in the following diagram:

The WebSphere MQ layer

This is the second layer we should install and test—if this layer does not work, then Q replication will not work! We can install either the WebSphere MQ Server code or the WebSphere MQ Client code. Throughout this book, we will be working with the WebSphere MQ Server code. If we are replicating between two servers, then we need to install WebSphere MQ Server on both servers. If we are installing WebSphere MQ Server on UNIX, then during the installation process a user ID and group called mqm are created. If we as DBAs want to issue MQ commands, then we need to get our user ID added to the mqm group.

Assuming that WebSphere MQ Server has been successfully installed, we now need to create the Queue Managers and the queues that are needed for Q replication. This section also includes tests that we can perform to check that the MQ installation and setup is correct. The following diagram shows the MQ objects that need to be created for unidirectional replication, and the figure after it shows the MQ objects that need to be created for bidirectional replication. There is a mixture of Local Queues (QLOCAL/QL) and Remote Queues (QREMOTE/QR), in addition to Transmission Queues (XMITQ) and channels. Once we have successfully completed the installation and testing of WebSphere MQ, we can move on to the next layer—the Q replication layer.
The Q replication layer

This is the third and final layer, which comprises the following steps:

- Create the replication control tables on the source and target servers.
- Create the transport definitions. By this we mean that we somehow need to tell Q replication what the source and target table names are, which rows/columns we want to replicate, and which Queue Managers and queues to use.

Some of the terms that are covered in this section are:

- Logical table
- Replication Queue Map
- Q subscription
- Subscription group (SUBGROUP)

What is a logical table?

In Q replication, we have the concept of a logical table, which is the term used to refer to both the source and target tables in one statement. An example in a peer-to-peer three-way scenario is shown in the following diagram, where the logical table is made up of tables TABA, TABB, and TABC.

What is a Replication/Publication Queue Map?

The first part of the transport definitions mentioned earlier is a definition called a Queue Map, which identifies the WebSphere MQ queues on both servers that are used to communicate between the servers. In Q replication, the Queue Map is called a Replication Queue Map, and in Event Publishing the Queue Map is called a Publication Queue Map.

Let's first look at Replication Queue Maps (RQMs). RQMs are used by Q Capture and Q Apply to communicate: Q Capture sends Q Apply rows to apply, and Q Apply sends administration messages back to Q Capture. Each RQM is made up of three queues: a queue on the local server called the Send Queue (SENDQ), and two queues on the remote server—a Receive Queue (RECVQ) and an Administration Queue (ADMINQ)—as shown in the preceding figures. An RQM can contain only one each of SENDQ, RECVQ, and ADMINQ. The SENDQ is the queue that Q Capture uses to send source data and informational messages. The RECVQ is the queue that Q Apply reads for transactions to apply to the target table(s). The ADMINQ is the queue that Q Apply uses to send control messages back to Q Capture. So, using the queues in the first "Queues" figure, the Replication Queue Map definition would be:

- Send Queue (SENDQ): CAPA.TO.APPB.SENDQ.REMOTE on Source
- Receive Queue (RECVQ): CAPA.TO.APPB.RECVQ on Target
- Administration Queue (ADMINQ): CAPA.ADMINQ.REMOTE on Target

Now let's look at Publication Queue Maps (PQMs). PQMs are used in Event Publishing and are similar to RQMs, in that they define the WebSphere MQ queues needed to transmit messages between two servers. The big difference is that, because Event Publishing has no Q Apply component, the definition of a PQM is made up of only a Send Queue.

What is a Q subscription?

The second part of the transport definitions is a definition called a Q subscription, which defines a single source/target combination and which Replication Queue Map to use for this combination. We set up one Q subscription for each source/target combination. Each Q subscription needs a Replication Queue Map, so we need to make sure we have one defined before trying to create a Q subscription. Note that if we are using the Replication Center, we can choose to create a Q subscription even though an RQM does not yet exist—the wizard will walk us through creating the RQM at the point at which it is needed.
The structure of a Q subscription is made up of a source and a target section, and we have to specify:

- The Replication Queue Map
- The source and target table
- The type of target table
- The type of conflict detection and action to be used
- The type of initial load, if any, to be performed

If we define a Q subscription for unidirectional replication, then we can choose the name of the Q subscription—for any other type of replication we cannot. Q replication does not have the concept of a subscription set, as there is in SQL Replication, where the subscription set holds all the tables that are related using referential integrity. In Q replication, we have to ensure that all the tables that are related through referential integrity use the same Replication Queue Map, which enables Q Apply to apply the changes to the target tables in the correct sequence. In the following diagram, Q subscription 1 uses RQM1, Q subscription 2 also uses RQM1, and Q subscription 3 uses RQM3.

What is a subscription group?

A subscription group is the name for a collection of Q subscriptions that are involved in multi-directional replication, and it is set using the SET SUBGROUP command.

Q subscription activation

In unidirectional, bidirectional, and peer-to-peer two-way replication, when Q Capture and Q Apply start, the Q subscription can be activated automatically (if that option was specified). For peer-to-peer three-way replication and higher, when Q Capture and Q Apply are started, only a subset of the Q subscriptions of the subscription group starts automatically, so we need to start the remaining Q subscriptions manually.

Easy guide to understand WCF in Visual Studio 2008 SP1 and Visual Studio 2010 Express

Packt
12 Aug 2010
4 min read
Creating your first WCF application in Visual Studio 2008

You start creating a WCF project by creating a new project from File | New | Project.... This opens the New Project window. You can see that there are four different templates available. We will be using the WCF Service Library template. Change the default name, provide a name for the project (herein JayWcf01), and click OK.

The project JayWcf01 gets created with the folder structure shown in the next image. If you were to expand the References node, you would notice that System.ServiceModel is already referenced. If it is not, for some reason, you can bring it in by using the Add Reference... window, which is displayed when you right-click the project in the Solution Explorer.

IService1.vb is a service interface file, as shown in the next listing. It defines the service contract and the operations expected of the service. If you change the interface name "IService1" here, you must also update the reference to "IService1" in App.config.

<ServiceContract()> _
Public Interface IService1

    <OperationContract()> _
    Function GetData(ByVal value As Integer) As String

    <OperationContract()> _
    Function GetDataUsingDataContract(ByVal composite As CompositeType) As CompositeType

    ' TODO: Add your service operations here
End Interface

' Use a data contract as illustrated in the sample below to add composite types to service operations
<DataContract()> _
Public Class CompositeType

    Private boolValueField As Boolean
    Private stringValueField As String

    <DataMember()> _
    Public Property BoolValue() As Boolean
        Get
            Return Me.boolValueField
        End Get
        Set(ByVal value As Boolean)
            Me.boolValueField = value
        End Set
    End Property

    <DataMember()> _
    Public Property StringValue() As String
        Get
            Return Me.stringValueField
        End Get
        Set(ByVal value As String)
            Me.stringValueField = value
        End Set
    End Property
End Class

The Service Contract is a contract that will be agreed between the Client and the Server. Both the Client and the Server should be working with the same service contract; the one shown above is on the server. Inside the service, data is handled as simple types (for example, GetData) or complex types (for example, GetDataUsingDataContract). Outside the service, however, these are handled as XML Schema Definitions. WCF Data Contracts provide a mapping between the data defined in the code and the XML Schema defined by the W3C, the standards organization.

The service performed when the terms of the contract are properly adhered to is in the listing of the Service1.vb file shown here:

' NOTE: If you change the class name "Service1" here, you must also update the reference to "Service1" in App.config.
Public Class Service1
    Implements IService1

    Public Function GetData(ByVal value As Integer) As String Implements IService1.GetData
        Return String.Format("You entered: {0}", value)
    End Function

    Public Function GetDataUsingDataContract(ByVal composite As CompositeType) As CompositeType Implements IService1.GetDataUsingDataContract
        If composite.BoolValue Then
            composite.StringValue = (composite.StringValue & "Suffix")
        End If
        Return composite
    End Function
End Class

Service1 defines the two methods of the service by way of Functions. GetData accepts a number and returns a string. For example, if the Client enters a value of 50, the Server response will be "You entered: 50". The function GetDataUsingDataContract takes an input consisting of a Boolean and a string and returns the Boolean and the string with 'Suffix' appended.
The JayWcf01 project is a complete program with a default example contract, IService1, and a defined service, Service1. This program is complete in itself. It is good practice to provide your own names for the objects; notwithstanding that, the default names are accepted in this demo. In what follows we test this program as is, and then slightly modify the contract and test it again. The testing in the next section will invoke a built-in client, and later on we will publish the service to localhost, which is an IIS 7 web server.

How to test this program

The program has a valid pair of contract and service, so we should be able to test this service. Windows Communication Foundation allows Visual Studio 2008 (and Visual Studio 2010 Express) to launch a host to test the service with a client. Build the program and, after it succeeds, hit F5. The WcfSvcHost is spawned and stays in the taskbar. You can click WcfSvcHost to display the WCF Service Host window, which shows that the host has started; the service is hosted on the development server. This is immediately followed by the WCF Test Client user interface popping up. In this harness you can test the service.

Drupal 7 Preview

Packt
11 Aug 2010
3 min read
You'll need a localhost LAMP or XAMPP environment to follow along with the examples here. If you don't have one set up, I recommend using the Acquia Stack Drupal Installer: http://acquia.com/downloads. Once your testing environment is configured, download Drupal 7: http://drupal.org/drupal-7.0-alpha6.

Installing D7

Save the installer to your localhost Drupal /sites folder and extract it. Set up your MySQL database using your preferred method.

Note to developers: D7's new database abstraction layer will theoretically support multiple database types, including SQLite, PostgreSQL, MSSQL, and Oracle. So if you are running Oracle you may be able to use D7.

Now load the installer page in your browser (note that I renamed my extracted folder to drupal7): http://localhost:8082/drupal7/install.php. The install process is about the same as D6 - you're still going to need to copy your /sites/default/default.settings.php file and rename it to settings.php. Also make sure to create your /files folder. Make sure the file has write permissions for the install process. Once you have done this and have your db created, it's time to run the installer.

One immediate difference with the installer is that D7 now offers you a Standard or Minimal install profile. Standard will install D7 with the common Drupal functionality and features that you are familiar with. Minimal is the choice for developers who want only the core Drupal functionality enabled. I'll leave it set to the Standard profile. Navigate through the installer screens, choosing a language and adding your database information.

Enhancements

With D7 installed, what are the immediately noticeable enhancements? The overall look and feel of the administrative interface now uses overlay windows to present links to sections and content. Navigation in the admin interface now runs horizontally along the top of the site. Directly under the toolbar navigation is a shortcut link navigation; you can customize this by adding your own shortcuts pointing to various admin functionality. In the toolbar, Content points to your content lists. Structure contains links to Blocks, Content types, Menus, and Taxonomy. CCK is now built into Drupal 7, so you can create custom content types and manage custom fields without having to install modules. If you want to restore the user interface to look more like D6, you can do this by disabling the Overlay module or tweaking role permissions for the Overlay module.

Content Types

Two content types are enabled with Drupal 7 core. Article replaces the D6 Story type. Basic Page replaces the D6 Page type. Developers hope these more accurate names will help new Drupal users understand how to add content easily to their site.

Oracle Enterprise Manager Key Concepts and Subsystems

Packt
10 Aug 2010
7 min read
Target

The term 'target' refers to an entity that is managed via Enterprise Manager Grid Control. The target is the most important entity in Enterprise Manager Grid Control; all other processes and subsystems revolve around the target subsystem. For each target there is a model of the target that is saved in the Enterprise Manager Repository. In this article, we will use the terms target and target model interchangeably. The major building blocks of the target subsystem are:

Target definition: All targets are organized into different categories, just like the actual entities that they represent; for example, there is a WebLogic Server target, an Oracle Database target, and so on. These categories are called target types. For each target type there is a definition in XML format that is available with the agent as well as with the repository. This definition includes:

- Target attributes: Some attributes are common across all target types, and some are specific to a particular target type. An example of a common attribute is the target name, which uniquely identifies a managed entity. An example of a target-type-specific attribute is the name of a WebLogic Domain for a WebLogic Server target. Some of the attributes provide connection details for connecting to the monitored entity, such as the WebLogic Domain host and port. Other attributes contain authentication information used to authenticate and connect to the monitored entity.
- Target associations: The target type definition includes the associations between related targets; for example, an OC4J target will have its association defined with a corresponding Oracle Application Server.
- Target metrics: This includes all the metrics that need to be collected for a given target and the source for those metrics. We'll cover this in greater detail in the Metrics subsystem.

Every target that is managed through EM belongs to one, and only one, target type category. For any new entity that needs to be managed by Enterprise Manager, an instance of the appropriate target type is created and persisted in the repository. Out of the box, Enterprise Manager provides definitions for the most common target types, such as Host, Oracle Database, Oracle WebLogic Server, Siebel suite, SQL Server, SAP, the .NET platform, IBM WebSphere Application Server, JBoss Application Server, MQSeries, and so on. For a complete list of out-of-the-box targets, please refer to the Oracle website.

Now that we have a good idea about the target definition, it's time we get to know more about the target lifecycle.

Target lifecycle

As the target is central to Enterprise Manager, it is very important that we understand each stage in the target lifecycle. Please note that not all the stages of the lifecycle may be needed for each target. However, to proceed further we need to understand each step in the target lifecycle. Enterprise Manager automates many of these stages, so in a real-life scenario many of these steps may be transparent to the user. For example, the Discovery and Configuration for monitoring stages are completely automated for the Oracle Application Server.
Discovery of a target

Discovery is the first step in the target lifecycle. Discovery is a process that finds the entities that need to be managed, builds the required target models for those entities, and persists the models in the management repository. For example, a discovery process executed on a Linux server learns that there are OC4J containers on that server, builds target models for the OC4Js and the Linux server, and persists those target models in the repository.

The agent has various discovery scripts, and those scripts are used to identify various target types. Besides discovery, these scripts build a model for the discovered target and fill in all of the attributes for that target (we learnt about target attributes in the previous section). Some discovery scripts are executed automatically as part of the agent installation, and therefore no user input is needed for discovery. For example, a discovery script for the Oracle Application Server is automatically triggered when an agent is installed. On the other hand, there are some discovery scripts where the user needs to provide some input parameters. An example of this is the WebLogic Server, where the user needs to provide the port number of the WebLogic Administration Server and credentials to authenticate and connect to it. The Enterprise Manager console provides an interface for such discovery.

Discovery of targets can happen in two modes—local mode and remote mode. In local mode, the agent runs locally on the same host as the target. In remote mode, the agent can run on a different host. All targets can be discovered in local mode, and some targets can also be discovered in remote mode. For example, discovery of WebLogic servers can happen in local as well as remote mode. One important point to note is that the agent that discovered a target also does the monitoring of that target. For example, if a WebLogic Server target is discovered through a remote agent, it gets monitored through that same remote agent.

Configuration for monitoring

After discovery, the target needs to be configured for monitoring. The user will need to provide some parameters for the agent to use to connect to the target and get the metrics. These parameters include monitoring credentials, host, and port information, using which the agent can connect to the target to fetch the metrics. Enterprise Manager uses these parameters to connect, authenticate, and collect metrics from the targets. For example, to monitor an Oracle database the end user needs to provide the user ID and password, which can be used for authentication when collecting performance metrics using the SNMP protocol. The Enterprise Manager console provides an interface for configuring these parameters.

For some targets, such as the Application Server, this step is not needed, as all the metrics can be fetched anonymously. For some other targets, such as Oracle BPEL Process Manager, this step is needed only for detailed metrics; basic metrics are available without any monitoring configuration, but for advanced metrics, monitoring credentials need to be provided by the end user. In this case, the monitoring credentials are the user ID and password used to authenticate when connecting to BPEL Process Manager to collect performance metrics.

Updates to a target

Over a period of time, some target properties, attributes, and associations with other targets change—the EM target model that represents the target should be updated to reflect the changes.
It is very important that end users see the correct model in Enterprise Manager, to ensure that all targets are monitored correctly. For example, in a given WebLogic Cluster, if a new WebLogic Server is added and an existing WebLogic Server is removed, Enterprise Manager's target model needs to reflect that. Or, if the credentials used to connect to the WebLogic Admin Server are changed, the target model should be updated with the new credentials. The Enterprise Manager console provides a UI to update such properties. If the target model is not updated, there is a risk that some entity may not be monitored; for example, if a new WebLogic Server is added but the target model of the domain is not updated, the new WebLogic Server will not be monitored.

Stopping monitoring of a target

Each IT resource has some maintenance window or planned 'down-time'. During such times it is desirable to stop monitoring the target and collecting metrics for that resource. This can be achieved by putting the target into a blackout state. In a blackout state, agents do not collect monitoring data for the target and they do not generate alerts. After the maintenance activity is over, the blackout can be cleared from the target and routine monitoring can start again. The Enterprise Manager console provides an interface for creating and removing the blackout state for one or more targets.

NetBeans Platform 6.9: Working with Actions

Packt
10 Aug 2010
4 min read
In Swing, an Action object provides an ActionListener for Action event handling, together with additional features such as tool tips, icons, and the Action's activated state. One aim of Swing Actions is that they should be reusable, that is, they can be invoked from a menu item as well as from a related toolbar button and keyboard shortcut. The NetBeans Platform provides an Action framework enabling you to organize Actions declaratively. In many cases, you can simply reuse your existing Actions exactly as they were before you used the NetBeans Platform, once you have declared them. For more complex scenarios, you can make use of specific NetBeans Platform Action classes that offer the advantages of additional features, such as more complex displays in toolbars and support for context-sensitive help.

Preparing to work with global actions

Before you begin working with global Actions, let's make some changes to our application. It should be possible for the TaskEditorTopComponent to open for a specific task, so you should be able to pass a task into the TaskEditorTopComponent. Rather than the TaskEditorPanel creating a new task in its constructor, the task needs to be passed into it and made available to the TaskEditorTopComponent. On the other hand, it may make sense for a TaskEditorTopComponent to create a new task, rather than being provided an existing task, which can then be made available for editing. Therefore, the TaskEditorTopComponent should provide two constructors. If a task is passed into the TaskEditorTopComponent, the TaskEditorTopComponent and the TaskEditorPanel are initialized with it. If no task is passed in, a new task is created and made available for editing.

Furthermore, it is currently only possible to edit a single task at a time. It would make sense to be able to work on several tasks at the same time in different editors. At the same time, you should make sure that a task is only opened once by the same editor. The TaskEditorTopComponent should therefore provide a method for creating new editors or finding existing ones. In addition, it would be useful if TaskEditorPanels were automatically closed for deleted tasks.

Remove the logic for creating new tasks from the constructor of the TaskEditorPanel, along with the instance variable for storing the TaskManager, which is now redundant:

public TaskEditorPanel() {
    initComponents();
    this.pcs = new PropertyChangeSupport(this);
}

Introduce a new method to update a task:

public void updateTask(Task task) {
    Task oldTask = this.task;
    this.task = task;
    this.pcs.firePropertyChange(PROP_TASK, oldTask, this.task);
    this.updateForm();
}

Let us now turn to the TaskEditorTopComponent, which currently cannot be instantiated either with or without a task being provided. You now need to be able to pass a task in for initializing the TaskEditorPanel. The new default constructor creates a new task with the support of a chained constructor, and passes this to the former constructor for the remaining initialization of the editor. In addition, it should now be possible to return several instances of the TaskEditorTopComponent, each responsible for a specific task. Hence, the class should be extended with a static method for creating new instances or finding existing ones. These instances are stored in a Map<Task, TaskEditorTopComponent>, which is populated by the former constructor with newly created instances.
The method checks whether the map already stores an instance responsible for the given task, and creates a new one if necessary. Additionally, this method registers a listener on the TaskManager to close the relevant editor when a task is deleted. As an instance is now responsible for a particular task, it should be possible to query for that task, so we introduce another appropriate method. Consequently, the changes to the TaskEditorTopComponent look as follows:

private static Map<Task, TaskEditorTopComponent> tcByTask =
        new HashMap<Task, TaskEditorTopComponent>();

public static TaskEditorTopComponent findInstance(Task task) {
    TaskEditorTopComponent tc = tcByTask.get(task);
    if (null == tc) {
        tc = new TaskEditorTopComponent(task);
    }
    if (null == taskMgr) {
        taskMgr = Lookup.getDefault().lookup(TaskManager.class);
        taskMgr.addPropertyChangeListener(new ListenForRemovedNodes());
    }
    return tc;
}

private static class ListenForRemovedNodes implements PropertyChangeListener {
    public void propertyChange(PropertyChangeEvent arg0) {
        if (TaskManager.PROP_TASKLIST_REMOVE.equals(arg0.getPropertyName())) {
            Task task = (Task) arg0.getNewValue();
            TaskEditorTopComponent tc = tcByTask.get(task);
            if (null != tc) {
                tc.close();
                tcByTask.remove(task);
            }
        }
    }
}

private TaskEditorTopComponent() {
    this(Lookup.getDefault().lookup(TaskManager.class));
}

private TaskEditorTopComponent(TaskManager taskMgr) {
    this((taskMgr != null) ? taskMgr.createTask() : null);
}

private TaskEditorTopComponent(Task task) {
    initComponents();
    // ...
    ((TaskEditorPanel) this.jPanel1).updateTask(task);
    this.ic.add(((TaskEditorPanel) this.jPanel1).task);
    this.associateLookup(new AbstractLookup(this.ic));
    tcByTask.put(task, this);
}

public String getTaskId() {
    Task task = ((TaskEditorPanel) this.jPanel1).task;
    return (null != task) ? task.getId() : "";
}

With that our preparations are complete, and you can turn to the following discussion on Actions.

Creating a Recent Comments Widget in Agile

Packt
10 Aug 2010
7 min read
(For more resources on Agile, see here.) Introducing CWidget Lucky for us, Yii is readymade to help us achieve this architecture. Yii provides a component class, called CWidget, which is intended for exactly this purpose. A Yii widget is an instance of this class (or its child class), and is a presentational component typically embedded in a view file to display self-contained, reusable user interface features. We are going to use a Yii widget to build a recent comments portlet and display it on the main project details page so we can see comment activity across all issues related to the project. To demonstrate the ease of re-use, we'll take it one step further and also display a list of project-specific comments on the project details page. To begin creating our widget, we are going to first add a new public method on our Comment AR model class to return the most recently added comments. As expected, we will begin by writing a test. But before we write the test method, let's update our comment fixtures data so that we have a couple of comments to use throughout our testing. Create a new file called tbl_comment.php within the protected/tests/fixtures folder. Open that file and add the following content: <?phpreturn array('comment1'=>array( 'content' => 'Test comment 1 on issue bug number 1', 'issue_id' => 1, 'create_time' => '', 'create_user_id' => 1, 'update_time' => '', 'update_user_id' => '', ), 'comment2'=>array( 'content' => 'Test comment 2 on issue bug number 1', 'issue_id' => 1, 'create_time' => '', 'create_user_id' => 1, 'update_time' => '', 'update_user_id' => '', ),); Now we have consistent, predictable, and repeatable comment data to work with. Create a new unit test file, protected/tests/unit/CommentTest.php and add the following content: <?phpclass CommentTest extends CDbTestCase{ public $fixtures=array( 'comments'=>'Comment', ); public function testRecentComments() { $recentComments=Comment::findRecentComments(); $this->assertTrue(is_array($recentComments)); }} This test will of course fail, as we have not yet added the Comment::findRecentComments() method to the Comment model class. So, let's add that now. We'll go ahead and add the full method we need, rather than adding just enough to get the test to pass. But if you are following along, feel free to move at your own TDD pace. Open Comment.php and add the following public static method: public static function findRecentComments($limit=10, $projectId=null){ if($projectId != null) { return self::model()->with(array( 'issue'=>array('condition'=>'project_id='.$projectId)))->findAll(array( 'order'=>'t.create_time DESC', 'limit'=>$limit, )); } else { //get all comments across all projects return self::model()->with('issue')->findAll(array( 'order'=>'t.create_time DESC', 'limit'=>$limit, )); }} Our new method takes in two optional parameters, one to limit the number of returned comments, the other to specify a specific project ID to which all of the comments should belong. The second parameter will allow us to use our new widget to display all comments for a project on the project details page. So, if the input project id was specified, it restricts the returned results to only those comments associated with the project, otherwise, all comments across all projects are returned. More on relational AR queries in Yii The above two relational AR queries are a little new to us. We have not been using many of these options in our previous queries. 
Previously we have been using the simplest approach to executing relational queries: Load the AR instance. Access the relational properties defined in the relations() method. For example if we wanted to query for all of the issues associated with, say, project id #1, we would execute the following two lines of code: // retrieve the project whose ID is 1$project=Project::model()->findByPk(1);// retrieve the project's issues: a relational query is actually being performed behind the scenes here$issues=$project->issues; This familiar approach uses what is referred to as a Lazy Loading. When we first create the project instance, the query does not return all of the associated issues. It only retrieves the associated issues upon an initial, explicit request for them, that is, when $project->issues is executed. This is referred to as lazy because it waits to load the issues. This approach is convenient and can also be very efficient, especially in those cases where the associated issues may not be required. However, in other circumstances, this approach can be somewhat inefficient. For example, if we wanted to retrieve the issue information across N projects, then using this lazy approach would involve executing N join queries. Depending on how large N is, this could be very inefficient. In these situations, we have another option. We can use what is called Eager Loading. The Eager Loading approach retrieves the related AR instances at the same time as the main AR instances are requested. This is accomplished by using the with() method in concert with either the find() or findAll() methods for AR query. Sticking with our project example, we could use Eager Loading to retrieve all issues for all projects by executing the following single line of code: //retrieve all project AR instances along with their associated issue AR instances$projects = Project::model()->with('issues')->findAll(); Now, in this case, every project AR instance in the $projects array already has its associated issues property populated with an array of issues AR instances. This result has been achieved by using just a single join query. We are using this approach in both of the relational queries executed in our findRecentComments() method. The one we are using to restrict the comments to a specific project is slightly more complex. As you can see, we are specifying a query condition on the eagerly loaded issue property for the comments. Let's look at the following line: Comment::model()->with(array('issue'=>array('condition'=>'project_id='.$projectId)))->findAll(); This query specifies a single join between the tbl_comment and the tbl_issue tables. Sticking with project id #1 for this example, the previous relational AR query would basically execute something similar to the following SQL statement: SELECT tbl_comment.*, tbl_issue.* FROM tbl_comment LEFT OUTER JOIN tbl_issue ON (tbl_comment.issue_id=tbl_issue.id) WHERE (tbl_issue.project_id=1) The added array we specify in the findAll() method simply sets an order by clause and a limit clause to the executed SQL statement. One last thing to note about the two queries we are using is how the column names that are common to both tables are disambiguated. Obviously when the two tables that are being joined have columns with the same name, we have to make a distinction between the two in our query. In our case, both tables have the create_time column defined. We are trying to order by this column in the tbl_comment table and not the one defined in the issue table. 
In a relational AR query in Yii, the alias name for the primary table is fixed as t, while the alias name for a relational table, by default, is the same as the corresponding relation name. So, in our two queries, we specify t.create_time to indicate that we want to use the primary table's column. If we wanted instead to order by the issue create_time column, we would alter the second query, for example, as such:

return Comment::model()->with('issue')->findAll(array(
    'order'=>'issue.create_time DESC',
    'limit'=>$limit,
));
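To see how this finder slots into the widget we are building, here is a minimal sketch of a CWidget subclass that calls it. The class name, property names, and view file are illustrative assumptions rather than the article's final implementation:

<?php
// Hypothetical sketch: a widget that pulls recent comments and hands them to a view.
class RecentCommentsWidget extends CWidget
{
    public $displayLimit = 5;   // how many comments to show
    public $projectId = null;   // null means comments across all projects

    private $_comments;

    public function init()
    {
        // Fetch the data once, when the widget is initialized.
        $this->_comments = Comment::findRecentComments($this->displayLimit, $this->projectId);
    }

    public function run()
    {
        // Renders a view such as protected/components/views/recentComments.php
        $this->render('recentComments', array('comments' => $this->_comments));
    }
}

It could then be embedded in any view file with something like $this->widget('RecentCommentsWidget', array('projectId'=>$model->id)); which is exactly the kind of self-contained, reusable presentation component described at the start of this section.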

Adding User Comments in Agile

Packt
10 Aug 2010
5 min read
(For more resources on Agile, see here.)

Iteration planning

The goal of this iteration is to implement feature functionality in the TrackStar application to allow users to leave and read comments on issues. When a user is viewing the details of any project issue, they should be able to read all comments previously added as well as create a new comment on the issue. We also want to add a small fragment of content, or portlet, to the project-listing page that displays a list of recent comments left on all of the issues. This will be a nice way to provide a window into recent user activity and allow easy access to the latest issues that have active conversations.

The following is a list of high-level tasks that we will need to complete in order to achieve these goals:

Design and create a new database table to support comments
Create the Yii AR class associated with our new comments table
Add a form directly to the issue details page to allow users to submit comments
Display a list of all comments associated with an issue directly on the issue details page

Creating the model

As always, we should run our existing test suite at the start of our iteration to ensure all of our previously written tests are still passing as expected. By this time, you should be familiar with how to do that, so we will leave it to the reader to ensure that all the unit tests are passing before proceeding.

We first need to create a new table to house our comments. Following is the basic DDL definition for the table that we will be using:

CREATE TABLE tbl_comment
(
  `id` INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
  `content` TEXT NOT NULL,
  `issue_id` INTEGER,
  `create_time` DATETIME,
  `create_user_id` INTEGER,
  `update_time` DATETIME,
  `update_user_id` INTEGER
)

As each comment belongs to a specific issue, identified by the issue_id, and is written by a specific user, indicated by the create_user_id identifier, we also need to define the following foreign key relationships:

ALTER TABLE `tbl_comment` ADD CONSTRAINT `FK_comment_issue` FOREIGN KEY (`issue_id`) REFERENCES `tbl_issue` (`id`);
ALTER TABLE `tbl_comment` ADD CONSTRAINT `FK_comment_author` FOREIGN KEY (`create_user_id`) REFERENCES `tbl_user` (`id`);

If you are following along, please ensure this table is created in both the trackstar_dev and trackstar_test databases.

Once a database table is in place, creating the associated AR class is a snap. We simply use the Gii code creation tool's Model Generator command and create an AR class called Comment. Since we have already created the model class for issues, we will need to explicitly add the relations for comments to the Issue model class. We will also add a relationship as a statistical query to easily retrieve the number of comments associated with a given issue (just as we did in the Project AR class for issues). Alter the Issue::relations() method as such:

public function relations()
{
  return array(
    'requester' => array(self::BELONGS_TO, 'User', 'requester_id'),
    'owner' => array(self::BELONGS_TO, 'User', 'owner_id'),
    'project' => array(self::BELONGS_TO, 'Project', 'project_id'),
    'comments' => array(self::HAS_MANY, 'Comment', 'issue_id'),
    'commentCount' => array(self::STAT, 'Comment', 'issue_id'),
  );
}

Also, we need to change our newly created Comment AR class to extend our custom TrackStarActiveRecord base class, so that it benefits from the logic we placed in the beforeValidate() method.
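As a reminder of what extending that base class buys us, the following is a rough sketch of the kind of beforeValidate() logic such a base class typically carries. The column names match the tbl_comment definition above, but the exact implementation details (the use of CDbExpression and Yii::app()->user->id) are an assumption here rather than a copy of the book's code:

<?php
// Hedged sketch of a shared AR base class that stamps audit columns.
abstract class TrackStarActiveRecord extends CActiveRecord
{
    protected function beforeValidate()
    {
        if ($this->isNewRecord)
        {
            // Only stamp the creation columns the first time the row is saved.
            $this->create_time = new CDbExpression('NOW()');
            $this->create_user_id = Yii::app()->user->id;
        }
        // Always stamp the update columns.
        $this->update_time = new CDbExpression('NOW()');
        $this->update_user_id = Yii::app()->user->id;

        return parent::beforeValidate();
    }
}

This way, every AR class that extends it, including our new Comment class, gets its audit fields populated automatically before validation runs.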
Simply alter the beginning of the class definition as such: <?php/*** This is the model class for table "tbl_comment".*/class Comment extends TrackStarActiveRecord{ We'll make one last small change to the definitions in the Comment::relations() method. The relational attributes were named for us when the class was created. Let's change the one named createUser to be author, as this related user does represent the author of the comment. This is just a semantic change, but will help to make our code easier to read and understand. Change the method as such: /** * @return array relational rules. */public function relations(){ // NOTE: you may need to adjust the relation name and the related // class name for the relations automatically generated below. return array( 'author' => array(self::BELONGS_TO, 'User', 'create_user_id'), 'issue' => array(self::BELONGS_TO, 'Issue', 'issue_id'),); Creating the Comment CRUD Once we have an AR class in place, creating the CRUD scaffolding for managing the related entity is equally as easy. Again, use the Gii code generation tool's Crud Generator command with the AR class name, Comment, as the argument. Although we will not immediately implement full CRUD operations for our comments, it is nice to have the scaffolding for the other operations in place. As long as we are logged in, we should now be able to view the autogenerated comment submission form via the following URL: http://localhost/trackstar/index.php?r=comment/create
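Before moving on, note that the relations added above can already be exercised from code. The following is a quick, hedged illustration; the primary key value and the username attribute are assumptions based on our existing schema, not an excerpt from the application:

<?php
// Illustrative only: exercising the new comment relations on an Issue.
$issue = Issue::model()->findByPk(1);          // assumes an issue with ID 1 exists

echo $issue->commentCount;                     // STAT relation: number of comments

foreach ($issue->comments as $comment)         // HAS_MANY relation: the comments themselves
{
    echo $comment->author->username . ': ' . $comment->content . "\n";
}

Both relational properties are loaded lazily, so the underlying queries only run when the properties are first accessed.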

Metadata in Oracle Universal Content Management

Packt
09 Aug 2010
5 min read
Let's begin by looking in the metadata. Exploring metadata In case you forgot, metadata fields are there to describe the actual data such as the file name, shooting date, and camera name for a digital picture. There're two types of metadata fields: Standard and Extended or Custom. Let's take a closer look at what they can do for us. Standard metadata Standard metadata is essential for the system to function. These are fields like content ID, revision ID, check-in date, and author. Let's take a quick look at all of them so you have a full picture. Lab 2: Exploring standard metadata Click on the Quick Search button on the top right. Yes, leave the search box blank. If you do that, you'll get all content in the repository. In the last column on the Search Results page click on the i icon on any of the result rows. That brings up a Content Info screen. From this screen there is no way to tell which fields are Standard and which are Extended. So how do you tell? Explore the database That's right. A Content Server uses a relational database, like Oracle or SQL Server to store its metadata, so let's look there. If you are using SQL Server 2005 as your database, then open SQL Server Management Studio, and if not then bring up your SQL tool of choice. Check the list of columns in the table called Revisions (as shown in the following screenshot): Most of the column names in Revisions are the standard metadata fields. Here's a list of the fields you will be using most often: dID: ID of the document revision. This number is globally unique. If you have a project plan with three revisions—each of the three will have unique dID and all of them will have the same Content ID. dDocName: this is the actual Content ID. dDocType: content type of the document. dDocName or Content ID is the unique identifier for a content revision set. dID is the unique identifier of each individual content revision within a set. Being able to identify a content revision set is very useful, as it shows and tracks (makes auditable) the changes of content items over time. Being able to identify each individual revision with dID is also very useful, so we can work with specific content revisions. This is one of the great advantages of the Content Server over other systems, which only store the changes between revisions. Full revision sets as well as individual revisions are managed objects and each one can be accessed by its own unique URL. Now run this SQL statement: select * from Revisions; This shows the actual documents in the system and their values for standard meta fields (as shown in the following screenshot): And now let's look at the all-important Content Types. Content Types Content Type is a special kind of meta field. That's all. UCM puts a special emphasis on it as this is the value that differentiates a project plan from a web page and a team photo from a vendor invoice. You may even choose to change the way your check-in and content info form looks —based on the type of the document. Let's look how UCM handles Content Types. Lab 3: Exploring content types In Content Server go to Administration | Admin Applets. Launch the Configuration Manager. Select Options| Content Types... (as shown in the following screenshot): The Content Types dialog opens. As you see, out of the box, Content Server has seven types—one for each imaginary department. This is a good way of segregating content. You can also go by the actual type of content. For instance, you can have one Content Type for Invoice and one for Project Plan. 
They will also have different meta fields. For instance, an Invoice will have a Contract Number and a Total Amount. A Project Plan will have a project name and a manager's name. Now let me show you how to add content types.

How to add a Content Type

It's easy to add a new Content Type. Just click on Add..., fill in the type name and the description. You can also select an icon for the new type. What if you need to upload a new icon? Just make it into an 8-bit GIF file, 30x37 px, 96 dpi, and upload it to: C:\oracle\ucm\server\weblayout\images\docgifs. If your install path is different or you're not running on Windows, then make appropriate corrections.

How to edit or delete a Content Type

The only thing to know about editing is that you cannot really change the type name. All you can update is the icon or the description. If you're ready to delete a type, then make sure there is no content in the repository that's using it. Either update it all or delete it. How would you go about doing a mass-update? I'll show you one of the ways using Archiver (a full discussion of Archiver is outside the scope of this article). And now let's proceed to Custom Metadata.

More Things you can do with Oracle Content Server workflows

Packt
09 Aug 2010
5 min read
(For more resources on Oracle, see here.) The top three things As we've just seen, the most common things you can do are these: Get content approved: This is the most obvious use of the workflow we've just seen. Get people notified: Remember when we were adding workflow steps there was a number of required approvers on the Exit Conditions tab in the Add New Step dialog. If we set that to zero we accomplish one important thing: Approvers will get notified, but no action is required of them. It's a great way to "subscribe" a select group of people to an event of your choice. Perform custom actions: And if that's not enough you can easily add custom scripts to any step of a workflow. You can change metadata, release items, and send them to other workflows. You can even invoke your custom Java code. And here's another really powerful thing you can do with custom workflow actions. You can integrate with other systems and move from the local workflow to process orchestration. You can use a Content Server workflow to trigger external processes. UCM 10gR3 has an Oracle BPEL integration built in. This means that a UCM workflow can be initiated by (or can itself initiate) a BPEL workflow that spans many systems, not just the UCM. This makes ERP systems such as Siebel, PeopleSoft, SAP, and Oracle e-Business Suite easily accessible to the UCM, and content inside the UCM can be easily made available to these systems. So let's look at the jumps and scripting. Jumps and scripting Here's how to add scripting to a workflow: In Workflow Admin select a step of a workflow we've just created. Click on the Edit button on the right. The Edit Step dialog comes up. Go to the Events tab (as shown in the following screenshot): There are three events that you can add custom handlers for: Entry: This event triggers when an item arrives at the step. Update: This happens when an item or its metadata is updated. It's also initiated every hour by a timer event, Workflow Update Cycle. Use it for sending reminders to approvers or escalating the item to an alternative person after your approval period has expired. Exit: This event is triggered when an item has been approved and is about to exit the step. If you have defined Additional Exit Conditions on the Exit Conditions tab then those will be satisfied before this event fires. The following diagram illustrates the sequence of states and corresponding events that are fired when a content item arrives at a workflow step: Great! But how do we can actually add the jumps and custom scripts to a workflow step? How to add a jump to a workflow step Let's add an exception where content submitted by sysadmin will bypass our Manager Approval workflow. We will use a jump—a construct that causes an item to skip the normal workflow sequence and follow an alternative path. Here's how to do it: Add a jump to an Entry event of our very first step. On the Events tab of the Edit Step dialog, click on the Edit button—the one next to the Entry event. The Edit Script dialog displays (as shown in the following screenshot): Click on the Add button. The Add Jump dialog comes up (as shown in the following screenshot): Let's call the jump Sysadmin WF bypass. You don't need to change anything else at this point. Click on OK to get back to the Edit Script dialog. In the Field drop-down box pick Author. Click on the Select… button next to the Value box. Pick sysadmin (if you have trouble locating sysadmin in the list of users, make sure that the filter check-box is un-checked). 
Click the Add button below the Value field. Make sure that your clause appears in the Script Clauses box below. In the Target Step dropdown pick Next Step. Once you have done so the value will change to its script equivalent, @wfCurrentStep(1). If you have more than one step in the workflow, change 1 to the number of steps you have. This will make sure that you jump past the last step and exit the workflow. Here's how the completed dialog will look (as shown in the following screenshot): Click on OK to close. You're now back to the Events tab on the Edit Step dialog. Notice a few lines of script being added to the box next to the Entry event (as shown in the following screenshot): OK the dialog. It's time to test your changes. Check in a new document. Make sure you set the Author field to sysadmin. Set your Security Group to accounting, and Account to accounting/payable/current. If you don't, the item will not enter our workflow in the first place (as shown in the following screenshot): Complete your check-in and follow the link to go to the Content Info page. See the status of the item. It should be set to Released. That's right. The item got right out of the workflow. Check in a new document again, but use some other author. Notice how your item will enter the workflow and stay there. As you've seen, the dialog we used for creating a jump is simply a code generator. It created a few lines of script we needed to add the handler for the Entry event. Click on the Edit button next to that code and pick Edit Current to study it. You can find all the script function definitions in iDoc Reference Guide. Perfect! And we're still not done. What if you have a few common steps that you'd like to reuse in a bunch of workflows? Would you just have to manually recreate them? Nope. There are several solutions that allow you to reuse parts of the workflow. The one I find to be most useful is sub workflows.

Oracle Universal Content Management: How to Set Up and Change Workflows

Packt
09 Aug 2010
4 min read
(For more resources on Oracle, see here.) How to set up and change workflows First thing's first. Let's start by looking at the tools that you will be using to set up and configure your workflows. Discover the Workflow Admin application Go to Administration Admin Applets| and launch Workflow Admin. The Workflow Admin application comes up (as shown in the following screenshot): There are three tabs: Workflows: This tab is used for administering Basic or Manual Workflows. Criteria: This tab deals with Automatic or Criteria Workflows—the type we will be using most often. Templates: This is the place where you can pre-assemble Workflow Templates—reusable pieces that you can use to create new basic workflows. Let's create a simple automatic workflow. I call it automatic because content enters the workflow automatically when it is modified or created. If you will be using e-mail notifications then be sure to check your Internet Configuration screen in Admin Server. I'll walk you through the steps in using automatic workflows. Lab 7: Using automatic workflows Here's the process for creating a criteria workflow: Creating a criteria workflow Follow these steps: Go to the Criteria tab and click on Add. The New Criteria Workflow dialog comes up (as shown in the following screenshot): Fill in Workflow Name and Description. Pick the Security Group. Only items with the same security group as the workflow can enter it. Let's use the security group we've created. Select accounting. We're creating a Criteria Workflow, so let's check the Has Criteria Definition box. Now you can specify criteria that content must match to enter the workflow.For the sake of this lab, let's pick Account for the Field, and accounting/payable/current for the Value. Please note that a content item must match at least two conditions to enter the workflow: it must belong to the same security group as the workflow, and it must match the criteria of the workflow. As soon as a new content item is created with Security Group of accounting and Content Account value is set to accounting/payable/current, it will enter our workflow. It will not enter the workflow if its metadata is simply updated to these values. It takes a new check-in for an item to enter a criteria workflow. If you need it to enter a workflow after a metadata update then consider custom components available from the Fishbowl Solutions (www.fishbowlsolutions.com). You can use any metadata field and value pair as criteria for entering the workflow. But you can only have one condition. What if that's not enough? If you need to perform additional checks before you can accept the item in a workflow then keep your criteria really open, and do your checks in the workflow itself. I'll show you how, later in this article. The diagram next illustrates how a content item flows through a criteria workflow. You may find it useful to refer back to it as you follow the steps in this lab. OK. We have a workflow created but there're two problems with it: it has no steps in it and it is disabled. Let's begin by seeing how to add workflow steps. Adding workflow steps Here's how you add workflow steps: Click on the Add button in the Steps section on the right (as shown in the following screenshot): The Add New Step dialog opens. Fill in the step name and description (as shown in the following screenshot): Click on the Add User button on the right and select approvers for this step. Also add yourself to the list of approvers so you can test the workflow. 
Switch to the Exit Conditions tab (as shown in the following screenshot): You can change the number of approvers required to move the item to the next step. You can make all approvers required to advance a step or just any one as shown on the screenshot. And if you put zero in the text box, no approvers will be required at all. They will still receive notification, but the item will go immediately to the next step. And when the current step is the last the workflow will end and the new revision will be released into the system. What do I mean by that? Until workflow is complete, revisions that are currently in a workflow will not come up in searchers and will not show on the Web. You will still see them in the content info screen but that's it. OK the dialog. You now have a workflow with one step. Let's test it. But first, you need to enable the workflow.

Building a Consumer Review Website using WordPress 3

Packt
06 Aug 2010
15 min read
(For more resources on Wordpress, see here.) Building a consumer review website will allow you to supply consumers with the information that they seek and then, once they've decided to make a purchase, your site can direct them to a source for the product or service. This process can ultimately allow you to earn some nice commission checks because it's only logical that you would affiliate yourself with a number of the sites to which you will be directing consumers. The great thing about using the WP Review Site plugin to build your consumer review website is that you can provide people with an unbiased source of public opinions on any product or service that you can imagine. You will never have to resort to the hard sell in order to drive traffic to the companies that you've affiliated yourself with. Instead, consumers can research the reviews posted on your website and,ultimately, make a purchase feeling confident that they're making the right decision. In this article, you will learn about the following: Present reviews in the most convenient way possible for visitors browsing your site Specify the ratings criteria that site visitors will use when reviewing the products or services included on your website Display informational comparison tables on your site's index and category pages Provide visitors with the location of local businesses using Google Maps Perform the additional steps required when writing a post now that the WP Review Site plugin has been introduced into the process Perform either automatic and manual integration so that you can use a theme of your own rather than either of the ones provided with this plugin Once this project is complete, you will have succeeded in creating a site that's similar to the one shown in the following screenshot:   Introducing WP Review Site With the WP Review Site plugin you will be able to build a consumer review site where visitors can share their opinions about the products or services of your choosing. The plugin, which can be found at WP Review Site, can be used to build a dedicated review site or, if you would like consumer reviews to make up only a subsection of your website, then you can specify certain categories where they should appear. This plugin gives you complete control over where ratings appear and where they don't since you can choose to include or exclude them on any category, page, or post. The WP Review Site plugin seamlessly integrates with WordPress by, among other things, altering the normal appearance and functionality of the comments submission form. This plugin provides visitors with a way to write a review and assign stars to the ratings categories that you previously defined. They can also write a review and opt to provide no stars without harming the overall rating presented on your site, since no stars is interpreted as though no rating was given. WP Review Site plugin makes it easy for you to present your visitors with concise information. Using the features available with this plugin, you can build comparison tables based upon your posts and user reviews. In order to accomplish this, you will need to configure a few settings and then the plugin will take care of the rest. Typically, WordPress displays posts in chronological order, but that doesn't make much sense on a consumer review site where visitors want to view posts based upon other factors such as the number of positive reviews that a particular product or service has received. 
The developer behind WP Review Site took that into consideration and has included two alternative sorting methods for your site's posts. The developer has even included a Bayesian weighting feature so that reviews are ordered in the most logical way possible. Right about now, you're probably wondering what Bayesian weighting is and how it works. What it does is provide a way to mathematically calculate the rating of products and/or services based upon the credibility of the votes that have been cast. If an item receives only a few votes, then it can't be said with any certainty that that's how the general public feels. If an item receives several votes, then it can be safely assumed that many others hold the same opinion. So, with Bayesian weighting, a product that has received only one five star review won't outrank another that has received fifteen four star reviews. As the product that received one five star review garners more ratings, its reviews will grow in credibility and, if it continues to receive high ratings, it will eventually become credible enough to outrank the other reviews.

If you're planning to create a website where visitors can come and review local businesses, then you might consider this plugin's ability to automatically embed Google Maps quite handy. After configuring the settings on the plugin's Google Maps screen, you will be able to type the address for a business into a custom field when writing a post and then the plugin will take care of the rest. The WP Review Site plugin also includes two sidebar widgets that can be used with any widget-ready theme. These widgets will allow you to display a list of top rated items and a list of recent reviews. Lastly, the themes provided with this plugin include built-in support for the hReview microformat. This means that Google will easily be able to extract and highlight reviews from your website. That feature will prove to be very beneficial for driving search engine traffic to your site.

Installing WP Review Site

Once you've installed WordPress you can then concentrate on the installation of the WP Review Site plugin and its accompanying themes. First, extract the wpreviewsite.zip archive. Inside you will find a plugins folder and a themes folder. Within the plugins folder is another folder named review-site. Since none of these folders are zipped, you will need to upload them using either an FTP program or the file manager provided by your web host. So, upload the review-site folder to the wp-content/plugins directory on your server. If you plan to use one of the themes provided with this plugin, then you will next need to upload the contents of the themes folder to the wp-content/themes directory.

Setting up and configuring WP Review Site

With the installation process complete, you will now need to activate the WP Review Site plugin. Once that's finished, a Review Site menu will appear on the left side of your screen. This menu contains links to the settings screens for this plugin. Before you delve into the configuration process you must first activate the theme that you plan to use on your consumer review website. Using one of the provided themes is a bit easier. That's because using any other theme will mean that you must integrate the functionality of WP Review Site into it. Now that you know the benefits offered by the themes that are bundled with this plugin, click on Appearance | Themes. Once there, activate either Award Winning Hosts, Bonus Black, or a theme of your choice.
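Returning for a moment to the Bayesian weighting feature described earlier: the plugin handles the math for you, but a Bayesian weighted rating is commonly computed along the following lines. This standalone sketch only illustrates the idea and is not taken from the plugin's source; the function name, the default threshold, and the exact formula are assumptions.

<?php
/**
 * Illustrative Bayesian weighted rating (not the plugin's actual code).
 *
 * @param float $itemAverage average star rating for this item (R)
 * @param int   $itemVotes   number of reviews the item has received (v)
 * @param float $siteAverage average rating across all reviewed items (C)
 * @param int   $minVotes    votes needed before a rating is fully trusted (m)
 * @return float weighted rating suitable for sorting
 */
function bayesian_weighted_rating($itemAverage, $itemVotes, $siteAverage, $minVotes = 10)
{
    if ($itemVotes <= 0) {
        return $siteAverage; // no reviews yet, fall back to the site-wide average
    }
    return (($itemVotes / ($itemVotes + $minVotes)) * $itemAverage)
         + (($minVotes  / ($itemVotes + $minVotes)) * $siteAverage);
}

// Assuming a site-wide average of 3.5 stars:
echo bayesian_weighted_rating(5.0, 1, 3.5);  // one five-star review: about 3.64
echo bayesian_weighted_rating(4.0, 15, 3.5); // fifteen four-star reviews: 3.8

The single five-star review barely moves the item above the site average, while the fifteen four-star reviews carry enough credibility to outrank it, which is exactly the behavior described above.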
General Settings Navigate to Review Site | General Settings to be taken to the first of the WP Review Site settings screens. On this screen, Sort Posts By is the first setting that you will encounter. Rather than displaying reviews in the normal chronological order used by WordPress you should, instead, select either the Average User Rating (Weighted) or the Number of Reviews/Comments option. Either of these settings will provide a much more user-friendly experience for your visitors. If you want to make it impossible for site visitors to submit a comment without also choosing a rating, tick the checkbox next to Require Ratings with All Comments. If you don't want to make this a requirement, then you can leave this setting as is. This setting will, of course, only apply to posts that you would like your visitors to rate. On normal posts, that don't include rating stars in the comment form area, it will still be possible for your visitors to submit a comment. When using one of the themes provided with the plugin, none of the other settings on this screen need to be configured. If you would like to integrate this plugin into a different theme, then, depending upon the method that you choose, you may need to revisit this screen later on. No matter how you're handling the theme issue, you can, for now, just click Save Settings before proceeding to the next screen. Rating Categories To access the next settings screen, click on Review Site | Rating Categories. Here you can add categories for people to rate when submitting reviews. These categories shouldn't be confused with the categories used in WordPress for organizational purposes. These WP Review Site categories are more like ratings criteria. By default, WP Review Site includes a category called Overall Rating, but you can click the remove link to delete it if you like. To add your first rating category, simply enter its title into the Add a Category textbox and then click Save Settings. The screen will then refresh and your newly created rating category will now appear under the Edit Rating Categories section of the screen. To add additional rating categories, simply repeat the process that you previously completed. Once you've finished adding rating categories, you will next need to turn your attention to the Bulk Apply Rating Categories section of the screen. In the Edit Rating Categories area you will see all of the rating categories that you just finished adding to your site. If you want to simplify matters, and apply these rating categories to all of the posts on your site, tick the checkbox next to each of the available rating categories. Then, from the Apply to Posts in Category drop-down menu, select All Categories. This is most likely the configuration that you will use if you're building a website entirely dedicated to providing consumer reviews. Once you've finished, click Save Settings. If you, instead, want your newly added rating categories to only appear on certain categories, then bypass the Edit Rating Categories area for now and first look to the Apply to Posts in Category settings area. Currently this will only show All Categories and Uncategorized. The lack of categories in this menu is being caused by two things. First, you haven't added any WordPress categories to your site yet. Secondly, categories won't be included in this menu until they contain at least one post. To solve part of this problem, open a new browser window and then, navigate to Posts | Categories. 
Then, add the categories that you would like to include on your website. Now, click on Posts | Edit to visit the Edit Posts screen. At the moment, the Hello world! post is the only one published on your site and you can use it to force your site's categories to appear in the Apply to Posts in Category drop-down menu. So, hover over the title of this post and then, from the now visible set of links, click Quick Edit. In the Categories section of the Quick Edit configuration area, tick the checkbox next to each of the categories found on your site. Then, click Update Post. After content has been added to each of your site's categories, you can delete the Hello world! post, since you will no longer need to use it to force the categories to appear in the Apply to Posts in Category drop-down menu. Now, return to the Rating Categories screen and then select the first category that you want to configure from the Apply to Posts in Category drop-down menu. With that selected, in the Edit Rating Categories area, tick the checkbox next to each rating category that you want to appear within that WordPress category. Then, click Save Settings. Repeat this process for each of the WordPress categories to which you would like rating categories to be added. Comparison Tables If you wish, you can add a comparison table to either the home page or the category pages on your site. To do this, you need to visit the Comparison Tables screen, so click on Review Site | Comparison Tables. If you want to display a comparison table on your home page, then tick the checkbox next to Display a Comparison Table on Home Page. If you would like to include all of your site's categories in the comparison table that will be displayed on the home page, then leave the Categories To Display On Home Page textbox as is. However, if you would prefer to include only certain categories, then enter their category IDs, separated by commas, into the textbox instead. You can learn the ID numbers that have been assigned to each of your site's categories by opening a new browser window and then navigating to Posts | Categories. Once there, hover over the title of each of the categories found on the right hand side of your screen. As you do, look at the URL that appears in your browser's status bar and make a note of the number that appears directly after tag_ID=. That's the number that you will need to enter in the Comparison Table screen. If you want to display a comparison table in one or more categories, then tick the checkbox next to Display a Comparison Table on Category Page(s). Now, return to the Comparison Table screen. If you want a comparison table to be displayed on each of your category pages, leave the Categories To Display Comparison Table On textbox at its default. Otherwise, enter a list of comma separated category IDs into the textbox for the categories where you want to display comparison tables. The Number of Posts in the Table setting is currently set to 5, but you can enter another value if you would like a different number of posts to be included in each comparison table. When writing posts, you might use custom fields to include additional information. If you would like that information to be displayed in your comparison tables you will need to enter the names of those fields, separated by commas, into the Custom Fields to Display textbox. Lastly, you can change the text that appears in the Text for the Visit Site link in the Table if you wish or you may leave it at its default. 
With these configurations complete, click Save Settings. In this screenshot, you can see what a populated comparison table will look like on your website: Google Maps If you plan on featuring reviews centered around local businesses, then you might want to consider adding Google Maps to your site. This will make it easy for visitors to see exactly where each business is located. You can access this settings screen by clicking on Review Site | Google Maps. To activate this feature, tick the checkbox next to Display a Google Map on Posts/Pages with mapaddress Custom Field. Next, you need to use the Map Position setting to specify where these Google Maps will appear in relation to the content. You can choose to use either the Top of Post or Bottom of Post position. The Your Google Maps API Key textbox is next. Here you will need to enter a Google Maps API key. If you don't have a Google Maps API key for this domain, then you will need to visit Google to generate one. To do this, right-click on the link provided on the Google Maps screen and then open that link in a new browser window. You will then be taken to the Google Maps API sign up screen, which can be found at Google Maps API sign up. If you've ever signed up to use any of Google's services, then you can use that username and password to log in. If you don't have an account with Google, create one now. Take a moment to read the information and terms presented on the Google Maps API sign up page. After you've finished reviewing this text, if it's acceptable to you, enter the URL for your website into the My web site URL textbox and then click Generate API Key. You will then be taken to a thank you screen where your API key will be displayed. Copy the API key and then return to the Google Maps screen on your website. Once there, paste your API key into the textbox for Your Google Maps API Key. The Map Width and Map Height settings are next. By default, these are configured to 400px and 300px. If you would prefer that the maps be displayed at a different size, then enter new values into each of these textboxes. The last setting is Map Zoom Level (1-5), which is currently set to 3. This setting should be fine, but you may change it if you wish. Finally, click Save Settings. When you publish a post that includes the mappadress custom field, this is what the Google Map will look like on your site.

Agile with Yii 1.1 and PHP5: The TrackStar Application

Packt
05 Aug 2010
13 min read
(For more resources on Agile, see here.) Introducing TrackStar TrackStar is a Software Development Life Cycle (SDLC) issue management application. Its main goal is to help keep track of all the many issues that arise throughout the course of building software applications. It is a user-based application that allows the creation of user accounts and grants access to the application features, once a user has been authenticated and authorized. It allows a user to add and manage projects. Projects can have users associated with them (typically the team members working on the project) as well as issues. The project issues will be things such as development tasks and application bugs. The issues can be assigned to members of the project and will have a status such as not yet started, started, and finished. This way, the tracking tool can give an accurate depiction of projects with regard to what has been accomplished, what is currently in progress, and what is yet to be started. Creating user stories Simple user stories are a great way to identify the required features of your application. User stories, in their simplest form, state what a user can do with a piece of software. They should start simple, and grow in complexity as you dive into more and more of the details around each feature. Our goal here is to begin with just enough complexity to allow us to get started. If needed, we'll add more detail and complexity later. We briefly touched on the three main entities that play a large role in this application: users, projects, and issues. These are our primary domain objects, and are extremely important items in this application. So, let's start with them. Users TrackStar is a user-based web application. There will be two high-level user types: Anonymous Authenticated An anonymous user is any user of the application that has not been authenticated through the login process. Anonymous users will only have access to register for a new account or to log in. All other functionality will be restricted to authenticated users. An authenticated user is any user that has provided valid authentication credentials through the login process. In other words, authenticated users are logged-in users. They will have access to the main features of the application such as creating and managing projects, and project issues. Projects Managing the project is the primary purpose of the TrackStar application. A project represents a general, high-level goal to be achieved by one or more users of the application. The project is typically broken down into more granular tasks (or issues) that represent the smaller steps that need to be taken to achieve the overall goal. As an example, let's take what we are going to be doing throughout this book, that is, building a project and issue tracking management application. Unfortunately, we can't use our yet-to-be-created application as a tool to help us track its own development. However, if we were using a similar tool to help track what we are building, we might create a project called Build The TrackStar Project/Issue Management Tool. This project would be broken down into more granular project issues such as 'Create the login screen' or 'Design database schema for issues', and so on. Authenticated users can create new projects. The creator of the project within an account has a special role within that project, called the project owner. Project owners have the ability to edit and delete these projects as well as add new members to the project. 
Other users associated with the project—besides the project owner—are referred to simply as project members. They have the ability to add new issues, as well as edit existing ones. Issues Project issues can be classified into one of the following three categories: Features: Items that represent real features to be added to the application. For example, 'Implement the login functionality' Tasks: Items that represent work that needs to be done, but is not an actual feature of the software. For example, 'Set up the build and integration server' Bugs: Items that represent application behaviors that are not working as expected. For example, 'The account registration form does not validate the format of input e-mail addresses' Issues can have one of the following three statuses: Not yet started Started Finished Project members can add new issues to a project, as well as edit and delete them. They can assign issues to themselves or other project members. For now, this is enough information on these three main entities. We could go into a lot more detail about what exactly account registration entails' and how exactly one adds a new task to a project', but we have outlined enough specifications to begin on these basic features. We'll nail down the more granular details as we proceed with the implementation. However, before we start, we should jot down some basic navigation and application workflow. This will help everyone to better understand the general layout and flow of the application we are building. Navigation and page flow It is always good to outline the main pages within an application, and how they fit together. This will help us quickly identify some needed Yii controllers, actions and views as well as help to set everyone's expectations as to what we'll be building towards at the onset of our development. The following figure shows the basic idea of the application flow from logging in, through the project details listing: When users first come to the application, they must log in to authenticate themselves before accessing any functionality. Once successfully logged-in, they will be presented with a list of his current projects along with the option to create a new project. Choosing a specific project will take them to the project details page. The project details page will present a list of the issues by type. There will also be the option to add a new issue as well as edit any of the listed issues. This is all pretty basic functionality, but the figure gives us a little more information on how the application is stitched together and allows us to better identify our needed models, views, and controllers. It also allows something visual to be shared with others so that everyone involved has the same 'picture' of what we are working towards. In my experience, almost everyone prefers pictures over written specifications when first thinking through a new application. Defining a data scheme We still need to think a little more about the data we will be working with as we begin to build toward these specifications. If we pick out all the main nouns from our system, we may end up with a pretty good list of domain objects and, by extension of using Active Record, the data we want to model. Our previously outlined user stories seem to dictate the following: A User A Project An Issue Based on this and the other details provided in the user stories and application workflow diagram, a first attempt at the needed data is shown in the following figure. 
This is a basic object model that outlines our primary data entities, their respective attributes, and some of the relationships between them. The 1..* on either side of the line between the Project and User objects represents a many-to-many relationship between them. A user can be associated with one or more projects, and a project has one or more users. Similarly we have represented the fact that a project can have zero or more issues associated with it, whereas an issue belongs to just one specific project. Also, a user can be the owner of (or requester of) many issues, but an issue has just one owner (and also just one requester). We have kept the attributes as simple as possible at this state. A User is going to need a username and a password in order to get past the login screen. The Project has only a name Issues have the most associated information based on what we currently know about them. As discussed briefly in the user stories above, they will have a type attribute to distinguish the general category (bug, feature, or task). They will also have a status attribute to indicate the progress of the issue being worked on. A user in the system will initially create the issue, this is the requester. Once a user in the system has been assigned to work on the issue, they will be the owner of the issue. We have also defined the description attribute to allow for some descriptive text of the issue to be entered. Notice that we have not explicitly talked about schemas or databases yet. The fact is, until we think through what is really needed from a data perspective, we won't know the right tool to use to house this data. Would flat files on the filesystem work just as well as a relational database? Do we need a persistent data at all?   The answers to these questions are not needed in this early planning state. It is better to focus more on the features that we want and the type of data needed to support these features. We can turn to the explicit technology implementation details after we have had a chance to discuss these ideas with other project stakeholders to ensure we are on the right track. Other project stakeholders include anyone and everyone involved in this development project. This can include the client, if building an application for someone else, as well as other development team members, product/project managers, and so on. It is always a good idea to get some feedback from "the team" to help validate the approach and any assumptions being made. However, before we dive right into building our application, we need to cover our development approach. We will be employing some specific development methodologies and principles, and it makes sense to go over these prior to getting started with coding. Defining our development methodology We will be employing an agile inspired process of iterative and incremental development as we build this application. 'Agile' is certainly a loaded term in modern software development and can have varied meanings among developers. Our process will focus on the aspects of an agile methodology that embrace transparent and open collaboration, constant feedback loops, and a strong ability to respond quickly to changing requirements. We will work incrementally in that we won't wait until every detail of the application has been specified before we start coding. 
Once the details of a particular feature have been finalized, we can begin work on implementing that feature, even though other features or application details are still in the design/planning stage. The process surrounding this feature implementation will follow an iterative model. We will do some initial iteration planning, engage in analysis and design, write the code to try out these ideas, test the code, and gather feedback. We then repeat this cycle of design->code->test->evaluation, until everyone is happy. Once everyone is happy, we can deploy the application with the new feature, and then start gathering the specifications on the next feature(s) to be implemented in the next iteration. Automated software testing Gathering feedback is of fundamental importance to agile development. Feedback from the users of the application and other project stakeholders, feedback from the development team members, and feedback directly from the software itself. Developing software in a manner that will allow it to tell you when something is broken can turn the fear associated with integrating and deploying applications into boredom. The method by which you empower your software with this feedback mechanism is writing unit and functional tests, and then executing them repeatedly and often. Unit and functional testing Unit tests are written to provide the developer with verification that the code is doing the right things. Functional tests are written to provide the developer, as well as other project stakeholders, that the application, as a whole, is doing things the right way. Unit tests Unit tests are tests that focus on the smallest units within a software application. In an object-oriented application, (such as a Yii web application) the smallest units are the public methods that make up the interfaces to classes. Unit tests should focus on one single class, and not require other classes or objects to run. Their purpose is to validate that a single unit of code is working as expected. Functional tests Functional tests focus on testing the end-to-end feature functionality of the application. These tests exist at a higher level than the unit tests and typically do require multiple classes or objects to run. Their purpose is to validate that a given feature of the application is working as expected. Benefits of testing There are many benefits to writing unit and functional tests. For one, they are a great way to provide documentation. Unit tests can quickly tell the exact story of why a block of code exists. Similarly, functional tests document what features are implemented within an application. If you stay diligent in writing these tests, then the documentation continues to evolve naturally as the application evolves. They are also invaluable as a feedback mechanism to constantly reassure the developer and other project stakeholders that the code and application is working as expected. You run your tests every time you make changes to the code and get immediate feedback on whether or not something you altered inadvertently changed the behavior of the system. You then address these issues immediately. This really increases the confidence that developers have in the application's behavior and translates to fewer bugs and more successful projects. This immediate feedback also helps to facilitate change and improving the design of the code base. 
Benefits of testing

There are many benefits to writing unit and functional tests. For one, they are a great way to provide documentation. Unit tests can quickly tell the exact story of why a block of code exists. Similarly, functional tests document what features are implemented within an application. If you stay diligent in writing these tests, the documentation continues to evolve naturally as the application evolves.

They are also invaluable as a feedback mechanism to constantly reassure the developer and other project stakeholders that the code and application are working as expected. You run your tests every time you make changes to the code and get immediate feedback on whether or not something you altered inadvertently changed the behavior of the system. You then address these issues immediately. This really increases the confidence that developers have in the application's behavior and translates to fewer bugs and more successful projects.

This immediate feedback also helps to facilitate change and improve the design of the code base. A developer is more likely to make improvements to existing code if a suite of tests is in place to immediately provide feedback on whether the changes altered the application's behavior. The confidence provided by a suite of unit and functional tests allows developers to write better software, release a more stable application, and ship quality products.

Test-driven development

Test-driven development (TDD) is a software development methodology that helps to create an environment of comfort and confidence by ensuring your test suite grows organically with your application, and is always up-to-date. It does this by stipulating that you begin your coding by first writing a test for the code you are about to write. The following steps sum up the process:

1. Begin by writing a test that will quickly fail.
2. Run the test to ensure it does, indeed, fail.
3. Quickly add just enough code to the class you are testing to get the test to pass.
4. Run the test again to ensure it does, indeed, pass.
5. Refactor the code to remove any repetitive logic or improve any corners cut while you were just trying to get the test to pass.

These steps are then repeated throughout the entire development process. Even with the best intentions, if you wait to write your tests until after the code is completed, you probably never will. Writing your tests first and injecting the test writing process directly into the coding process will ensure the best test coverage. This depth of coverage will help minimize the stress and fear that can accompany complex software applications, and build confidence by constantly providing positive feedback as additions and changes are made. In order to embrace a TDD process, we need to understand how to test within a Yii application.
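As a purely illustrative sketch of the first steps of this cycle, again assuming PHPUnit and a hypothetical getStatusText() method on the Issue class:

<?php
// Step 1: write a test for code that does not exist yet.
// Step 2: run it and watch it fail (getStatusText() is undefined).
class IssueStatusTest extends PHPUnit_Framework_TestCase
{
    public function testNewIssueReportsNotStartedStatus()
    {
        $issue = new Issue();
        $this->assertEquals('Not yet started', $issue->getStatusText());
    }
}

// Step 3: add just enough code to make the test pass, then re-run it
// (step 4) and refactor (step 5). The status values and labels here
// are assumptions used only for illustration.
class Issue
{
    public $status = 0;

    public function getStatusText()
    {
        return $this->status === 0 ? 'Not yet started' : 'Started';
    }
}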
FreeSWITCH: Utilizing the Built-in IVR Engine

Packt
05 Aug 2010
10 min read
IVR engine overview

Unlike many applications within FreeSWITCH, which are built as modules, IVR is considered core functionality of FreeSWITCH. It is used anytime a prompt is played and digits are collected. Even if you are not using the IVR application itself from your Dialplan, you will see IVR-related functions being utilized from various other applications. As an example, the voicemail application makes heavy use of IVR functionality when playing messages, while awaiting digits to control deleting, saving, and otherwise managing voicemails. In this section, we will only be reviewing the IVR functionality that is exposed from within the ivr Dialplan application. This functionality is typically used to build an auto-attendant menu, although other functions are possible as well.

IVR XML configuration file

FreeSWITCH ships with a sample IVR menu that is typically invoked by dialing 5000 from the sample Dialplan. When you dial 5000, you will hear a greeting welcoming you to FreeSWITCH, and presenting your menu options. The menu options consist of calling the FreeSWITCH conference, calling the echo extension, hearing music on hold, going to a sub-menu, or listening to screaming monkeys. We will start off reviewing the XML that powers this example. Open conf/autoload_configs/ivr.xml, which contains the following XML:

<configuration name="ivr.conf" description="IVR menus">
  <menus>
    <!-- demo IVR, Main Menu -->
    <menu name="demo_ivr"
          greet-long="phrase:demo_ivr_main_menu"
          greet-short="phrase:demo_ivr_main_menu_short"
          invalid-sound="ivr/ivr-that_was_an_invalid_entry.wav"
          exit-sound="voicemail/vm-goodbye.wav"
          timeout="10000"
          inter-digit-timeout="2000"
          max-failures="3"
          max-timeouts="3"
          digit-len="4">
      <entry action="menu-exec-app" digits="1" param="bridge sofia/$${domain}/888@conference.freeswitch.org"/>
      <entry action="menu-exec-app" digits="2" param="transfer 9196 XML default"/>
      <entry action="menu-exec-app" digits="3" param="transfer 9664 XML default"/>
      <entry action="menu-exec-app" digits="4" param="transfer 9191 XML default"/>
      <entry action="menu-exec-app" digits="5" param="transfer 1234*256 enum"/>
      <entry action="menu-exec-app" digits="/^(10[01][0-9])$/" param="transfer $1 XML features"/>
      <entry action="menu-sub" digits="6" param="demo_ivr_submenu"/>
      <entry action="menu-top" digits="9"/>
    </menu>

    <!-- Demo IVR, Sub Menu -->
    <menu name="demo_ivr_submenu"
          greet-long="phrase:demo_ivr_sub_menu"
          greet-short="phrase:demo_ivr_sub_menu_short"
          invalid-sound="ivr/ivr-that_was_an_invalid_entry.wav"
          exit-sound="voicemail/vm-goodbye.wav"
          timeout="15000"
          max-failures="3"
          max-timeouts="3">
      <entry action="menu-top" digits="*"/>
    </menu>
  </menus>
</configuration>

In the preceding example, there are two IVR menus defined. Let's break apart the first one and examine it, starting with the IVR menu definition itself.

IVR menu definitions

The following XML defines an IVR menu named "demo_ivr".

<menu name="demo_ivr"
      greet-long="phrase:demo_ivr_main_menu"
      greet-short="phrase:demo_ivr_main_menu_short"
      invalid-sound="ivr/ivr-that_was_an_invalid_entry.wav"
      exit-sound="voicemail/vm-goodbye.wav"
      timeout="10000"
      inter-digit-timeout="2000"
      max-failures="3"
      max-timeouts="3"
      digit-len="4">

We'll use this menu's name later when we route calls to the IVR from the Dialplan.
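As a preview of that step, calls are sent to an IVR menu from the Dialplan with the ivr application, which takes the menu's name as its argument. The following extension is a minimal sketch modeled on the sample Dialplan's 5000 entry; the extension name and the answer/sleep steps follow the usual pattern, but treat the details as illustrative:

<!-- Minimal sketch: route dialed 5000 to the demo_ivr menu -->
<extension name="ivr_demo">
  <condition field="destination_number" expression="^5000$">
    <action application="answer"/>
    <action application="sleep" data="1000"/>
    <action application="ivr" data="demo_ivr"/>
  </condition>
</extension>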
Following the name, various XML attributes specify how the IVR will behave. The following options are available when defining an IVR's options:

greet-long

The greet-long attribute specifies the initial greeting that is played when a caller reaches the IVR. It differs from the greet-short sound file in that it allows for an introduction to be played, such as "Thank you for calling XYZ Company". In the sample IVR, the greet-long attribute is a Phrase Macro that plays an introductory message to the caller ("Welcome to FreeSWITCH...") followed by the menu options the caller may choose from.

Argument syntax: Sound file name (or path + name), TTS, or Phrase Macro

Examples:
greet-long="my_greeting"
greet-long="phrase:my_greeting_phrase"
greet-long="say:Welcome to our company. Press 1 for sales, 2 for support."

greet-short

The greet-short attribute specifies the greeting that is re-played if the caller enters invalid information, or no information at all. This is typically the same sound file as greet-long without the introduction. In the sample IVR, the greet-short attribute is a Phrase Macro that simply plays the menu options to the caller, and does not play the lengthy introduction found in greet-long.

Argument syntax: Sound file name (or path + name), TTS, or Phrase Macro

Examples:
greet-short="my_greeting_retry"
greet-short="phrase:my_greeting_retry_phrase"
greet-short="say:Press 1 for sales, 2 for support."

invalid-sound

The invalid-sound attribute specifies the sound that is played when a caller makes an invalid entry.

Argument syntax: Sound file name (or path + name), TTS, or Phrase Macro

Examples:
invalid-sound="invalid_entry.wav"
invalid-sound="phrase:my_invalid_entry_phrase"
invalid-sound="say:That was not a valid entry"

exit-sound

The exit-sound attribute specifies the sound that is played when a caller makes too many invalid entries or too many timeouts occur. This file is played before disconnecting the caller.

Argument syntax: Sound file name (or path + name), TTS, or Phrase Macro

Examples:
exit-sound="too_many_bad_entries.wav"
exit-sound="phrase:my_too_many_bad_entries_phrase"
exit-sound="say:Hasta la vista, baby."

timeout

The timeout attribute specifies the maximum amount of time to wait for the user to begin entering digits after the greeting has played. If this time limit is exceeded, the menu is repeated until the value in the max-timeouts attribute has been reached.

Argument syntax: Any number, in milliseconds

Examples:
timeout="10000"
timeout="20000"

inter-digit-timeout

The inter-digit-timeout attribute specifies the maximum amount of time to wait in between each digit the caller presses. This is different from the overall timeout. It is useful to allow enough time to enter as many digits as necessary, without frustrating the caller by pausing too long after they are done making their entry. For example, if both 1000 and 1 are valid IVR entries, the system will continue waiting for the inter-digit-timeout length of time after 1 is entered, before determining that it is the final entry.

Argument syntax: Any number, in milliseconds

Examples:
inter-digit-timeout="2000"

max-failures

The max-failures attribute specifies how many failures, due to invalid entries, to tolerate before disconnecting.

Argument syntax: Any number

Examples:
max-failures="3"

max-timeouts

The max-timeouts attribute specifies how many timeouts to tolerate before disconnecting.

Argument syntax: Any number

Examples:
max-timeouts="3"

digit-len

The digit-len attribute specifies the maximum number of digits that the user can enter before determining the entry is complete.

Argument syntax: Any number greater than 1.

Examples:
digit-len="4"
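Putting several of these attributes together, a custom menu definition might look like the following sketch. The menu name, greetings, and destination extensions are assumptions chosen only to illustrate the attributes described above:

<!-- Illustrative menu definition; names, prompts, and extensions are assumptions -->
<menu name="custom_main_menu"
      greet-long="say:Welcome to our company. Press 1 for sales, 2 for support."
      greet-short="say:Press 1 for sales, 2 for support."
      invalid-sound="ivr/ivr-that_was_an_invalid_entry.wav"
      exit-sound="voicemail/vm-goodbye.wav"
      timeout="10000"
      inter-digit-timeout="2000"
      max-failures="3"
      max-timeouts="3"
      digit-len="4">
  <entry action="menu-exec-app" digits="1" param="transfer 2000 XML default"/>
  <entry action="menu-exec-app" digits="2" param="transfer 2001 XML default"/>
</menu>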
tts-voice

The tts-voice attribute specifies the specific text-to-speech voice that should be used.

Argument syntax: Any valid text-to-speech voice.

Examples:
tts-voice="Mary"

tts-engine

The tts-engine attribute specifies the specific text-to-speech engine that should be used.

Argument syntax: Any valid text-to-speech engine.

Examples:
tts-engine="flite"

confirm-key

The confirm-key attribute specifies the key which the user can press to signify that they are done entering information.

Argument syntax: Any valid DTMF digit.

Examples:
confirm-key="#"

These attributes dictate the general behavior of the IVR.

IVR menu destinations

After defining the global attributes of the IVR, you need to specify what specific destinations (or options) are available for the caller to press. You do this with <entry> XML elements. Let's review the first six XML options used by this IVR:

<entry action="menu-exec-app" digits="1" param="bridge sofia/$${domain}/888@conference.freeswitch.org"/>
<entry action="menu-exec-app" digits="2" param="transfer 9196 XML default"/>
<entry action="menu-exec-app" digits="3" param="transfer 9664 XML default"/>
<entry action="menu-exec-app" digits="4" param="transfer 9191 XML default"/>
<entry action="menu-exec-app" digits="5" param="transfer 1234*256 enum"/>
<entry action="menu-exec-app" digits="/^(10[01][0-9])$/" param="transfer $1 XML features"/>

Each of the preceding entries defines three parameters: an action to be taken, the digits the caller must press to activate that action, and the parameters that are passed to the action. In most cases you will probably use the menu-exec-app action, which simply allows you to specify an action and parameters to call just as you would from the regular Dialplan (bridge, transfer, hangup, and so on). These options are all pretty simple: they define a single digit which, when pressed, either bridges a call or transfers the call to an extension. There is one entry that is a bit different from the rest, which is the final IVR entry. It deserves a closer look.

<entry action="menu-exec-app" digits="/^(10[01][0-9])$/" param="transfer $1 XML features"/>

This entry definition specifies a regular expression for the digits field. This regular expression field is identical to the expressions you would use in the Dialplan. In this example, the IVR is looking for any four-digit extension number from 1000 through 1019 (which is the default extension number range for the predefined users in the directory). As the regular expression is wrapped in parentheses, the result of the entry will be passed to the transfer application as the $1 channel variable. This effectively allows the IVR to accept 1000-1019 as entries, and transfer the caller directly to those extensions when they are entered into the IVR.

The remaining IVR entries are a bit different. They introduce menu-sub as an action, which transfers the caller to an IVR sub-menu, and menu-top, which restarts the current IVR and replays the menu.

<entry action="menu-sub" digits="6" param="demo_ivr_submenu"/>
<entry action="menu-top" digits="9"/>

Several other actions exist that can be used within an IVR. The complete list of actions you can use from within the IVR includes the following:

menu-exec-app

The menu-exec-app action, combined with a param field, executes the specified application and passes the parameters listed to that application. This is equivalent to using <action application="app" data="data"> in your Dialplan. The most common use of menu-exec-app is to transfer a caller to another extension in the Dialplan.
Argument syntax: application <params>

Examples:
<entry digits="1" action="menu-exec-app" param="application param1 param2 param3 ..."/>
<entry digits="2" action="menu-exec-app" param="transfer 9664 XML default"/>

menu-exec-api

The menu-exec-api action, combined with a param field, executes the specified API command and passes the parameters listed to that command. This is equivalent to entering API commands at the CLI or from the event socket.

Argument syntax: api_command <params>

Examples:
<entry digits="1" action="menu-exec-api" param="eval Caller Pressed 1!"/>

menu-play-sound

The menu-play-sound action, combined with a param field, plays a specified sound file.

Argument syntax: valid sound file

Examples:
<entry digits="1" action="menu-play-sound" param="screaming_monkeys.wav"/>

menu-back

The menu-back action returns to the previous IVR menu, if any.

Argument syntax: None.

Examples:
<entry digits="1" action="menu-back"/>

menu-top

The menu-top action restarts this IVR's menu.

Argument syntax: None.

Examples:
<entry digits="1" action="menu-top"/>

Take a look at the XML for the sample sub-menu IVR and see if you can figure out what it does. Also note how it is called above, when pressing 6 from the main menu.

<menu name="demo_ivr_submenu"
      greet-long="phrase:demo_ivr_sub_menu"
      greet-short="phrase:demo_ivr_sub_menu_short"
      invalid-sound="ivr/ivr-that_was_an_invalid_entry.wav"
      exit-sound="voicemail/vm-goodbye.wav"
      timeout="15000"
      max-failures="3"
      max-timeouts="3">
  <entry action="menu-top" digits="*"/>
</menu>
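As defined, the sub-menu contains a single entry for the * key. A hedged sketch of how it could be extended with the actions described above might look like the following; the added digits and the sound file are assumptions made for illustration only:

<!-- Illustrative extension of the sub-menu; the added entries are assumptions -->
<menu name="demo_ivr_submenu"
      greet-long="phrase:demo_ivr_sub_menu"
      greet-short="phrase:demo_ivr_sub_menu_short"
      invalid-sound="ivr/ivr-that_was_an_invalid_entry.wav"
      exit-sound="voicemail/vm-goodbye.wav"
      timeout="15000"
      max-failures="3"
      max-timeouts="3">
  <!-- Play a sound file without leaving the sub-menu -->
  <entry action="menu-play-sound" digits="1" param="screaming_monkeys.wav"/>
  <!-- Return to the previous menu -->
  <entry action="menu-back" digits="8"/>
  <!-- Original entry: replay this menu -->
  <entry action="menu-top" digits="*"/>
</menu>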