
How-To Tutorials - Web Development

Sessions and Users in PHP 5 CMS

Packt
17 Aug 2010
14 min read
(For more resources on PHP, see here.)

The problem

Dealing with sessions can be confusing, and is also a source of security loopholes. So we want our CMS framework to provide basic mechanisms that are robust. We want them to be easy to use by more application-oriented software. To achieve these aims, we need to consider:

• The need for sessions and how they work
• The pitfalls that can introduce vulnerabilities
• Efficiency and scalability considerations

Discussion and considerations

To see what is required for our session handling, we shall first review the need for sessions and consider how they work in a PHP environment. Then the vulnerabilities that can arise through session handling will be considered. Web crawlers for search engines and more nefarious activities can place a heavy and unnecessary load on session handling, so we shall look at ways to avoid this load. Finally, the question of how best to store session data is studied.

Why sessions?

The need for continuity was mentioned when we first discussed users. But it is worth reviewing the requirement in a little more detail. If Tim Berners-Lee and his colleagues had known all the developments that would eventually occur in the internet world, maybe the Web would have been designed differently. In particular, the basic web transport protocol, HTTP, might not have treated each request in isolation. But that is hindsight, and the Web was originally designed to present information in a computer-independent way. Simple password schemes were sufficient to control access to specific pages. Nowadays, we need to cater for complex user management, or to handle things like shopping carts, and for these we need continuity.

Many people have recognized this, and introduced the idea of sessions. The basic idea is that a session is a series of requests from an individual website visitor, and the session provides access to enduring information that is available throughout the session. The shopping cart is an obvious example of information being retained across the requests that make up a session. PHP has its own implementation of sessions, and there is no point reinventing the wheel, so PHP sessions are the obvious tool for us to use to provide continuity.

How sessions work

There are three main choices available for handling continuity:

• Adding extra information to the URI
• Using cookies
• Using hidden fields in the form sent to the browser

All of them can be used at times. Which of them is most suitable for handling sessions? PHP uses either of the first two alternatives. Web software often makes use of hidden variables, but they do not offer a neat way to provide an unobtrusive general mechanism for maintaining continuity. In fact, whenever hidden variables are used, it is worth considering whether session data would be a better alternative. For reasons discussed in detail later, we shall consider only the use of cookies, and reject the URI alternative.

There was a time when there were lots of scary stories about cookies, and people were inclined to block them. While there will always be security issues associated with web browsing, the situation has changed, and the majority of sites now rely on cookies. It is generally considered acceptable for a site to demand the use of cookies for operations such as user login or for shopping carts and purchase checkout. The PHP cookie-based session mechanism can seem obscure, so it is worth explaining how it works. First we need to review the working of cookies.
A cookie is simply a named piece of data, usually limited to around 4,000 bytes, which is stored by the browser in order to help the web server retain information about a user. More strictly, the connection is with the browser, not the user. Any cookie is tied to a specific website, and optionally to a particular part of the website, indicated by a path. It also has a lifetime that can be specified explicitly as a duration; a zero duration means that the cookie will be kept only for as long as the browser is kept open, and then discarded. The browser does nothing with cookies, except to save and then return them to the server along with requests. Every cookie that relates to the particular website will be sent if either the cookie is for the site as a whole, or the optional path matches the path to which the request is being sent. So cookies are entirely the responsibility of the server, but the browser helps by storing and returning them. Note that, since cookies are only ever sent back to the site that originated them, there are constraints on access to information about other sites that were visited using the same browser.

In a PHP program, cookies can be written by calling the setcookie function, or implicitly through session handling. The name of the cookie is a string, and the value to be stored is also a string, although the serialize function can be used to turn more structured data into a string for storage as a cookie. Take care to keep cookies within the size limit. PHP makes available the cookies that have been sent back by the browser in the $_COOKIE super-global, keyed by their names.

Apart from any cookies explicitly written by code, PHP may also write a session cookie. It will do so either as a result of calls to session handling functions, or because the system has been configured to automatically start or resume a session for each request. By default, session cookies do not use the option of setting an expiry time, so they are discarded when the browser is closed down. Commonly, browsers keep this type of cookie in memory so that they are automatically lost on shutdown.

Before looking at what PHP is doing with the session cookie, let's note that there is an important general consideration for writing cookies. In the construction of messages between the server and the browser, cookies are part of the header. That means rules about headers must be obeyed. Headers must be sent before anything else, and once anything else has been sent, it is not permitted to send more headers. So, in the case of server to browser communication, the moment any part of the XHTML has been written by the PHP program, it is too late to send a header, and therefore too late to write a cookie. For this reason, a PHP session is best started early in the processing.

The only purpose PHP has in writing a session cookie is to allocate a unique key to the session, and retrieve it again on the next request. So the session cookie is given an identifying name, and its value is the session's unique key. The session key is usually called the session ID, and is used by PHP to pick out the correct set of persistent values that belong to the session. By default, the session name is PHPSESSID, but it can, in most circumstances, be changed by calling the PHP function session_name prior to starting the session. Starting, or more often restarting, a session is done by calling session_start; the current session ID can then be retrieved with session_id.
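As a minimal sketch of the mechanics just described (the session name and the $_SESSION subscripts here are illustrative, not part of the CMS framework):

    <?php
    // Choose a session name before the session is started (optional).
    session_name('MYCMSSESSION');

    // Start or resume the session. This must happen before any output,
    // because the session cookie travels as an HTTP header.
    session_start();

    // The session ID that PHP allocated, should we need it.
    $id = session_id();

    // $_SESSION persists across requests from the same browser.
    if (!isset($_SESSION['visits'])) {
        $_SESSION['visits'] = 0;
    }
    $_SESSION['visits']++;

    // An explicitly written cookie, by contrast, goes through setcookie
    // and comes back in $_COOKIE on later requests. A zero lifetime means
    // the cookie lasts until the browser closes.
    setcookie('last_seen', date('c'), 0, '/');

    echo 'Visits this session: ' . $_SESSION['visits'];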
In a simple situation, you do not need the session ID, as PHP places any existing session data in another super-global, $_SESSION. We will, however, have a use for the session ID, as you will soon see. The $_SESSION super-global is available once session_start has been called, and the PHP program can store whatever data it chooses in it. It is an array, initially empty, and naturally the subscripts need to be chosen carefully in a complex system to avoid any clashes. The neat part of the PHP session is that provided it is restarted each time with session_start, the $_SESSION super-global will retain any values assigned during the handling of previous requests. The data is thus preserved until the program decides to remove it. The only exception to this would be if the session expired, but in a default configuration, sessions do not expire automatically. Later in this article, we will look at ways to deliberately kill sessions after a definite period of inactivity.

As it is only the session ID that is stored in the cookie, rules about the timing of output do not apply to $_SESSION, which can be read or written at any time after session_start has been called. PHP stores the contents of $_SESSION at the end of processing, or on request using the PHP function session_write_close. By default, PHP puts the data in a temporary file whose name includes the session ID. Whenever the session data is stored, PHP retrieves it again at the next session_start. Session data does not have to be stored in temporary files, and PHP permits the program to provide its own handling routines. We will look at a scheme for storing the session data in a database later in the article.

Avoiding session vulnerabilities

So far, the option to pass the session ID as part of the URI instead of as a cookie has not been considered. Looking at security will show why. The main security issue with sessions is that a cracker may find out the session ID for a user, and then hijack that user's session. Session handling should do its best to guard against that happening. PHP can pass the session ID as part of the URI. This makes it especially vulnerable to disclosure, since URIs can be stored in all kinds of places that may not be as inaccessible as we would like. As a result, secure systems avoid the URI option. It is also undesirable to find links appearing in search engines that include a session ID as part of the URI. These two points are enough to rule out the URI option for passing the session ID. It can be prevented by the following PHP calls:

    ini_set('session.use_cookies', 1);
    ini_set('session.use_only_cookies', 1);

These calls force PHP to use cookies for session handling, an option that is now considered acceptable. The extent to which the site will function without cookies depends on what a visitor can do with no continuity of data—user login will not stick, and anything like a shopping cart will not be remembered.

It is best to avoid the default name of PHPSESSID for the session cookie, since that is something that a cracker could look for in the network traffic. One step that can be taken is to create a session name that is the MD5 hash of various items of internal information. This makes it harder, though not impossible, to sniff messages to find out a session ID, since it is no longer obvious what to seek—the well-known name of PHPSESSID is not used. It is important for the session ID to be unpredictable, but we rely on PHP to achieve that.
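Pulling these recommendations together, a session bootstrap along the following lines could be used; this is a sketch, and the inputs to the hash are an assumption rather than the framework's actual choice:

    <?php
    // Force cookie-only sessions: never pass the session ID in the URI.
    ini_set('session.use_cookies', 1);
    ini_set('session.use_only_cookies', 1);

    // Derive a non-obvious session name instead of the default PHPSESSID.
    // The 'S' prefix guards against the (unlikely) all-digit hash, since
    // a session name must not consist of digits only.
    session_name('S' . md5('my-cms-secret' . $_SERVER['HTTP_HOST']));

    session_start();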
It is also desirable that the ID be long, since otherwise it might be possible for an attacker to try out all possible values within the life of a session. PHP uses 32 hexadecimal digits, which is a reasonable defense for most purposes. The other main vulnerability, apart from session hijacking, is called session fixation. This is typically implemented by a cracker setting up a link that takes the user to your site with a session already established, and known to the cracker.

An important security step that is employed by robust systems is to change the session ID at significant points. So, although a session may be created as soon as a visitor arrives at the site, the session ID is changed at login. This technique is used by Amazon among others, so that people can browse for items and build up a shopping cart, but on purchase a fresh login is required. Doing this reduces the available window for a cracker to obtain, and use, the session ID. It also blocks session fixation, since the original session is abandoned at critical points. It is also advisable to change the ID on logout, so although the session is continued, its data is lost and the ID is not the same. It is highly desirable to provide logout as an option, but this needs to be supplemented by time limits on inactive sessions. A significant part of session handling is devoted to keeping enough information to be able to expire sessions that have not been used for some time. It also makes sense to revoke a session that seems to have been used for any suspicious activity. Ideally, the session ID is never transmitted unencrypted, but achieving this requires the use of SSL, and is not always practical. It should certainly be considered for high security applications.

Search engine bots

One aspect of website building is, perhaps unexpectedly, the importance of handling the bots that crawl the web. They are often gathering data for search engines, although some have more dubious goals, such as trawling for e-mail addresses to add to spam lists. The load they place on a site can be substantial. Sometimes, search engines account for half or more of the bandwidth being used by a site, which certainly seems excessive. If no action is taken, these bots can consume significant resources, often for very little advantage to the site owner. They can also distort information about the site, such as when the number of current visitors is displayed but includes bots in the counts.

Matters are made worse by the fact that bots will normally fail to handle cookies. After all, they are not browsers and have no need to implement support for cookies. This means that every request by a bot is separate from every other, as our standard mechanism for linking requests together will not work. If the system starts a new session, it will have to do this for every new request from a bot. There will never be a logout from the bot to terminate the session, so each bot-related session will last for the time set for automatic expiry. Clearly it is inadvisable to bar bots, since most sites are anxious to gain search engine exposure. But it is possible to build session handling so as to limit the workload created by visitors who do not permit cookies, which will mostly be bots. When we move into implementation techniques, the mechanisms will be demonstrated.

Session data and scalability

We could simply let PHP take care of session data. It does that by writing a serialized version of any data placed into $_SESSION into a file in a temporary directory.
Each session has its own file. But PHP also allows us to implement our own session data handling mechanism. There are a couple of good reasons for using that facility, and storing the information in the database. One is that we can analyze and manage the data better, and especially limit the overhead of dealing with search engine bots. The other is that by storing session data in the database, we make it feasible for the site to be run across multiple servers. There may well be other issues before that can be achieved, but providing session continuity is an essential requirement if load sharing is to be fully effective. Storing session data in a database is a reliable solution to this issue.

Arguments against storing session data in a database include questions about the overhead involved, constraints on database performance, or the possibility of a single point of failure. While these are real issues, they can certainly be mitigated. Most database engines, including MySQL, have many options for building scalable and robust systems. If necessary, the database can be spread across multiple computers linked by a high-speed network, although this should never be done unless it is really needed. Design of such a system is outside the scope of this article, but the key point is that the arguments against storing session data in a database are not particularly strong.
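As a sketch of how such a scheme can be wired up, PHP's session_set_save_handler routes session reads and writes through our own callbacks instead of temporary files. The table and column names below are assumptions for illustration, not the framework's actual schema, and PHP 5.3 closures are used for brevity:

    <?php
    $pdo = new PDO('mysql:host=localhost;dbname=cms', 'user', 'secret');

    // Assumed table: sessions(id CHAR(32) PRIMARY KEY, data TEXT, updated INT)
    session_set_save_handler(
        function ($savePath, $sessionName) { return true; },   // open
        function () { return true; },                          // close
        function ($id) use ($pdo) {                            // read
            $stmt = $pdo->prepare('SELECT data FROM sessions WHERE id = ?');
            $stmt->execute(array($id));
            $row = $stmt->fetch(PDO::FETCH_ASSOC);
            return $row ? $row['data'] : '';
        },
        function ($id, $data) use ($pdo) {                     // write
            $stmt = $pdo->prepare('REPLACE INTO sessions (id, data, updated) VALUES (?, ?, ?)');
            return $stmt->execute(array($id, $data, time()));
        },
        function ($id) use ($pdo) {                            // destroy
            $stmt = $pdo->prepare('DELETE FROM sessions WHERE id = ?');
            return $stmt->execute(array($id));
        },
        function ($maxLifetime) use ($pdo) {                   // gc: expire idle sessions
            $stmt = $pdo->prepare('DELETE FROM sessions WHERE updated < ?');
            return $stmt->execute(array(time() - $maxLifetime));
        }
    );
    session_start();

The garbage-collection callback is also where the deliberate expiry of inactive sessions, discussed earlier, naturally lives.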

URL Shorteners – Designing the TinyURL Clone with Ruby

Packt
16 Aug 2010
12 min read
(For more resources on Ruby, see here.)

We start off with an easy application, a simple yet very useful Internet application: URL shorteners. We will take a quick tour of URL shorteners before jumping into the design of a simple URL shortener, followed by an in-depth discussion of how we clone our own URL shortener, Tinyclone.

All about URL shorteners

Internet applications don't always need to be full of features or cover all aspects of your Internet life to be successful. Sometimes it's OK to be simple and just focus on providing a single feature. It doesn't even need to be earth-shatteringly important—it should be just useful enough for its target users. The archetypal and probably most extreme example of this is the URL shortening application, or URL shortener. This service offers a very simple but surprisingly useful feature. It provides a shorter URL that represents a normally longer URL. When a user goes to the short URL, he will be redirected to the original URL. For this simple feature, the three most popular URL shortening services (TinyURL, bit.ly, and is.gd) collectively had about 11 million unique visitors, 110 million page views, and a reach of about one percent of the Internet in June 2009. In 2008, the most popular URL shortener at that time, TinyURL, was made one of Time Magazine's Top 50 Best Websites.

The idea to shorten long and unwieldy URLs into shorter, more manageable ones has been around for some time. One of the earlier attempts to make it a public service is Make A Shorter Link (MASL), which appeared around July 2001. MASL did just that, though the usefulness was debatable, as the domain name was long and the shortened URL could potentially be longer than the original. However, the pioneering site that popularized this concept (and subsequently bought over MASL and a few other similar sites) is TinyURL. TinyURL was launched in January 2002 by Kevin Gilbertson to help him link directly to newsgroup postings, which frequently had long URLs. It rapidly became one of the most popular URL shorteners around. In 2008, an estimated 100 similar services came into existence in various forms.

URLs, or Uniform Resource Locators, are resource identifiers that specify where identified resources are available and how they can be retrieved. A popular term for URL is a Web address. Every URL is made up of the following:

    <resource type>://<username>:<password>@<domain>:<port>/<file path name>?<query string>#<anchor>

Not all parts of the URL are required by a browser: if the resource type is missing, it is normally assumed to be http, and if the port is missing, it is normally assumed to be 80 (for http). The username, password, query string, and anchor components are optional.

Initially, TinyURL and similar types of URL shorteners focused on simply providing a short representative URL to their users. Naturally, the competitive breadth for shortening URLs was rather, well, short. Many chose TinyURL over MASL because TinyURL had a shorter and easier to remember domain name (http://tinyurl.com over http://makeashorterlink.com). Subsequent competition over this space intensified and extended to providing various other features, including custom short URLs (TinyURL, bit.ly), analysis of click-through statistics (bit.ly), advertisements (Adjix, Linkbee), preview pages (TinyURL, is.gd), and so on. The explosive growth of Twitter (from June 2008 to June 2009, Twitter grew 1,164%) opened a new chapter for URL shorteners.
Twitter chose a limit of 140 characters for each tweet to accommodate the 160 characters in an SMS message (Twitter was invented as a service for people to use SMS to tell small groups what they are doing). With Twitter's popularity skyrocketing came the need for users to shorten URLs to fit into the 140-character limit. Originally Twitter used TinyURL as its default URL shortener, and this triggered a steep climb in the usage of TinyURL during the early days of Twitter. However, in May 2009, bit.ly replaced TinyURL as Twitter's default URL shortener and the impact was immediate. For the first time in that period, TinyURL recorded a drop in the number of users in May 2009, dropping from 6.1 million to 5.3 million unique users, while bit.ly jumped from 1.8 million to 2.9 million almost overnight.

That's not the end of the story though. In April 2010, during Twitter's Chirp conference, Twitter announced its own URL shortener (twt.tl). As of this writing it is still unclear how the market share will pan out, but it's clear that URL shorteners have good value and everyone is jumping into this market. In December 2009, Google came up with its own two URL shorteners, goo.gl and youtu.be. Amazon.com (amzn.to), Facebook (fb.me) and Wordpress (wp.me) all have their own URL shorteners as well.

Next, let's do a quick review of why URL shorteners are so popular and why they attract criticism as well. Here's a quick summary of the benefits:

• Create short and easy to remember URLs
• Allow passing of links in character-limited services such as Twitter
• Create vanity URLs for marketing purposes
• Can verbally pass URLs

The most obvious benefit of having a shortened URL is that it's, well, short. A typical example of a URL gone bad is a link to a location in Google Maps:

    http://maps.google.com/maps?f=q&source=s_q&hl=en&geocode=&q=singapore+flyer&vps=1&jsv=169c&sll=1.352083,103.819836&sspn=0.68645,1.382904&g=singapore&ie=UTF8&latlng=8354962237652576151&ei=Shh3SsSRDpb4vAPsxLS3BQ&cd=1&usq=Singapore+Flyer

Such URLs are meant to be clicked on, as it is virtually impossible to pass them around verbally. It might be justifiable if the URL is cut and pasted into documents, but sometimes certain applications will truncate parts of the URL while processing. This makes a long URL difficult to click on and even produces erroneous links. In fact, this was the main motivation in creating most of the earlier URL shorteners—older email clients tend to truncate URLs when they are more than 80 characters.

Short links are of course crucial in character-limited message passing systems like Twitter, Plurk, and SMS. Passing long URLs is impossible without URL shorteners. Short URLs are very useful in cases of vanity URLs where, for example, the Google Maps link above could be shortened to http://tinyurl.com/singapore-flyer. Such vanity URLs are useful when passing from one person to another, or even when used in mass marketing. Sticking to the maps theme in our examples, if you want to give a Google Maps link to your restaurant and put it up in catalogs and brochures, you will not want to give the long URL. Instead you would want a nice, descriptive and short URL. Short URLs are also useful in cases of accessibility. For example, reading out the Google Maps link above is almost impossible, but reading out the TinyURL link (vanity or otherwise) is much easier in comparison. Many popular URL shorteners also provide some form of statistics and analytics on the usage of the links.
This feature allows you to track your short URLs to see how many clicks they received and what kind of patterns can be derived from the clicks. Although the metrics are usually not advanced, they do provide basic usefulness.

On the other hand, URL shorteners have their fair share of criticisms as well. Here is a summary of the bad side of URL shorteners:

• Provide opportunities to spammers, because they hide original URLs
• Can be unreliable if depended on for redirection
• Possible undesirable or vulgar short URLs

URL shorteners have security issues. When a URL shortener creates a short URL, it effectively hides the original link, and this provides an opportunity for spammers or other abusers to redirect users to their sites. One relatively mild form of such attack is 'rickrolling'. Rickrolling uses a classic bait-and-switch trick to redirect users to a Rick Astley music video of Never Gonna Give You Up. For example, you might feel that the URL http://tinyurl.com/singapore-flyer goes to Google Maps, but when you click on it, you might be rickrolled and redirected to that Rick Astley music video instead. Also, because most short URLs are not customized, it is quite difficult to see if the link is genuine or not just from the URL. Many prominent websites and applications have such concerns, including MySpace, Flickr and even Microsoft Live Messenger, and have at one time or another banned or restricted usage of TinyURL because of this problem. To combat spammers and fraud, URL shortening services have come up with the idea of link previews, which allows users to preview a short URL before it redirects the user to the long URL. For example, TinyURL will show the user the long URL on a preview page and requires the user to explicitly go to the long URL.

Another problem is performance and reliability. When you access a website, your browser goes to a few DNS servers to resolve the address, but the URL shortener adds another layer of indirection. While DNS servers have redundancy and failsafe measures, there is no such assurance from URL shorteners. If the traffic to a particular link becomes too high, will the shortening service provider be able to add more servers to improve performance, or even prevent a meltdown altogether? The problem of course lies in over-dependency on the shortening service.

Finally, a negative side effect of random or even customized short URLs is that undesirable, vulgar or embarrassing short URLs can be created. Early on, TinyURL's short URLs were predictable, and this was exploited: embarrassing short URLs were made to redirect to the White House websites of then U.S. Vice President Dick Cheney and Second Lady Lynne Cheney.

We have just covered significant ground on URL shorteners. If you are a programmer you might be wondering, "Why do I need to know such information? I am really interested in the programming bits, the others are just fluff to me." Background information on the application we want to clone is very important. It tells us why that application exists in the first place and gives us an idea of its main features (what makes it popular). It also tells us what problems it faces, so that we are aware of them while programming it, or can even avoid them altogether. This is important when we come to the design of the application. Finally, it gives us a better appreciation of the application and the motivations and issues faced by the product and technical people behind the application we wish to clone.
Main features

Next, let's list the features of a URL shortener. The intention in this section is to distill the basic features of the application, features that define the service. Features listed here will be features that make the application what it is. However, as much as possible we want to also explore some additional features that extend the application and are provided by many of its competitors. Most importantly, the features here are mostly features of the most popular and definitive web application in the category. In this article, this will be TinyURL. These are the main features of a URL shortener:

• Users can create a short URL that represents a long URL
• Users who visit the short URL will be redirected to the long URL
• Users can preview a short URL to enable them to see what the long URL is
• Users can provide a custom URL to represent the long URL
• Undesirable words are not allowed in the short URL
• Users are able to view various statistics involving the short URL, including the number of clicks and where the clicks come from (optional, not in TinyURL)

URL shorteners are simple web applications, and the one that we will design and build will also be simple.

Designing the clone

Cloning TinyURL is relatively simple, but there is some thought behind the design of the application. We will be building a clone of TinyURL called Tinyclone, which will be hosted at the domain http://tinyclone.saush.com.

Creating a short URL for each long URL

The domain of the short URL is fixed. What's left is the file path name. We need to represent the long URL with a unique file path name (a key), one for each long URL. This means we need to persist the relationship between the key and the URL. One of the ways we can associate the long URL with a unique key is to hash the long URL and use the resulting hash as the unique key. However, the resulting hash might be long, and hashing functions could be slow. The faster and easier way is to use a relational database's auto-incremented row ID as the unique key. The database will help ensure the uniqueness of the ID. However, the running row ID number is base 10. To represent a million URLs would already require 7 characters; to represent 1 billion would take up 9 characters. In order to keep the number of characters smaller, we will need a larger base numbering system. In this clone we will use base 36, which is the 26 characters of the alphabet (case insensitive) and the 10 digits. Using this system, we will only need 5 characters to represent 1 million URLs:

    1,000,000 in base 36 = lfls

And 1 billion URLs can be represented in just six characters:

    1,000,000,000 in base 36 = gjdgxs
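Ruby makes this conversion trivial—Integer#to_s and String#to_i both accept a base argument—so the arithmetic above can be sanity-checked directly in irb:

    # Row IDs encode to short keys, and keys decode back to IDs.
    puts 1_000_000.to_s(36)        # => "lfls"
    puts 1_000_000_000.to_s(36)    # => "gjdgxs"
    puts "lfls".to_i(36)           # => 1000000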

Q Replication Components in IBM Replication Server

Packt
16 Aug 2010
8 min read
The individual stages for the different layers are shown in the following diagram:

The DB2 database layer

The first layer is the DB2 database layer, which involves the following tasks:

• For unidirectional replication and all replication scenarios that use unidirectional replication as the base, we need to enable the source database for archive logging (but not the target database).
• For multi-directional replication, all the source and target databases need to be enabled for archive logging.
• We need to identify which tables we want to replicate. One of the steps is to set the DATA CAPTURE CHANGES flag for each source table, which will be done automatically when the Q subscription is created. This setting of the flag will affect the minimum point in time recovery value for the table space containing the table, which should be carefully noted if table space recoveries are performed.

Before moving on to the WebSphere MQ layer, let's quickly look at the compatibility requirements for the database name, the table name, and the column names. We will also discuss whether or not we need unique indexes on the source and target tables.

Database/table/column name compatibility

In Q replication, the source and target database names and table names do not have to match on all systems. The database name is specified when the control tables are created. The source and target table names are specified in the Q subscription definition.

Now let's move on to looking at whether or not we need unique indexes on the source and target tables. We do not need to be able to identify unique rows on the source table, but we do need to be able to do this on the target table. Therefore, the target table should have one of:

• Primary key
• Unique constraint
• Unique index

If none of these exist, then Q Apply will apply the updates using all columns. However, the source table must have the same constraints as the target table, so any constraints that exist at the target must also exist at the source, which is shown in the following diagram:

The WebSphere MQ layer

This is the second layer we should install and test—if this layer does not work, then Q replication will not work! We can either install the WebSphere MQ Server code or the WebSphere MQ Client code. Throughout this book, we will be working with the WebSphere MQ Server code. If we are replicating between two servers, then we need to install WebSphere MQ Server on both servers. If we are installing WebSphere MQ Server on UNIX, then during the installation process a user ID and group called mqm are created. If we as DBAs want to issue MQ commands, then we need to get our user ID added to the mqm group.

Assuming that WebSphere MQ Server has been successfully installed, we now need to create the Queue Managers and the queues that are needed for Q replication. This section also includes tests that we can perform to check that the MQ installation and setup is correct. The following diagram shows the MQ objects that need to be created for unidirectional replication, and the figure after it shows the MQ objects that need to be created for bidirectional replication. There is a mixture of Local Queues (QLOCAL/QL) and Remote Queues (QREMOTE/QR), in addition to Transmission Queues (XMITQ) and channels. Once we have successfully completed the installation and testing of WebSphere MQ, we can move on to the next layer—the Q replication layer.
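For orientation, here is roughly what defining those objects looks like in MQSC on the source Queue Manager. The Queue Manager names (QMA on the source, QMB on the target), channel names, and connection details are assumptions for this sketch, while the queue names follow the Replication Queue Map listed in the next section:

    runmqsc QMA

    * Transmission queue and sender channel towards the target Queue Manager
    DEFINE QLOCAL('QMB.XMITQ') USAGE(XMITQ)
    DEFINE CHANNEL('QMA.TO.QMB') CHLTYPE(SDR) TRPTYPE(TCP) +
           CONNAME('targethost(1414)') XMITQ('QMB.XMITQ')

    * Remote queue definition for the Send Queue; messages actually
    * arrive on CAPA.TO.APPB.RECVQ at the target
    DEFINE QREMOTE('CAPA.TO.APPB.SENDQ.REMOTE') +
           RNAME('CAPA.TO.APPB.RECVQ') RQMNAME('QMB') XMITQ('QMB.XMITQ')

    * Local queue on which Q Capture receives Q Apply's control messages
    DEFINE QLOCAL('CAPA.ADMINQ')

    * Receiver end of the channel coming back from the target
    DEFINE CHANNEL('QMB.TO.QMA') CHLTYPE(RCVR) TRPTYPE(TCP)

A mirrored set of definitions (the local Receive Queue, the remote definition CAPA.ADMINQ.REMOTE, and the corresponding channels) is needed on the target Queue Manager.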
The Q replication layer

This is the third and final layer, which comprises the following steps:

• Create the replication control tables on the source and target servers.
• Create the transport definitions. What we mean by this is that we somehow need to tell Q replication what the source and target table names are, what rows/columns we want to replicate, and which Queue Managers and queues to use.

Some of the terms that are covered in this section are:

• Logical table
• Replication Queue Map
• Q subscription
• Subscription group (SUBGROUP)

What is a logical table?

In Q replication, we have the concept of a logical table, which is the term used to refer to both the source and target tables in one statement. An example in a peer-to-peer three-way scenario is shown in the following diagram, where the logical table is made up of tables TABA, TABB, and TABC:

What is a Replication/Publication Queue Map?

The first part of the transport definitions mentioned earlier is a definition of a Queue Map, which identifies the WebSphere MQ queues on both servers that are used to communicate between the servers. In Q replication, the Queue Map is called a Replication Queue Map, and in Event Publishing the Queue Map is called a Publication Queue Map.

Let's first look at Replication Queue Maps (RQMs). RQMs are used by Q Capture and Q Apply to communicate. This communication is Q Capture sending Q Apply rows to apply, and Q Apply sending administration messages back to Q Capture. Each RQM is made up of three queues: a queue on the local server called the Send Queue (SENDQ), and two queues on the remote server—a Receive Queue (RECVQ) and an Administration Queue (ADMINQ), as shown in the preceding figures showing the different queues. An RQM can only contain one each of SENDQ, RECVQ, and ADMINQ. The SENDQ is the queue that Q Capture uses to send source data and informational messages. The RECVQ is the queue that Q Apply reads for transactions to apply to the target table(s). The ADMINQ is the queue that Q Apply uses to send control messages back to Q Capture. So, using the queues in the first "Queues" figure, the Replication Queue Map definition would be:

• Send Queue (SENDQ): CAPA.TO.APPB.SENDQ.REMOTE on Source
• Receive Queue (RECVQ): CAPA.TO.APPB.RECVQ on Target
• Administration Queue (ADMINQ): CAPA.ADMINQ.REMOTE on Target

Now let's look at Publication Queue Maps (PQMs). PQMs are used in Event Publishing and are similar to RQMs, in that they define the WebSphere MQ queues needed to transmit messages between two servers. The big difference is that, because in Event Publishing we do not have a Q Apply component, the definition of a PQM is made up of only a Send Queue.

What is a Q subscription?

The second part of the transport definitions is a definition called a Q subscription, which defines a single source/target combination and which Replication Queue Map to use for this combination. We set up one Q subscription for each source/target combination. Each Q subscription needs a Replication Queue Map, so we need to make sure we have one defined before trying to create a Q subscription. Note that if we are using the Replication Center, then we can choose to create a Q subscription even though an RQM does not exist. The wizard will walk you through creating the RQM at the point at which it is needed.
The structure of a Q subscription is made up of a source and target section, and we have to specify:

• The Replication Queue Map
• The source and target table
• The type of target table
• The type of conflict detection and action to be used
• The type of initial load, if any, that should be performed

If we define a Q subscription for unidirectional replication, then we can choose the name of the Q subscription—for any other type of replication we cannot. Q replication does not have the concept of a subscription set as there is in SQL Replication, where the subscription set holds all the tables which are related using referential integrity. In Q replication, we have to ensure that all the tables that are related through referential integrity use the same Replication Queue Map, which will enable Q Apply to apply the changes to the target tables in the correct sequence. In the following diagram, Q subscription 1 uses RQM1, Q subscription 2 also uses RQM1, and Q subscription 3 uses RQM3:

What is a subscription group?

A subscription group is the name for a collection of Q subscriptions that are involved in multi-directional replication, and is set using the SET SUBGROUP command.

Q subscription activation

In unidirectional, bidirectional, and peer-to-peer two-way replication, when Q Capture and Q Apply start, the Q subscription can be activated automatically (if that option was specified). For peer-to-peer three-way replication and higher, when Q Capture and Q Apply are started, only a subset of the Q subscriptions of the subscription group starts automatically, so we need to manually start the remaining Q subscriptions.
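For orientation, these definitions are typically scripted through the ASNCLP command-line program rather than created by hand. The following is only a rough sketch of a unidirectional setup—the server, subscription, and table names are invented, and the exact clauses should be checked against the ASNCLP reference:

    ASNCLP SESSION SET TO Q REPLICATION;
    SET SERVER CAPTURE TO DB SRCDB;
    SET SERVER TARGET TO DB TGTDB;

    CREATE CONTROL TABLES FOR CAPTURE SERVER;
    CREATE CONTROL TABLES FOR APPLY SERVER;

    CREATE REPLQMAP RQM1 USING ADMINQ "CAPA.ADMINQ.REMOTE"
      RECVQ "CAPA.TO.APPB.RECVQ" SENDQ "CAPA.TO.APPB.SENDQ.REMOTE";

    CREATE QSUB USING REPLQMAP RQM1
      (SUBNAME SUB1 DB2ADMIN.DEPARTMENT OPTIONS HAS LOAD PHASE I);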

Easy guide to understand WCF in Visual Studio 2008 SP1 and Visual Studio 2010 Express

Packt
12 Aug 2010
4 min read
(For more resources on Microsoft, see here.)

Creating your first WCF application in Visual Studio 2008

You start creating a WCF project by creating a new project from File | New | Project.... This opens the New Project window. You can see that there are four different templates available. We will be using the WCF Service Library template. Change the default name, provide a name for the project (herein JayWcf01), and click OK. The project JayWcf01 gets created with the folder structure shown in the next image. If you were to expand the References node, you would notice that System.ServiceModel is already referenced. If it is not, for some reason, you can bring it in by using the Add Reference... window, which is displayed when you right-click the project in the Solution Explorer.

IService1.vb is a service interface file, as shown in the next listing. This defines the service contract and the operations expected of the service. If you change the interface name "IService1" here, you must also update the reference to "IService1" in App.config.

    <ServiceContract()> _
    Public Interface IService1

        <OperationContract()> _
        Function GetData(ByVal value As Integer) As String

        <OperationContract()> _
        Function GetDataUsingDataContract(ByVal composite As CompositeType) As CompositeType

        ' TODO: Add your service operations here

    End Interface

    ' Use a data contract as illustrated in the sample below to add composite types to service operations
    <DataContract()> _
    Public Class CompositeType

        Private boolValueField As Boolean
        Private stringValueField As String

        <DataMember()> _
        Public Property BoolValue() As Boolean
            Get
                Return Me.boolValueField
            End Get
            Set(ByVal value As Boolean)
                Me.boolValueField = value
            End Set
        End Property

        <DataMember()> _
        Public Property StringValue() As String
            Get
                Return Me.stringValueField
            End Get
            Set(ByVal value As String)
                Me.stringValueField = value
            End Set
        End Property

    End Class

The Service Contract is a contract that will be agreed to between the Client and the Server. Both the Client and the Server should be working with the same service contract. The one shown above is in the server. Inside the service, data is handled as simple types (e.g. GetData) or complex types (e.g. GetDataUsingDataContract). However, outside the Service these are handled as XML Schema Definitions. WCF data contracts provide a mapping between the data defined in the code and the XML Schema defined by the W3C, the standards organization. The service performed when the terms of the contract are properly adhered to is in the listing of the Service1.vb file shown here:

    ' NOTE: If you change the class name "Service1" here, you must also update the reference to "Service1" in App.config.
    Public Class Service1
        Implements IService1

        Public Function GetData(ByVal value As Integer) As String _
            Implements IService1.GetData
            Return String.Format("You entered: {0}", value)
        End Function

        Public Function GetDataUsingDataContract(ByVal composite As CompositeType) _
            As CompositeType Implements IService1.GetDataUsingDataContract
            If composite.BoolValue Then
                composite.StringValue = (composite.StringValue & "Suffix")
            End If
            Return composite
        End Function

    End Class

Service1 defines the two methods of the service by way of Functions. GetData accepts a number and returns a string. For example, if the Client enters a value of 50, the Server response will be "You entered: 50". The function GetDataUsingDataContract takes an input consisting of a Boolean and a string, and returns the same composite type, with 'Suffix' appended to the string when the Boolean is True.
JayWcf01 is a completed program with a default example contract, IService1, and a defined service, Service1. This program is complete in itself. It is good practice to provide your own names for the objects; nevertheless, the default names are accepted in this demo. In what follows, we test this program as-is and then slightly modify the contract and test it again. The testing in the next section will invoke a built-in client, and later on we will publish the service to localhost, which is an IIS 7 web server.

How to test this program

The program has a valid pair of contract and service, and we should be able to test this service. Windows Communication Foundation allows Visual Studio 2008 (and Visual Studio 2010 Express) to launch a host to test the service with a client. Build the program and, after it succeeds, hit F5. The WcfSvcHost is spawned and stays in the taskbar, as shown. You can click WcfSvcHost to display the WCF Service Host window, as shown. The host gets started as shown here. The service is hosted on the development server. This is immediately followed by the WCF Test Client user interface popping up, as shown. In this harness, you can test the service.
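Once the contract assembly is referenced by a client project, the same operations can also be exercised from code through a ChannelFactory. This is a sketch only—the endpoint address is an assumption based on the WCF Service Library template's default App.config, so check yours for the actual base address and binding:

    Imports System.ServiceModel

    Module TestClient
        Sub Main()
            ' Address assumed from the default App.config; adjust as needed.
            Dim address As New EndpointAddress( _
                "http://localhost:8732/Design_Time_Addresses/JayWcf01/Service1/")
            Dim factory As New ChannelFactory(Of IService1)(New WSHttpBinding(), address)
            Dim client As IService1 = factory.CreateChannel()

            Console.WriteLine(client.GetData(50)) ' Prints "You entered: 50"
            factory.Close()
        End Sub
    End Module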

Drupal 7 Preview

Packt
11 Aug 2010
3 min read
You'll need a localhost LAMP or XAMPP environment to follow along with the examples here. If you don't have one set up, I recommend using the Acquia Stack Drupal Installer: http://acquia.com/downloads. Once your testing environment is configured, download Drupal 7: http://drupal.org/drupal-7.0-alpha6.

Installing D7

Save the installer to your localhost Drupal /sites folder and extract it. Set up your MySQL database using your preferred method. Note to developers: D7's new database abstraction layer will theoretically support multiple database types, including SQLite, PostgreSQL, MSSQL, and Oracle. So if you are running Oracle, you may be able to use D7. Now load the installer page in your browser (note that I renamed my extracted folder to drupal7): http://localhost:8082/drupal7/install.php.

The install process is about the same as D6—you're still going to need to copy your /sites/default/default.settings.php file and rename it to settings.php. Also make sure to create your /files folder. Make sure the file has write permissions for the install process. Once you do this and have your database created, it's time to run the installer.

One immediate difference with the installer is that D7 now offers you a Standard or Minimal install profile. Standard will install D7 with the common Drupal functionality and features that you are familiar with. Minimal is the choice for developers who want only the core Drupal functionality enabled. I'll leave it set to the Standard profile. Navigate through the installer screens, choosing your language and adding your database information.

Enhancements

With D7 installed, what are the immediate noticeable enhancements? The overall look and feel of the administrative interface now uses overlay windows to present links to sections and content. Navigation in the admin interface now runs horizontally along the top of the site. Directly under the toolbar navigation is a shortcut link navigation. You can customize this by adding your own shortcuts pointing to various admin functionality. In the toolbar, Content points to your content lists. Structure contains links to Blocks, Content types, Menus, and Taxonomy. CCK is now built into Drupal 7, so you can create custom content types and manage custom fields without having to install modules. If you want to restore the user interface to look more like D6, you can do this by disabling the Overlay module or tweaking role permissions for the Overlay module.

Content Types

Two content types are enabled with Drupal 7 core. Article replaces the D6 Story type. Basic Page replaces the D6 Page type. Developers hope these more accurate names will help new Drupal users understand how to add content easily to their site.
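The file preparation steps above come down to a few shell commands. This sketch assumes the extracted folder sits at /var/www/drupal7 and that the /files folder belongs under sites/default (paths will differ on your stack); tighten the permissions again once the installer has finished:

    cd /var/www/drupal7
    cp sites/default/default.settings.php sites/default/settings.php
    mkdir sites/default/files
    # Let the installer write to both while it runs
    chmod 666 sites/default/settings.php
    chmod 777 sites/default/files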

Creating a Recent Comments Widget in Agile

Packt
10 Aug 2010
7 min read
(For more resources on Agile, see here.)

Introducing CWidget

Lucky for us, Yii is readymade to help us achieve this architecture. Yii provides a component class, called CWidget, which is intended for exactly this purpose. A Yii widget is an instance of this class (or its child class), and is a presentational component typically embedded in a view file to display self-contained, reusable user interface features. We are going to use a Yii widget to build a recent comments portlet and display it on the main project details page, so we can see comment activity across all issues related to the project. To demonstrate the ease of reuse, we'll take it one step further and also display a list of project-specific comments on the project details page.

To begin creating our widget, we are going to first add a new public method on our Comment AR model class to return the most recently added comments. As expected, we will begin by writing a test. But before we write the test method, let's update our comment fixture data so that we have a couple of comments to use throughout our testing. Create a new file called tbl_comment.php within the protected/tests/fixtures folder. Open that file and add the following content:

    <?php
    return array(
        'comment1' => array(
            'content' => 'Test comment 1 on issue bug number 1',
            'issue_id' => 1,
            'create_time' => '',
            'create_user_id' => 1,
            'update_time' => '',
            'update_user_id' => '',
        ),
        'comment2' => array(
            'content' => 'Test comment 2 on issue bug number 1',
            'issue_id' => 1,
            'create_time' => '',
            'create_user_id' => 1,
            'update_time' => '',
            'update_user_id' => '',
        ),
    );

Now we have consistent, predictable, and repeatable comment data to work with. Create a new unit test file, protected/tests/unit/CommentTest.php, and add the following content:

    <?php
    class CommentTest extends CDbTestCase
    {
        public $fixtures = array(
            'comments' => 'Comment',
        );

        public function testRecentComments()
        {
            $recentComments = Comment::findRecentComments();
            $this->assertTrue(is_array($recentComments));
        }
    }

This test will of course fail, as we have not yet added the Comment::findRecentComments() method to the Comment model class. So, let's add that now. We'll go ahead and add the full method we need, rather than adding just enough to get the test to pass. But if you are following along, feel free to move at your own TDD pace. Open Comment.php and add the following public static method:

    public static function findRecentComments($limit = 10, $projectId = null)
    {
        if ($projectId != null) {
            return self::model()->with(array(
                'issue' => array('condition' => 'project_id=' . $projectId)
            ))->findAll(array(
                'order' => 't.create_time DESC',
                'limit' => $limit,
            ));
        } else {
            // get all comments across all projects
            return self::model()->with('issue')->findAll(array(
                'order' => 't.create_time DESC',
                'limit' => $limit,
            ));
        }
    }

Our new method takes two optional parameters: one to limit the number of returned comments, the other to specify a specific project ID to which all of the comments should belong. The second parameter will allow us to use our new widget to display all comments for a project on the project details page. So, if the input project ID was specified, the method restricts the returned results to only those comments associated with the project; otherwise, all comments across all projects are returned.

More on relational AR queries in Yii

The above two relational AR queries are a little new to us. We have not been using many of these options in our previous queries.
Previously, we have been using the simplest approach to executing relational queries:

• Load the AR instance.
• Access the relational properties defined in the relations() method.

For example, if we wanted to query for all of the issues associated with, say, project id #1, we would execute the following two lines of code:

    // retrieve the project whose ID is 1
    $project = Project::model()->findByPk(1);
    // retrieve the project's issues: a relational query is actually being performed behind the scenes here
    $issues = $project->issues;

This familiar approach uses what is referred to as Lazy Loading. When we first create the project instance, the query does not return all of the associated issues. It only retrieves the associated issues upon an initial, explicit request for them, that is, when $project->issues is executed. This is referred to as lazy because it waits to load the issues. This approach is convenient and can also be very efficient, especially in those cases where the associated issues may not be required. However, in other circumstances, this approach can be somewhat inefficient. For example, if we wanted to retrieve the issue information across N projects, then using this lazy approach would involve executing N join queries. Depending on how large N is, this could be very inefficient. In these situations, we have another option. We can use what is called Eager Loading.

The Eager Loading approach retrieves the related AR instances at the same time as the main AR instances are requested. This is accomplished by using the with() method in concert with either the find() or findAll() methods for the AR query. Sticking with our project example, we could use Eager Loading to retrieve all issues for all projects by executing the following single line of code:

    // retrieve all project AR instances along with their associated issue AR instances
    $projects = Project::model()->with('issues')->findAll();

Now, in this case, every project AR instance in the $projects array already has its associated issues property populated with an array of issue AR instances. This result has been achieved by using just a single join query. We are using this approach in both of the relational queries executed in our findRecentComments() method. The one we are using to restrict the comments to a specific project is slightly more complex. As you can see, we are specifying a query condition on the eagerly loaded issue property for the comments. Let's look at the following line:

    Comment::model()->with(array(
        'issue' => array('condition' => 'project_id=' . $projectId)
    ))->findAll();

This query specifies a single join between the tbl_comment and the tbl_issue tables. Sticking with project id #1 for this example, the previous relational AR query would basically execute something similar to the following SQL statement:

    SELECT tbl_comment.*, tbl_issue.*
    FROM tbl_comment
    LEFT OUTER JOIN tbl_issue ON (tbl_comment.issue_id = tbl_issue.id)
    WHERE (tbl_issue.project_id = 1)

The added array we specify in the findAll() method simply sets an order by clause and a limit clause on the executed SQL statement. One last thing to note about the two queries we are using is how the column names that are common to both tables are disambiguated. Obviously, when the two tables being joined have columns with the same name, we have to make a distinction between the two in our query. In our case, both tables have the create_time column defined. We are trying to order by this column in the tbl_comment table and not the one defined in the tbl_issue table.
In a relational AR query in Yii, the alias name for the primary table is fixed as t, while the alias name for a relational table, by default, is the same as the corresponding relation name. So, in our two queries, we specify t.create_time to indicate that we want to use the primary table's column. If we wanted to instead order by the issue create_time column, we would alter the second query, for example, as such:

    return Comment::model()->with('issue')->findAll(array(
        'order' => 'issue.create_time DESC',
        'limit' => $limit,
    ));
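The excerpt stops short of the widget class itself, but a recent comments widget built on CWidget could look roughly like the following—the class name, property names, and view file are illustrative assumptions, not necessarily the book's exact code:

    <?php
    // Hypothetical location: protected/components/RecentCommentsWidget.php
    class RecentCommentsWidget extends CWidget
    {
        public $limit = 10;        // how many comments to show
        public $projectId = null;  // restrict to one project when set

        public function run()
        {
            // Reuse the model method we just wrote and tested.
            $comments = Comment::findRecentComments($this->limit, $this->projectId);
            // Renders protected/components/views/recentComments.php
            $this->render('recentComments', array('comments' => $comments));
        }
    }

Embedding it in a view is then a one-liner, which is what makes the widget easy to reuse on both the project listing and project details pages:

    <?php $this->widget('RecentCommentsWidget', array('projectId' => $model->id)); ?>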

NetBeans Platform 6.9: Working with Actions

Packt
10 Aug 2010
4 min read
(For more resources on NetBeans, see here.)

In Swing, an Action object provides an ActionListener for Action event handling, together with additional features, such as tool tips, icons, and the Action's activated state. One aim of Swing Actions is that they should be reusable, that is, they can be invoked from a menu item as well as a related toolbar button and keyboard shortcut. The NetBeans Platform provides an Action framework enabling you to organize Actions declaratively. In many cases, you can simply reuse your existing Actions exactly as they were before you used the NetBeans Platform, once you have declared them. For more complex scenarios, you can make use of specific NetBeans Platform Action classes that offer the advantages of additional features, such as more complex displays in toolbars and support for context-sensitive help.

Preparing to work with global actions

Before you begin working with global Actions, let's make some changes to our application. It should be possible for the TaskEditorTopComponent to open for a specific task. You should therefore be able to pass a task into the TaskEditorTopComponent. Rather than the TaskEditorPanel creating a new task in its constructor, the task needs to be passed into it and made available to the TaskEditorTopComponent. On the other hand, it may make sense for a TaskEditorTopComponent to create a new task, rather than being provided an existing task, which can then be made available for editing. Therefore, the TaskEditorTopComponent should provide two constructors. If a task is passed into the TaskEditorTopComponent, the TaskEditorTopComponent and the TaskEditorPanel are initialized. If no task is passed in, a new task is created and made available for editing.

Furthermore, it is currently only possible to edit a single task at a time. It would make sense to be able to work on several tasks at the same time in different editors. At the same time, you should make sure that a task is only opened once by the same editor. The TaskEditorTopComponent should therefore provide a method for creating new or finding existing editors. In addition, it would be useful if TaskEditorPanels were automatically closed for deleted tasks.

Remove the logic for creating new tasks from the constructor of the TaskEditorPanel, along with the instance variable for storing the TaskManager, which is now redundant:

    public TaskEditorPanel() {
        initComponents();
        this.pcs = new PropertyChangeSupport(this);
    }

Introduce a new method to update a task:

    public void updateTask(Task task) {
        Task oldTask = this.task;
        this.task = task;
        this.pcs.firePropertyChange(PROP_TASK, oldTask, this.task);
        this.updateForm();
    }

Let us now turn to the TaskEditorTopComponent, which currently cannot be instantiated either with or without a task being provided. You now need to be able to pass a task in for initializing the TaskEditorPanel. The new default constructor creates a new task with the support of a chained constructor, and passes this to the former constructor for the remaining initialization of the editor. In addition, it should now be possible to have several instances of the TaskEditorTopComponent, each responsible for a specific task. Hence, the class should be extended by a static method for creating new or finding existing instances. These instances are stored in a Map<Task, TaskEditorTopComponent>, which is populated by the former constructor with newly created instances.
The method checks whether the map already stores an instance responsible for the given task, and creates a new one if necessary. Additionally, this method registers a listener on the TaskManager so that the relevant editor is closed when a task is deleted. As an instance is now responsible for a particular task, this should be queryable, so we introduce another appropriate method. Consequently, the changes to the TaskEditorTopComponent look as follows (the listener class is declared static so it can be created from the static findInstance() method, and the shared TaskManager is held in a static field):

```java
private static Map<Task, TaskEditorTopComponent> tcByTask =
        new HashMap<Task, TaskEditorTopComponent>();
private static TaskManager taskMgr;

public static TaskEditorTopComponent findInstance(Task task) {
    TaskEditorTopComponent tc = tcByTask.get(task);
    if (null == tc) {
        tc = new TaskEditorTopComponent(task);
    }
    if (null == taskMgr) {
        taskMgr = Lookup.getDefault().lookup(TaskManager.class);
        taskMgr.addPropertyChangeListener(new ListenForRemovedNodes());
    }
    return tc;
}

private static class ListenForRemovedNodes implements PropertyChangeListener {
    public void propertyChange(PropertyChangeEvent arg0) {
        if (TaskManager.PROP_TASKLIST_REMOVE.equals(arg0.getPropertyName())) {
            Task task = (Task) arg0.getNewValue();
            TaskEditorTopComponent tc = tcByTask.get(task);
            if (null != tc) {
                tc.close();
                tcByTask.remove(task);
            }
        }
    }
}

private TaskEditorTopComponent() {
    this(Lookup.getDefault().lookup(TaskManager.class));
}

private TaskEditorTopComponent(TaskManager taskMgr) {
    this((taskMgr != null) ? taskMgr.createTask() : null);
}

private TaskEditorTopComponent(Task task) {
    initComponents();
    // ...
    ((TaskEditorPanel) this.jPanel1).updateTask(task);
    this.ic.add(((TaskEditorPanel) this.jPanel1).task);
    this.associateLookup(new AbstractLookup(this.ic));
    tcByTask.put(task, this);
}

public String getTaskId() {
    Task task = ((TaskEditorPanel) this.jPanel1).task;
    return (null != task) ? task.getId() : "";
}
```

With that, our preparations are complete and we can turn to the following discussion of Actions.
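To see how these pieces fit together, here is a minimal sketch of an ActionListener that opens (or reuses) the editor for a given task. The class name OpenTaskEditorAction and the way the task is supplied are illustrative assumptions, not part of the original tutorial:

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

// Hypothetical usage sketch: the class name and the way 'task' is
// obtained are illustrative assumptions.
public final class OpenTaskEditorAction implements ActionListener {

    private final Task task; // the task selected in the current context

    public OpenTaskEditorAction(Task task) {
        this.task = task;
    }

    @Override
    public void actionPerformed(ActionEvent e) {
        // findInstance() either returns the editor already responsible
        // for this task or creates a new one, so a task is never opened twice.
        TaskEditorTopComponent tc = TaskEditorTopComponent.findInstance(task);
        tc.open();          // dock the TopComponent into the editor area
        tc.requestActive(); // give it focus
    }
}
```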

Adding User Comments in Agile

Packt
10 Aug 2010
5 min read
(For more resources on Agile, see here.)

Iteration planning

The goal of this iteration is to implement feature functionality in the TrackStar application to allow users to leave and read comments on issues. When a user is viewing the details of any project issue, they should be able to read all comments previously added, as well as create a new comment on the issue. We also want to add a small fragment of content, or portlet, to the project-listing page that displays a list of recent comments left on all of the issues. This will be a nice way to provide a window into recent user activity and allow easy access to the latest issues that have active conversations.

The following is a list of high-level tasks that we will need to complete in order to achieve these goals:

- Design and create a new database table to support comments
- Create the Yii AR class associated with our new comments table
- Add a form directly to the issue details page to allow users to submit comments
- Display a list of all comments associated with an issue directly on the issue details page

Creating the model

As always, we should run our existing test suite at the start of our iteration to ensure all of our previously written tests are still passing as expected. By this time, you should be familiar with how to do that, so we will leave it to the reader to ensure that all the unit tests are passing before proceeding.

We first need to create a new table to house our comments. Following is the basic DDL definition for the table that we will be using:

```sql
CREATE TABLE tbl_comment
(
  `id` INTEGER NOT NULL PRIMARY KEY AUTO_INCREMENT,
  `content` TEXT NOT NULL,
  `issue_id` INTEGER,
  `create_time` DATETIME,
  `create_user_id` INTEGER,
  `update_time` DATETIME,
  `update_user_id` INTEGER
)
```

As each comment belongs to a specific issue, identified by the issue_id, and is written by a specific user, indicated by the create_user_id identifier, we also need to define the following foreign key relationships:

```sql
ALTER TABLE `tbl_comment` ADD CONSTRAINT `FK_comment_issue`
  FOREIGN KEY (`issue_id`) REFERENCES `tbl_issue` (`id`);
ALTER TABLE `tbl_comment` ADD CONSTRAINT `FK_comment_author`
  FOREIGN KEY (`create_user_id`) REFERENCES `tbl_user` (`id`);
```

If you are following along, please ensure this table is created in both the trackstar_dev and trackstar_test databases.

Once the database table is in place, creating the associated AR class is a snap. We simply use the Gii code creation tool's Model Generator command and create an AR class called Comment. Since we have already created the model class for issues, we need to explicitly add the relations for comments to the Issue model class. We will also add a relationship as a statistical query to easily retrieve the number of comments associated with a given issue (just as we did in the Project AR class for issues). Alter the Issue::relations() method as such:

```php
public function relations()
{
    return array(
        'requester' => array(self::BELONGS_TO, 'User', 'requester_id'),
        'owner' => array(self::BELONGS_TO, 'User', 'owner_id'),
        'project' => array(self::BELONGS_TO, 'Project', 'project_id'),
        'comments' => array(self::HAS_MANY, 'Comment', 'issue_id'),
        'commentCount' => array(self::STAT, 'Comment', 'issue_id'),
    );
}
```

Also, we need to change our newly created Comment AR class to extend our custom TrackStarActiveRecord base class, so that it benefits from the logic we placed in the beforeValidate() method.
Simply alter the beginning of the class definition as such:

```php
<?php
/**
 * This is the model class for table "tbl_comment".
 */
class Comment extends TrackStarActiveRecord
{
```

We'll make one last small change to the definitions in the Comment::relations() method. The relational attributes were named for us when the class was created. Let's change the one named createUser to author, as this related user does represent the author of the comment. This is just a semantic change, but it will help to make our code easier to read and understand. Change the method as such:

```php
/**
 * @return array relational rules.
 */
public function relations()
{
    // NOTE: you may need to adjust the relation name and the related
    // class name for the relations automatically generated below.
    return array(
        'author' => array(self::BELONGS_TO, 'User', 'create_user_id'),
        'issue' => array(self::BELONGS_TO, 'Issue', 'issue_id'),
    );
}
```

Creating the Comment CRUD

Once we have an AR class in place, creating the CRUD scaffolding for managing the related entity is equally easy. Again, use the Gii code generation tool's Crud Generator command with the AR class name, Comment, as the argument. Although we will not immediately implement full CRUD operations for our comments, it is nice to have the scaffolding for the other operations in place. As long as we are logged in, we should now be able to view the autogenerated comment submission form via the following URL: http://localhost/trackstar/index.php?r=comment/create
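With the relations in place, retrieving an issue's comments and its comment count becomes a one-liner, thanks to Yii's lazy loading of relational data. The following is a minimal sketch; the issue ID, the username attribute, and the output formatting are illustrative assumptions:

```php
<?php
// Hedged sketch: using the new relations on an Issue instance.
// The issue ID (1) and the 'username' attribute are assumptions.
$issue = Issue::model()->findByPk(1);

// 'commentCount' is the STAT relation: a single aggregate query.
echo "This issue has {$issue->commentCount} comment(s).\n";

// 'comments' is the HAS_MANY relation: lazily loads Comment AR objects.
foreach ($issue->comments as $comment) {
    // 'author' is the BELONGS_TO relation we renamed on the Comment class.
    echo $comment->author->username . ': ' . $comment->content . "\n";
}
```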

Oracle Enterprise Manager Key Concepts and Subsystems

Packt
10 Aug 2010
7 min read
(For more resources on Oracle, see here.)

Target

The term 'target' refers to an entity that is managed via Enterprise Manager Grid Control. The target is the most important entity in Enterprise Manager Grid Control; all other processes and subsystems revolve around the target subsystem. For each target there is a model of the target that is saved in the Enterprise Manager Repository. In this article, we will use the terms target and target model interchangeably.

The major building blocks of the target subsystem are:

- Target definition: All targets are organized into different categories, just like the actual entities that they represent; for example, there is a WebLogic Server target, an Oracle Database target, and so on. These categories are called target types. For each target type there is a definition in XML format that is available with the agent as well as with the repository. This definition includes:
  - Target attributes: Some attributes are common across all target types, and some are specific to a particular target type. An example of a common attribute is the target name, which uniquely identifies a managed entity. An example of a target-type-specific attribute is the name of the WebLogic Domain for a WebLogic Server target. Some attributes provide connection details for connecting to the monitored entity, such as the WebLogic Domain host and port. Other attributes contain authentication information used to authenticate and connect to the monitored entity.
  - Target associations: The target type definition includes the associations between related targets; for example, an OC4J target will have its association defined with a corresponding Oracle Application Server.
  - Target metrics: This includes all the metrics that need to be collected for a given target and the source for those metrics. We'll cover this in greater detail in the Metrics subsystem.

Every target that is managed through the EM belongs to one, and only one, target type category. For any new entity that needs to be managed by the Enterprise Manager, an instance of the appropriate target type is created and persisted in the repository. Out of the box, Enterprise Manager provides definitions for the most common target types, such as Host, Oracle Database, Oracle WebLogic Server, Siebel suite, SQL Server, SAP, the .NET platform, IBM WebSphere application server, JBoss application server, MQSeries, and so on. For a complete list of out-of-the-box targets, please refer to the Oracle website.
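Since each target model is persisted in the Enterprise Manager Repository, you can inspect it with plain SQL. The sketch below assumes read access to the repository's documented MGMT$ views; the view and column names should be checked against your Grid Control version before relying on them:

```sql
-- Hedged sketch: listing target models stored in the EM repository.
-- Assumes read access to the repository's MGMT$ views; names may
-- vary between Grid Control versions.
SELECT target_name,
       target_type,
       host_name
FROM   mgmt$target
WHERE  target_type = 'oracle_database'
ORDER  BY target_name;
```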
Now that we have a good idea about the target definition, it's time we get to know more about the target lifecycle.

Target lifecycle

As the target is very central to the Enterprise Manager, it's very important that we understand each stage in the target lifecycle. Please note that not all stages of the lifecycle may be needed for each target. However, to proceed further we need to understand each step. Enterprise Manager automates many of these stages, so in a real-life scenario many of these steps may be transparent to the user. For example, the Discovery and Configuration for monitoring stages are completely automated for the Oracle Application Server.

Discovery of a target

Discovery is the first step in the target lifecycle. Discovery is the process that finds the entities that need to be managed, builds the required target model for those entities, and persists the model in the management repository. For example, when the discovery process executed on a Linux server learns that there are OC4J containers on that server, it builds target models for the OC4Js and the Linux server, and persists those target models in the repository.

The agent has various discovery scripts that are used to identify the various target types. Besides discovery, these scripts build a model for the discovered target and fill in all of the attributes for that target; we learnt about target attributes in the previous section. Some discovery scripts are executed automatically as part of the agent installation, so no user input is needed; for example, a discovery script for the Oracle Application Server is automatically triggered when an agent is installed. For other discovery scripts, the user needs to provide some input parameters. An example of this is the WebLogic server, where the user needs to provide the port number of the WebLogic Administration Server and credentials to authenticate and connect to it. The Enterprise Manager console provides an interface for such discovery.

Discovery of targets can happen in two modes: local and remote. In local mode, the agent runs on the same host as the target. In remote mode, the agent can run on a different host. All targets can be discovered in local mode, and some targets can also be discovered in remote mode; for example, discovery of WebLogic servers can happen in either mode. One important point to note is that the agent that discovered a target also monitors that target. For example, if a WebLogic Server target is discovered through a remote agent, it is monitored through that same remote agent.

Configuration for monitoring

After discovery, the target needs to be configured for monitoring. The user needs to provide parameters that the agent uses to connect to the target and collect metrics. These parameters include monitoring credentials, host, and port information, using which the agent can connect to the target to fetch the metrics. The Enterprise Manager uses these parameters to connect, authenticate, and collect metrics from the targets. For example, to monitor an Oracle database, the end user needs to provide the user ID and password that can be used for authentication when collecting performance metrics using the SNMP protocol. The Enterprise Manager Console provides an interface for configuring these parameters. For some targets, such as the Application Server, this step is not needed, as all the metrics can be fetched anonymously. For some other targets, such as Oracle BPEL Process Manager, this step is needed only for detailed metrics: basic metrics are available without any monitoring configuration, but for advanced metrics, monitoring credentials need to be provided by the end user. In this case, the monitoring credentials are the user ID and password used to authenticate when connecting to BPEL Process Manager for collecting performance metrics.

Updates to a target

Over a period of time, some target properties, attributes, and associations with other targets change, and the EM target model that represents the target should be updated to reflect those changes.
It is very important that end users see the correct model in Enterprise Manager, to ensure that all targets are monitored correctly. For example, if in a given WebLogic Cluster a new WebLogic Server is added and an existing WebLogic Server is removed, Enterprise Manager's target model needs to reflect that. Or, if the credentials to connect to the WebLogic Admin Server change, the target model should be updated with the new credentials. The Enterprise Manager console provides a UI to update such properties. If the target model is not updated, there is a risk that some entity may not be monitored; for example, if a new WebLogic server is added but the target model of the domain is not updated, the new WebLogic server will not be monitored.

Stopping monitoring of a target

Every IT resource has some maintenance window or planned 'down-time'. During such time, it's desirable to stop monitoring a target and collecting metrics for that resource. This can be achieved by putting the target into a blackout state. In a blackout state, agents do not collect monitoring data for a target and do not generate alerts. After the maintenance activity is over, the blackout can be cleared from the target and routine monitoring can start again. The Enterprise Manager Console provides an interface for creating and removing the blackout state for one or more targets.

Oracle Universal Content Management: How to Set Up and Change Workflows

Packt
09 Aug 2010
4 min read
(For more resources on Oracle, see here.)

How to set up and change workflows

First things first. Let's start by looking at the tools that you will be using to set up and configure your workflows.

Discover the Workflow Admin application

Go to Administration | Admin Applets and launch Workflow Admin. The Workflow Admin application comes up (as shown in the following screenshot). There are three tabs:

- Workflows: This tab is used for administering Basic or Manual Workflows.
- Criteria: This tab deals with Automatic or Criteria Workflows, the type we will be using most often.
- Templates: This is the place where you can pre-assemble Workflow Templates, reusable pieces that you can use to create new basic workflows.

Let's create a simple automatic workflow. I call it automatic because content enters the workflow automatically when it is modified or created. If you will be using e-mail notifications, then be sure to check your Internet Configuration screen in Admin Server. I'll walk you through the steps in using automatic workflows.

Lab 7: Using automatic workflows

Here's the process for creating a criteria workflow.

Creating a criteria workflow

Follow these steps:

1. Go to the Criteria tab and click on Add. The New Criteria Workflow dialog comes up (as shown in the following screenshot).
2. Fill in Workflow Name and Description.
3. Pick the Security Group. Only items with the same security group as the workflow can enter it. Let's use the security group we've created: select accounting.
4. We're creating a Criteria Workflow, so check the Has Criteria Definition box. Now you can specify the criteria that content must match to enter the workflow. For the sake of this lab, let's pick Account for the Field, and accounting/payable/current for the Value.

Please note that a content item must match at least two conditions to enter the workflow: it must belong to the same security group as the workflow, and it must match the criteria of the workflow. As soon as a new content item is created with a Security Group of accounting and a Content Account value of accounting/payable/current, it will enter our workflow. It will not enter the workflow if its metadata is simply updated to these values; it takes a new check-in for an item to enter a criteria workflow. If you need items to enter a workflow after a metadata update, then consider custom components available from Fishbowl Solutions (www.fishbowlsolutions.com).

You can use any metadata field and value pair as criteria for entering the workflow, but you can only have one condition. What if that's not enough? If you need to perform additional checks before you can accept an item into a workflow, then keep your criteria really open and do your checks in the workflow itself. I'll show you how, later in this article.

The next diagram illustrates how a content item flows through a criteria workflow. You may find it useful to refer back to it as you follow the steps in this lab.

OK. We have a workflow created, but there are two problems with it: it has no steps in it, and it is disabled. Let's begin by seeing how to add workflow steps.

Adding workflow steps

Here's how you add workflow steps:

1. Click on the Add button in the Steps section on the right (as shown in the following screenshot). The Add New Step dialog opens.
2. Fill in the step name and description (as shown in the following screenshot).
3. Click on the Add User button on the right and select approvers for this step. Also add yourself to the list of approvers so you can test the workflow.
4. Switch to the Exit Conditions tab (as shown in the following screenshot). Here you can change the number of approvers required to move the item to the next step. You can require all approvers to advance a step, or just any one, as shown in the screenshot. And if you put zero in the text box, no approvers will be required at all; they will still receive the notification, but the item will go immediately to the next step. When the current step is the last one, the workflow will end and the new revision will be released into the system.

What do I mean by that? Until a workflow is complete, revisions that are currently in the workflow will not come up in searches and will not show on the Web. You will still see them in the content info screen, but that's it.

5. OK the dialog.

You now have a workflow with one step. Let's test it. But first, you need to enable the workflow.

More Things you can do with Oracle Content Server workflows

Packt
09 Aug 2010
5 min read
(For more resources on Oracle, see here.)

The top three things

As we've just seen, the most common things you can do are these:

- Get content approved: This is the most obvious use of the workflow we've just seen.
- Get people notified: Remember, when we were adding workflow steps, there was a number of required approvers on the Exit Conditions tab in the Add New Step dialog. If we set that to zero, we accomplish one important thing: approvers will get notified, but no action is required of them. It's a great way to "subscribe" a select group of people to an event of your choice.
- Perform custom actions: And if that's not enough, you can easily add custom scripts to any step of a workflow. You can change metadata, release items, and send them to other workflows. You can even invoke your custom Java code.

And here's another really powerful thing you can do with custom workflow actions: you can integrate with other systems and move from the local workflow to process orchestration. You can use a Content Server workflow to trigger external processes. UCM 10gR3 has an Oracle BPEL integration built in. This means that a UCM workflow can be initiated by (or can itself initiate) a BPEL workflow that spans many systems, not just the UCM. This makes ERP systems such as Siebel, PeopleSoft, SAP, and Oracle e-Business Suite easily accessible to the UCM, and content inside the UCM can easily be made available to these systems.

So let's look at jumps and scripting.

Jumps and scripting

Here's how to add scripting to a workflow:

1. In Workflow Admin, select a step of the workflow we've just created.
2. Click on the Edit button on the right. The Edit Step dialog comes up.
3. Go to the Events tab (as shown in the following screenshot).

There are three events that you can add custom handlers for:

- Entry: This event triggers when an item arrives at the step.
- Update: This happens when an item or its metadata is updated. It's also initiated every hour by a timer event, Workflow Update Cycle. Use it for sending reminders to approvers or escalating the item to an alternative person after your approval period has expired.
- Exit: This event is triggered when an item has been approved and is about to exit the step. If you have defined Additional Exit Conditions on the Exit Conditions tab, then those will be satisfied before this event fires.

The following diagram illustrates the sequence of states and corresponding events that are fired when a content item arrives at a workflow step.

Great! But how do we actually add the jumps and custom scripts to a workflow step?

How to add a jump to a workflow step

Let's add an exception where content submitted by sysadmin will bypass our Manager Approval workflow. We will use a jump: a construct that causes an item to skip the normal workflow sequence and follow an alternative path. Here's how to do it:

1. Add a jump to the Entry event of our very first step. On the Events tab of the Edit Step dialog, click on the Edit button next to the Entry event. The Edit Script dialog displays (as shown in the following screenshot).
2. Click on the Add button. The Add Jump dialog comes up (as shown in the following screenshot).
3. Let's call the jump Sysadmin WF bypass. You don't need to change anything else at this point. Click on OK to get back to the Edit Script dialog.
4. In the Field drop-down box, pick Author.
5. Click on the Select… button next to the Value box. Pick sysadmin (if you have trouble locating sysadmin in the list of users, make sure that the filter checkbox is un-checked).
6. Click the Add button below the Value field. Make sure that your clause appears in the Script Clauses box below.
7. In the Target Step dropdown, pick Next Step. Once you have done so, the value will change to its script equivalent, @wfCurrentStep(1). If you have more than one step in the workflow, change 1 to the number of steps you have. This will make sure that you jump past the last step and exit the workflow. Here's how the completed dialog will look (as shown in the following screenshot).
8. Click on OK to close. You're now back on the Events tab of the Edit Step dialog. Notice that a few lines of script have been added to the box next to the Entry event (as shown in the following screenshot).
9. OK the dialog.

It's time to test your changes:

1. Check in a new document. Make sure you set the Author field to sysadmin. Set your Security Group to accounting, and Account to accounting/payable/current. If you don't, the item will not enter our workflow in the first place (as shown in the following screenshot).
2. Complete your check-in and follow the link to go to the Content Info page. See the status of the item: it should be set to Released. That's right, the item went right out of the workflow.
3. Check in a new document again, but use some other author. Notice how your item will enter the workflow and stay there.

As you've seen, the dialog we used for creating a jump is simply a code generator. It created the few lines of script we needed to add the handler for the Entry event. Click on the Edit button next to that code and pick Edit Current to study it. You can find all the script function definitions in the iDoc Reference Guide.

Perfect! And we're still not done. What if you have a few common steps that you'd like to reuse in a bunch of workflows? Would you just have to manually recreate them? Nope. There are several solutions that allow you to reuse parts of a workflow. The one I find to be most useful is sub workflows.
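For reference, the generated script looks roughly like the following Idoc Script fragment. This is a hedged reconstruction of what the Edit Current view might show; the exact variable names the dialog generates can differ between Content Server versions:

```
[[% Hedged sketch of a generated jump in Idoc Script; the exact
    structure may differ between Content Server versions. %]]
<$if dDocAuthor like "sysadmin"$>
    <$wfSet("wfJumpName", "Sysadmin WF bypass")$>
    <$wfSet("wfJumpTargetStep", wfCurrentStep(1))$>
    <$wfSet("wfJumpEntryNotifyOff", "0")$>
<$endif$>
```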

Metadata in Oracle Universal Content Management

Packt
09 Aug 2010
5 min read
Let's begin by looking at the metadata.

Exploring metadata

In case you forgot, metadata fields are there to describe the actual data, such as the file name, shooting date, and camera name for a digital picture. There are two types of metadata fields: Standard and Extended (or Custom). Let's take a closer look at what they can do for us.

Standard metadata

Standard metadata is essential for the system to function. These are fields like content ID, revision ID, check-in date, and author. Let's take a quick look at all of them so you have the full picture.

Lab 2: Exploring standard metadata

1. Click on the Quick Search button on the top right. Yes, leave the search box blank. If you do that, you'll get all content in the repository.
2. In the last column on the Search Results page, click on the i icon on any of the result rows. That brings up the Content Info screen.

From this screen there is no way to tell which fields are Standard and which are Extended. So how do you tell?

Explore the database

That's right. A Content Server uses a relational database, like Oracle or SQL Server, to store its metadata, so let's look there. If you are using SQL Server 2005 as your database, open SQL Server Management Studio; if not, bring up your SQL tool of choice. Check the list of columns in the table called Revisions (as shown in the following screenshot).

Most of the column names in Revisions are the standard metadata fields. Here's a list of the fields you will be using most often:

- dID: ID of the document revision. This number is globally unique. If you have a project plan with three revisions, each of the three will have a unique dID, and all of them will have the same Content ID.
- dDocName: This is the actual Content ID.
- dDocType: Content type of the document.

dDocName, or Content ID, is the unique identifier for a content revision set. dID is the unique identifier of each individual content revision within a set. Being able to identify a content revision set is very useful, as it shows and tracks (makes auditable) the changes of content items over time. Being able to identify each individual revision with dID is also very useful, so we can work with specific content revisions. This is one of the great advantages of the Content Server over other systems, which only store the changes between revisions. Full revision sets as well as individual revisions are managed objects, and each one can be accessed by its own unique URL.

Now run this SQL statement:

```sql
select * from Revisions;
```

This shows the actual documents in the system and their values for the standard meta fields (as shown in the following screenshot).

And now let's look at the all-important Content Types.

Content Types

Content Type is a special kind of meta field. That's all. UCM puts a special emphasis on it, as this is the value that differentiates a project plan from a web page, and a team photo from a vendor invoice. You may even choose to change the way your check-in and content info forms look, based on the type of the document. Let's look at how UCM handles Content Types.

Lab 3: Exploring content types

1. In Content Server, go to Administration | Admin Applets.
2. Launch the Configuration Manager.
3. Select Options | Content Types... (as shown in the following screenshot). The Content Types dialog opens.

As you can see, out of the box, Content Server has seven types, one for each imaginary department. This is a good way of segregating content. You can also go by the actual type of content. For instance, you can have one Content Type for an Invoice and one for a Project Plan.
They will also have different meta fields. For instance, an Invoice will have a Contract Number and a Total Amount, while a Project Plan will have a project name and a manager's name. Now let me show you how to add content types.

How to add a Content Type

It's easy to add a new Content Type. Just click on Add..., then fill in the type name and the description. You can also select an icon for the new type. What if you need to upload a new icon? Just make it into an 8-bit GIF file, 30x37 px, 96 dpi, and upload it to:

C:\oracle\ucm\server\weblayout\images\docgifs

If your install path is different, or you're not running on Windows, then make the appropriate corrections.

How to edit or delete a Content Type

The only thing to know about editing is that you cannot really change the type name. All you can update is the icon or the description. If you're ready to delete a type, then make sure there is no content in the repository that's using it. Either update it all or delete it. How would you go about doing a mass-update? I'll show you one of the ways, using Archiver (Archiver is out of the scope of this article).

And now let's proceed to Custom Metadata.
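As a bridge to the custom metadata discussion, here is a hedged SQL sketch that pulls standard and custom fields together. It assumes the default UCM schema, where custom fields live in the DocMeta table (prefixed with x) and join to Revisions on dID; check the table layout in your own install before relying on it:

```sql
-- Hedged sketch: combining standard fields (Revisions) with custom
-- fields (DocMeta). Assumes the default UCM schema; xComments is a
-- stock custom field, and ADACCT is one of the out-of-the-box types.
SELECT r.dID,
       r.dDocName,
       r.dDocType,
       m.xComments
FROM   Revisions r
       JOIN DocMeta m ON m.dID = r.dID
WHERE  r.dDocType = 'ADACCT';
```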

Building a Consumer Review Website using WordPress 3

Packt
06 Aug 2010
15 min read
(For more resources on WordPress, see here.)

Building a consumer review website will allow you to supply consumers with the information that they seek and then, once they've decided to make a purchase, your site can direct them to a source for the product or service. This process can ultimately allow you to earn some nice commission checks, because it's only logical that you would affiliate yourself with a number of the sites to which you will be directing consumers. The great thing about using the WP Review Site plugin to build your consumer review website is that you can provide people with an unbiased source of public opinions on any product or service that you can imagine. You will never have to resort to the hard sell in order to drive traffic to the companies that you've affiliated yourself with. Instead, consumers can research the reviews posted on your website and, ultimately, make a purchase feeling confident that they're making the right decision.

In this article, you will learn how to:

- Present reviews in the most convenient way possible for visitors browsing your site
- Specify the ratings criteria that site visitors will use when reviewing the products or services included on your website
- Display informational comparison tables on your site's index and category pages
- Provide visitors with the location of local businesses using Google Maps
- Perform the additional steps required when writing a post, now that the WP Review Site plugin has been introduced into the process
- Perform either automatic or manual integration, so that you can use a theme of your own rather than either of the ones provided with this plugin

Once this project is complete, you will have succeeded in creating a site that's similar to the one shown in the following screenshot:

Introducing WP Review Site

With the WP Review Site plugin you will be able to build a consumer review site where visitors can share their opinions about the products or services of your choosing. The plugin, which can be found at WP Review Site, can be used to build a dedicated review site or, if you would like consumer reviews to make up only a subsection of your website, you can specify certain categories where they should appear. This plugin gives you complete control over where ratings appear and where they don't, since you can choose to include or exclude them on any category, page, or post.

The WP Review Site plugin seamlessly integrates with WordPress by, among other things, altering the normal appearance and functionality of the comment submission form. The plugin provides visitors with a way to write a review and assign stars to the rating categories that you previously defined. They can also write a review and opt to provide no stars without harming the overall rating presented on your site, since no stars is interpreted as though no rating was given.

The WP Review Site plugin makes it easy for you to present your visitors with concise information. Using the features available with this plugin, you can build comparison tables based upon your posts and user reviews. In order to accomplish this, you will need to configure a few settings, and then the plugin will take care of the rest.

Typically, WordPress displays posts in chronological order, but that doesn't make much sense on a consumer review site, where visitors want to view posts based upon other factors, such as the number of positive reviews that a particular product or service has received.
The developer behind WP Review Site took that into consideration and included two alternative sorting methods for your site's posts. The developer has even included a Bayesian weighting feature, so that reviews are ordered in the most logical way possible.

Right about now, you're probably wondering what Bayesian weighting is and how it works. What it does is provide a way to mathematically calculate the rating of products and/or services based upon the credibility of the votes that have been cast. If an item receives only a few votes, then it can't be said with any certainty that that's how the general public feels. If an item receives several votes, then it can be safely assumed that many others hold the same opinion. So, with Bayesian weighting, a product that has received only one five-star review won't outrank another that has received fifteen four-star reviews. As the product that received one five-star review garners more ratings, its reviews will grow in credibility and, if it continues to receive high ratings, it will eventually become credible enough to outrank the other reviews.

If you're planning to create a website where visitors can come and review local businesses, then you might find this plugin's ability to automatically embed Google Maps quite handy. After configuring the settings on the plugin's Google Maps screen, you will be able to type the address for a business into a custom field when writing a post, and the plugin will take care of the rest.

The WP Review Site plugin also includes two sidebar widgets that can be used with any widget-ready theme. These widgets will allow you to display a list of top rated items and a list of recent reviews. Lastly, the themes provided with this plugin include built-in support for the hReview microformat. This means that Google will easily be able to extract and highlight reviews from your website. That feature will prove to be very beneficial for driving search engine traffic to your site.

Installing WP Review Site

Once you've installed WordPress, you can then concentrate on the installation of the WP Review Site plugin and its accompanying themes. First, extract the wpreviewsite.zip archive. Inside you will find a plugins folder and a themes folder. Within the plugins folder is another folder named review-site. Since none of these folders are zipped, you will need to upload them using either an FTP program or the file manager provided by your web host. So, upload the review-site folder to the wp-content/plugins directory on your server. If you plan to use one of the themes provided with this plugin, then you will next need to upload the contents of the themes folder to the wp-content/themes directory.

Setting up and configuring WP Review Site

With the installation process complete, you will now need to activate the WP Review Site plugin. Once that's finished, a Review Site menu will appear on the left side of your screen. This menu contains links to the settings screens for this plugin. Before you delve into the configuration process, you must first activate the theme that you plan to use on your consumer review website. Using one of the provided themes is a bit easier; using any other theme means that you must integrate the functionality of WP Review Site into it. Now that you know the benefits offered by the themes that are bundled with this plugin, click on Appearance | Themes. Once there, activate either Award Winning Hosts, Bonus Black, or a theme of your choice.
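The plugin doesn't document its exact formula here, but Bayesian weighting along these lines is commonly implemented as the classic Bayesian average, where an item's raw mean is pulled toward the site-wide mean until it accumulates enough votes. Here is a minimal sketch in PHP; the function name and the vote threshold are illustrative assumptions:

```php
<?php
/**
 * Hedged sketch of a Bayesian (weighted) average rating.
 * $avgRating  raw mean rating of this item
 * $numVotes   number of votes the item has received
 * $siteMean   mean rating across all items on the site
 * $minVotes   votes needed before an item's own mean dominates
 *             (the threshold value is an illustrative assumption)
 */
function bayesian_rating($avgRating, $numVotes, $siteMean, $minVotes = 10)
{
    // With few votes the result stays near the site-wide mean;
    // as votes accumulate it converges to the item's own mean.
    return ($numVotes / ($numVotes + $minVotes)) * $avgRating
         + ($minVotes / ($numVotes + $minVotes)) * $siteMean;
}

// One 5-star review vs. fifteen 4-star reviews (site mean 3.5):
echo bayesian_rating(5.0, 1, 3.5);  // ~3.64: one vote barely moves it
echo "\n";
echo bayesian_rating(4.0, 15, 3.5); // 3.8: fifteen votes outrank it
```

Notice how this reproduces the behavior described above: the item with fifteen four-star reviews ends up ranked higher than the item with a single five-star review.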
General Settings

Navigate to Review Site | General Settings to be taken to the first of the WP Review Site settings screens. On this screen, Sort Posts By is the first setting that you will encounter. Rather than displaying reviews in the normal chronological order used by WordPress, you should instead select either the Average User Rating (Weighted) or the Number of Reviews/Comments option. Either of these settings will provide a much more user-friendly experience for your visitors.

If you want to make it impossible for site visitors to submit a comment without also choosing a rating, tick the checkbox next to Require Ratings with All Comments. If you don't want to make this a requirement, then you can leave this setting as is. This setting will, of course, only apply to posts that you would like your visitors to rate. On normal posts, which don't include rating stars in the comment form area, it will still be possible for your visitors to submit a comment.

When using one of the themes provided with the plugin, none of the other settings on this screen need to be configured. If you would like to integrate this plugin into a different theme then, depending upon the method that you choose, you may need to revisit this screen later on. No matter how you're handling the theme issue, you can, for now, just click Save Settings before proceeding to the next screen.

Rating Categories

To access the next settings screen, click on Review Site | Rating Categories. Here you can add categories for people to rate when submitting reviews. These categories shouldn't be confused with the categories used in WordPress for organizational purposes; these WP Review Site categories are more like rating criteria. By default, WP Review Site includes a category called Overall Rating, but you can click the remove link to delete it if you like.

To add your first rating category, simply enter its title into the Add a Category textbox and then click Save Settings. The screen will then refresh, and your newly created rating category will appear under the Edit Rating Categories section of the screen. To add additional rating categories, simply repeat the process.

Once you've finished adding rating categories, turn your attention to the Bulk Apply Rating Categories section of the screen. In the Edit Rating Categories area you will see all of the rating categories that you just finished adding to your site. If you want to simplify matters and apply these rating categories to all of the posts on your site, tick the checkbox next to each of the available rating categories. Then, from the Apply to Posts in Category drop-down menu, select All Categories. This is most likely the configuration that you will use if you're building a website entirely dedicated to providing consumer reviews. Once you've finished, click Save Settings.

If you instead want your newly added rating categories to appear only in certain categories, then bypass the Edit Rating Categories area for now and first look at the Apply to Posts in Category settings area. Currently this will only show All Categories and Uncategorized. The lack of categories in this menu is caused by two things: first, you haven't added any WordPress categories to your site yet; secondly, categories won't be included in this menu until they contain at least one post. To solve part of this problem, open a new browser window and navigate to Posts | Categories.
Then, add the categories that you would like to include on your website. Now, click on Posts | Edit to visit the Edit Posts screen. At the moment, the Hello world! post is the only one published on your site, and you can use it to force your site's categories to appear in the Apply to Posts in Category drop-down menu. So, hover over the title of this post and then, from the now visible set of links, click Quick Edit. In the Categories section of the Quick Edit configuration area, tick the checkbox next to each of the categories found on your site. Then, click Update Post. After content has been added to each of your site's categories, you can delete the Hello world! post, since you will no longer need it to force the categories to appear in the Apply to Posts in Category drop-down menu.

Now, return to the Rating Categories screen and select the first category that you want to configure from the Apply to Posts in Category drop-down menu. With that selected, in the Edit Rating Categories area, tick the checkbox next to each rating category that you want to appear within that WordPress category. Then, click Save Settings. Repeat this process for each of the WordPress categories to which you would like rating categories to be added.

Comparison Tables

If you wish, you can add a comparison table to either the home page or the category pages on your site. To do this, you need to visit the Comparison Tables screen, so click on Review Site | Comparison Tables.

If you want to display a comparison table on your home page, tick the checkbox next to Display a Comparison Table on Home Page. If you would like to include all of your site's categories in the comparison table that will be displayed on the home page, leave the Categories To Display On Home Page textbox as is. If you would prefer to include only certain categories, enter their category IDs, separated by commas, into the textbox instead. You can learn the ID numbers that have been assigned to each of your site's categories by opening a new browser window and navigating to Posts | Categories. Once there, hover over the title of each of the categories found on the right-hand side of your screen. As you do, look at the URL that appears in your browser's status bar and make a note of the number that appears directly after tag_ID=. That's the number that you will need to enter on the Comparison Tables screen.

If you want to display a comparison table in one or more categories, tick the checkbox next to Display a Comparison Table on Category Page(s). If you want a comparison table to be displayed on each of your category pages, leave the Categories To Display Comparison Table On textbox at its default. Otherwise, enter a list of comma-separated category IDs into the textbox for the categories where you want to display comparison tables.

The Number of Posts in the Table setting is currently set to 5, but you can enter another value if you would like a different number of posts to be included in each comparison table. When writing posts, you might use custom fields to include additional information. If you would like that information to be displayed in your comparison tables, you will need to enter the names of those fields, separated by commas, into the Custom Fields to Display textbox. Lastly, you can change the text that appears in the Text for the Visit Site link in the Table setting if you wish, or you may leave it at its default.
With these configurations complete, click Save Settings. In this screenshot, you can see what a populated comparison table will look like on your website:

Google Maps

If you plan on featuring reviews centered around local businesses, then you might want to consider adding Google Maps to your site. This will make it easy for visitors to see exactly where each business is located. You can access this settings screen by clicking on Review Site | Google Maps.

To activate this feature, tick the checkbox next to Display a Google Map on Posts/Pages with mapaddress Custom Field. Next, you need to use the Map Position setting to specify where these Google Maps will appear in relation to the content. You can choose either the Top of Post or Bottom of Post position.

The Your Google Maps API Key textbox is next. Here you will need to enter a Google Maps API key. If you don't have a Google Maps API key for this domain, then you will need to visit Google to generate one. To do this, right-click on the link provided on the Google Maps screen and open that link in a new browser window. You will then be taken to the Google Maps API sign-up screen. If you've ever signed up to use any of Google's services, then you can use that username and password to log in. If you don't have an account with Google, create one now. Take a moment to read the information and terms presented on the Google Maps API sign-up page. After you've finished reviewing this text, if it's acceptable to you, enter the URL for your website into the My web site URL textbox and then click Generate API Key. You will then be taken to a thank-you screen where your API key will be displayed. Copy the API key and return to the Google Maps screen on your website. Once there, paste your API key into the textbox for Your Google Maps API Key.

The Map Width and Map Height settings are next. By default, these are configured to 400px and 300px. If you would prefer that the maps be displayed at a different size, enter new values into each of these textboxes. The last setting is Map Zoom Level (1-5), which is currently set to 3. This setting should be fine, but you may change it if you wish. Finally, click Save Settings.

When you publish a post that includes the mapaddress custom field, this is what the Google Map will look like on your site:

FreeSWITCH 1.0.6: SIP and the User Directory

Packt
05 Aug 2010
7 min read
(For more resources on Telephony, see here.)

Understanding the FreeSWITCH user directory

The FreeSWITCH user directory is based on a centralized XML document, comprised of one or more <domain> elements. Each <domain> can contain either <user> elements or <groups> elements. A <groups> element contains one or more <group> elements, each of which contains one or more <user> elements. A small, simple example would look like the following:

```xml
<section name="directory">
  <domain name="example.com">
    <groups>
      <group name="default">
        <user id="1001">
          <params>
            <param name="password" value="1234"/>
          </params>
        </user>
      </group>
    </groups>
  </domain>
</section>
```

Some more basic configurations may not need to organize users into groups, so it is possible to omit the <groups> element completely and just insert several <user> elements into the top <domain> element. The important thing is that each user@domain derived from this directory is available to all components in the system; it's a single centralized directory for storing all of your user information. If you register as a user with a SIP phone, or if you try to leave a voicemail message for a user, FreeSWITCH looks in the same place for user data. This is important because it limits duplication of data, and makes the system more efficient than it would be if each component kept track of its users separately.

This system should work well for a small system with a few users in it, but what about a large system with thousands of users? What if a user wants to connect an existing database to FreeSWITCH to provide the user directory? Well, using mod_xml_curl, we can create a web service that receives the request for entries in the user directory, in the same way a web page receives the results of a form submission. In turn, that web service can query an existing database of users, formatted any way possible, and construct the XML records in the format that the FreeSWITCH registry expects. mod_xml_curl then returns the data to the module requesting the lookup. This means that instant, seamless integration with your existing setup is possible; your data is still kept in its original, central location.

The user directory can be accessed by any subsystem within FreeSWITCH. This includes modules, scripts, and the FSAPI interface, among others. In this article, we are going to learn how the Sofia SIP module employs the user directory to authenticate your softphone or hardware SIP phone.

If you are a developer, you may appreciate some nifty things you can do with your user directory, such as adding a <variables> element to the <domain>, <groups>, or <user> element. In this element you can set many <variable> elements, allowing you to set channel variables that will apply to every call made by a particular authenticated user. This can come in very handy in the Dialplan, because it allows you to make user-specific routing decisions. It is also possible to define IP address ranges using CIDR notation, which can be used to authenticate particular users based on the remote network address they connect from. This removes the need for a login and password, if your user always connects from the same remote IP address.

The directory is implemented in pure XML. This is advantageous for several reasons, not the least of which is the "X" in XML: Extensible. Since XML is, by definition, extensible, the directory structure is also extensible. If we need to add a new element into the directory, we can do so simply by adding to the existing XML structure.
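As an illustration of those last two points, here is a hedged sketch of a directory entry that authenticates a user by network address instead of a password and attaches channel variables to every call the user makes. The domain name, user id, CIDR range, and variable values are illustrative assumptions:

```xml
<!-- Hedged sketch: the domain, user id, CIDR range, and variable
     values below are illustrative assumptions, not defaults. -->
<domain name="example.com">
  <user id="1002" cidr="192.168.10.0/24">
    <params>
      <!-- No password param needed: calls from the CIDR range above
           are authenticated by IP address alone. -->
    </params>
    <variables>
      <!-- Set on every call this user makes; usable in the Dialplan
           for user-specific routing decisions. -->
      <variable name="user_context" value="default"/>
      <variable name="accountcode" value="1002"/>
    </variables>
  </user>
</domain>
```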
Authentication versus authorization
Authentication is the process of identifying a user. Authorization is the process of determining the level of access of a user. Authentication answers the question, "Is this person really who he says he is?" Authorization answers the question, "What is this person allowed to do here?" When you see expressions such as "IP Auth" and "Digest Auth", remember that they are referring to the two primary ways of identifying (that is, authenticating) a user. IP authorization is based upon the user's IP address. Digest authentication is based upon the user supplying a username and password. SIP (and FreeSWITCH) can use either method. Visit http://en.wikipedia.org/wiki/Digest_access_authentication for a discussion of how digest authentication works.

Working with the FreeSWITCH user directory

The default configuration has one domain with a directory of 20 users. Users can be added or removed very easily, and there is no set limit to how many users can be defined on the system. The list of users is collectively referred to as the directory. Users can belong to one or more groups. Finally, all the users belong to a single domain. By default, the domain is the IP address of the FreeSWITCH server.

In the following sections we will discuss these topics:

- User features
- Adding a user
- Testing voicemail
- Groups of users

User features

Let's begin by looking at the XML file that defines a user. Locate the file conf/directory/default/1000.xml and open it in an editor. You should see a file like the following:

```xml
<include>
  <user id="1000">
    <params>
      <param name="password" value="$${default_password}"/>
      <param name="vm-password" value="1000"/>
    </params>
    <variables>
      <variable name="toll_allow" value="domestic,international,local"/>
      <variable name="accountcode" value="1000"/>
      <variable name="user_context" value="default"/>
      <variable name="effective_caller_id_name" value="Extension 1000"/>
      <variable name="effective_caller_id_number" value="1000"/>
      <variable name="outbound_caller_id_name" value="$${outbound_caller_name}"/>
      <variable name="outbound_caller_id_number" value="$${outbound_caller_id}"/>
      <variable name="callgroup" value="techsupport"/>
    </variables>
  </user>
</include>
```

The XML structure of a user is simple. Within the <include> tags the user has the following:

- The user element with the id attribute
- The params element, wherein parameters are specified
- The variables element, wherein channel variables are defined

Even before we know what much of this means, we can glean from this file that the user id is 1000 and that there is both a password and a vm-password. In this case, the password parameter refers to the SIP authorization password. The expression $${default_password} refers to the value contained in the global variable default_password, which is defined in the conf/vars.xml file. If you surmised that vm-password means "voicemail password", then you are correct: this value refers to the digits that the user needs to dial when logging in to check his or her voicemail messages. The value of id is used both as the authorization username and the SIP username.

Additionally, there are a number of channel variables defined for this user. Most of these are directly related to the default Dialplan.
The following table lists each variable and what it is used for:

| Variable | Purpose |
| --- | --- |
| toll_allow | Specifies which types of calls this user can make |
| accountcode | Arbitrary value that shows up in CDR data |
| user_context | The Dialplan context that is used when this person makes a phone call |
| effective_caller_id_name | Caller ID name displayed on the called party's phone when calling another registered user |
| effective_caller_id_number | Caller ID number displayed on the called party's phone when calling another registered user |
| outbound_caller_id_name | Caller ID name sent to the provider on outbound calls |
| outbound_caller_id_number | Caller ID number sent to the provider on outbound calls |
| callgroup | Arbitrary value that can be used in the Dialplan or CDR |

In summary, a user in the default configuration has the following:

- A username for SIP and for authorization
- A voicemail password
- A means of allowing/restricting dialling
- A means of handling caller ID being sent out
- Several arbitrary variables that can be used or ignored as needed

Let's now add a new user to our directory.
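Before walking through the steps, here is a hedged sketch of what a new user file might look like, modeled directly on 1000.xml above. The extension number 1005 and the variable values are illustrative assumptions:

```xml
<!-- Hedged sketch of conf/directory/default/1005.xml, modeled on
     1000.xml; the extension number and values are illustrative. -->
<include>
  <user id="1005">
    <params>
      <param name="password" value="$${default_password}"/>
      <param name="vm-password" value="1005"/>
    </params>
    <variables>
      <!-- More restrictive than 1000: no international dialling. -->
      <variable name="toll_allow" value="domestic,local"/>
      <variable name="accountcode" value="1005"/>
      <variable name="user_context" value="default"/>
      <variable name="effective_caller_id_name" value="Extension 1005"/>
      <variable name="effective_caller_id_number" value="1005"/>
      <variable name="callgroup" value="techsupport"/>
    </variables>
  </user>
</include>
```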

Managing a VoIP Solution with Active Directory Depends On Your Needs

Packt
05 Aug 2010
3 min read
(For more resources on Telephony, see here.)

Some smaller businesses might be able to get away with just using Skype. As a software client, Skype can be easily installed on individual computers. Since most workstations these days have a microphone built in to the monitor, a simple headset should suffice to get up and running with Skype, along with a nominal fee per month to set up an account with privileges to call regular telephones.

One problem with this method, however, is the way that Skype can hog your bandwidth. Skype is a peer-to-peer application that not only uses your system's bandwidth to make phone calls; it also acts as a node for other phone calls across its own distributed network. Essentially, Skype's peer-to-peer design can cause it to inadvertently hog bandwidth, which could cause your office to experience traffic problems. There is a series of useful Active Directory group policies you can enact to rein this in, such as using ListenPortPolicy to lock down ports and DisableApiPolicy to block bandwidth-eating third-party APIs, but having to manage this system may be a bit too tumultuous, especially if you have a large number of machines on your network.

In a larger-scale network, using Skype is probably not feasible. Technology titans such as Cisco and HP have systems, complete with phones and special switches, that can be easily implemented into a network; although this option requires a lot more upfront expense and time, if your system is at such a scale, the long-term cost savings will be immense. Because IP phones are devices that can be organized with Organizational Units in Active Directory, you'll be better able to place policies on them. You'll inevitably have bandwidth issues using VoIP, but the difference between an application like Skype and IP telephone hardware is that you're dealing with separate devices that use bandwidth independently, instead of trying to use group policies to manage software on a workstation. That means using your network performance management system to control things like jitter and packet loss by placing a priority on your VoIP traffic.

Bottom line: depending on the size of your network, you have options for leveraging VoIP and Active Directory in your infrastructure. Either way you look at it, you'll be able to save cash on phone calls by switching to an IP-based solution.

Further resources on this subject:

- Setting Up OpenVPN with X509 Certificates [Article]
- Installing OpenVPN on Linux and Unix Systems [Article]
- Networking with OpenVPN [Article]
- Installation of OpenSIPS 1.6 [Article]
- Configuring sipXecs Server Features [Article]