
How-To Tutorials - Web Development


PHP Magic Features

Packt · 12 Oct 2009 · 5 min read
In this article by Jani Hartikainen, we'll look at PHP's "magic" features:

- Magic methods, which are class methods with specific names that perform various specialized tasks. They fall into two groups: overloading methods and non-overloading methods. Overloading magic methods are called when your code attempts to access a method or a property that does not exist; non-overloading methods perform other tasks.
- Magic functions, which are similar to magic methods, but are plain functions outside any class. Currently there is only one magic function in PHP.
- Magic constants, which look like constants in notation, but act more like "dynamic" constants: their value depends on where you use them.

We'll also look at some practical examples of using some of these, and lastly we'll check out what new features PHP 5.3 is going to add.

Magic methods

For starters, let's take a look at the magic methods PHP provides. We will first go over the non-overloading methods.

__construct and __destruct

```php
class SomeClass {
    public function __construct() {
    }

    public function __destruct() {
    }
}
```

The most common magic method in PHP is __construct. In fact, you might not even have thought of it as a magic method at all, as it's so common. __construct is the class constructor method, which gets called when you instantiate a new object using the new keyword; any parameters used will get passed to __construct.

```php
$obj = new SomeClass();
```

__destruct is __construct's pair. It is the class destructor, which is rarely used in PHP, but it is still good to know it exists. It gets called when your object falls out of scope or is garbage collected.

```php
function someFunc() {
    $obj = new SomeClass();
    // When the function ends, $obj falls out of scope
    // and SomeClass's __destruct is called.
}

someFunc();
```

If you make the constructor private or protected, the class cannot be instantiated, except from inside a method of the same class. You can use this to your advantage, for example to create a singleton.

__clone

```php
class SomeClass {
    public $someValue;

    public function __clone() {
        $clone = new SomeClass();
        $clone->someValue = $this->someValue;
        return $clone;
    }
}
```

The __clone method is called when you use PHP's clone keyword, and is used to create a copy of the object. The idea is that by implementing __clone, you can define how objects get copied.

```php
$obj1 = new SomeClass();
$obj1->someValue = 1;
$obj2 = clone $obj1;

echo $obj2->someValue; // echoes 1
```

Important: __clone is not the same as =. If you use = to assign an object to another variable, the other variable will still refer to the same object as the first one! If you use the clone keyword, the intent is to get a new object with a state similar to the original's. Consider the following:

```php
$obj1 = new SomeClass();
$obj1->someValue = 1;
$obj2 = $obj1;
$obj3 = clone $obj1;
$obj1->someValue = 2;
```

What are the values of the someValue property in $obj2 and $obj3 now? As we used the assignment operator to create $obj2, it refers to the same object as $obj1, so $obj2->someValue is 2. When creating $obj3, we used the clone keyword, so the __clone method was called. As __clone creates a new instance, $obj3->someValue is still the same as it was when we cloned $obj1: 1.

If you want to disable cloning, you can make __clone private or protected.
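The singleton mentioned above deserves a quick sketch. The following is a minimal illustration of the private-constructor technique; the Registry class name and its members are invented for this example and are not from the article:

```php
class Registry {
    private static $instance;

    // A private constructor prevents "new Registry()" outside the class.
    private function __construct() {
    }

    // Cloning is disabled the same way, so the single instance stays unique.
    private function __clone() {
    }

    public static function getInstance() {
        if (self::$instance === null) {
            self::$instance = new Registry();
        }
        return self::$instance;
    }
}

$registry = Registry::getInstance(); // always returns the same object
```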
__toString

```php
class SomeClass {
    public function __toString() {
        return 'someclass';
    }
}
```

The __toString method is called when PHP needs to convert a class instance into a string, for example when echoing:

```php
$obj = new SomeClass();
echo $obj; // will output 'someclass'
```

This can be useful for identifying objects or for creating lists. If we have a user object, we could define a __toString method that outputs the user's first and last names, and when we want to create a list of users, we could simply echo the objects themselves.

__sleep and __wakeup

```php
class SomeClass {
    private $_someVar;

    public function __sleep() {
        return array('_someVar');
    }

    public function __wakeup() {
    }
}
```

These two methods are used with PHP's serializer: __sleep is called by serialize(), and __wakeup by unserialize(). Note that __sleep must return an array of the class variables you want to save. That's why the example class returns an array with _someVar in it: without it, the variable would not get serialized.

```php
$obj = new SomeClass();
$serialized = serialize($obj); // __sleep was called
unserialize($serialized);      // __wakeup was called
```

You typically won't need to implement __sleep and __wakeup, as the default implementation serializes classes correctly. However, in some special cases they can be useful. For example, if your class stores a reference to a PDO object, you will need to implement __sleep, as PDO objects cannot be serialized.

As with most other methods, you can make __sleep private or protected to prevent serialization. Alternatively, you can throw an exception, which may be a better idea, as you can provide a more meaningful error message.

An alternative to __sleep and __wakeup is the Serializable interface. However, as its behavior differs from these two methods, the interface is outside the scope of this article. You can find information on it in the PHP manual.

__set_state

```php
class SomeClass {
    public $someVar;

    public static function __set_state($state) {
        $obj = new SomeClass();
        $obj->someVar = $state['someVar'];
        return $obj;
    }
}
```

This method is called in code created by var_export. It gets an array as its parameter, containing a key and value for each of the class variables, and it must return an instance of the class.

```php
$obj = new SomeClass();
$obj->someVar = 'my value';
var_export($obj);
```

This code will output something along the lines of:

```php
SomeClass::__set_state(array('someVar' => 'my value'));
```

Note that var_export also exports private and protected variables of the class, so they too will be in the array.
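To make the user-listing idea concrete, here is a minimal sketch of such a class; the User class and its fields are our own illustration, not code from the article:

```php
class User {
    public $firstName;
    public $lastName;

    public function __construct($firstName, $lastName) {
        $this->firstName = $firstName;
        $this->lastName  = $lastName;
    }

    // Echoing a User now prints a readable name.
    public function __toString() {
        return $this->firstName . ' ' . $this->lastName;
    }
}

$users = array(new User('John', 'Smith'), new User('Maria', 'Sunshine'));
foreach ($users as $user) {
    echo $user, "\n"; // "John Smith", then "Maria Sunshine"
}
```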


Testing your Business Rules in JBoss Drools

Packt · 12 Oct 2009 · 9 min read
When we start writing 'real' business rules, there is very little room for mistakes, as mistakes can waste a lot of money. How much money would a company lose if a rule that you wrote gave double the intended discount to a customer? Or what if your airline ticket pricing rule started giving away first-class transatlantic flights for one cent? Of course, mistakes happen. This article makes sure that these costly mistakes don't happen to you.

If you're going through the trouble of writing business rules, you will want to make sure that they do what you intend them to, and keep on doing it even when you or other people make changes, both now and in the future. But first of all, we will see how testing is not a standalone activity, but part of an ongoing cycle.

Testing when building rules

It's a slightly morbid thought, but there's every chance that some of the business rules you write will last longer than you do. Remember the millennium bug, caused by programmers in the 1960s assuming that nobody would be using their work in 40 years' time, and then being surprised when the year 2000 actually came along? Rather than 'play and throw away', we're more likely to create production business rules in the following cycle:

1. Write your rules (or modify existing ones) based on a specification, or on feedback from end users.
2. Test your rules to make sure that your new rules do what you want them to do, and that you haven't inadvertently broken any existing rules.
3. Deploy your rules to somewhere other than your local machine, where end users (perhaps via a web page or an enterprise system) can interact with them.

You can repeat steps 1, 2, and 3 as often as required: as many times as it takes to get the first version into production, or deploy now and modify at any time later, in 1, 2, or 10 years' time.

Making testing interesting

Normal testing, where you inspect everything manually, is booooooooring! You might check everything the first time, but after the hundredth deployment you'll be tempted to skip your tests, and you'll probably get away with it without any problems. You'll then be tempted to skip your tests on the 101st deployment: still no problems. So not testing becomes a bad habit, either because you're bored or because your boss fails to see the value of the tests. The problem then comes one Friday afternoon, or just when you're about to go on vacation, or at some other worst possible time. The whole world will see any mistakes in the rules that are in production, and fixing them there costs far more time and money than catching the error at the very start, on your own PC.

What's the solution? Automate the testing. All of your manual checks are very repetitive, exactly the sort of thing that computers are good at. A typical check for our chocolate shipment example would be: 'every time we have an order of 2000 candy bars, we should have 10 shipments of 210 bars and one shipment of 110 bars'.

Testing using Guvnor

There is one very important advantage of testing in Guvnor: we can instantly see whether our tests pass, without having to wait for our rules to be deployed into the target system.
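To show what 'automate the testing' can look like outside Guvnor, here is a rough plain-Java sketch using the Drools 5-era knowledge API. The CustomerOrder fact and its accessors are assumed from the FIT examples later in this article, and in a real project the final check would live in a JUnit assertion rather than a println:

```java
import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

public class ShippingRulesTest {
    public static void main(String[] args) {
        // Compile the rule file from the classpath.
        KnowledgeBuilder builder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        builder.add(ResourceFactory.newClassPathResource("shipping-rules.drl"),
                    ResourceType.DRL);
        if (builder.hasErrors()) {
            throw new IllegalStateException(builder.getErrors().toString());
        }

        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(builder.getKnowledgePackages());

        // Insert a known order and fire the rules: the same check a human
        // would do by hand, but repeatable on every deployment.
        StatefulKnowledgeSession session = kbase.newStatefulKnowledgeSession();
        CustomerOrder order = new CustomerOrder();
        order.setInitialBalance(2000);
        session.insert(order);
        session.fireAllRules();
        session.dispose();

        // Compare the outcome against the expected shipment plan here.
        System.out.println("Balance after rules: " + order.getCurrentBalance());
    }
}
```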
At a high level, Guvnor has two main screens that deal with testing:

- An individual test screen: here you can edit your test by specifying the values that you want to input, and the values that you expect once your rules have fired.
- A package, or multiple-tests, screen: this allows you to run all of the tests in your package later on, to catch any rules that you may have inadvertently broken.

Another way of saying this is: you write your tests for selfish reasons, because you need them to ensure that your rules do what you want them to do. By keeping your tests for later, they automatically become a free safety net that catches bugs as soon as you make a change.

Testing using FIT

Guvnor testing is great. But often, a lot of what you are testing for is already specified in the requirements documents for the system. With a bit of thought in specifying various scenarios in your requirements documents, FIT allows you to automatically compare these requirements against your rules. These requirements documents can be written in Microsoft Word or a similar format, and they will highlight if the outputs aren't what is specified. Like Drools, FIT is an open source project, so there is no charge for using it or for customizing it to fit your needs.

Before you get too excited, your requirements documents do have to make some compromises. The tests must specify the values to be input to the rules and the expected result, similar to the examples, or scenarios, that many specifications already contain. These scenarios have to follow a FIT-specific format. Specification documents should follow a standard format anyway; the FIT scenario piece is often less than 10% of the document, and it remains highly human-readable. Even better, the document can be written in anything that generates HTML, which includes Microsoft Word, Excel, OpenOffice, Google documents, and most of the myriad editors available today.

Like Guvnor testing, we can use FIT to test whether our individual requirements are being met as we write our rules. It is also possible to run FIT automatically over multiple requirements documents, to ensure that nothing has 'accidentally' broken as we update other rules.

Getting FIT

When you download the samples, you will probably notice three strange packages and folders:

- fit-testcase: This folder resides just within the main project folder, and contains the FIT requirements documents that we're going to test against.
- chap7: This folder is under src/main/java/net/firstpartners, and contains the start point (FitRulesExample.java) that we'll use to kick-start our FIT tests.
- fit: This folder is next to the chap7 folder. It contains some of the 'magic plumbing' that makes FIT work. Most business users won't care how this works (you probably won't need to change what you find here), but we will take a look at it in more detail in case we want to customize exactly how FIT works.

If you built the previous example using Maven, then all of the required FIT software will have been downloaded for you. (Isn't Maven great?) So, we're ready to go.

The FIT requirements document

Open the Word document fit-testcase.doc using Word or OpenOffice. Remember that it's in the fit-testcase folder. fit-testcase.doc is a normal document, without any hidden code. The testing magic lies in the way the document is laid out; more specifically, it's in the tables that you see in the document. All of the other text is optional. Let's go through it.
Logo and the first paragraph

At the very top of the document is the Drools logo and a reference to where you can download the FIT-for-rules code. The text here is worth reading, as it's another explanation of what the FIT project does and how it works. None of this text matters to FIT, or rather, FIT ignores it, as it contains no tables. We can safely replace it (or any other text in this document that isn't in a table) with your company logo, or whatever you normally put at the top of your requirements documents.

FIT is GPL (General Public License) open source software. This means you can modify it (as long as you publish your changes). In this sample we've modified it to accept global variables passed into the rules; we will use this feature in step 3. The changes are published in the FIT plumbing directory, which is part of the sample. Feel free to use it in your own projects.

First step: setup

The setup table prepares the ground for our test and names the objects that we want to use in it. These objects are familiar, as they are the Java facts that we've used in our rules. There's a bit of text (worth reading, as it also explains what the table does), but FIT ignores it. The part that FIT reads is the following table:

```
| net.firstpartners.fit.fixture.Setup             |                           |
| net.firstpartners.chap6.domain.CustomerOrder    | AcmeOrder                 |
| net.firstpartners.chap6.domain.OoompaLoompaDate | nextAvailableShipmentDate |
```

If you're wondering what this does, try the same table expressed in English:

- Use the piece of plumbing called 'Setup'.
- Create a CustomerOrder and call it AcmeOrder.
- Create an OoompaLoompaDate and call it nextAvailableShipmentDate.

There is nothing here that we haven't seen before. Note that we will be passing nextAvailableShipmentDate as a global, so it must match the global of the same name in our rules file (the match includes the exact spelling, and the same lower- and uppercase).

Second step: values in

The second part also has the usual text explanation (ignored by FIT) and a table (the important bit), which explains how to set the values:

```
| net.firstpartners.fit.fixture.Populate |                     |      |
| AcmeOrder                              | Set initial balance | 2000 |
| AcmeOrder                              | Set current balance | 2000 |
```

It's a little clearer than the first table, but we'll spell it out anyway:

- Use the piece of plumbing called 'Populate'.
- Take the AcmeOrder we created earlier, and set it to have an initial balance of 2000.
- Take the AcmeOrder we created earlier, and set it to have a current balance of 2000.

Third step: click on the Go button

Our next part starts the rules. Or rather, the table tells FIT to invoke the rules; the rest of the text (useful to explain what is going on to us humans) gets ignored:

```
| net.firstpartners.fit.fixture.Engine |                                                           |
| Ruleset                              | src/main/java/net/firstpartners/chap6/shipping-rules.drl |
| Assert                               | AcmeOrder                                                 |
| Global                               | nextAvailableShipmentDate                                 |
| Execute                              |                                                           |
```

The same table again, in English:

- Use the piece of plumbing called 'Engine'.
- Ruleset: use the rules in shipping-rules.drl.
- Assert: pass our AcmeOrder to the rule engine (as a fact).
- Global: pass our nextAvailableShipmentDate to the rule engine (as a global).
- Execute: click on the Go button.
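Because FIT reads only the tables, any tool that emits HTML tables can produce these documents. As a rough illustration (this markup is ours, not taken from the sample document), the setup table above could come from HTML along these lines:

```html
<!-- Only the table matters to FIT; surrounding prose is ignored. -->
<p>Create the facts that the shipping rules will work on.</p>
<table border="1">
  <tr><td colspan="2">net.firstpartners.fit.fixture.Setup</td></tr>
  <tr>
    <td>net.firstpartners.chap6.domain.CustomerOrder</td>
    <td>AcmeOrder</td>
  </tr>
  <tr>
    <td>net.firstpartners.chap6.domain.OoompaLoompaDate</td>
    <td>nextAvailableShipmentDate</td>
  </tr>
</table>
```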


Importing Structure and Data Using phpMyAdmin

Packt · 12 Oct 2009 · 9 min read
A feature was added in version 2.11.0: an import file may contain the DELIMITER keyword. This enables phpMyAdmin to mimic the mysql command-line interpreter. The DELIMITER separator is used to delineate the part of the file containing a stored procedure, as these procedures can themselves contain semicolons. The default values for the Import interface are defined in $cfg['Import'].

Before examining the actual import dialog, let's discuss some issues around limits.

Limits for the transfer

When we import, the source file is usually on our client machine, so it must travel to the server via HTTP. This transfer takes time and uses resources that may be limited in the web server's PHP configuration. Instead of using HTTP, we can upload our file to the server using a protocol such as FTP, as described in the Web Server Upload Directories section. This method circumvents the web server's PHP upload limits.

Time limits

First, let's consider the time limit. In config.inc.php, the $cfg['ExecTimeLimit'] configuration directive assigns, by default, a maximum execution time of 300 seconds (five minutes) for any phpMyAdmin script, including the scripts that process data after the file has been uploaded. A value of 0 removes the limit and, in theory, gives us infinite time to complete the import operation. If the PHP server is running in safe mode, modifying $cfg['ExecTimeLimit'] will have no effect, because the limits set in php.ini or in a user-related web server configuration file (such as .htaccess or virtual host configuration files) take precedence over this parameter.

Of course, the time the import effectively takes depends on two key factors:

- Web server load
- MySQL server load

The time taken by the file as it travels between the client and the server does not count as execution time, because the PHP script starts to execute only once the file has been received on the server. Therefore, the $cfg['ExecTimeLimit'] parameter has an impact only on the time used to process data (such as decompression, or sending it to the MySQL server).

Other limits

The system administrator can use the php.ini file or the web server's virtual host configuration file to control uploads on the server. The upload_max_filesize parameter specifies the maximum file size that can be uploaded via HTTP. This one is obvious, but a less obvious parameter is post_max_size: as HTTP uploading is done via the POST method, this parameter may also limit our transfers. For more details about the POST method, please refer to http://en.wikipedia.org/wiki/Http#Request_methods.

The memory_limit parameter exists to keep web server child processes from grabbing too much of the server's memory; phpMyAdmin also runs as a child process. Thus, the handling of normal file uploads, especially compressed dumps, can be compromised by giving this parameter a small value. No single preferred value can be recommended here; the value depends on the size of the uploaded data. The memory limit can also be tuned via the $cfg['MemoryLimit'] parameter in config.inc.php.

Finally, file uploads must be allowed by setting file_uploads to On. Otherwise, phpMyAdmin won't even show the Location of the text file dialog. It would be useless to display this dialog, as the connection would be refused later by the PHP component of the web server.

Partial imports

If the file is too big, there are ways in which we can resolve the situation.
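All of the parameters above live in the server's PHP configuration. As a rough guide (the values shown are illustrative, not recommendations), the relevant php.ini lines look like this:

```ini
; Maximum size of an individual uploaded file.
upload_max_filesize = 16M

; POST body limit; must be at least as large as upload_max_filesize.
post_max_size = 16M

; Per-script memory ceiling; decompressing dumps counts against it.
memory_limit = 64M

; Uploads must be enabled at all for the import dialog to appear.
file_uploads = On
```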
If we still have access to the original data, we could use phpMyAdmin to generate smaller CSV export files, choosing the "Dump n rows starting at record # n" dialog. If this is not possible, we will have to use a text editor to split the file into smaller sections. Another possibility is to use the upload directory mechanism, which accesses the directory defined in $cfg['UploadDir']. This feature is explained later in this article.

In recent phpMyAdmin versions, the Partial import feature can also solve this file size problem. By selecting the Allow interrupt… checkbox, the import process will interrupt itself if it detects that it is close to the time limit. We can also specify a number of queries to skip from the start, in case we have already successfully imported a number of rows and wish to continue from that point.

Temporary directory

On some servers, a security feature called open_basedir can be set up in a way that impedes the upload mechanism. In this case, or for any other reason when uploads are problematic, the $cfg['TempDir'] parameter can be set to the path of a temporary directory, probably a subdirectory of phpMyAdmin's main directory, into which the web server is allowed to put the uploaded file.

Importing SQL files

Any file containing MySQL statements can be imported via this mechanism. The dialog is available in the Database view or the Table view, via the Import subpage, or in the Query window. There is no relation between the currently selected table (here, author) and the actual contents of the SQL file that will be imported. All the contents of the SQL file will be imported, and it is those contents that determine which tables or databases are affected. However, if the imported file does not contain any SQL statements to select a database, all statements in the imported file will be executed on the currently selected database.

Let's try an import exercise. First, we make sure that we have a current SQL export of the book table. This export file must contain both the structure and the data. Then we drop the book table—yes, really! (We could also simply rename it.)

Now it is time to import the file back. We should be on the Import subpage, where we can see the Location of the text file dialog. We just have to hit the Browse button and choose our file. phpMyAdmin is able to detect which compression method (if any) has been applied to the file. Depending on the phpMyAdmin version, and on the extensions available in the PHP component of the web server, the range of formats the program can decompress varies.

However, to import successfully, phpMyAdmin must be informed of the character set of the file to be imported. The default value is utf8; if we know that the import file was created with another character set, we should specify it here. An SQL compatibility mode selector is also available at import time. This mode should be adjusted to match the actual data we are about to import, according to the type of server where the data was previously exported.

To start the import, we click Go. The import procedure runs and we receive the message: "Import has been successfully finished, 2 queries executed." We can browse our newly created table to confirm the success of the import operation. The file could equally be imported for testing into a different database, or even a different MySQL server.

Importing CSV files

In this section, we will examine how to import CSV files. There are two possible methods: CSV, and CSV using LOAD DATA.
The first method is implemented internally by phpMyAdmin and is the recommended one for its simplicity. With the second method, phpMyAdmin receives the file to be loaded and passes it on to MySQL. In theory, this method should be faster, but it has more requirements due to MySQL itself (see the Requirements sub-section of the CSV using LOAD DATA section).

Differences between SQL and CSV formats

There are some differences between these two formats. The CSV file format contains data only, so we must already have an existing table in place. This table does not need to have the same structure as the original table (from which the data comes); the Column names dialog enables us to choose which columns of the target table are affected. Because the table must exist prior to the import, the CSV import dialog is available only from the Import subpage in the Table view, and not in the Database view.

Exporting a test file

Before trying an import, let's generate an author.csv export file from the author table, using the default values in the CSV export options. We can then Empty the author table; we should avoid dropping this table, because we still need the table structure.

CSV

From the author table menu, we select Import and then CSV. We can influence the behavior of the import in a number of ways. By default, importing does not modify existing data (based on primary or unique keys). However, the "Replace table data with file" option instructs phpMyAdmin to use REPLACE statements instead of INSERT statements, so that existing rows are replaced with the imported data. With "Ignore duplicate rows", INSERT IGNORE statements are generated. These cause MySQL to ignore any duplicate-key problems during insertion: a duplicate key from the import file does not replace existing data, and the procedure continues with the next line of CSV data.

We can then specify the character that terminates each field, the character that encloses data, and the character that escapes the enclosing character. Usually the escape character is \. For example, with a double quote as the enclosing character, a data field that contains a double quote must be expressed as "some data \" some other data".

For "Lines terminated by", recent versions of phpMyAdmin offer the auto choice, which should be tried first, as it detects the end-of-line character automatically. We can also specify manually which characters terminate the lines. The usual choice is \n for UNIX-based systems, \r\n for DOS or Windows systems, and \r for Mac-based systems (up to Mac OS 9). If in doubt, we can use a hexadecimal file editor on our client computer (not part of phpMyAdmin) to examine the exact codes.

By default, phpMyAdmin expects a CSV file with the same number of fields and the same field order as the target table. This can be changed by entering a comma-separated list of column names in Column names, respecting the source file format. For example, let's say our source file contains only the author ID and the author name information:

```
"1","John Smith"
"2","Maria Sunshine"
```

We'd have to put id, name in Column names to match the source file.

When we click Go, the import is executed and we get a confirmation. We might also see the actual INSERT queries generated, if the total size of the file is not too big:

```
Import has been successfully finished, 2 queries executed.
INSERT INTO `author` VALUES ('1', 'John Smith', '+01 445 789-1234')
# 1 row(s) affected.
INSERT INTO `author` VALUES ('2', 'Maria Sunshine', '333-3333')
# 1 row(s) affected.
```
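For comparison, the second method ultimately hands the file to MySQL itself. A statement of the kind phpMyAdmin generates for it, sketched here by hand with an assumed file path and the separators described above, looks roughly like this:

```sql
LOAD DATA LOCAL INFILE '/tmp/author.csv'
INTO TABLE author
FIELDS TERMINATED BY ','   -- field separator
        ENCLOSED BY '"'    -- quoting character
        ESCAPED BY '\\'    -- escape for embedded quotes
LINES TERMINATED BY '\n'   -- UNIX line endings
(id, name);                -- column list, as in the Column names box
```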


Creating Our First Module using Drupal 6 (Part 2)

Packt · 12 Oct 2009 · 11 min read
Using Goodreads Data

So far, we have created a basic module that uses hook_block() to add block content, and we have installed it. As it stands, however, this module does no more than display a few lines of static text. In this article, we are going to extend the module's functionality by adding a few new functions that retrieve and format data from Goodreads.

Goodreads makes data available in an XML format based on RSS 2.0. The XML content is retrieved over HTTP (HyperText Transfer Protocol), the protocol that web browsers use to retrieve web pages. To enable this module to get Goodreads content, we will have to write some code to retrieve data over HTTP and then parse the retrieved XML. Our first change will be to make a few modifications to goodreads_block().

Modifying the Block Hook

We could cram all of our new code into the existing goodreads_block() hook; however, this would make the function cumbersome to read and difficult to maintain. Rather than adding significant code here, we will just call another function that will perform another part of the work.

```php
/**
 * Implementation of hook_block().
 */
function goodreads_block($op = 'list', $delta = 0, $edit = array()) {
  switch ($op) {
    case 'list':
      $blocks[0]['info'] = t('Goodreads Bookshelf');
      return $blocks;

    case 'view':
      $url = 'http://www.goodreads.com/review/list_rss/'
           . '398385'
           . '?shelf='
           . 'history-of-philosophy';
      $blocks['subject'] = t('On the Bookshelf');
      $blocks['content'] = _goodreads_fetch_bookshelf($url);
      return $blocks;
  }
}
```

The preceding code should look familiar: it is our hook implementation as seen in the previous article, with a few modifications. First, we have added a variable, $url, whose value is the URL of the Goodreads XML feed we will be using (http://www.goodreads.com/review/list_rss/398385?shelf=history-of-philosophy). In a completely finished module, we would want this to be a configurable parameter, but for now we will leave it hard-coded.

The second change has to do with where the module gets its content. Previously, the function set the content to t('Temporary content'). Now it calls another function: _goodreads_fetch_bookshelf($url). The leading underscore indicates that this is a private function of our module, one not intended to be called by any code outside the module. Marking a function as private with an initial underscore is another Drupal convention that you should employ in your own code.

Let's take a look at the _goodreads_fetch_bookshelf() function.

Retrieving XML Content over HTTP

The job of _goodreads_fetch_bookshelf() is to retrieve the XML content over an HTTP connection to the Goodreads site. Once it has done that, it hands the job of formatting over to another function. Here's a first look at the function in its entirety:

```php
/**
 * Retrieve information from the Goodreads bookshelf XML API.
 *
 * This makes an HTTP connection to the given URL, and
 * retrieves XML data, which it then attempts to format
 * for display.
 *
 * @param $url
 *   URL to the Goodreads bookshelf.
 * @param $num_items
 *   Number of items to include in results.
 * @return
 *   String containing the bookshelf.
 */
function _goodreads_fetch_bookshelf($url, $num_items = 3) {
  $http_result = drupal_http_request($url);

  if ($http_result->code == 200) {
    $doc = simplexml_load_string($http_result->data);
    if ($doc === false) {
      $msg = 'Error parsing bookshelf XML for %url.';
      $vars = array('%url' => $url);
      watchdog('goodreads', $msg, $vars, WATCHDOG_WARNING);
      return t("Getting the bookshelf resulted in an error.");
    }
    return _goodreads_block_content($doc, $num_items);
  }
  // Otherwise we don't have any data.
  else {
    $msg = 'No content from %url.';
    $vars = array('%url' => $url);
    watchdog('goodreads', $msg, $vars, WATCHDOG_WARNING);
    return t("The bookshelf is not accessible.");
  }
}
```

Let's take a closer look. Following the Drupal coding conventions, the function opens with an API description in a docblock. It begins with a one-sentence overview of the function, usually followed by a few more sentences clarifying what the function does. Near the end of the docblock, special keywords (preceded by the @ sign) document the parameters and possible return values:

- @param: Documents a parameter, in the format @param <variable name> <description>. The description should indicate what data type is expected in this parameter.
- @return: Documents what type of return value one can expect from this function, in the format @return <description>.

This sort of documentation should be used for any module function that is not an implementation of a hook.

Now we will look at the function itself, starting with the first few lines:

```php
function _goodreads_fetch_bookshelf($url, $num_items = 3) {
  $http_result = drupal_http_request($url);
```

This function expects up to two parameters. The required $url parameter should contain the URL of the remote site, and the optional $num_items parameter should indicate the maximum number of items to be returned from the feed. While we don't make use of the $num_items parameter when we call _goodreads_fetch_bookshelf(), this would also be a good thing to add to the module's configurable parameters.

The first thing the function does is use Drupal's built-in drupal_http_request() function, found in the includes/common.inc library. This function makes an HTTP connection to a remote site using the supplied URL and then performs an HTTP GET request. It returns an object containing the response code (from the server or the socket library), the HTTP headers, and the data returned by the remote server.

Drupal is occasionally criticized for not using the object-oriented features of PHP. In fact, it does, but less overtly than many other projects. Constructors are rarely used, but objects are employed throughout the framework. Here, for example, an object is returned by a core Drupal function.

When drupal_http_request() has executed, the $http_result object will contain the returned information.
The first thing we need to find out is whether the HTTP request was successful: whether it connected and retrieved the data we expect. We can get this information from the response code, which will be set to a negative number if there was a networking error, and to one of the HTTP response codes if the connection was successful. We know that if the server responds with the 200 (OK) code, we have received some data. In a more robust application, we might also check for redirect messages (301, 302, 303, and 307) and other similar conditions; with a little more code, we could configure the module to follow redirects. Our simple module will treat any other response code as indicating an error:

```php
if ($http_result->code == 200) {
  // ... processing of the response goes here ...
}
// Otherwise we don't have any data.
else {
  $msg = 'No content from %url.';
  $vars = array('%url' => $url);
  watchdog('goodreads', $msg, $vars, WATCHDOG_WARNING);
  return t("The bookshelf is not accessible.");
}
```

First, let's look at what happens if the response code is something other than 200. We want to do two things when a request fails: log an error, and then notify the user (in a friendly way) that we could not get the content. Let's take a glance at Drupal's logging mechanism.

The watchdog() Function

Another important core Drupal function is watchdog(), which provides Drupal's logging mechanism.

Customize your logging: Drupal provides a hook (hook_watchdog()) that can be implemented to customize what logging actions are taken when a message is logged using watchdog(). By default, Drupal logs to a designated database table. You can view this log in the administration section by going to Administer | Logs.

The watchdog() function gathers all the necessary logging information and fires off the appropriate logging event. The first parameter of watchdog() is the logging category. Typically, modules should use the module name (goodreads in this case) as the logging category; this makes finding module-specific errors easier. The second and third parameters are the text of the message ($msg above) and an associative array of data ($vars) to be substituted into $msg. These substitutions are done following the same translation rules used by the t() function. Just as with the t() function's substitution array, placeholders should begin with !, @, or %, depending on the level of escaping you need. So in the preceding example, the contents of the $url variable will be substituted into $msg in place of the %url marker.

Finally, the last parameter of watchdog() is a constant indicating the log message's priority, that is, how important it is. There are eight different constants that can be passed to this function:

- WATCHDOG_EMERG: The system is now in an unusable state.
- WATCHDOG_ALERT: Something must be done immediately.
- WATCHDOG_CRITICAL: The application is in a critical state.
- WATCHDOG_ERROR: An error occurred.
- WATCHDOG_WARNING: Something unexpected (and negative) happened, but didn't cause any serious problems.
- WATCHDOG_NOTICE: Something significant (but not bad) happened.
- WATCHDOG_INFO: Information can be logged.
- WATCHDOG_DEBUG: Debugging information can be logged.
Depending on the logging configuration, not all of these messages will show up in the log. The WATCHDOG_ERROR and WATCHDOG_WARNING levels are usually the most useful for module developers recording errors. Most modules do not contain code significant enough to cause general problems with Drupal, and the top three log levels (alert, critical, and emergency) should probably not be used unless Drupal itself is in a bad state. There is an optional fifth parameter to watchdog(), usually called $link, which allows you to pass in an associated URL; logging back ends may use it to generate links embedded within logging messages.

The last thing we want to do in the error case is return a message that can be displayed on the site. This is simply done by returning a (possibly translated) string:

```php
return t("The bookshelf is not accessible.");
```

We've handled the case where retrieving the data failed. Now let's turn our attention to the case where the HTTP request was successful.

Processing the HTTP Results

When the result code of our request is 200, we know the web transaction was successful. The content may or may not be what we expect, but we have good reason to believe that no error occurred while retrieving the XML document. So, in this case, we continue processing the information:

```php
if ($http_result->code == 200) {
  $doc = simplexml_load_string($http_result->data);
  if ($doc === false) {
    $msg = 'Error parsing bookshelf XML for %url.';
    $vars = array('%url' => $url);
    watchdog('goodreads', $msg, $vars, WATCHDOG_WARNING);
    return t("Getting the bookshelf resulted in an error.");
  }
  return _goodreads_block_content($doc, $num_items);
}
// Otherwise we don't have any data.
else {
  // ... the error handling we just looked at ...
}
```

In the above example, we use the PHP 5 SimpleXML library. SimpleXML provides a set of convenient and easy-to-use tools for handling XML content. This library is not present in the now-deprecated PHP 4 version of the language. For compatibility with outdated versions of PHP, Drupal code often uses the Expat parser, a venerable event-based XML parser supported since PHP 4 was introduced. Drupal even includes a wrapper function for creating an Expat parser instance. However, writing the event handlers is time consuming and repetitive; SimpleXML gives us an easier interface and requires much less coding. For an example of using the Expat event-based method for handling XML documents, see the built-in Aggregator module. For detailed documentation on Expat, see the official PHP documentation: http://php.net/manual/en/ref.xml.php.

We parse the XML using simplexml_load_string(). If parsing is successful, the function returns a SimpleXML object; if parsing fails, it returns false. In our code, we check for false and, if it is found, we log an error and return a friendly error message. But if the Goodreads XML document was parsed properly, this function calls another function in our module, _goodreads_block_content(), which builds some content from the XML data.
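The article ends before showing _goodreads_block_content(), so the following is only a sketch of what such a function might do. It is our illustration, not the book's implementation, and it assumes the RSS 2.0 channel/item structure that the Goodreads feed uses:

```php
/**
 * Format a bookshelf feed as block content (illustrative sketch).
 */
function _goodreads_block_content($doc, $num_items = 3) {
  $items = array();
  $count = 0;
  // An RSS 2.0 document nests its items under channel.
  foreach ($doc->channel->item as $item) {
    if ($count++ >= $num_items) {
      break;
    }
    // l() builds an HTML link; both values are SimpleXML elements,
    // so they are cast to plain strings first.
    $items[] = l((string) $item->title, (string) $item->link);
  }
  // theme('item_list', ...) renders a themed HTML list in Drupal 6.
  return theme('item_list', $items);
}
```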


Searching Data using phpMyAdmin and MySQL

Packt · 09 Oct 2009 · 4 min read
In this article by Marc Delisle, we present mechanisms that can be used to find the data we are looking for, instead of just browsing tables page by page and sorting them. This article covers single-table and whole-database searches.

Single-Table Searches

This section describes the Search sub-page, where single-table search is available.

Daily Usage of phpMyAdmin

For some users, the main use of the tool is the Search mode, for finding and updating data. For this, the phpMyAdmin team has made it possible to define which sub-page is the starting page in Table view, with the $cfg['DefaultTabTable'] parameter. Setting it to 'tbl_select.php' makes search the default sub-page. With this mode, application developers can look for data in ways not anticipated by the interface they are building, adjusting and sometimes repairing data.

Entering the Search Sub-Page

The Search sub-page can be accessed by clicking the Search link in the Table view. This has been done here for the book table.

Selection of Display Fields

The first panel facilitates a selection of the fields to be displayed in the results. All fields are selected by default, but we can control-click other fields to make the necessary selections. Mac users would use command-click to select or unselect fields. We can also specify the number of rows per page in the textbox just next to the field selection. The Add search conditions box will be explained in the Applying a WHERE Clause section later in this article.

Ordering the Results

The Display order dialog permits us to specify an initial sorting order for the results. In this dialog, a drop-down menu contains all the table's columns; it's up to us to select the one on which we want to sort. By default, the sorting will be in Ascending order, but Descending order is also available. It should be noted that on the results page, we can also change the sort order.

Search Criteria by Field: Query by Example

The main use of the Search panel is to enter criteria for some fields so as to retrieve only the data we are interested in. This is called Query by example, because we give an example of what we are looking for. Our first retrieval will concern finding the book with ISBN 1-234567-89-0. We simply enter this value in the isbn box and choose the = operator.

Clicking on Go gives the results; the four fields displayed are those selected in the Select fields dialog. This is a standard results page: if the results run to several pages, we can navigate through them, and edit and delete data for the subset we chose during the process. Another feature of phpMyAdmin is that the fields used as criteria are highlighted by changing the border color of those columns, to better reflect their importance on the results page.

It isn't necessary to specify that the isbn column be displayed. We could have selected only the title column for display and still used the isbn column as a criterion.

Print View

We see the Print view and Print view (with full texts) links on the results page. These links produce a more formal report of the results (without the navigation interface) directly to the printer. In our case, using Print view would produce a report containing information about the server, database, time of generation, version of phpMyAdmin, version of MySQL, and the SQL query used. The other link, Print view (with full texts), would print the contents of TEXT fields in their entirety.
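Behind the scenes, the query-by-example form simply builds a SELECT statement. For the ISBN search above, it would be equivalent to something like the following; the list of displayed columns is assumed for illustration, since the article's screenshot is not reproduced here:

```sql
SELECT isbn, title, author_id, language_id
FROM book
WHERE isbn = '1-234567-89-0';
```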


The Multi-Table Query Generator using phpMyAdmin and MySQL

Packt · 09 Oct 2009 · 4 min read
The Search pages in the Database or Table view are intended for single-table lookups. This article by Marc Delisle covers the multi-table Query by example (QBE) feature available in the Database view. Many phpMyAdmin users work in the Table view, table by table, and thus tend to overlook the multi-table query generator, which is a wonderful feature for fine-tuning queries. The query generator is useful not only in multi-table situations but also for a single table: it enables us to specify multiple criteria for a column, a feature that the Search page in the Table view does not possess.

The examples in this article assume that a multi-user installation of the linked-tables infrastructure has been made, and that the book-copy table created during an exercise in the article on Table and Database Operations in PHP is still there in the marc_book database.

To open the page for this feature, we go to the Database view for a specific database (the query generator supports working on only one database at a time) and click on Query. The initial QBE page contains the following elements:

- Criteria columns
- An interface to add criteria rows
- An interface to add criteria columns
- A table selector
- The query area
- Buttons to update or to execute the query

Choosing Tables

The initial selection includes all the tables. In this example, we assume that the linked-table infrastructure has been installed into the marc_book database; consequently, the Field selector contains a great number of fields. For our example, we will work only with the author and book tables. We then click Update Query. This refreshes the screen and reduces the number of fields available in the Field selector. We can always change the table choice later, using our browser's mechanism for multiple choices in drop-down menus (usually, control-click).

Column Criteria

Three criteria columns are provided by default. This section discusses the options we have for editing their criteria, including options for selecting fields, sorting individual columns, and entering conditions for individual columns.

Field Selector: Single-Column or All Columns

The Field selector contains all individual columns for the selected tables, plus a special choice ending with an asterisk (*) for each table, which means all the fields are selected. To display all the fields in the author table, we would choose `author`.* and check the Show checkbox, without entering anything in the Sort and Criteria boxes. In our case, we select `author`.`name`, because we want to enter some criteria for the author's name.

Sorts

For each selected individual column, we can specify a sort (in Ascending or Descending order) or leave the line intact (meaning no sort). If we choose more than one sorted column, the sort will be done with priority from left to right. When we ask for a column to be sorted, we normally check the Show checkbox, but this is not strictly necessary, because we might want to do just the sorting operation without displaying the column.

Showing a Column

We check the Show checkbox so that the column appears in the results. Sometimes we may just want to apply a criterion to a column without including it in the resulting page. Here, we add the phone column, ask for a sort on it, and choose to show both the name and phone number. We also ask for a sort on the name in ascending order. The sort will be done first by name, and then by phone number if the names are identical, because the name is in a criterion column to the left of the phone column and thus has a higher priority.
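The statement this example produces in the query area would be, in effect, something like the following (reconstructed by hand from the steps above):

```sql
SELECT `author`.`name`, `author`.`phone`
FROM `author`
ORDER BY `author`.`name` ASC, `author`.`phone` ASC;
```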

Developing Post Types Plugin with WordPress

Packt · 09 Oct 2009 · 7 min read
And you will do all of this by developing a Post Types plugin that provides pre-defined post templates to quickly add a photo or a link to your blog. The concepts you will learn in this article will help you discover the not-so-obvious capabilities of the WordPress platform that allow you to transform it into software capable of handling much more than just a blog.

Handling localization

Localization is an important part of WordPress development, as not everyone using WordPress speaks English (WordPress comes in different languages too). Localization involves only a small amount of extra work on your side, since the translation itself is usually done by volunteers (people who like and use your plugin). You only need to provide some base files for translation, and don't be surprised when you start getting translated files sent to your inbox. WordPress uses the GNU gettext localization framework, a standardized method of managing translations, and we will make use of it in our plugin.

Time for action – Create the plugin and add localization

We will start by defining our plugin as usual, and then add localization support.

1. Create a new folder called post-types.

2. Create a new post-types.php file with the following content:

```php
<?php
// pluginname Post Types
// shortname PostTypes
// dashname post-types
/*
Plugin Name: Post Types
Version: 0.1
Plugin URI: http://www.prelovac.com/vladimir/wordpress-plugins/post-types
Author: Vladimir Prelovac
Author URI: http://www.prelovac.com/vladimir
Description: Provides pre-defined post templates to quickly add a photo or a link to your blog
*/

// Avoid name collisions.
if ( !class_exists('PostTypes') ) :

class PostTypes
{
    // localization domain
    var $plugin_domain = 'PostTypes';

    // Initialize the plugin
    function PostTypes()
    {
        global $wp_version;

        $exit_msg = 'Post Types requires WordPress 2.5 or newer.
            <a href="http://codex.wordpress.org/Upgrading_WordPress">
            Please update!</a>';

        if (version_compare($wp_version, "2.5", "<")) {
            exit($exit_msg);
        }
    }

    // Set up default values
    function install()
    {
    }
}

endif;

if ( class_exists('PostTypes') ) :
    $PostTypes = new PostTypes();

    if (isset($PostTypes)) {
        register_activation_hook(__FILE__,
            array(&$PostTypes, 'install'));
    }
endif;
```

3. Adding localization is fairly simple. First we need to add a function to our class that will load the translation file:

```php
// Localization support
function handle_load_domain()
{
    // get current language
    $locale = get_locale();

    // locate translation file
    $mofile = WP_PLUGIN_DIR . '/' . plugin_basename(dirname(__FILE__))
            . '/lang/' . $this->plugin_domain . '-' . $locale . '.mo';

    // load translation
    load_textdomain($this->plugin_domain, $mofile);
}
```

4. Since loading the file takes resources, we will load it only when the translation is actually needed, by checking the current page ($pagenow) against the list of pages where we need translations (the $local_pages array):

```php
// Initialize the plugin
function PostTypes()
{
    global $wp_version, $pagenow;

    // pages where our plugin needs translation
    $local_pages = array('plugins.php');
    if (in_array($pagenow, $local_pages))
        $this->handle_load_domain();

    $exit_msg = 'Post Types requires WordPress 2.5 or newer.
        <a href="http://codex.wordpress.org/Upgrading_WordPress">
        Please update!</a>';
```

5. Finally, to use the available translations, we only need to enclose our text in the __() function:

```php
$this->handle_load_domain();

$exit_msg = __('Post Types requires WordPress 2.5 or newer.
    <a href="http://codex.wordpress.org/Upgrading_WordPress">
    Please update!</a>', $this->plugin_domain);

if (version_compare($wp_version, "2.5", "<"))
```

What just happened?

We have added localization support to our plugin by using the localization functions provided by WordPress. Currently, we have localized only the error message for the WordPress version check, by enclosing the text in the __() function, which takes the text to be localized and our unique localization domain, which provides context within the WordPress localization files.

To load the localization, we created a handle_load_domain() function. It first gets the current language in use with the get_locale() function:

```php
// get current language
$locale = get_locale();
```

Then it creates the language file name by joining the plugin directory, the plugin folder, and the lang folder where we will keep the translations. The file name is derived from the locale and the *.mo language file extension:

```php
// locate translation file
$mofile = WP_PLUGIN_DIR . '/' . plugin_basename(dirname(__FILE__))
        . '/lang/' . $this->plugin_domain . '-' . $locale . '.mo';
```

Finally, the localization file is loaded using the load_textdomain() function, which takes our text domain and the .mo file path as parameters:

```php
// load translation
load_textdomain($this->plugin_domain, $mofile);
```

Optimizing localization usage

The translation file needs to be loaded as the first thing in the plugin, before you output any messages, so we have placed it at the start of the plugin constructor. Since loading the translation file occurs at the beginning of the constructor, which is executed every time, it is a good idea to restrict loading to only the pages where the translation will be needed, in order to conserve resources. WordPress provides the global variable $pagenow, which holds the name of the current page in use. We can check this variable to find out if we are on a page of interest. In the case of the plugin activation error message, we want to check whether we are on the plugins page, defined as plugins.php in WordPress:

```php
// pages where our plugin needs translation
$local_pages = array('plugins.php');

if (in_array($pagenow, $local_pages))
    $this->handle_load_domain();
```

You can optimize this further by querying the page parameter, if it exists, as this will in most cases point precisely to the page using your code (plugins.php?page=photo):

```php
if ($_GET['page'] == 'photo')
```

Optimizing the usage of the translation file is not required; it's just a matter of loading only what you need in order to speed up the whole system.

How does localization work?

For localization to work, you need to provide .po and .mo files with your plugin. These files are created using external tools such as Poedit. These tools output the compiled translation file, which can then be loaded using the load_textdomain() function. This function accepts a language domain name and a path to the file. In order to use translated messages, you can use the __($text, $domain) and _e($text, $domain) functions.
The _e() function is simply an equivalent of echo __();. These functions accept two parameters, the first being the desired text, and the second the language domain where the message will be looked up. If no translation is found, the text is printed out as it is. This means that you can always safely use these functions, even if you do not provide any translation files; doing so prepares the plugin for future translation.

Quick reference

- $pagenow: A global variable holding the name of the currently displayed page within WordPress.
- get_locale(): Gets the currently selected language.
- load_textdomain(domain, filepath): Loads the localization file and adds it to the specified language domain identifier.
- __(), _e(): Look up the output text in a given language domain.

More information about WordPress localization is available at http://codex.wordpress.org/Translating_WordPress.
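As a quick illustration of the difference between the two lookup functions (this snippet and its string are ours, not from the plugin):

```php
// __() returns the translated string, so it can be stored or concatenated.
$label = __('Add a photo', 'PostTypes');

// _e() echoes the translated string directly, handy inside templates.
_e('Add a photo', 'PostTypes');
```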


Drupal 6 Content Construction Kit (CCK)

Packt · 09 Oct 2009 · 6 min read
The Views module provides administrators with the means to modify how Drupal displays lists of content, and CCK exposes its fields to the Views module, making them perfect partners when it comes to creating custom content and then displaying that content in a highly configurable manner. At the time of writing, Views is not available for Drupal 6 (although the module is being actively developed and should hopefully be ready by the time you read this), so it is left as an exercise to download and install it, and create at least one new View utilizing fields created in the following sections.

Installing CCK

CCK is available, so go ahead and download the latest version and extract the file to your modules folder. CCK adds its own section to the Modules page under Site building. There are a number of interdependent sections for this module, but all of them rely on the first option, Content, so enable this first. We are going to look over all the features provided by CCK by default in this section, so go ahead and enable those modules that rely only on Content. With that done, enable the remaining options so that you end up with everything working. Notice that some of the options are disabled to prevent us from inadvertently disabling an option that is required by something else. If, for example, you wish to disable Text, then disable Node Reference and User Reference first.

Working with CCK

With all the options enabled, we can now go ahead and create a new content type. Actually, it is possible to create new content types without the use of CCK; it's just that the new content types will look pretty much like the standard content types already available, because there are no really interesting fields to add. Head over to Content types under Content management and select the Add content type tab to bring up the relevant page.

The identification section is pretty straightforward; you can fill in whatever new content settings are appropriate. Of special interest is the Submission form settings section below it, which allows you to decide whether the default Title and Body fields should be changed, or even retained at all in the case of the Body field. For the Endangered Species content type, it doesn't really make sense to have a species Title; a Common name makes more sense. Leaving the Body field label blank will cause this field to be omitted completely, in the event that it is not suitable for the type of content you have in mind.

You may have noticed that there are several additional tabs to the right of the Add content type tab that provide additional functionality. These options are discussed a little later in this section. For now, fill out the Name, Type, and Description fields and click Save content type to add the new type to the default list.

We are now ready to begin customizing this new type using whatever options are available, depending on what is or is not enabled. It is possible to customize any type available in Drupal, including default ones like Blog entry or Poll, but to begin with, it is best to leave these alone. To begin working on the new content type, click on edit in the Endangered Species row. We can now look at the various aspects of working with content types, beginning with…

Adding Fields

Select the Add field tab to bring up the relevant page. This dialog allows you to specify the new field's machine-readable name and then select what type of input it is going to be.
Presently, only the Create new field section is displayed on this page, because we have yet to add new fields. Once there is at least one custom field available, this page will have an additional section allowing existing fields to be added directly from any content type (you can come back here once there are a few saved fields). Regardless, the Create new field list presently comprises the following options:

Node Reference – Allows the poster to reference another node using its ID value
Integer, Decimal, Float – Allows posters to store numbers in various formats
Text – Allows posters to enter content
User Reference – Allows posters to reference other users

Remember that this list is subject to change, depending on whether you disable various components of the default package, for example, Node Reference or User Reference, or include additional modules that add field types such as Date or Fivestar. Each value type comes with a set of options for how that data should be entered. Looking at the Integer type, we can see that users can be prompted for an integer with a Text Field, Select list, Check boxes, and radio buttons—in this case, the Select list is going to be used. Be careful about how information is stored—it is important to be efficient. For example, don't store information as text when there is only a certain number of options available; instead, store the options as numbers and provide the right input type to display them appropriately. To demonstrate this point, consider that at the moment, the numbers_in_wild field is set as an integer with the Select list input type. We are not going to provide a select list of every possible integer, but we are going to represent a range of numbers with an integer. For example, the value 1 will correspond to the range 1-10, 2 will correspond to 11-100, and so on. With the new field created, the configuration page for this field (click on configure in the Operations column of the Manage fields page) now displays the current settings available. To begin with, the options in the Endangered Species settings are not of much interest, as we have not specified what data this field will hold. To do this, scroll down the page to the Global settings section. From here, you can decide how the data will be presented to the user and whether or not the field itself will be compulsory. Along with the Allowed values list used to input key-value pairs, there are a few other settings that may be of use, depending on what data the field should capture. Minimum and Maximum values along with Suffix and Prefix values allow for some minor input validation, as well as some useful display properties like currency denominations or physical units of measurement.
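To make the numbers_in_wild example concrete, the Allowed values list for that field could contain key-value pairs like the following, one pair per line in Drupal's key|label format. The first two pairs come straight from the ranges described above; the remaining ranges are assumptions added purely for illustration:

1|1-10
2|11-100
3|101-1000
4|More than 1000

With this in place, the Select list input type displays the human-readable labels while only the small integer key is stored in the database, which is exactly the storage efficiency the preceding paragraph recommends.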

Programmatically Creating SSRS Report in Microsoft SQL Server 2008

Packt
09 Oct 2009
4 min read
Introduction

In order to design an MS SQL Server Reporting Services report programmatically, you need to understand what goes into a report. We will start with a simple report shown in the next figure: The above tabular report gets its data from the SQL Server database TestNorthwind using the query shown below:

Select EmployeeID, LastName, FirstName, City, Country from Employees

A report is based completely on a Report Definition file, a file in XML format. The file consists of information about the data connection, the datasource in which a dataset is defined, and the layout information together with the data bindings to the report. In the following, we will be referring to the Report Server file called RDLGenSimple.rdl. This is a file written in Report Definition Language in XML syntax. The next figure shows this file opened as an XML file with the significant nodes collapsed. Note the namespace references. The significant items are the following:

The XML processing instructions
The root element of the report
Collapsed and contained in the root element are the DataSources and DataSets
Contained in the Body are the ReportItems
This is followed by the Page, containing the PageHeader and PageFooter items

In order to generate an RDL file of the above type, the XMLTextWriter class will be used in Visual Studio 2008. In some of the hands-on exercises you have seen how to connect to the SQL Server programmatically as well as how to retrieve data using the ADO.NET objects. This is precisely what you will be doing in this hands-on exercise.

The XMLTextWriter Class

In order to review the properties of the XMLTextWriter you need to add a reference to the project (or web site) indicating this item. This is carried out by right-clicking the Project (or Website) | Add Reference… and then choosing SYSTEM.XML (http://msdn.microsoft.com/en-us/library/system.xml.aspx) in the Add Reference window. After adding the reference, the Object Browser can be used to look at the details of this class as shown in the next figure. You can access this from View | Object Browser, or by pressing the F2 key with your VS 2008 IDE open. A formal description of this can be found at the bottom of the next figure. The XMLTextWriter takes care of all the elements found in the XML DOM model (see for example, http://www.devarticles.com/c/a/XML/Roaming-through-XMLDOM-An-AJAX-Prerequisite).

Hands-on exercise: Generating a Report Definition Language file using Visual Studio 2008

In this hands-on exercise, you will be generating a server report that will display the report shown in the first figure. The coding you will be using is adapted from this article (http://technet.microsoft.com/en-us/library/ms167274.aspx) available at Microsoft TechNet (http://technet.microsoft.com/en-us/sqlserver/default.aspx).

Follow on

In this section, you will create a project and add a reference. You add code to the page that is executed by the button click events. The code is scripted and is not generated by any tool.

Create project and add reference

You will create a Visual Studio 2008 Windows Forms Application and add controls to create a simple user interface for testing the code. Create a Windows Forms Application project in Visual Studio 2008 from File | New | Project… by providing a name. Herein, it is called RDLGen2. Drag-and-drop two labels, three buttons and two text boxes onto the form as shown: When the Test Connection button (Button1 in the code) is clicked, a connection to the TestNorthwind database will be made.
When the button is clicked, the code in the procedure Connection() is executed. If there are any errors, they will show up in the label at the bottom. When the Get list of Fields button (Button2 in the code) is clicked, the query will be run against the database and the retrieved field list will be shown in the adjoining textbox. The Generate a RDL file button (Button3 in the code) creates a report file at the location indicated in the code.
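To give a feel for what Button3's code does, here is a minimal, hedged C# sketch of using the XMLTextWriter to emit the skeleton of an RDL file. It is reduced to a console-style method rather than a Windows Forms handler, the file path is a placeholder, and the namespace shown is the 2005 RDL namespace used by the TechNet sample, which may differ for your server version; the referenced article fills in the DataSource, DataSet, and layout details:

using System.Text;
using System.Xml;

class RdlSkeleton
{
    static void Main()
    {
        // Writes only the outer skeleton of an RDL file; path is a placeholder.
        XmlTextWriter writer = new XmlTextWriter("RDLGenSimple.rdl", Encoding.UTF8);
        writer.Formatting = Formatting.Indented;
        writer.WriteStartDocument();              // the XML processing instruction
        writer.WriteStartElement("Report");       // the root element
        writer.WriteAttributeString("xmlns",
            "http://schemas.microsoft.com/sqlserver/reporting/2005/01/reportdefinition");
        writer.WriteStartElement("DataSources");  // connection information goes here
        writer.WriteEndElement();
        writer.WriteStartElement("DataSets");     // query and field definitions go here
        writer.WriteEndElement();
        writer.WriteStartElement("Body");         // ReportItems (the table) go here
        writer.WriteEndElement();
        writer.WriteStartElement("Page");         // PageHeader and PageFooter go here
        writer.WriteEndElement();
        writer.WriteEndElement();                 // close Report
        writer.WriteEndDocument();
        writer.Close();
    }
}

The nesting of WriteStartElement/WriteEndElement calls mirrors the collapsed node structure listed in the Introduction above.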

Looking into Apache Axis2

Packt
09 Oct 2009
11 min read
(For more resources on Axis2, see here.)

Axis2 Architecture

Axis2 is built upon a modular architecture that consists of core modules and non-core modules. The core engine is said to be a pure SOAP processing engine (there is not any JAX-RPC concept burnt into the core). Every message coming into the system has to be transformed into a SOAP message before it is handed over to the core engine. An incoming message can either be a SOAP message or a non-SOAP message (REST or JSON, for example). But at the transport level, it will be converted into a SOAP message. When Axis2 was designed, the following key rules were incorporated into the architecture. These rules were mainly applied to achieve a highly flexible and extensible SOAP processing engine:

Separation of logic and state to provide a stateless processing mechanism. (This is because Web Services are stateless.)
A single information model in order to enable the system to suspend and resume.
Ability to extend support to newer Web Service specifications with minimal changes made to the core architecture.

The figure below shows all the key components in the Axis2 architecture (including core components as well as non-core components).

Core Modules

XML Processing Model: Managing or processing the SOAP message is the most difficult part of the execution of a message. The efficiency of message processing is the single most important factor that decides the performance of the entire system. Axis1 uses DOM as its message representation mechanism. However, Axis2 introduced a fresh XML InfoSet-based representation for SOAP messages. It is known as AXIOM (AXIs Object Model). AXIOM encapsulates the complexities of efficient XML processing within the implementation.

SOAP Processing Model: This model involves the processing of an incoming SOAP message. The model defines the different stages (phases) that the execution will walk through. The user can then extend the processing model in specific places.

Information Model: This keeps both static and dynamic states and has the logic to process them. The information model consists of two hierarchies to keep static and run-time information separate. Service life cycle and service session management are two objectives in the information model.

Deployment Model: The deployment model allows the user to easily deploy the services, configure the transports, and extend the SOAP Processing Model. It also introduces newer deployment mechanisms in order to handle hot deployment, hot updates, and J2EE-style deployment.

Client API: This provides a convenient API for users to interact with Web Services using Axis2. The API consists of two sub-APIs, for average and advanced users. Axis2's default implementation supports all the eight MEPs (Message Exchange Patterns) defined in WSDL 2.0. The API also allows easy extension to support custom MEPs.

Transports: Axis2 defines a transport framework that allows the user to use and expose the same service in multiple transports. The transports fit into specific places in the SOAP processing model. The implementation, by default, provides a few common transports (HTTP, SMTP, JMS, TCP, and so on). However, the user can write or plug in custom transports, if needed.

XML Processing Model

Axis2 is built on a completely new architecture as compared to Axis 1.x. One of the key reasons for introducing Axis2 was to have a better and more efficient XML processing model.
Axis 1.x used DOM as its XML representation mechanism, which required the complete object hierarchy (corresponding to an incoming message) to be kept in memory. This is not a problem for a message of small size, but when it comes to a message of large size, it becomes an issue. To overcome this problem, Axis2 has introduced a new XML representation. AXIOM (AXIs Object Model) forms the basis of the XML representation for every SOAP-based message in Axis2. The advantage of AXIOM over other XML InfoSet representations is that it is based on the PULL parser technique, whereas most others are based on the PUSH parser technique. The main advantage of PULL over PUSH is that in the PULL technique, the invoker has full control over the parser and can request the next event and act upon it, whereas in the case of PUSH, the parser has limited control and delegates most of the functionality to handlers that respond to the events that are fired during its processing of the document. Since AXIOM is based on the PULL parser technique, it has an on-demand building capability, whereby it will build an object model only if it is asked to do so. If required, one can directly access the underlying PULL parser from AXIOM and use that, rather than build an OM (Object Model).
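As a hedged illustration of this deferred building, the following sketch reads an XML payload but only builds the object model as elements are actually requested. The file name is a placeholder, and the class names are those found in the Axiom 1.2.x releases contemporary with this article:

import java.io.FileInputStream;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamReader;
import org.apache.axiom.om.OMElement;
import org.apache.axiom.om.impl.builder.StAXOMBuilder;

public class AxiomSketch {
    public static void main(String[] args) throws Exception {
        // Wrap a StAX pull parser; nothing is parsed yet.
        XMLStreamReader parser = XMLInputFactory.newInstance()
                .createXMLStreamReader(new FileInputStream("message.xml"));
        StAXOMBuilder builder = new StAXOMBuilder(parser);
        // Only the root element is touched here; child elements are
        // built on demand as the tree is navigated.
        OMElement root = builder.getDocumentElement();
        System.out.println(root.getLocalName());
    }
}

Because the builder owns the pull parser, an application that never navigates into the message body never pays the cost of building that part of the tree.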
A phase is a logical collection of one or more handlers, and sometimes a phase itself acts as a handler. Axis2 introduced the phase concept as an easy way of extending core functionalities. In Axis 1.x, we need to change the global configuration files if we want to add a handler into a handler chain. But Axis2 makes it easier by using the concept of phases and phase rules. Phase rules specify how a given set of handlers, inside a particular phase, are ordered. The figure below illustrates a flow and its phases. If the message has gone through the execution chain without having any problem, then the engine will hand over the message to the message receiver in order to do the business logic invocation, After this, it is up to the message receiver to invoke the service and send the response, if necessary. The figure below shows how the Message Receiver fits into the execution chain. The two pipes do not differentiate between the server and the client. The SOAP processing model handles the complexity and provides two abstract pipes to the user. The different areas or the stages of the pipes are named 'phases' in Axis2. A handler always runs inside a phase, and the phase provides a mechanism to specify the ordering of handlers. Both pipes have built-in phases, and both define the areas for User Phases, which can be defined by the user, as well. Information Model As  shown  in  the figure below, the information model consists of two hierarchies: Description hierarchy and Context hierarchy. The Description hierarchy represents the static data that may come from different deployment descriptors. If hot deployment is turned off, then the description hierarchy is not likely to change. If hot deployment is turned on, then we can deploy the service while the system is up and running. In this case, the description hierarchy is updated with the corresponding data of the service. The context hierarchy keeps run-time data. Unlike the description hierarchy, the context hierarchy keeps on changing when the server starts receiving messages. These two hierarchies create a model that provides the ability to search for key value pairs. When the values are to be searched for at a given level, they are searched while moving up the hierarchy until a match is found. In the resulting model, the lower levels override the values present in the upper levels. For example, when a value has been searched for in the Message Context and is not found, then it would be searched in the Operation Context, and so on. The search is first done up the hierarchy, and if the starting point is a Context then it would search for in the Description hierarchy as well. This allows the user to declare and override values, with the result being a very flexible configuration model. The flexibility could be the Achilles' heel of the system, as the search is expensive, especially for something that does not exist. Deployment Model The previous versions of Axis failed to address the usability factor involved in the deployment of a Web Service. This was due to the fact that Axis 1.x was released mainly to prove the Web Service concepts. Therefore in Axis 1.x, the user has to manually invoke the admin client and update the server classpath. Then, you need to restart the server in order to apply the changes. This burdensome deployment model was a definite barrier for beginners. Axis2 is engineered to overcome this drawback, and provide a flexible, user-friendly, easily configurable deployment model. 
Axis2 deployment introduced a J2EE-like deployment mechanism, wherein the developer can bundle all the class files, library files, resources files, and configuration fi  les together as an archive file, and drop it in a specified location in the file system. The concept of hot deployment and hot update is not a new technical paradigm, particularly for the Web Service platform. But in the case of Apache Axis, it is a new feature. Therefore, when Axis2 was developed, hot deployment features were added to the feature list. Hot deployment : This refers to the capability to deploy services while the system is up and running. In a real time system or in a business environment, the availability of the system is very important. If the processing of the system is slow, even for a moment, then the loss might be substantial and it may affect the viability of the business. In the meanwhile, it is required to add new service to the system. If this can be done without needing to shut down the servers, it will be a great achievement. Axis2 addresses this issue and provides a Web Service hot deployment ability, wherein we need not shut down the system to deploy a new Web Service. All that needs to be done is to drop the required Web Service archive into the services directory in the repository. The deployment model will automatically deploy the service and make it available. Hot update : This refers to the ability to make changes to an existing Web Service without even shutting down the system. This is an essential feature, which is best suited to use in a testing environment. It is not advisable to use hot updates in a real-time system, because a hot update could lead a system into an unknown state. Additionally, there is the possibility of loosening the existing service data of that service. To prevent this, Axis2 comes with the hot update parameter set to FALSE by default.

Creating Your Own Theme - A WordPress Tutorial

Packt
09 Oct 2009
10 min read
WordPress is the most widely used content management system amongst bloggers for many reasons. Not only does it make site management seem like a walk in the park, but it also runs happily on most shared hosting, which means that most users can afford it. It has plug-ins for any occasion and desire and finally, it has themes. For many WordPress users, finding the right theme is a long process that often leads to endless tweaking in the code and stylesheets. However, only a few ever consider learning how to create their own. If you are one of them, this tutorial by Brian Franklin will help you learn how to build and start your own theme.

Template files and DIV tags

The WordPress theme that waits to be created in this tutorial consists of one stylesheet, one functions file, one comments file, and a number of template files. Each template file represents a separate part. Together they form the outline of the website's overall look. However, it is in the stylesheet that web design elements for each template file are decided. DIV tags are used to define, design, and format values of each template file as well as structure its content and elements. The <div> is often described as an invisible box that is used to separate site content in a structural manner. These are later defined in the stylesheet where size, position, form, etc. are assigned. Template files that will be used in this tutorial are:

Index
Header
Footer
Sidebar
Single
Page

And the files that define specific functions for the theme are:

Functions
Comments

DIV tags that will be used in this tutorial are:

<div id="wrapper"></div>
<div id="header"></div>
<div id="container"></div>
<div id="footer"></div>
<div class="post"></div>
<div class="entry"></div>
<div class="navigation"></div>
<div class="sidebar"></div>

Using tags requires proper opening and closing. If any tag is not properly closed (</div>) it may affect the presentation of the entire site. The difference between a DIV id and a class is what they are used for. Unique site features require definition by ID, while values that are used to define several elements are defined as a class. In the stylesheet, a div id is identified with a # and a div class with a . (dot).

Install WordPress

Before you can start building a theme you need to decide how to work. It is recommended to install WordPress on your local computer. This will allow you to save your changes on a local server rather than dealing with remote server access and uploading. Follow the link for further instructions. The other alternative is installing WordPress through your hosting provider. WordPress hosting is a popular plan offered by most hosts. Either way you need to have WordPress installed, remote or local, and create a folder for your theme under the directory /wp-content/themes/yourtheme.

Create the template files

Use Dreamweaver, Notepad, EditPlus or any other text editor of your choice and create the template files listed above, including the functions files. Leave them blank for now and save them as PHP files (.php) in your theme folder.
index.php Open the index.php, add this piece of code and save the file: <?php get_header(); ?><div id="container"> <?php if(have_posts()) : ?><?php while(have_posts()) : the_post(); ?> <div class="post" id="post-<?php the_ID(); ?>"> <h2><a href="<?php the_permalink(); ?>" title="<?php the_title(); ?>"> <?php the_title(); ?></a></h2> <div class="entry"> <?php the_content(); ?> <p class="postmetadata"><?php _e('Filed under&#58;'); ?> <?php the_category(', ') ?> <?php _e('by'); ?> <?php the_author(); ?><br /><?php comments_popup_link('No Comments &#187;', '1 Comment &#187;', '% Comments &#187;'); ?> <?php edit_post_link('Edit', ' &#124; ', ''); ?> </p> </div> </div> <?php endwhile; ?> <div class="navigation"> <?php posts_nav_link(); ?> </div> <?php endif; ?></div><?php get_sidebar(); ?><?php get_footer(); ?> The index-file now contains the code that calls to the header, sidebar and footer template file. But since there is no code to define these template files yet, you will not see them when previewing the site. The index file is the first file the web browser will call when requesting your site. This file defines the site’s frontpage and thus also includes DIV ID tags "container" and DIV classes "post" and "entry". These elements have been included to structure your frontpage content; posts and post entries. If you preview your WordPress site you should see your three latest blog posts, including ‘next’ and ‘previous’ buttons to the remaining ones. To shorten the length of displayed frontpage post, simply log in to the WordPress admin and insert a page break where you want the post to display a Read more link. You will come back to index.php in a moment, but for now you can save and close the file. It is now time to create the remaining template files. header.php Open header.php, add this code and save the file: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html ><head profile="http://gmpg.org/xfn/11"> <title><?php bloginfo('name'); ?><?php wp_title(); ?></title> <meta http-equiv="Content-Type" content="<?php bloginfo('html_type'); ?>; charset=<?php bloginfo('charset'); ?>" /> <meta name="generator" content="WordPress <?php bloginfo('version'); ?>" /> <!-- leave this for stats please --> <link rel="stylesheet" href="<?php bloginfo('stylesheet_url'); ?>" type="text/css" media="screen" /> <link rel="alternate" type="application/rss+xml" title="RSS 2.0" href="<?php bloginfo('rss2_url'); ?>" /> <link rel="alternate" type="text/xml" title="RSS .92" href="<?php bloginfo('rss_url'); ?>" /> <link rel="alternate" type="application/atom+xml" title="Atom 0.3" href="<?php bloginfo('atom_url'); ?>" /> <link rel="pingback" href="<?php bloginfo('pingback_url'); ?>" /> <?php wp_get_archives('type=monthly&format=link'); ?> <?php //comments_popup_script(); // off by default ?> <?php wp_head(); ?></head><body><div id="wrapper"><div id="header"><h1><a href="<?php bloginfo('url'); ?>"><?php bloginfo('name'); ?></a></h1><?php bloginfo('description'); ?></div> The header template file now has the opening tags <html> and <body>. Included in the <html> tag is the <head> tag that holds the meta tags and stylesheet links. The body tag includes two DIV ID tags; wrapper and header that define the boxes and hold the overall position of the site and the header content. 
footer.php Open footer.php, add the code below and save the file: <div id="footer"> <p>Copyright 2007 <a href="<?php bloginfo('url'); ?>"><?php bloginfo('name'); ?></a></p></div></div></body></html> The footer is a template that defines the bottom part of the website. The footer for this theme now holds a Copyright announcement and a php code that adds the name of the blog as a permalink. As it is the last template file to be called for a site it also closes the body and html tag. sidebar.php Open sidebar.php, add the code below and save the file: <div class="sidebar"><ul><?php if ( function_exists('dynamic_sidebar') && dynamic_sidebar() ) : else : ?> <?php wp_list_pages('depth=3&title_li=<h2>Pages</h2>'); ?> <li><h2><?php _e('Categories'); ?></h2> <ul> <?php wp_list_cats('sort_column=name&optioncount=1&hierarchical=0'); ?> </ul> </li> <li><h2><?php _e('Archives'); ?></h2> <ul> <?php wp_get_archives('type=monthly'); ?> </ul> </li> <?php get_links_list(); ?> <?php endif; ?></ul></div> The sidebar template is now defined and includes the site’s pages, categories, archive and blogroll. Study the code and see the changes in the web browser. Regarding the DIV class, it is left to be designed in the stylesheet. single.php Open single.php, add the code below and save the file: <?php get_header(); ?><div id="container"> <?php if(have_posts()) : ?><?php while(have_posts()) : the_post(); ?> <div class="post" id="post-<?php the_ID(); ?>"> <h2><a href="<?php the_permalink(); ?>" title="<?php the_title(); ?>"> <?php the_title(); ?></a></h2> <div class="entry"> <?php the_content(); ?> <p class="postmetadata"><?php _e('Filed under&#58;'); ?> <?php the_category(', ') ?> <?php _e('by'); ?> <?php the_author(); ?><br /><?php comments_popup_link('No Comments &#187;', '1 Comment &#187;', '% Comments &#187;'); ?> <?php edit_post_link('Edit', ' &#124; ', ''); ?> </p> </div> <div class="comments-template"><?php comments_template(); ?></div> </div> <?php endwhile; ?> <div class="navigation"> <?php previous_post_link('%link') ?> <?php next_post_link(' %link') ?> </div> <?php endif; ?></div><?php get_sidebar(); ?><?php get_footer(); ?> The single.php template file specifically defines the elements on the single post page that is different from both the frontpage post listing and pages. The code above is basically a copy/paste from the index-file, only with minor changes to the Next and Previous links. 
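The tutorial has not yet shown the stylesheet where all of these DIVs get their dimensions. As a hedged starting point, a minimal style.css might look like the following. The header comment block is what WordPress reads to recognize the theme, while every name, size, and measurement below it is an assumption to be adjusted to taste:

/*
Theme Name: My First Theme
Description: A starter theme built in this tutorial
Author: Your Name
Version: 1.0
*/
body { margin: 0; font-family: Arial, Helvetica, sans-serif; }
#wrapper { width: 960px; margin: 0 auto; }      /* centers the whole site */
#header { padding: 20px 0; }
#container { float: left; width: 700px; }       /* main content column */
.sidebar { float: right; width: 220px; }
#footer { clear: both; padding: 10px 0; text-align: center; }
.post { margin-bottom: 30px; }
.entry { line-height: 1.5; }
.navigation { margin: 20px 0; }

Save this file as style.css in the same theme folder as the template files; the header template created above already links to it via bloginfo('stylesheet_url').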

Search Engine Optimization in WordPress - Part 1

Packt
09 Oct 2009
8 min read
Search Engine Optimization

Having put so much time and effort into making your blog look pretty and creating fabulous content, you would want people to find it. The most common way for this to happen is via search engines. For many people, a typical web browsing session begins with a visit to their favorite search engine, so you want to be sure your blog appears high up in the rankings. Unfortunately, having a great-looking blog with lots of interesting posts isn't enough. To get a good place in the rankings takes time, perseverance, and no small amount of knowledge. The good news is that search engines love blogs. This fact, coupled with the techniques covered in this article, will go a long way to making your blog as findable as possible. Search engine optimization (SEO) is the art and science of getting your blog noticed by the search engines and ranked as high as possible. This article outlines the most important SEO strategies and how to apply them. We'll also discuss how to submit your blog to the search engines as well as look at some SEO software and tools, which could save you time and improve your results.

The Principles of SEO

SEO is a huge subject. There are thousands of professionals all over the world who earn their living by providing SEO services to website owners. The good SEO pros spend huge amounts of time and resources learning the skills of effective optimization. This goes to show that you could easily spend your entire life boning up on SEO—there's so much to learn. Obviously, you won't have anything like this amount of time to spend on your own SEO education. However, you can still do a lot to improve your blog's performance with the major search engines. The option to bring in a professional to really rocket through the rankings is there, if your marketing budget allows. If you do decide to hire a professional, make sure you choose a reputable one who does not use unscrupulous tactics, which could harm you more than help you. The good news is that WordPress has been made with SEO in mind. The software comes with many built-in SEO features. For example, you don't need to worry too much about the validity of the XHTML on your blog. The WordPress developers have ensured their code is valid. This is a big help as search engines will rank sites with valid code higher than those that have been poorly put together. There is plenty of other stuff going on behind the scenes in your WordPress installation that will aid your search engine findability—the WordPress developers have been very thoughtful. We'll be considering the aspects of SEO that are your responsibility. But first, a quick '101' on how search engines work.

How Search Engines Find Stuff

Search engines use special programs called robots that automatically crawl the Web and send back information about the web pages to the search engines' servers. They navigate the Web by following all the links they find. This is how a search engine collects the data for its index. The index is a huge database of entries cross-referenced between keywords and relevant website pages. The search engines use special algorithms to determine the rank of the web pages held in their index. When a web user enters a search query, the engine returns a list of results. The order of the search results depends on the rank of the pages, as determined by the algorithm. These algorithms are closely guarded secrets and the search engine companies are constantly updating them.
The aim of the updates is to improve the relevancy of the search results. Because of the secrecy of the algorithms and the constant changes, it is very difficult for website owners to figure out the exact criteria used to rank pages. This prevents website owners from unfairly influencing the search rankings. However, by subscribing to the blogs or feeds of the major search engines, and using tools such as Google's Webmaster tools (more on this later), you can keep abreast of major changes. SEO professionals spend their lives trying to second-guess the search algorithms, but the search engine companies usually remain one step ahead. It's a game of cat and mouse, with the odds strongly skewed in favor of the search engines—they make the rules and can change them whenever they want. Despite the ever-changing algorithms, there are certain principles of SEO that stay constant. These are what we will look at in this article. For the purposes of this article, we will be concentrating on techniques for the 'traditional' search engines such as Google, MSN, Yahoo, and Ask. We will look at some of the blog-specific search engines, such as Technorati, in the next article.

Keywords

Keywords are the search terms that people type into a search engine when they are looking for something on the Web. They can be single words or several words that make up a phrase. It's essential to know the keywords being used by people who are looking for the type of content on your blog. You then need to ensure that you're using those keywords correctly. Let's look at a few strategies for finding and using keywords effectively.

Choosing Your Keywords

You should spend some time building up a list of your blog's keywords. The first step is to be clear in your mind about what your blog's content is about. What are the main themes you are writing about? Once you are clear about the main theme(s) of your blog, try a quick brainstorming exercise. You can do this alone or enlist the help of colleagues and friends. Put yourself in the shoes of someone looking for the kind of information you publish on your blog. What words or phrases are they likely to type into a search engine? We could run this exercise for ChilliGuru.com. People looking for the kind of content on ChilliGuru may use the following keywords:

Chilli (UK spelling)
Chili (US spelling)
Spicy food
Growing chilies
Chili recipe
Mexican food
Indian food
Thai food
Birds eye chilies
Jalapeno
Scotch bonnet
Cook chilies

OK, that's just a small handful of the more obvious keywords that took me about 60 seconds to come up with. If I spent longer, I'm sure I could come up with a list of 50 or more words and phrases. The more people you enlist into your keyword brainstorming, the more you are likely to come up with. Once you have a fairly good list, you can use keyword software to help you find even more. There are literally hundreds of keyword tools out there. Some are free, some are paid for, and they have a range of features. Later in this article, in the section on search engine submissions, we will introduce some software called Web CEO, which includes a good keyword tool. In the meantime, you can start with some of the tools provided by the search engines themselves. For example, Google provides a keyword selector tool for its advertising (AdWords) customers, but you can use it to research your keywords. Go to https://adwords.google.com/select/KeywordToolExternal and enter a keyword or phrase into the search box. Keep the Use synonyms box checked.
Enter the security code and click Get keyword ideas and you will be presented with a list of related keywords: OK, so it's a pretty long list; not all of the keywords will be relevant to your blog. For example, for ChilliGuru we could ignore 'red hot chilli peppers lyrics' and any other references to the band, The Red Hot Chilli Peppers. The preceding screen shot shows just the first few suggestions for one keyword, 'chilli' (the whole list runs into dozens). So you can see that if you were to use this tool for all the keywords in your original brainstorming list, you could easily end up with a very long list. This might seem like a good idea, but when we discuss using your keywords, shortly, you'll see that you don't actually want too many. When you're working on your list, try to be selective and keep the list manageable. Use your judgment to pick the important keywords and also look at the Avg Search Volume column in the Google list. This tells you how often each keyword is actually being used. Focus on the most popular ones. There's no point in my giving you a recommended number of keywords for your list, as this will depend on the type of content in your blog. If your blog covers a fairly narrow subject area, then you won't need as many keywords as if your blog covers a wide subject or even a range of subjects. Once you've read the next section on using keywords, you'll also have a better idea of how many you need.

Search Engine Optimization in WordPress - Part 2

Packt
09 Oct 2009
8 min read
Inbound Links

Having plenty of good quality inbound links to your blog will improve your ranking in the search engines. Google started life as a student project to rank the importance of websites based on the number of incoming links; link popularity is still at the heart of Google's ranking process. But for many people link building seems like a daunting task. How do you get other people to link to you? It's actually not as difficult as it first seems—once you get into it, you'll see there are plenty of strategies to use. The point is to stick at it and treat link building as an integral part of your blogging routine. You can check how many inbound links Google has found for your blog by using the link: command. Enter link:http://www.packtpub.com into the Google search box to see all the inbound links for the Packt website. You can do the same for your blog. There is a more 'organic' technique that we'll discuss here. It's often referred to by SEO pros as link baiting. It's basically creating content that other bloggers and webmasters just can't resist linking to. Obviously, you should always be trying to create interesting and exciting content, but every now and then it pays to come up with a killer post that is intended purely to attract links. There are several methods to achieve this. Here are a few suggestions to get you thinking:

Write something controversial that other people will just have to disagree with. Be careful not to upset anyone and don't be offensive, but come up with something that goes against the grain and makes your opinion on an issue stand out.
Disagree with a renowned expert. A post title like Seth Godin Is Plain Wrong About XYZ, backed up with a reasoned argument, could attract plenty of attention and encourage back links to the post.
Provide a really useful resource. This could be something like a 'Top 10' list or a how-to guide.
Run a contest, competition, or some other event that is likely to attract attention.
Give away a useful 'freebie'. For example, a PDF e-book, a piece of software (that you own the rights to), or a free sample of one of your products.

These are the kind of posts that are likely to attract attention and links back to your blog. Try brainstorming a few ideas along these lines and you'll be surprised how many you come up with. As well as link baiting you can also simply ask other people to link to you. This is a fairly straightforward approach, but you need to be careful not to come across as a spammer. It may be worth restricting this to people you know or people who regularly leave comments on your blog. Some people may be annoyed about receiving an email out of the blue requesting a back link, so exercise some discretion here. Definitely don't send out a broadcast email to lots of addresses requesting links. Don't be tempted to buy inbound links. There are many unscrupulous dealers on the Web who will sell you quantities of inbound links. Google and the other search engines regard this practice as 'cheating' and severely frown upon anyone involved. If you buy links, you will be banned from the search engines.

Robots.txt Optimization

A robots.txt file is read by search engine robots when they crawl your blog. You can use it to tell them which pages should be indexed. There are a couple of reasons why using a robots.txt file is good for SEO. First, Google and other search engines recommend you use one and it's generally a good idea to do what they say. Second, it can help you to cut down on duplicated content.
Search engines do not like duplicated content (that is, the same content appearing at two different URLs within a website) because they suspect it might be spam. One minor drawback with WordPress is that it can create a lot of duplicate content. For example, http://blog.chilliguru.com/category/recipes points to exactly the same content as http://blog.chilliguru.com/recipes. Also, the same content is repeated on different pages. For example, most of the posts listed at http://blog.chilliguru.com/category/recipes are also listed on http://blog.chilliguru.com/tag/recipe. We can tell the search engines to ignore any duplicate content by giving instructions in the robots.txt file. Here is the robots.txt file for ChilliGuru:

Sitemap: http://blog.chilliguru.com/sitemap.xml
User-agent: *
Disallow: /cgi-bin/
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /wp-content/cache/
Disallow: /wp-content/themes/
Disallow: /wp-
Disallow: /category/
Disallow: /comments/
Disallow: /tag/
Disallow: /author/
Disallow: /trackback/

The first line is a big signpost to your Google Sitemap. User-agent: * means that the file is intended for all robots to read. It is possible to target the different search engine robots with specific instructions; for example, User-agent: Googlebot would just apply to the Google robot. However, you don't need to do this with your blog. The lines that begin with Disallow: tell the robots not to visit those files and folders. This is how you tell them to ignore certain parts of your site. For example, we don't need any of the content in the wp- directories to be indexed because it's mainly just PHP code. The one exception is /wp-content/uploads/. We haven't included this one in the robots.txt file, because we do want the search engines to crawl its contents. There may be images in there that should be indexed. Disallow: /category/ should cure the duplicate content problem we outlined above. You can use a simple text editor (for example, Notepad or Crimson Editor) to create your robots.txt file (you can go to http://blog.chilliguru.com/robots.txt and use that file as a starting point). Then it's simply a matter of using your FTP client to upload it to the root directory of your blog.

Using Excerpts on the Home Page

Another way to cut down on duplicated content is to display just excerpts of the posts on your home page instead of showing them in full. Obviously, each post is displayed in full on its own single post page, so having them in full on the home page may be regarded as duplicate content by the search engines. In fact, it's not just the home page; as the posts slip down to pages 2, 3, 4, and so on, they are still displayed in full. Using excerpts is not only a great SEO strategy; it is also becoming popular amongst bloggers in its own right. Some people prefer it as it makes the home page more concise and there is less vertical scrolling required to get an overview of all the posts. It makes it easier for readers to scan the posts and pick the ones they are really interested in. Also, forcing readers to click through to the single post page means they see the comments in full for each post and so may be more inclined to make a contribution to the discussion. It should still be OK to display the most recent post in full as it can take up to a week for a new post to be indexed by the search engines. By then, the post will have moved down the list and become excerpted, thus removing the risk of duplicate content. I'm noticing home page excerpts more and more.
I just did a very quick (and unscientific) survey of the current top-10 blogs on technorati.com and seven of them used excerpts on their home page (these are big names like Gizmodo, The Huffington Post, Lifehacker, and so on). However, there will always be some traditionalists who prefer to see the full posts on the home page. You need to balance the SEO and usability benefits against the possibility of alienating some of your readers. Personally, I think the benefits of using excerpts outweigh any drawbacks, so we'll go ahead and set them up on ChilliGuru. You could go through and edit each post, adding a <!--more--> tag where appropriate. However, there is a plugin we can use that will do this automatically. It's called Excerpt Editor by Andrew Ozz. Go to http://wordpress.org/extend/plugins/excerpt-editor/, download, install, and activate it in the usual way on your local development server. Select the plugin (it's under the Manage tab). First, select Auto-Generate from the menu and enter the following settings: Click Save Auto-Generate Options. Now select Replace Posts from the menu and enter the following settings: Click Save the Replace Posts options and view your home page. You will see that the latest post is shown in full but all the others have been excerpted and now have a Continue reading link. The same thing has been applied on all the Archive pages (Category, Author, Day, Month, and Year). The default settings in the plugin mean that the first 70 words are used in the excerpts. On the Auto-Generate page of the plugin, you can change the number of words to be included in the excerpts. Or, if you don't like having the post cut off in the middle of a sentence, you can use the Editor to select each post and then manually set the content you want to appear in the excerpt. Having set the Auto-Generate options, every new post you publish will be excerpted accordingly. Simply deactivate the plugin if you ever want to revert to full posts.
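For theme authors who would rather not rely on a plugin at all, a common alternative (not the approach this article takes) is to edit the theme's index.php and swap the_content() for WordPress's built-in the_excerpt() template tag, which outputs a trimmed, plain-text version of each post:

<div class="entry">
    <?php the_excerpt(); // outputs a shortened version of the post instead of the full content ?>
</div>

The trade-off is that the_excerpt() strips markup and applies a fixed word limit unless a manual excerpt has been written for the post, whereas the plugin approach above keeps the full post for the most recent entry and remains reversible by simple deactivation.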

File Sharing in Grails

Packt
09 Oct 2009
7 min read
File domain object

The first step, as usual, is to create a domain object to represent a file. We want to store the following information:

The name of the file
The data of the file
A description of the file
The file size
Who uploaded the file
The date the file was created and last modified

Create the File domain class as follows: package app class File { private static final int TEN_MEG_IN_BYTES = 1024*1024*10 byte[] data String name String description int size String extension User user Date dateCreated Date lastUpdated static constraints = { data( nullable: false, minSize: 1, maxSize: TEN_MEG_IN_BYTES ) name( nullable: false, blank: false ) description( nullable: false, blank: false ) size( nullable: false ) extension( nullable: false ) user( nullable: false ) } } There should be nothing unfamiliar here. You have created a new domain class to represent a file. The file data will be stored in the data property. The other properties of the file are all metadata. Defining the user property creates the association to a user object. The constraints are then defined to make sure that all of the information that is needed for a file has been supplied. There is one important side effect of setting the maxSize constraint on the data property. GORM will use this value as a hint when generating the database schema for the domain objects. For example, if this value is not specified, the underlying database may end up choosing a data type to store the binary file data that is too small for the size of files that you wish to persist.

FileController

Now, we will need a controller. Let's name it FileController. Our controller will allow users to perform the following actions:

Go to a page that allows users to select a file
Submit a file to the server
Download the file

Create the FileController Groovy class, alongside our existing MessageController, as shown below: package app class FileController { def create = { return [ file: new File() ] } def save = { } def download = { } } In the create action, we are simply constructing a new file instance that can be used as the backing object when rendering the file-upload form. We will fill in the implementation details of the save and download actions as and when we need them.

File Upload GSP

The next step is to create a GSP to render the form that allows users to upload a file to the application. Create the file grails-app/views/file/create.gsp and enter the following markup: <%@ page contentType="text/html;charset=UTF-8" %> <html> <head> <meta http-equiv="Content-Type" content= "text/html; charset=UTF-8"/> <meta name="layout" content="main"/> <title>Post File</title> </head> <body> <g:hasErrors bean="${file}"> <div class="validationerror"> <g:renderErrors bean="${file}" as="list"/> </div> </g:hasErrors> <g:form action="save" method="post" enctype="multipart/form-data" class="inputform"> <fieldset> <dl> <dt>Title <span class="requiredfield">required</span></dt> <dd><g:textField name="name" value="${file.name}" size="35" class="largeinput"/></dd> <dt>File <span class="requiredfield">required</span></dt> <dd><input type="file" name="data"/></dd> <dt>File description <span class="requiredfield">required</span></dt> <dd><g:textArea name="description" value="${file.description}" cols="40" rows="10"/></dd> </dl> </fieldset> <g:submitButton name="Save" value="Save"/> | <g:link controller="home">Cancel</g:link> </g:form> </body> </html> This GSP looks very similar to the create.gsp file for messages.
Obviously, it has different fields that correspond to fields on the File domain class. The important difference is that this form tells the browser it will be submitting the file data: <g:form action="save" method="post" enctype="multipart/form-data"> Run the application, go to http://localhost:8080/teamwork/file/create and sign in with the username flancelot and the password password. You should see the window as shown in the following screenshot:

Saving the file

Now that our users can select files to upload, we need to implement the save action so that these files can be persisted and can be viewed by other users.

Grails file upload

Grails provides two methods of handling file upload, and we are going to use both of them. The two approaches are:

Using data binding
Using the Spring MultipartFile interface

Data binding makes receiving the data of the file very simple, but is quite limited if used on its own. There is no way of binding anything other than the data of the file, such as the filename or the size of the file, to our domain object. By also providing access to the Spring MultipartFile interface, Grails allows us to programmatically access any other information we might want from the file.

The save action

Update the FileController class and implement the save action as follows: package app import org.springframework.web.multipart.MultipartFile class FileController { def userService def create = { return [ file: new File() ] } def save = { def file = new File( params ) file.user = userService.getAuthenticatedUser() MultipartFile f = request.getFile( 'data' ) file.size = f.getSize() / 1024 file.extension = extractExtension( f ) if(file.save()) { flash.userMessage = "File [${file.name}] has been uploaded." redirect(controller: 'home') } else { render(view: 'create', model: [file: file]) } } def extractExtension( MultipartFile file ) { String filename = file.getOriginalFilename() return filename.substring(filename.lastIndexOf( "." ) + 1 ) } def download = { } } Apart from the implementation of the save action, we have had to import the Spring MultipartFile class and also inject the userService. The first highlighted line within the save action performs the binding of request parameters to the File domain object. The usual binding will take place; that is, the name and the description properties of the File object will be populated from the request. In addition, since we have a property on our domain object that is an array of bytes, the contents of the file object in the request will also be bound into our File object. A quick review of our code shows that we have the following property on the File class: byte[] data Also the create.gsp defines the file input field with the same name: <dd><input type="file" name="data" /></dd> Grails is also capable of binding the contents of a file to a String property. In this case, we could just declare the data property in our File class as a String, and Grails would bind the file contents as a String. The next line of interest occurs when we fetch the MultipartFile off the request by using the getFile method. We simply specify the request parameter that contains the file data and Grails does the rest. With an instance of MultipartFile we can access the file size and the original file name to extract the file extension. Once we have finished populating our File object, we can call the save method and GORM will manage the persistence of the file object and the file data to the database.
Validation messages

The last thing we need to remember to add is the validation messages that will be displayed if the users don't enter all the data that is needed to save a file. Add the following to grails-app/i18n/messages.properties:

file.name.blank=You must give the file a name
file.description.blank=The file must have a description
file.data.minSize.notmet=No file has been uploaded
file.data.maxSize.exceeded=The file is too large. The maximum file size is 10MB
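The download action was left empty earlier. As a hedged sketch of where it is heading (the property names follow the File class above, but the exact content type and headers are assumptions for illustration), it might eventually look something like this:

def download = {
    def file = File.get( params.id )                  // load the persisted file by id
    response.setContentType( "application/octet-stream" )
    response.setHeader( "Content-disposition",
            "attachment; filename=${file.name}.${file.extension}" )
    response.outputStream << file.data                // stream the bytes back to the browser
}

Because the data property holds the raw bytes, streaming it straight to the response output stream is all that is needed to hand the file back to the user.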

Seam Data Validation

Packt
09 Oct 2009
8 min read
Data validation

In order to perform consistent data validation, we would ideally want to perform all data validation within our data model. Keeping all of the validation code in one place makes it easier to keep it up-to-date if we ever change our minds about allowable data values. Seam makes extensive use of the Hibernate validation tools to perform validation of our domain model. The Hibernate validation tools grew from the Hibernate project (http://www.hibernate.org) to allow the validation of entities before they are persisted to the database. To use the Hibernate validation tools in an application, we need to add hibernate-validator.jar into the application's class path, after which we can use annotations to define the validation that we want to use for our data model. Let's look at a few validations that we can add to our sample Seam Calculator application. In order to implement data validation with Seam, we need to apply annotations either to the member variables in a class or to the getters of the member variables. It's good practice to always apply these annotations to the same place in a class. Hence, throughout this article, we will always apply our annotations to the getter methods within classes. In our sample application, we are allowing numeric values to be entered via edit boxes on a JSF form. To perform data validation against these inputs, there are a few annotations that can help us:

@Min – Allows a minimum value for a numeric variable to be specified. An error message to be displayed if the variable's value is less than the specified minimum can also be specified. The message parameter is optional. If it is not specified, then a sensible error message will be generated (similar to must be greater than or equal to ...). Example: @Min(value=0, message="...")

@Max – Allows a maximum value for a numeric variable to be specified. An error message to be displayed if the variable's value is greater than the specified maximum can also be specified. The message parameter is optional. If it is not specified, then a sensible error message will be generated (similar to must be less than or equal to ...). Example: @Max(value=100, message="...")

@Range – Allows a numeric range (that is, both minimum and maximum values) to be specified for a variable. An error message to be displayed if the variable's value is outside the specified range can also be specified. The message parameter is optional. If it is not specified, then a sensible error message will be generated (similar to must be between ... and ...). Example: @Range(min=0, max=10, message="...")

At this point, you may be wondering why we need to have an @Range validator when, by combining the @Min and @Max validators, we can get a similar effect. If you want a different error message to be displayed when a variable is set above its maximum value as compared to the error message that is displayed when it is set below its minimum value, then the @Min and @Max annotations should be used. If you are happy with the same error message being displayed when a variable is set outside its minimum or maximum values, then the @Range validator should be used. Effectively, the @Min and @Max validators provide a finer level of error message control than the @Range validator.
The following code sample shows how these annotations can be applied to our sample application to add basic data validation to the user inputs.

    package com.davidsalter.seamcalculator;

    import java.io.Serializable;

    import org.jboss.seam.annotations.Name;
    import org.jboss.seam.faces.FacesMessages;

    import org.hibernate.validator.Max;
    import org.hibernate.validator.Min;
    import org.hibernate.validator.Range;

    @Name("calculator")
    public class Calculator implements Serializable {

        private double value1;
        private double value2;
        private double answer;

        @Min(value=0)
        @Max(value=100)
        public double getValue1() {
            return value1;
        }

        public void setValue1(double value1) {
            this.value1 = value1;
        }

        @Range(min=0, max=100)
        public double getValue2() {
            return value2;
        }

        public void setValue2(double value2) {
            this.value2 = value2;
        }

        public double getAnswer() {
            return answer;
        }
        ...
    }

Displaying errors to the user

In the previous section, we saw how to add data validation to our source code to stop invalid data from being entered into our domain model. Now that we have this level of data validation, we need to provide feedback to inform the user of any invalid data they have entered.

JSF applications have the concept of messages that can be displayed in association with different components. For example, if we have a form asking for a date of birth to be entered, we could display a message next to the entry edit box if an invalid date were entered. JSF maintains a collection of these error messages, and the simplest way of providing feedback to the user is to display a list of all of the error messages generated by the previous operation.

In order to obtain error messages within the JSF page, we need to tell JSF which components we want validated against the domain model. This is achieved by using the <s:validate/> or <s:validateAll/> tags. These are Seam-specific tags and are not a part of the standard JSF runtime. In order to use these tags, we need to add the following taglib reference to the top of the JSF page:

    <%@ taglib uri="http://jboss.com/products/seam/taglib" prefix="s" %>

To use this tag library, we also need to add a few additional JAR files to the WEB-INF/lib directory of our web application, namely:

    jboss-el.jar
    jboss-seam-ui.jar
    jsf-api.jar
    jsf-impl.jar

This tag library allows us to validate all of the components within a block of JSF code (<s:validateAll/>), or individual components within a JSF page (<s:validate/>). To validate all components within a particular scope, wrap them with the <s:validateAll/> tag as shown here:

    <h:form>
        <s:validateAll>
            <h:inputText value="..." />
            <h:inputText value="..." />
        </s:validateAll>
    </h:form>

To validate individual components, embed the <s:validate/> tag within the component, as shown in the following code fragment:

    <h:form>
        <h:inputText value="..." >
            <s:validate/>
        </h:inputText>
        <h:inputText value="..." >
            <s:validate/>
        </h:inputText>
    </h:form>

After specifying which controls we want validated, we can display error messages to the user. JSF maintains a collection of errors on a page, which can be displayed in its entirety via the <h:messages/> tag. Showing a list of all of the errors on a page can sometimes be useful, but on its own it does not help the user much, as it is impossible for them to tell which error relates to which control on the form.
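As a minimal sketch, assuming the calculator form from the sample above, the <h:messages/> tag can be placed at the top of the form so that every validation failure from the last submit is listed in one place. The calculate action referenced here is an assumption; the sample class elides its action method:

    <h:form>
        <!-- Lists every message queued during the last request -->
        <h:messages/>
        <s:validateAll>
            <h:inputText value="#{calculator.value1}"/>
            <h:inputText value="#{calculator.value2}"/>
        </s:validateAll>
        <!-- Hypothetical action method, not shown in the sample class -->
        <h:commandButton action="#{calculator.calculate}" value="Calculate"/>
    </h:form>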
Seam provides some additional support at this point, allowing us to specify the formatting of a control so that error or warning messages are indicated to the user. Seam provides three different JSF facets (<f:facet/>) that allow HTML to be specified both before and after the offending input, along with a CSS style for that HTML. Within these facets, the <s:message/> tag can be used to output the message itself. This tag can be applied either before or after the input box, as required.

beforeInvalidField
This facet allows HTML to be displayed before the input that is in error. The HTML can contain either text or images to notify the user that an error has occurred.

    <f:facet name="beforeInvalidField"> ... </f:facet>

afterInvalidField
This facet allows HTML to be displayed after the input that is in error. The HTML can contain either text or images to notify the user that an error has occurred.

    <f:facet name="afterInvalidField"> ... </f:facet>

aroundInvalidField
This facet allows the CSS style of the text surrounding the input that is in error to be specified.

    <f:facet name="aroundInvalidField"> ... </f:facet>

To apply these facets to a particular field, wrap the field and its facets in an <s:decorate> tag:

    <s:decorate>
        <f:facet name="aroundInvalidField">
            <s:span styleClass="invalidInput"/>
        </f:facet>
        <f:facet name="beforeInvalidField">
            <f:verbatim>**</f:verbatim>
        </f:facet>
        <f:facet name="afterInvalidField">
            <s:message/>
        </f:facet>
        <h:inputText value="#{calculator.value1}" required="true" >
            <s:validate/>
        </h:inputText>
    </s:decorate>

In the preceding code snippet, a CSS style called invalidInput is applied to any error or warning text displayed for the <h:inputText/> field. An erroneous input field is adorned with a double asterisk (**) before the edit box, and the error message specific to that field is displayed after it.
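The validation annotations cover model-level checks, but business rules that span several fields still have to be reported by hand. A minimal sketch of one way to do this, assuming Seam's FacesMessages helper (already imported in the Calculator sample) and a hypothetical calculate() action method and cross-field rule inside that class:

    public void calculate() {
        // Hypothetical rule spanning both inputs, for illustration only
        if (value1 + value2 > 100) {
            // Queues a global message that <h:messages/> will render
            FacesMessages.instance().add("The two values may not total more than 100");
            return;
        }
        answer = value1 + value2;
    }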