How-To Tutorials - CMS and E-Commerce

Installing PHP-Nuke

Packt
22 Feb 2010
10 min read
The steps to install and configure PHP-Nuke are simple:

1. Download and extract the PHP-Nuke files.
2. Download and apply ChatServ's patches.
3. Create the database for PHP-Nuke.
4. Create a database user, and fill the database with data.
5. Make some simple changes to the PHP-Nuke configuration file.
6. Copy the PHP-Nuke files to the document root of the web server.
7. Test it out!

Let's get started.

Downloading PHP-Nuke

The latest version of PHP-Nuke can be downloaded from the phpnuke.org downloads page: http://www.phpnuke.org/modules.php?name=Downloads&d_op=viewdownload&cid=1

You can also obtain older versions of PHP-Nuke, including version 1.0, from SourceForge: http://sourceforge.net/project/showfiles.php?group_id=7511&package_id=7622

SourceForge is the world's largest home of open-source projects. Many projects use SourceForge's facilities to host and maintain their projects. You can find almost anything you want on SourceForge—whether it is in a usable state or has been updated recently is another matter.

Extracting PHP-Nuke

Once you have downloaded PHP-Nuke, extract the contents of the PHP-Nuke ZIP archive to the root of your c: drive, into a folder called PHP-Nuke-7.8. (If you extract the files elsewhere, create the folder PHP-Nuke-7.8 in the root of your c: drive and copy the contents of the main unzipped folder to this new folder.)

If you don't have a tool for extracting the files, you can download an evaluation edition (or buy a full edition) of WinZip from www.winzip.com. There are also free, powerful extraction tools such as ZipGenius (http://www.zipgenius.it/index_eng.htm) and 7-Zip (http://sourceforge.net/projects/sevenzip/), among others.

In the PHP-Nuke-7.8 folder, you will find three subfolders called html, sql, and upgrades. The upgrades folder contains scripts that handle upgrading the database between different versions, the sql folder contains the definition of the PHP-Nuke database that we will be working with, and the html folder contains the guts of your PHP-Nuke installation: all the PHP scripts, HTML files, images, CSS stylesheets, and so on that drive PHP-Nuke.

Within the html folder are further subfolders; some of these include:

- modules: Contains the modules that make up your PHP-Nuke site. Modules are the essence of PHP-Nuke's operation; we look at them from the article Your First Page onwards.
- blocks: Contains PHP-Nuke's blocks. Blocks are 'mini-functionality' units and usually provide snippet views of modules. We will look at blocks in the article Managing the Site.
- language: Contains PHP-Nuke language files. These allow the language of PHP-Nuke's interface to be changed.
- images: Contains images used in the display of the PHP-Nuke site.
- themes: Contains the themes for PHP-Nuke. The use of themes allows you to completely change the look of a PHP-Nuke site with the click of a button.
- includes, db: Contain code to support the running of PHP-Nuke. The db folder, for example, contains database access code.
- admin: Contains code to power the administration area of your site.

Downloading the Patches

No software is without its flaws, and PHP-Nuke is no exception. After a release, the large user community invariably finds problems and potential security holes. Furthermore, PHP-Nuke contains features such as its forum, which is in fact the phpBB application specially modified to work with PHP-Nuke.
phpBB itself is updated on a regular basis to correct critical security vulnerabilities or to fix other problems, and consequently the corresponding part of PHP-Nuke also needs to be updated. Rather than releasing a new version of PHP-Nuke for these situations, patches for its various parts are released.

ChatServ's patches from www.nukeresources.com are mostly concerned with variable validation, in other words, making sure that the variables used in the application are of the right type for storing in the database. This has been an area of weakness in many earlier versions of PHP-Nuke. The patches are often incorporated into subsequent versions of PHP-Nuke, so that each new version becomes more robust.

Note that you don't have to apply the patches; PHP-Nuke will still work without them. However, by applying them you take a good step towards improving the security of your site.

If you navigate to http://www.nukeresources.com, there is a handy menu on the front page for accessing the patches. To obtain the patch corresponding to your version, click the link and you will be taken to the relevant file (of course, www.nukeresources.com is a PHP-Nuke powered site!). Click on the Nuke 7.8 link to go to the Downloads page, where clicking the Download this file Now! link will download the patches for PHP-Nuke 7.8.

The name of this file will be of the form 78patched.tar.gz. This is a GZIP-compressed file containing all the patches that we are about to apply, and it can be extracted with WinZip or any of the other utilities we discussed earlier. The patches are simply modified versions of the original PHP-Nuke files, updated to address various security issues that have been identified since the initial release, or since the last version of the patch.

Applying the Patches

To apply the patches, first extract the 78patched.tar.gz file into a folder called patches, which we will create in the PHP-Nuke-7.8 folder. After extracting the files, copy the contents of the patches folder to your html folder. Do not copy the patches folder itself; copy its contents. The patches folder contains files that replace the files in the default installation, so you will get a Confirm File Replace window. Select Yes for all the files, and when the copying is complete, your installation is ready to go.

We have performed this patching immediately after installing PHP-Nuke, but we could have done it at any time. Doing it at this point is more sensible, as it means we are working on the most secure version of PHP-Nuke from the start. Also, the patch process described here overwrites existing PHP-Nuke installation files; if you have modified these files, your changes will be lost when the patch is applied, so applying the patches later without disturbing your changes becomes more demanding.

There is one further thing to watch for after applying the patches. You may find that the patched files have had their permissions set to read-only, and that you are unable to modify them. To modify the files (and we do have to modify at least the config.php file in this article), you will need to remove this setting. On Windows, you can do this by right-clicking on the file, selecting Properties from the menu, unchecking the Read-only setting, and clicking the OK button.

We've done almost all the work with the files that we need to; now we turn our attention to creating and populating PHP-Nuke's database.
Preparing the PHP-Nuke Database

We'll be using the phpMyAdmin tool to do our database work. phpMyAdmin is part of the XAMPP installation (detailed in Appendix A), or it can be downloaded from www.phpmyadmin.net if you don't already have it. phpMyAdmin provides a powerful web interface for working with your MySQL databases.

First of all, open your browser and navigate to http://localhost/phpmyadmin/, or wherever your phpMyAdmin installation is located.

Creating the Database

We need to create an empty database for PHP-Nuke to hold all the data about our site. To do this, we simply enter a name for our database into the Create new database textbox. We will call our database nuke; enter this, and click the Create button.

The name you give doesn't particularly matter, as long as it is not the name of an already existing database. If you try to use the name of an existing database, phpMyAdmin will inform you of this, and no action will be taken. The exact name isn't critical at this point because there is another configuration step coming up, which requires us to tell PHP-Nuke the name of the database we've created for it. After clicking Create, the screen will reload and you will be notified of the successful creation of your database.

Creating a Database User

Before we start populating the database, we will create a database user that can access only the PHP-Nuke database. This user is not a human; it will be used by PHP-Nuke to connect to the database while it performs its data-handling activities. The advantage of creating a database user is that it adds an extra level of security to our installation: PHP-Nuke will be able to work with data only in this database on the MySQL server, and no other, and it will be restricted in the operations it can perform on the tables in the database.

We need a username for this boxed-in user to access the nuke database. Let's call our user nuker and go with the password nukepassword. However, in order to add an extra level of security, we will introduce some digits and some other slight twists into nukepassword to strengthen it, and so use No0kPassv0rd as our database user password.

To create the database user, click the SQL tab, and enter the following into the Run SQL query/queries on database textbox:

    GRANT ALL PRIVILEGES ON nuke.* TO nuker@localhost IDENTIFIED BY 'No0kPassv0rd' WITH GRANT OPTION

Click the Go button, and the database user will be created.

Populating the Database

Now we are ready to fill our database with data for PHP-Nuke. This doesn't mean we start typing the data in ourselves; the data comes with the PHP-Nuke installation, in a file called nuke.sql in the sql folder. This file contains a number of SQL statements that define the tables within the database and also fill them with 'raw' data for the site.

However, before we fill the database with the tables from this file, we need to make a modification to it. By default, the name of each database table in PHP-Nuke begins with nuke_. For example, there is a table with the name nuke_stories that holds information about stories, and a table called nuke_topics that holds information about story topics. These are just two of the tables; there are more than 90 in the standard installation.
The word nuke_ is a 'table prefix', used to ensure that there are no clashes between the names of PHP-Nuke's tables and the tables of another application in the same database, since the rest of the table name is descriptive of the data stored in the table, and other applications may have similarly named tables. This means that unless the table prefix is changed, the table names in your PHP-Nuke database will be known to anyone attempting to hack your site. Many of the typical attacks used to damage PHP-Nuke rely on the fact that the names of the tables in the database powering a PHP-Nuke site are known. By changing the table prefix to something less obvious, you take another step towards making your site more secure.
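
If you do change the prefix (a simple find-and-replace of nuke_ in nuke.sql before importing it), the same value must later go into PHP-Nuke's configuration file. As a rough sketch, the database-related settings in a stock PHP-Nuke 7.8 config.php look something like the following; the variable names are those commonly found in that file, and the values are the ones used in this article, so verify both against your own copy before editing:

    <?php
    // Excerpt from PHP-Nuke's config.php (illustrative values only).
    $dbhost      = "localhost";     // MySQL server host
    $dbuname     = "nuker";         // the restricted database user created above
    $dbpass      = "No0kPassv0rd";  // that user's password
    $dbname      = "nuke";          // the database created in phpMyAdmin
    $prefix      = "nuke";          // table prefix: change this if you rename it in nuke.sql
    $user_prefix = "nuke";          // prefix for the user tables

Whatever prefix you choose must match in both nuke.sql and config.php, or PHP-Nuke will not find its tables.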

Storing Passwords Using FreeRADIUS Authentication

Packt
08 Sep 2011
6 min read
In the previous article we covered the authentication methods used while working with FreeRADIUS. This article by Dirk van der Walt, author of FreeRADIUS Beginner's Guide, teaches methods for storing passwords and explains how they work. Passwords do not need to be stored in clear text; it is better to store them in a hashed format. There are, however, limitations to the kinds of authentication protocols that can be used when passwords are stored as a hash, which we will explore in this article.

Storing passwords

Username and password combinations have to be stored somewhere. The following list mentions some of the popular places:

- Text files: You should be familiar with this method by now.
- SQL databases: FreeRADIUS includes modules to interact with SQL databases. MySQL is very popular and widely used with FreeRADIUS.
- Directories: Microsoft's Active Directory and Novell's e-Directory are typical enterprise-size directories. OpenLDAP is a popular open source alternative.

The users file and the SQL database that can be used by FreeRADIUS store the username and password as AVPs. When the value of this AVP is in clear text, it can be dangerous if the wrong person gets hold of it. Let's see how this risk can be minimized.

Hash formats

To reduce this risk, we can store the passwords in a hashed format. A hashed format of a password is like a digital fingerprint of that password's text value. There are many different ways to calculate this hash, for example MD5 or SHA1. The end result of a hash should be a one-way, fixed-length encrypted string that uniquely represents the password. It should be impossible to retrieve the original password from the hash.

To make the hash even more secure and more immune to dictionary attacks, we can add a salt to the function that generates the hash. A salt is a string of randomly generated bits used in combination with the password as input to the one-way hash function. With FreeRADIUS we store the salt along with the hash; it is therefore essential to use a random salt with each hash to make a rainbow-table attack difficult.

The pap module, which is used for PAP authentication, can authenticate users against passwords stored in several hash formats; the ones covered here are Crypt-Password, MD5-Password, and SMD5-Password. Both the MD5 and SHA1 hash functions can be used with a salt to make them more secure.

Time for action – hashing our password

We will replace the Cleartext-Password AVP in the users file with a more secure hashed password AVP in this section. There seems to be general confusion about how the hashed password should be created and presented; we will help clarify this issue in order to produce working hashes for each format. A valuable URL to assist us with the hashes is the OpenLDAP FAQ, which has a few sections showing how to create different types of password hashes that we can adapt for our own use in FreeRADIUS: http://www.openldap.org/faq/data/cache/419.html

Crypt-Password

Crypt password hashes have their origins in Unix computing. Stronger hashing methods are preferred over crypt, although crypt is still widely used. The following Perl one-liner will produce a crypt password for passme with the salt value of 'salt':

    #> perl -e 'print(crypt("passme","salt")."\n");'

Use this output and change Alice's check entry in the users file from:

    "alice" Cleartext-Password := "passme"

to:

    "alice" Crypt-Password := "sa85/iGj2UWlA"

Restart the FreeRADIUS server in debug mode. Run the authentication request against it again.
Ensure that pap now uses the crypt password by looking for the following line in the FreeRADIUS debug feedback:

    [pap] Using CRYPT password "sa85/iGj2UWlA"

MD5-Password

The MD5 hash is often used to check the integrity of a file. When downloading a Linux ISO image, you are typically also supplied with the MD5 sum of the file, and you can then confirm the integrity of the file by using the md5sum command. We can also generate an MD5 hash from a password. We will use Perl to generate and encode the MD5 hash in the correct format required by the pap module. The creation of this password hash involves external Perl modules, which you may have to install before the script can be used. The following steps will show you how:

1. Create a Perl script with the following contents; we'll name it 4088_04_md5.pl:

    #!/usr/bin/perl -w
    use strict;
    use Digest::MD5;
    use MIME::Base64;
    unless ($ARGV[0]) {
        print "Please supply a password to create a MD5 hash from.\n";
        exit;
    }
    my $ctx = Digest::MD5->new;
    $ctx->add($ARGV[0]);
    print encode_base64($ctx->digest, '') . "\n";

2. Make the 4088_04_md5.pl file executable:

    chmod 755 4088_04_md5.pl

3. Get the MD5 password for passme:

    ./4088_04_md5.pl passme

4. Use this output and update Alice's entry in the users file to:

    "alice" MD5-Password := "ugGBYPwm4MwukpuOBx8FLQ=="

5. Restart the FreeRADIUS server in debug mode. Run the authentication request against it again.

6. Ensure that pap now uses the MD5 password by looking for the following line in the FreeRADIUS debug feedback:

    [pap] Using MD5 encryption.

SMD5-Password

This is an MD5 password with salt. The creation of this password hash also involves external Perl modules, which you may have to install before the script can be used.

1. Create a Perl script with the following contents; we'll name it 4088_04_smd5.pl:

    #!/usr/bin/perl -w
    use strict;
    use Digest::MD5;
    use MIME::Base64;
    unless (($ARGV[0]) && ($ARGV[1])) {
        print "Please supply a password and salt to create a salted MD5 hash from.\n";
        exit;
    }
    my $ctx = Digest::MD5->new;
    $ctx->add($ARGV[0]);
    my $salt = $ARGV[1];
    $ctx->add($salt);
    print encode_base64($ctx->digest . $salt, '') . "\n";

2. Make the 4088_04_smd5.pl file executable:

    chmod 755 4088_04_smd5.pl

3. Get the SMD5 value for passme using a salt value of 'salt'. (Remember that you should use a random value for the salt; we only use 'salt' here for the demonstration.)

    ./4088_04_smd5.pl passme salt

4. Use this output and update Alice's entry in the users file to:

    "alice" SMD5-Password := "Vr6uPTrGykq4yKig67v5kHNhbHQ="

5. Restart the FreeRADIUS server in debug mode. Run the authentication request against it again.

6. Ensure that pap now uses the SMD5 password by looking for the following line in the FreeRADIUS debug feedback:

    [pap] Using SMD5 encryption.
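
As a cross-check, the same three hash formats can be reproduced without the external Perl modules by using PHP's built-in crypt(), md5(), and base64_encode() functions. This is a minimal verification sketch, not part of the original tool chain:

    <?php
    // Crypt-Password: traditional DES crypt with a two-character salt.
    echo crypt('passme', 'salt'), "\n";                      // sa85/iGj2UWlA

    // MD5-Password: base64 encoding of the raw (binary) MD5 digest.
    echo base64_encode(md5('passme', true)), "\n";

    // SMD5-Password: raw MD5 of password.salt, with the salt appended
    // before base64 encoding (FreeRADIUS stores the salt with the hash).
    $salt = 'salt'; // use a random salt in practice
    echo base64_encode(md5('passme' . $salt, true) . $salt), "\n";

Running this should print the same values used in Alice's entries above.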

Anatomy of a WordPress Plugin

Packt
25 Mar 2011
7 min read
WordPress is a popular content management system (CMS), most renowned for its use as a blogging/publishing application. According to the usage statistics tracker BuiltWith (http://builtWith.com), WordPress is considered to be the most popular blogging software on the planet—not bad for something that has only been around officially since 2003. Before we develop any substantial plugins of our own, let's take a few moments to look at what other people have done, so we get an idea of what the final product might look like. By this point, you should have a fresh version of WordPress installed and running somewhere for you to play with. It is important that your installation of WordPress is one with which you can tinker. In this article by Brian Bondari and Everett Griffiths, authors of WordPress 3 Plugin Development Essentials, we will purposely break a few things to help see how they work, so please don't try anything in this article on a live production site.

Deconstructing an existing plugin: "Hello Dolly"

WordPress ships with a simple plugin named "Hello Dolly". Its name is a whimsical take on the programmer's obligatory "Hello, World!", and it is trotted out only for pedantic explanations like the one that follows (unless, of course, you really do want random lyrics by Jerry Herman to grace your administration screens).

Activating the plugin

Let's activate this plugin so we can have a look at what it does:

1. Browse to your WordPress Dashboard at http://yoursite.com/wp-admin/.
2. Navigate to the Plugins section.
3. Under the Hello Dolly title, click on the Activate link.

You should now see a random lyric appear in the top-right portion of the Dashboard. Refresh the page a few times to get the full effect.

Examining the hello.php file

Now that we've tried out the "Hello Dolly" plugin, let's have a closer look. In your favorite text editor, open up the /wp-content/plugins/hello.php file. Can you identify the following integral parts?

- The information header, which describes details about the plugin (author and description). This is contained in a large PHP /* comment */.
- User-defined functions, such as the hello_dolly() function.
- The add_action() and/or add_filter() functions, which hook a WordPress event to a user-defined function.

It looks pretty simple, right? That's all you need for a plugin:

- An information header
- Some user-defined functions
- add_action() and/or add_filter() functions

Now that we've identified the critical component parts, let's examine them in more detail.

Information header

Don't just skim this section thinking it's a waste of breath on self-explanatory header fields. Unlike a normal PHP file, in which comments are purely optional, in WordPress plugin and theme files the information header is required! It is this block of text that causes a file to show up on WordPress' radar so that you can activate it or deactivate it. If your plugin is missing a valid information header, you cannot use it!

Exercise—breaking the header

To reinforce that the information header is an integral part of a plugin, try the following exercise:

1. In your WordPress Dashboard, ensure that the "Hello Dolly" plugin has been activated.
2. If applicable, use your preferred (s)FTP program to connect to your WordPress installation.
3. Using your text editor, temporarily delete the information header from /wp-content/plugins/hello.php and save the file (you can save the header elsewhere for now).
4. Refresh the Plugins page in your browser. You should get a warning from WordPress stating that the plugin does not have a valid header.

After you've seen the tragic consequences, put the header information back into the hello.php file. This should make it abundantly clear to you that the information header is absolutely vital for every WordPress plugin. If your plugin has multiple files, the header should be inside the primary file—in this article we use index.php as our primary file, but many plugins use a file named after the plugin name as their primary file.
Location, name, and format

The header itself is similar in form and function to other content management systems, such as Drupal's module.info files or Joomla's XML module configurations—it offers a way to store additional information about a plugin in a standardized format. The values can be extended, but the most common header values are listed below:

- Plugin Name: The displayed name of the plugin
- Plugin URI: Destination of the "Visit plugin site" link
- Description: Main block of text describing the plugin
- Version: Use this to track your changes over time
- Author: Listed below the plugin name
- Author URI: Together with "Author", this creates a link to the author's site

For more information about header blocks, see the WordPress codex at http://codex.wordpress.org/File_Header.

In order for a PHP file to show up in WordPress' Plugins menu:

- The file must have a .php extension.
- The file must contain the information header somewhere in it (preferably at the beginning).
- The file must be either in the /wp-content/plugins directory, or in a subdirectory of the plugins directory. It cannot be more deeply nested.
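
Putting those fields together, a minimal information header for a hypothetical plugin might look like the following (the field names are the standard ones listed above; the values are purely illustrative):

    <?php
    /*
    Plugin Name: My Example Plugin
    Plugin URI: http://example.com/my-example-plugin
    Description: A bare-bones plugin used to illustrate the information header.
    Version: 0.1
    Author: Jane Developer
    Author URI: http://example.com
    */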
Understanding the Includes

When you activate a plugin, the name of the file containing the information header is stored in the WordPress database. Each time a page is requested, WordPress goes through a laundry list of PHP files it needs to load, so activating a plugin ensures that your own files are on that list. To help illustrate this concept, let's break WordPress again.

Exercise – parse errors

Try the following exercise:

1. Ensure that the "Hello Dolly" plugin is active.
2. Open the /wp-content/plugins/hello.php file in your text editor.
3. Immediately before the line that contains function hello_dolly_get_lyric, type in some gibberish text, such as "asdfasdf", and save the file.
4. Reload the Plugins page in your browser.

This should generate a parse error, something like:

    Parse error: syntax error, unexpected T_FUNCTION in /path/to/wordpress/html/wp-content/plugins/hello.php on line 16

Yikes! Your site is now broken. Why did this happen? We introduced errors into the plugin's main file (hello.php), so including it caused PHP and WordPress to choke. Delete the gibberish line from the hello.php file and save it to return the plugin to normal.

The parse error occurs only if there is an error in an active plugin. Deactivated plugins are not included by WordPress, and therefore their code is not parsed. You can try the same exercise after deactivating the plugin, and you'll notice that WordPress does not raise any errors.

Bonus for the curious

In case you're wondering exactly where and how WordPress stores the information about activated plugins, have a look in the database. Using your MySQL client, you can browse the wp_options table or execute the following query:

    SELECT option_value FROM wp_options WHERE option_name='active_plugins';

The active plugins are stored as a serialized PHP hash, referencing the file containing the header. The following script shows an example of what the serialized hash might contain if you had activated a plugin named "Bad Example", and uses PHP's unserialize() function to parse the contents of this string into a PHP variable:

    <?php
    $active_plugin_str = 'a:1:{i:0;s:27:"bad-example/bad-example.php";}';
    print_r( unserialize($active_plugin_str) );
    ?>

And here's its output:

    Array
    (
        [0] => bad-example/bad-example.php
    )

Working with Forms using REST API

Packt
11 Jul 2016
21 min read
WordPress, an ever-improving content management system, is moving toward becoming a full-fledged application framework, which brings up the necessity for new APIs. The WordPress REST API was created to provide these necessary, reliable APIs. The plugin provides an easy-to-use REST API, available via HTTP, that exposes your site's data in the JSON format. The WordPress REST API is now at its second version, which brought a few core differences compared to the first, including route registration via functions, endpoints that take a single parameter, and built-in endpoints that all use a common controller.

In this article by Sufyan bin Uzayr, author of the book Learning WordPress REST API, you'll learn how to write a functional plugin to create and edit posts using the latest version of the WordPress REST API. The article also covers how to work efficiently with data to update your page dynamically based on the results. It serves as a basis and introduction to processing form data using the REST API and AJAX, not as a redo of the WordPress post editor or a frontend editing plugin. The REST API's first task is to make your WordPress-powered websites more dynamic, and for this precise reason this tutorial takes you through the process step by step. After you understand how the framework works, you will be able to implement it on your own sites, making them more dynamic.

Fundamentals

In this article you will be doing something similar to earlier examples, but instead of using the WordPress HTTP API and PHP, you'll use jQuery's AJAX methods. All of the code for this project should go in its plugin file. One more tip before starting: have the JavaScript client for the WordPress REST API installed, as we will use it to authorize via the current user's cookies. (You can substitute another authorization method, such as OAuth, if you find it more suitable.)

Setup the plugin

During the course of this tutorial, you'll only need one PHP and one JavaScript file; nothing else is necessary for the creation of our plugin. We will start off by writing a simple PHP file that does the following three key things for us:

- Enqueues the JavaScript file
- Localizes a dynamically created JavaScript object into the DOM when the said file is used
- Creates the HTML markup for our future form

All that is required of us is to have two functions and two hooks. To get this done, we create a new folder in our plugin directory with one PHP file inside it; this will serve as the foundation for our future plugin. We give the file a conventional name, such as my-rest-post-editor.php. The following is our starting PHP file, with the necessary empty functions that we will be expanding in the next steps:

    <?php
    /*
    Plugin Name: My REST API Post Editor
    */

    add_shortcode( 'my-post-editor', 'my_rest_post_editor_form' );
    function my_rest_post_editor_form() {

    }

    add_action( 'wp_enqueue_scripts', 'my_rest_api_scripts' );
    function my_rest_api_scripts() {

    }

For this demonstration, notice that you're working only with the post title and post content.
Creating the form with HTML markup

Since we are only working with the post title and post content, the editor form function needs only the HTML for a simple form with those two fields. The necessary code is as follows:

    function my_rest_post_editor_form() {
        $form = '
        <form id="editor">
            <input type="text" name="title" id="title" value="My title">
            <textarea id="content"></textarea>
            <input type="submit" value="Submit" id="submit">
        </form>
        <div id="results"></div>';

        return $form;
    }

Our aim is to show this form only to users who are logged in on the site and have the ability to edit posts. We therefore wrap the variable containing the form in some conditional checks. These check whether the user is logged in; if not, the user is given a link to the default WordPress login page. The code with the required function is as follows:

    function my_rest_post_editor_form() {
        $form = '
        <form id="editor">
            <input type="text" name="title" id="title" value="My title">
            <textarea id="content"></textarea>
            <input type="submit" value="Submit" id="submit">
        </form>
        <div id="results"></div>';

        if ( is_user_logged_in() ) {
            if ( user_can( get_current_user_id(), 'edit_posts' ) ) {
                return $form;
            } else {
                return __( 'You do not have permissions to do this.', 'my-rest-post-editor' );
            }
        } else {
            return sprintf( '<a href="%1$s" title="Login">%2$s</a>',
                wp_login_url( get_permalink( get_queried_object_id() ) ),
                __( 'You must be logged in to do this, please click here to log in.', 'my-rest-post-editor' )
            );
        }
    }

To avoid confusion, we do not want our page to be processed automatically or to cause a page reload upon submission, which is why our form has neither a method nor an action set. This is important to notice, because it is how we avoid the unnecessary automatic processing.

Enqueueing your JavaScript file

The next necessary step is to enqueue your JavaScript file. This step is important because enqueueing provides a systematic and organized way of loading JavaScript files and styles. Using the wp_enqueue_script() function, you tell WordPress when to load a script, where to load it, and what its dependencies are. By doing this, everyone utilizes the built-in JavaScript libraries that come bundled with WordPress rather than loading the same third-party script several times. Another big advantage is that it helps reduce the page load time and avoids potential code conflicts with other plugins, which can occur with the wrong approach of loading scripts directly in the head section of the site.

Once the enqueueing is done, we localize an array of data into the script: the data that must be generated dynamically and made available to the JavaScript. This includes the base URL for the REST API, as that can change with a filter, mainly for security purposes. To make this piece as usable and user-friendly as possible, we also create failure and success messages in the array, so that these strings are translation-friendly. Finally, you need to know the current user's ID and include that in the data as well. All of this is accomplished with the wp_enqueue_script() and wp_localize_script() functions.
It is also possible to add custom styles to the editor, which is achieved with the wp_enqueue_style() function. While we have assessed the importance and functionality of wp_enqueue_script(), let's take a closer look at the other two functions as well.

The wp_localize_script() function allows you to provide a registered script with data for a JavaScript variable. Its original purpose is to offer properly localized translations of any strings used in the script, a necessary measure since WordPress currently offers its localization API only in PHP. Though localization is its main use, it can be used to make any data available to your script that you can usually only get from the server side of WordPress.

The wp_enqueue_style() function is the best solution for adding stylesheets within your WordPress plugins, as it handles all of the stylesheets that need to be added to the page in one place. When you enqueue a style, it is added to a list of stylesheets to include when the page is loaded; if a stylesheet with the same handle is already on the list, it is not added again, so two plugins using the same stylesheet under the same handle result in the stylesheet being added to the page only once.
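
As a quick illustration (the stylesheet file named here is hypothetical and not part of this tutorial's plugin), enqueueing a custom stylesheet for the editor from the same wp_enqueue_scripts hook could look like this:

    // Hypothetical stylesheet shipped alongside the plugin file.
    wp_enqueue_style( 'my-rest-post-editor', plugins_url( 'my-rest-post-editor.css', __FILE__ ) );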
Our combined enqueueing and localizing function is as follows:

    function my_rest_api_scripts() {
        wp_enqueue_script( 'my-api-post-editor', plugins_url( 'my-api-post-editor.js', __FILE__ ), array( 'jquery' ), false, true );
        wp_localize_script( 'my-api-post-editor', 'MY_POST_EDITOR', array(
            'root'           => esc_url_raw( rest_url() ),
            'nonce'          => wp_create_nonce( 'wp_json' ),
            'successMessage' => __( 'Post Creation Successful.', 'my-rest-post-editor' ),
            'failureMessage' => __( 'An error has occurred.', 'my-rest-post-editor' ),
            'userID'         => get_current_user_id(),
        ) );
    }

That is all the PHP you need, as everything else is handled via JavaScript. Create a new page containing the editor shortcode [my-post-editor] and then visit that page. If you've followed the instructions precisely, you should see the post editor form on that page. It will not be functional just yet, not before we write some JavaScript to add functionality to it.

Issuing requests for creating posts

To create posts from our form, we need to use a POST request, which we can make by using jQuery's AJAX method. This should be a familiar and very simple process; if you're not acquainted with it, you may want to look through the documentation and guides offered by jQuery themselves (http://api.jquery.com/jquery.ajax/). You will also need to create two things that may be new to you: the JSON array and the authorization header. We will walk through each of them in detail.

To create the JSON object for your AJAX request, first create a JavaScript array from the input and then use JSON.stringify() to convert it into JSON. The JSON.stringify() method converts a JavaScript value to a JSON string, replacing values if a replacer function is specified, or optionally including only the specified properties if a replacer array is specified.

The following code excerpt is the beginning of the JavaScript file, showing how to build the JSON array:

    (function($){
        $( '#editor' ).on( 'submit', function(e) {
            e.preventDefault();
            var title = $( '#title' ).val();
            var content = $( '#content' ).val();
            var JSONObj = {
                "title"       : title,
                "content_raw" : content,
                "status"      : 'publish'
            };
            var data = JSON.stringify(JSONObj);
            // ...the request itself is added below.
        });
    })(jQuery);
Following is the complete code for the JavaScript to create a post editor that can now create new posts: (function($){ $( '#editor' ).on( 'submit', function(e) {        e.preventDefault(); var title = $( '#title' ).val(); var content = $( '#content' ).val();        var JSONObj = { "title"   :title, "content_raw" :content, "status"   :'publish' };        var data = JSON.stringify(JSONObj);        var url = MY_POST_EDITOR.root; url += 'wp/v2/posts';        $.ajax({            type:"POST", url: url, dataType : 'json', data: data,            beforeSend : function( xhr ) {                xhr.setRequestHeader( 'X-WP-Nonce', MY_POST_EDITOR.nonce ); }, success: function(response) {                alert( MY_POST_EDITOR.successMessage );                $( "#results").append( JSON.stringify( response ) ); }, failure: function( response ) {                alert( MY_POST_EDITOR.failureMessage ); } }); }); })(jQuery); This is how we can create a basic editor in WP REST API. If you are a logged in and the API is still active, you should create a new post and then create an alert telling you that the post has been created. The returned JSON object would then be placed into the #results container. Insert image_B05401_04_01.png If you followed each and every step precisely, you should now have a basic editor ready. You may want to give it a try and see how it works for you. So far, we have created and set up a basic editor that allows you to create posts. In our next steps, we will go through the process of adding functionality to our plugin, which will enable us to edit existing posts. Issuing requests for editing posts In this section, we will go together through the process of adding functionality to our editor so that we could edit existing posts. This part may be a little bit more detailed, mainly because the first part of our tutorial covered the basics and setup of the editor. To edit posts, we would need to have the following two things: A list of posts by author, with all of the posts titles and post content A new form field to hold the ID of the post you're editing As you can understand, the list of posts by author and the form field would lay the foundation for the functionality of editing posts. Before adding that hidden field to your form, add the following HTMLcode: <input type="hidden" name="post-id" id="post-id" value=""> In this step, we will need to get the value of the field for creating new posts. This will be achieved by writing a few lines of code in the JavaScript function. This code will then allow us to automatically change the URL, thus making it possible to edit the post of the said ID, rather than having to create a new one every time we would go through the process. This would be easily achieved by writing down a simple code piece, like the following one: var postID = $( '#post-id').val(); if ( undefined !== postID ) { url += '/';    url += postID; } As we move on, the preceding code will be placed before the AJAX section of the editor form processor. It is important to understand that the variable URL in the AJAX function will have the ID of the post that you are editing only if the field has value as well. The case in which no such value is present for the field, it will yield in the creation of a new post, which would be identical to the process you have been taken through previously. It is important to understand that to populate the said field, including the post title and post content field, you will be required to add a second form. 
This will result in all posts to be retrieved by the current user, by using a GET request. Based on the selection provided in the said form, you can set the editor form to update. In the PHP, you will then add the second form, which will look similar to the following: <form id="select-post"> <select id="posts" name="posts"> </select> <input type="submit" value="Select a Post to edit" id="choose-post"> </form> REST API will now be used to populate the options within the #posts select. For us to achieve that, we will have to create a request for posts by the current user. To accomplish our goal, we will be using the available results. We will now have to form the URL for requesting posts by the current user, which will happen if you will set the current user ID as a part of the _POST_EDITOR object during the processes of the script setup. A function needs to be created to get posts by the current author and populate the select field. This is very similar to what we did when we made our posts update, yet it is way simpler. This function will not require any authentication, and given the fact that you have already been taken through the process of creating a similar function, creating this one shouldn't be any more of a hassle for you. The success function loops through the results and adds them to the postselector form as options for its one field and will generate a similar code, something like the following: function getPostsByUser( defaultID ) {    url += '?filter[author]=';    url += my_POST_EDITOR.userID;    url += '&filter[per_page]=20';    $.ajax({ type:"GET", url: url, dataType : 'json', success: function(response) { var posts = {}; $.each(response, function(i, val) {                $( "#posts" ).append(new Option( val.title, val.ID ) ); });            if ( undefined != defaultID ) {                $('[name=posts]').val( defaultID ) } } }); } You can notice that the function we have created has one of the parameters set for defaultID, but this shouldn't be a matter of concern for you just now. The parameter, if defined, would be used to set the default value of the select field, yet, for now, we will ignore it. We will use the very same function, but without the default value, and will then set it to run on document ready. This is simply achieved by a small piece of code like the following: $( document ).ready( function() {    getPostsByUser(); }); Having a list of posts by the current user isn't enough, and you will have to get the title and the content of the selected post and push it into the form for further editing. This is will assure the proper editing possibility and make it possible to achieve the projected result. Moving on, we will need the other GET request to run on the submission of the postselector form. This should be something of the kind: $( '#select-post' ).on( 'submit', function(e) {    e.preventDefault();    var ID = $( '#posts' ).val();    var postURL = MY_POST_EDITOR.root; postURL += 'wp/v2/posts/';    postURL += ID;    $.ajax({ type:"GET", url: postURL, dataType : 'json', success: function(post) { var title = post.title; var content = post.content;            var postID = postID; $( '#editor #title').val( title ); $( '#editor #content').val( content );            $( '#select-post #posts').val( postID ); } }); }); In the form of <json-url>wp/v2/posts/<post-id>, we will build a new URL that will be used to scrape post data for any selected post. 
This will result in us making an actual request that will be used to take the returned data and then set it as the value of any of the three fields there in the editor form. Upon refreshing the page, you will be able to see all posts by the current user in a specific selector. Submitting the data by a click will yield in the following: The content and title of the post that you have selected will be visible to the editor, given that you have followed the preceding steps correctly. And the second occurrence will be in the fact that the hidden field for the post ID you have added should now be set. Even though the content and title of the post will be visible, we would still be unable to edit the actual posts as the function for the editor form was not set for this specific purpose, just yet. To achieve that, we will need to make a small modification to the function that will make it possible for the content to be editable. Besides, at the moment, we would only get our content and title displayed in raw JSON data; however, applying the method described previously will improve the success function for that request so that the title and content of the post displays in the proper container, #results. In order to achieve this, you will need a function that is going to update the said container with the appropriate data. The code piece for this function will be something like the following: function results( val ) { $( "#results").empty();        $( "#results" ).append( '<div class="post-title">' + val.title + '</div>'  );        $( "#results" ).append( '<div class="post-content">' + val.content + '</div>'  ); } The preceding code makes use of some very simple jQuery techniques, but that doesn't make it any worse as a proper introduction to updating page content by making use of data from the REST API. There are countless ways of getting a lot more detailed or creative with this if you dive in the markup or start adding any additional fields. That will always be an option for you if you're more of a savvy developer, but as an introductory tutorial, we're trying not to keep this tutorial extremely technical, which is why we'll stick to the provided example for now. Insert image_B05401_04_02.png As we move forward, you can use it in your modified form procession function, which will be something like the following: $( '#editor' ).on( 'submit', function(e) {    e.preventDefault(); var title = $( '#title' ).val(); var content = $( '#content' ).val(); console.log( content );    var JSONObj = { "title" "content_raw" "status" }; :title, :content, :'publish'    var data = JSON.stringify(JSONObj);    var postID = $( '#post-id').val();    if ( undefined !== postID ) { url += '/';        url += postID; }    $.ajax({        type:"POST", url: url, dataType : 'json', data: data,        beforeSend : function( xhr ) {            xhr.setRequestHeader( 'X-WP-Nonce', MY_POST_EDITOR.nonce ); }, success: function(response) {            alert( MY_POST_EDITOR.successMessage );            getPostsByUser( response.ID ); results( response ); }, failure: function( response ) {            alert( MY_POST_EDITOR.failureMessage ); } }); }); As you have noticed, a few changes have been applied, and we will go through each of them in specific: The first thing that has changed is the Post ID that's being edited is now conditionally added. This implies that we will make use of the form and it will serve to create new posts by POSTing to the endpoint. 
Another change with the POST ID is that it will now update posts via posts/<post-id>. The second change regards the success function. A new result() function was used to output the post title and content during the process of editing. Another thing is that we also reran the getPostsbyUser() function, yet it has been set in a way that posts will automatically offer the functionality of editing, just after you will createthem. Summary With this, we havefinishedoff this article, and if you have followed each step with precision, you should now have a simple yet functional plugin that can create and edit posts by using the WordPress REST API. This article also covered techniques on how to work with data in order to update your page dynamically based on the available results. We will now progress toward further complicated actions with REST API. Resources for Article: Further resources on this subject: Implementing a Log-in screen using Ext JS [article] Cluster Computing Using Scala [article] Understanding PHP basics [article]

Blogs and Forums using Plone 3

Packt
28 May 2010
11 min read
Blogs and forums have much to offer in a school setting. They help faculty and students communicate despite large class sizes. They engage students in conversations with each other. And they provide an easy way for instructors and staff members to build their personal reputations—and, thereby, the reputation of your institution. In this article, we consider how best to build blogs and forums in Plone. Along the way, we cite education-domain examples and point out tips for keeping your site stable and your users smiling.

Plone's blogging potential

Though Plone wasn't conceived as a blogging platform, its role as a full-fledged content management system gives it all the functionality of a blog and more. With a few well-placed tweaks, it can present an interface that puts users of other blogging packages right at home, while letting you easily maintain ties between your blogs and the rest of your site. Generally speaking, blog entries are:

- Prominently labeled by date and organized in reverse chronological order
- Tagged by subject
- Followed by reader comments
- Syndicated using RSS or other protocols

Plone provides all of these, with varying degrees of polish, out of the box:

- News Items make good blog entries, and the built-in News portlet lists the most recent few, in reverse chronological order and with publication dates prominently shown. A more comprehensive, paginated list can easily be made using collections.
- Categories are a basic implementation of tags.
- Plone's built-in commenting can work on any content type, News Items included.
- Every collection has its own RSS feed.

Add-on products: free as in puppies

In addition to Plone's built-in tools, this article will explore the capabilities of several third-party add-ons. Open-source software is often called "free as in beer" or "free as in freedom". As is typical of Plone add-ons, the products we will consider are both. However, they are also "free as in puppies". Who can resist puppies? They are heart-meltingly cute and loads of fun, but it's easy to forget, when their wet little noses are in your face, that they come with responsibility. Likewise, add-ons are free to install and use, but they also bring hidden costs:

- Products can hold you back. If you depend on one that doesn't support a new version of Plone, you'll face a choice between the product and the Plone upgrade. This situation is most likely at major version boundaries: for example, upgrading from Plone 3.x to Plone 4. Minor upgrades, as from Plone 3.2 to 3.3, should be fairly uneventful. (This was not always true with Plone 2.x, but release numbering has since gotten a dose of sanity.)
- One place products often fall short is uninstallation. It takes care to craft a quality uninstallation routine; low-quality or prerelease products sometimes fail to uninstall cleanly, leaving bits of themselves scattered throughout your site. They can even prevent your site from displaying any pages at all (often due to leaving remnants in portal_actions), and you may have to repair things by hand through the ZMI or, failing that, through an afternoon of fun with the Python debugger. The moral: even trying a product can be a risk. Test installation and uninstallation on a copy of your site before committing to one, and back up your Data.fs file before installing or uninstalling on production servers.
- Pace of work varies widely. Reporting a bug against an actively developed product might get you a new release within the week.
Hitting a bug in an abandoned product could leave you fixing it yourself or paying someone else to. (Fortunately, there are scads of Plone consultants for hire in the #plone IRC channel and on the plone-users mailing list.)

In addition to the above, products that add new content types (like blog entries, for instance) bring a risk of lock-in proportional to the amount of content you create with them. If a product is abandoned by its maintainer, or you decide to stop using it for some other reason, you will need to migrate its content into some other type, either by writing custom scripts or by copying and pasting.

These considerations are major drivers of this article's recommendations. For each of the top three Plone blogging strategies, we'll outline its capabilities, tick off its pros and cons, and estimate how high-maintenance a puppy it will be. Remember, even though puppies can be some work, a well-chosen and well-trained one becomes a best friend for life.

News Items: blogging for the hurried or risk-averse

Using News Items as blog entries is, in true Extreme Programming style, "the simplest thing that could possibly work". Nonetheless, it's a surprisingly flexible practice and will disappoint only if you need features like pings, trackbacks, and remote editor integration. An entire blog front page can be built in Plone using only News Items, collections, and the built-in portlets.

Structure of a news-item blog

A blog in Plone can be as simple as a folder full of News Items, further organized into subfolders if necessary. Add a collection showing the most recent News Items to the top-level folder, and set it as the folder's default page. Use an Item Type criterion on the collection to pull in the News Items, and use a Location criterion to exclude those created outside the blog folder.

To provide pagination—recommended once the length of listings starts to noticeably impact download or render time—use the Limit Search Results option on the collection. One inconsistency is that only the Summary and Tabular views on collections support pagination; Standard view (which shows the same information) does not. This means that Summary view, which sports a Read more link and is a bit more familiar to most blog users, is typically a good choice.

Go easy on the pagination

More items displayed per page is better. User tests on prototypes of gap.com's online store have suggested that, at least when selling shirts, more get sold when all are on one big page. Perhaps it's because users are faced with a louder mental "Continue or leave?" when they reach the end of a page. Regardless, it's something to consider when setting page size using a collection's Number of Items setting; you may want to try several different numbers and see how each affects the frequency with which your listing pages show up as "exit pages" in a web analytics package like AWStats. As a starting point, 50 is a sane choice, assuming your listings show only the title and description of each entry (as the built-in views do). The ideal number will be a trade-off between tempting visitors to leave with page breaks and keeping load and render times tolerable.

Finally, make sure to sort the entries by publication date.
Set this up on the front-page collection's Criteria tab by selecting Effective Date and reversing the display order.

As with all solutions in this article, a blog built on raw News Items can easily handle either single- or multi-author scenarios; just assign rights appropriately on the Sharing tab of the blog folder.

News Item pros and cons

Unadorned News Items are a great way to get started fast and confer practically zero upgrade risk, since they are maintained as part of Plone itself. However, be aware of these pointy edges you might bang into when using them as blog entries:

With the built-in views, logged-out users can't see the authors or the publication dates of entries. Even logged-in users see only the modification dates unless they go digging through the workflow history.

Categories applied to a News Item appear on its page, but clicking them takes you to a search for all items (both blog-related and otherwise) having that category. This could be a bug or a feature, depending on your situation. However, the ordering of the search results is unpredictable, and that is definitely unhelpful.

The great thing about plain News Items is that there's a forward migration path. QuillsEnabled, which we'll explore later, can be layered atop an existing news-item-based blog with no migrations necessary and removed again if you decide to go back. Thus, a good strategy may be to start simple, with plain news items, and go after more features (and risk) as the need presents itself.

Scrawl: a blog with a view

One step up from plain News Items is Scrawl, a minimalist blog product that adds only two things:

A custom Blog Entry type, which is actually just a copy of News Item.
A purpose-built Blog view that can be applied to folders or collections, which are otherwise used just as with raw News Items.

Here are both additions in action:

Scrawl's Blog Entry isn't quite a verbatim copy of News Item; Scrawl makes a few tweaks:

Commenting is turned on for new Blog Entries, without which authors would have to enable it manually each time. The chances of that happening are slim, since it's buried on the Edit → Settings tab, and users seldom stray from the default tab when editing.

Blog Entry's default view is a slightly modified version of News Item's: it shows the author's name and the posting date even to unauthenticated users—and in a friendly "Posted by Fred Finster" format. It also adds a Permalink link, lest you forfeit crosslinks from users who know no other way of finding an entry's address.

Calm your ringing phone by cloning types

Using a custom content type for blog entries—even if it's just a copy of an existing one—has considerable advantages. For one, you can match contributors' vocabulary: assuming contributors think of part of your site as a blog (which they probably will if the word "blog" appears anywhere onscreen), they won't find it obvious to add "news items" there. Adding a "blog entry," on the other hand, lines up naturally with their expectations. This little trick, combined with judicious use of the Add new… → Restrictions… feature to pare down their options, will save hours of your time in training and support calls. A second advantage of a custom type is that it shows separately in Plone's advanced search. Visitors, like contributors, will identify better with the "blog entry" nomenclature. Plus, sometimes it's just plain handy to limit searches to only blogs.
This type-cloning technique isn't limited to blog entries; you can clone and rename any content type: just visit portal_types in the ZMI, copy and paste a type, rename it, and edit its Title and Description fields. One commonly cloned type is File. Many contributors, even experts in noncomputer domains, aren't familiar with the word file. Cloning it to create PDF File, Word Document, and so on can go a long way toward making them comfortable using Plone.

Pros and cons of Scrawl

Scrawl's biggest risk is lock-in: since it uses its own Blog Entry content type to store your entries, uninstalling it leaves them inaccessible. However, because the Blog Entry type is really just the News Item type, a migration script is easy to write:

# Turn all Blog Entries in a Plone site into News Items.
#
# Run by adding a "Script (Python)" in the ZMI (it doesn't matter
# where) and pasting this in.
from Products.CMFCore.utils import getToolByName

portal_catalog = getToolByName(context, 'portal_catalog')
for brain in portal_catalog(portal_type='Blog Entry'):
    # Get the actual blog entry from the catalog entry.
    blog_entry = brain.getObject()
    blog_entry.portal_type = 'News Item'
    # Update the catalog so searches see the new info:
    blog_entry.reindexObject()

The reverse is also true: if you begin by using raw News Items and decide to switch to Scrawl, you'll need the reverse of the above script—just swap 'News Item' and 'Blog Entry'. If you have news items that shouldn't be converted to blog entries, your catalog query will have to be more specific, perhaps adding a path keyword argument, as in portal_catalog(portal_type='News Item', path='/my-plonesite/blog-folder'). Aside from that, Scrawl is pretty risk-free. Its simplicity makes it unlikely to accumulate showstopping bugs or to break in future versions of Plone, and, if it does, you can always migrate back to news items or, if you have some programming skill, maintain it yourself—it's only 1,000 lines of code.
Manipulating jQuery tables

Packt
24 Oct 2009
20 min read
In this article by Karl Swedberg and Jonathan Chaffer, we will use an online bookstore as our model website, but the techniques we cook up can be applied to a wide variety of other sites as well, from weblogs to portfolios, from market-facing business sites to corporate intranets. In this article, we will use jQuery to apply techniques for increasing the readability, usability, and visual appeal of tables, though we are not dealing with tables used for layout and design. In fact, as the web standards movement has become more pervasive in the last few years, table-based layout has increasingly been abandoned in favor of CSS-based designs. Although tables were often employed as a somewhat necessary stopgap measure in the 1990s to create multi-column and other complex layouts, they were never intended to be used in that way, whereas CSS is a technology expressly created for presentation. But this is not the place for an extended discussion on the proper role of tables. Suffice it to say that in this article we will explore ways to display and interact with tables used as semantically marked up containers of tabular data. For a closer look at applying semantic, accessible HTML to tables, a good place to start is Roger Johansson's blog entry, Bring on the Tables, at www.456bereastreet.com/archive/200410/bring_on_the_tables/. Some of the techniques we apply to tables in this article can be found in plug-ins such as Christian Bach's Table Sorter. For more information, visit the jQuery Plug-in Repository at http://jQuery.com/plugins.

Sorting

One of the most common tasks performed with tabular data is sorting. In a large table, being able to rearrange the information that we're looking for is invaluable. Unfortunately, this helpful operation is one of the trickiest to put into action. We can achieve the goal of sorting in two ways, namely Server-Side Sorting and JavaScript Sorting.

Server-Side Sorting

A common solution for data sorting is to perform it on the server side. Data in tables often comes from a database, which means that the code that pulls it out of the database can request it in a given sort order (using, for example, the SQL language's ORDER BY clause). If we have server-side code at our disposal, it is straightforward to begin with a reasonable default sort order. Sorting is most useful when the user can determine the sort order. A common idiom is to make the headers of sortable columns into links. These links can go to the current page, but with a query string appended indicating the column to sort by:

<table id="my-data">
  <tr>
    <th class="name"><a href="index.php?sort=name">Name</a></th>
    <th class="date"><a href="index.php?sort=date">Date</a></th>
  </tr>
  ...
</table>

The server can react to the query string parameter by returning the database contents in a different order.

Preventing Page Refreshes

This setup is simple, but requires a page refresh for each sort operation. As we have seen, jQuery allows us to eliminate such page refreshes by using AJAX methods. If we have the column headers set up as links as before, we can add jQuery code to change those links into AJAX requests:

$(document).ready(function() {
  $('#my-data .name a').click(function() {
    $('#my-data').load('index.php?sort=name&type=ajax');
    return false;
  });
  $('#my-data .date a').click(function() {
    $('#my-data').load('index.php?sort=date&type=ajax');
    return false;
  });
});

Now when the anchors are clicked, jQuery sends an AJAX request to the server for the same page. We add an additional parameter to the query string so that the server can determine that an AJAX request is being made. The server code can be written to send back only the table itself, and not the surrounding page, when this parameter is present. This way we can take the response and insert it in place of the table. This is an example of progressive enhancement. The page works perfectly well without any JavaScript at all, as the links for server-side sorting are still present. When JavaScript is present, however, the AJAX call hijacks the page request and allows the sort to occur without a full page load.
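What might the server side look like? Here is a minimal sketch in PHP, hedged accordingly: the index.php filename, the bookstore database, the books table, and its column names are illustrative assumptions rather than details from this article. The two points worth copying are the whitelist (raw query-string input never reaches the SQL) and the type=ajax branch that returns only the table fragment:

<?php
// index.php (hypothetical): serve the book list, sorted as requested.
$columns = array('name' => 'name', 'date' => 'publish_date'); // whitelist
$sort = isset($_GET['sort']) ? $_GET['sort'] : 'name';
if (!isset($columns[$sort])) {
  $sort = 'name'; // unknown column requested; fall back to a safe default
}

$pdo = new PDO('mysql:host=localhost;dbname=bookstore', 'user', 'pass');
// Safe to embed: the column name comes from our whitelist, not from the user.
$rows = $pdo->query('SELECT name, publish_date FROM books ORDER BY '.$columns[$sort]);

// For AJAX requests, emit only the table; otherwise the full page surrounds it.
$is_ajax = isset($_GET['type']) && $_GET['type'] == 'ajax';
if (!$is_ajax) {
  echo '<html><body>'; // ...the rest of the page header would go here...
}
echo '<table id="my-data"><tr>';
echo '<th class="name"><a href="index.php?sort=name">Name</a></th>';
echo '<th class="date"><a href="index.php?sort=date">Date</a></th></tr>';
foreach ($rows as $row) {
  echo '<tr><td>'.htmlspecialchars($row['name']).'</td>';
  echo '<td>'.htmlspecialchars($row['publish_date']).'</td></tr>';
}
echo '</table>';
if (!$is_ajax) {
  echo '</body></html>';
}

Any server-side language would do equally well; the only contract with the jQuery code is the pair of sort and type query-string parameters.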
JavaScript Sorting

There are times, though, when we either don't want to wait for server responses when sorting, or don't have a server-side scripting language available to us. A viable alternative in this case is to perform the sorting entirely on the browser using JavaScript client-side scripting. For example, suppose we have a table listing books, along with their authors, release dates, and prices:

<table class="sortable">
  <thead>
    <tr>
      <th></th>
      <th>Title</th>
      <th>Author(s)</th>
      <th>Publish&nbsp;Date</th>
      <th>Price</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>
        <img src="../covers/small/1847192386.png" width="49"
             height="61" alt="Building Websites with
                              Joomla! 1.5 Beta 1" />
      </td>
      <td>Building Websites with Joomla! 1.5 Beta 1</td>
      <td>Hagen Graf</td>
      <td>Feb 2007</td>
      <td>$40.49</td>
    </tr>
    <tr>
      <td><img src="../covers/small/1904811620.png" width="49"
               height="61" alt="Learning Mambo: A Step-by-Step
               Tutorial to Building Your Website" /></td>
      <td>Learning Mambo: A Step-by-Step Tutorial to Building Your
          Website</td>
      <td>Douglas Paterson</td>
      <td>Dec 2006</td>
      <td>$40.49</td>
    </tr>
    ...
  </tbody>
</table>

We'd like to turn the table headers into buttons that sort by their respective columns. Let us look into ways of doing this.

Row Grouping Tags

Note our use of the <thead> and <tbody> tags to segment the data into row groupings. Many HTML authors omit these implied tags, but they can prove useful in supplying us with more convenient CSS selectors to use. For example, suppose we wish to apply typical even/odd row striping to this table, but only to the body of the table:

$(document).ready(function() {
  $('table.sortable tbody tr:odd').addClass('odd');
  $('table.sortable tbody tr:even').addClass('even');
});

This will add alternating colors to the table, but leave the header untouched.

Basic Alphabetical Sorting

Now let's perform a sort on the Title column of the table. We'll need a class on the table header cell so that we can select it properly:

<thead>
  <tr>
    <th></th>
    <th class="sort-alpha">Title</th>
    <th>Author(s)</th>
    <th>Publish&nbsp;Date</th>
    <th>Price</th>
  </tr>
</thead>

To perform the actual sort, we can use JavaScript's built-in .sort() method. It does an in-place sort on an array, and can take a function as an argument. This function compares two items in the array and should return a positive or negative number depending on the result.
Our initial sort routine looks like this:

$(document).ready(function() {
  $('table.sortable').each(function() {
    var $table = $(this);
    $('th', $table).each(function(column) {
      if ($(this).is('.sort-alpha')) {
        $(this).addClass('clickable').hover(function() {
          $(this).addClass('hover');
        }, function() {
          $(this).removeClass('hover');
        }).click(function() {
          var rows = $table.find('tbody > tr').get();
          rows.sort(function(a, b) {
            var keyA = $(a).children('td').eq(column).text().toUpperCase();
            var keyB = $(b).children('td').eq(column).text().toUpperCase();
            if (keyA < keyB) return -1;
            if (keyA > keyB) return 1;
            return 0;
          });
          $.each(rows, function(index, row) {
            $table.children('tbody').append(row);
          });
        });
      }
    });
  });
});

The first thing to note is our use of the .each() method to make iteration explicit. Even though we could bind a click handler to all headers with the sort-alpha class just by calling $('table.sortable th.sort-alpha').click(), this wouldn't allow us to easily capture a crucial bit of information—the column index of the clicked header. Because .each() passes the iteration index into its callback function, we can use it to find the relevant cell in each row of the data later.

Once we have found the header cell, we retrieve an array of all of the data rows. This is a great example of how .get() is useful in transforming a jQuery object into an array of DOM nodes; even though jQuery objects act like arrays in many respects, they don't have any of the native array methods available, such as .sort().

With .sort() at our disposal, the rest is fairly straightforward. The rows are sorted by comparing the textual contents of the relevant table cell. We know which cell to look at because we captured the column index in the enclosing .each() call. We convert the text to uppercase because string comparisons in JavaScript are case-sensitive and we wish our sort to be case-insensitive. Finally, with the array sorted, we loop through the rows and reinsert them into the table. Since .append() does not clone nodes, this moves them rather than copying them. Our table is now sorted.

This is an example of progressive enhancement's counterpart, graceful degradation. Unlike with the AJAX solution discussed earlier, we cannot make the sort work without JavaScript, as we are assuming the server has no scripting language available to it in this case. The JavaScript is required for the sort to work, so by adding the "clickable" class only through code, we make sure not to indicate with the interface that sorting is even possible unless the script can run. The page degrades into one that is still functional, albeit without sorting available.

We have moved the actual rows around, hence our alternating row colors are now out of whack. We need to reapply the row colors after the sort is performed.
We can do this by pulling the coloring code out into a function that we call when needed:

$(document).ready(function() {
  var alternateRowColors = function($table) {
    $('tbody tr:odd', $table).removeClass('even').addClass('odd');
    $('tbody tr:even', $table).removeClass('odd').addClass('even');
  };

  $('table.sortable').each(function() {
    var $table = $(this);
    alternateRowColors($table);
    $('th', $table).each(function(column) {
      if ($(this).is('.sort-alpha')) {
        $(this).addClass('clickable').hover(function() {
          $(this).addClass('hover');
        }, function() {
          $(this).removeClass('hover');
        }).click(function() {
          var rows = $table.find('tbody > tr').get();
          rows.sort(function(a, b) {
            var keyA = $(a).children('td').eq(column).text().toUpperCase();
            var keyB = $(b).children('td').eq(column).text().toUpperCase();
            if (keyA < keyB) return -1;
            if (keyA > keyB) return 1;
            return 0;
          });
          $.each(rows, function(index, row) {
            $table.children('tbody').append(row);
          });
          alternateRowColors($table);
        });
      }
    });
  });
});

This corrects the row coloring after the fact, fixing our issue.

The Power of Plug-ins

The alternateRowColors() function that we wrote is a perfect candidate to become a jQuery plug-in. In fact, any operation that we wish to apply to a set of DOM elements can easily be expressed as a plug-in. In this case, we only need to modify our existing function a little bit:

jQuery.fn.alternateRowColors = function() {
  $('tbody tr:odd', this).removeClass('even').addClass('odd');
  $('tbody tr:even', this).removeClass('odd').addClass('even');
  return this;
};

We have made three important changes to the function. It is defined as a new property of jQuery.fn rather than as a standalone function. This registers the function as a plug-in method. We use the keyword this as a replacement for our $table parameter. Within a plug-in method, this refers to the jQuery object that is being acted upon. Finally, we return this at the end of the function. The return value makes our new method chainable.

More information on writing jQuery plug-ins can be found in Chapter 10 of our book Learning jQuery. There we will discuss making a plug-in ready for public consumption, as opposed to the small example here that is only to be used by our own code. With our new plug-in defined, we can call $table.alternateRowColors(), which is a more natural jQuery syntax, instead of alternateRowColors($table).

Performance Concerns

Our code works, but is quite slow. The culprit is the comparator function, which is performing a fair amount of work. This comparator will be called many times during the course of a sort, which means that every extra moment it spends on processing will be magnified. The actual sort algorithm used by JavaScript is not defined by the standard. It may be a simple sort like a bubble sort (worst case of Θ(n²) in computational complexity terms) or a more sophisticated approach like quick sort (which is Θ(n log n) on average). In either case, doubling the number of items increases the number of times the comparator function is called by more than double. The remedy for our slow comparator is to pre-compute the keys for the comparison.
We begin with the slow sort function:

rows.sort(function(a, b) {
  var keyA = $(a).children('td').eq(column).text().toUpperCase();
  var keyB = $(b).children('td').eq(column).text().toUpperCase();
  if (keyA < keyB) return -1;
  if (keyA > keyB) return 1;
  return 0;
});
$.each(rows, function(index, row) {
  $table.children('tbody').append(row);
});

We can pull out the key computation and do that in a separate loop:

$.each(rows, function(index, row) {
  row.sortKey = $(row).children('td').eq(column).text().toUpperCase();
});
rows.sort(function(a, b) {
  if (a.sortKey < b.sortKey) return -1;
  if (a.sortKey > b.sortKey) return 1;
  return 0;
});
$.each(rows, function(index, row) {
  $table.children('tbody').append(row);
  row.sortKey = null;
});

In the new loop, we are doing all of the expensive work and storing the result in a new property. This kind of property, attached to a DOM element but not a normal DOM attribute, is called an expando. This is a convenient place to store the key since we need one per table row element. Now we can examine this attribute within the comparator function, and our sort is markedly faster.

We set the expando property to null after we're done with it to clean up after ourselves. This is not necessary in this case, but is a good habit to establish because expando properties left lying around can be the cause of memory leaks. For more information, see Appendix C.

Finessing the Sort Keys

Now we want to apply the same kind of sorting behavior to the Author(s) column of our table. By adding the sort-alpha class to its table header cell, the Author(s) column can be sorted with our existing code. But ideally authors should be sorted by last name, not first. Since some books have multiple authors, and some authors have middle names or initials listed, we need outside guidance to determine what part of the text to use as our sort key. We can supply this guidance by wrapping the relevant part of the cell in a tag:

<tr>
  <td>
    <img src="../covers/small/1847192386.png" width="49" height="61"
         alt="Building Websites with Joomla! 1.5 Beta 1" /></td>
  <td>Building Websites with Joomla! 1.5 Beta 1</td>
  <td>Hagen <span class="sort-key">Graf</span></td>
  <td>Feb 2007</td>
  <td>$40.49</td>
</tr>
<tr>
  <td>
    <img src="../covers/small/1904811620.png" width="49" height="61"
         alt="Learning Mambo: A Step-by-Step Tutorial to Building
              Your Website" /></td>
  <td>
    Learning Mambo: A Step-by-Step Tutorial to Building Your Website
  </td>
  <td>Douglas <span class="sort-key">Paterson</span></td>
  <td>Dec 2006</td>
  <td>$40.49</td>
</tr>
<tr>
  <td>
    <img src="../covers/small/1904811299.png" width="49" height="61"
         alt="Moodle E-Learning Course Development" /></td>
  <td>Moodle E-Learning Course Development</td>
  <td>William <span class="sort-key">Rice</span></td>
  <td>May 2006</td>
  <td>$35.99</td>
</tr>

Now we have to modify our sorting code to take this tag into account, without disturbing the existing behavior for the Title column, which is working well.
By prepending the marked sort key to the key we have previously calculated, we can sort first on the last name if it is called out, but on the whole string as a fallback:

$.each(rows, function(index, row) {
  var $cell = $(row).children('td').eq(column);
  row.sortKey = $cell.find('.sort-key').text().toUpperCase()
                + ' ' + $cell.text().toUpperCase();
});

Sorting by the Author(s) column now uses the last name. If two last names are identical, the sort uses the entire string as a tiebreaker for positioning.
Migrating a WordPress Blog to Middleman and Deploying to Amazon S3

Mike Ball
07 Nov 2014
11 min read
Part 1: Getting up and running with Middleman

Many of today’s most prominent web frameworks, such as Ruby on Rails, Django, WordPress, Drupal, Express, and Spring MVC, rely on a server-side language to process HTTP requests, query data at runtime, and serve back dynamically constructed HTML. These platforms are great, yet developers of dynamic web applications often face complex performance challenges under heavy user traffic, independent of the underlying technology. High traffic, and frequent requests, may exploit processing-intensive code or network latency, in effect yielding a poor user experience or production outage.

Static site generators such as Middleman, Jekyll, and Wintersmith offer developers an elegant, highly scalable alternative to complex, dynamic web applications. Such tools perform dynamic processing and HTML construction during build time rather than runtime. These tools produce a directory of static HTML, CSS, and JavaScript files that can be deployed directly to a web server such as Nginx or Apache. This architecture reduces complexity and encourages a sensible separation of concerns; if necessary, user-specific customization can be handled via client-side interaction with third-party satellite services.

In this three-part series, we'll walk through how to get started in developing a Middleman site, some basics of Middleman blogging, how to migrate content from an existing WordPress blog, and how to deploy a Middleman blog to production. We will also learn how to create automated tests, continuous integration, and automated deployments.

In this part, we’ll cover the following:

Creating a basic Middleman project
Middleman configuration basics
A quick overview of the Middleman template system
Creating a basic Middleman blog

Why should you use Middleman?

Middleman is a mature, full-featured static site generator. It supports a strong templating system, numerous Ruby-based HTML templating tools such as ERb and HAML, as well as a Sprockets-based asset pipeline used to manage CSS, JavaScript, and third-party client-side code. Middleman also integrates well with CoffeeScript, SASS, and Compass.

Environment

For this tutorial, I’m using an RVM-installed Ruby 2.1.2. I’m on Mac OS X 10.9.4.

Installing Middleman

Install Middleman via RubyGems:

$ gem install middleman

Create a basic Middleman project called middleman-demo:

$ middleman init middleman-demo

This results in a middleman-demo directory with the following layout:

├── Gemfile
├── Gemfile.lock
├── config.rb
└── source
    ├── images
    │   ├── background.png
    │   └── middleman.png
    ├── index.html.erb
    ├── javascripts
    │   └── all.js
    ├── layouts
    │   └── layout.erb
    └── stylesheets
        ├── all.css
        └── normalize.css

There are 5 directories and 10 files.

A quick tour

Here are a few notes on the middleman-demo layout:

The Ruby Gemfile cites Ruby gem dependencies; Gemfile.lock cites the full dependency chain, including middleman-demo’s dependencies’ dependencies.
The config.rb file houses middleman-demo’s configuration.
The source directory houses middleman-demo’s source code: the templates, style sheets, images, JavaScript, and other source files required by the middleman-demo site.

While a Middleman production build is simply a directory of static HTML, CSS, JavaScript, and image files, Middleman sites can be run via a simple web server in development. Run the middleman-demo development server:

$ middleman

Now, the middleman-demo site can be viewed in your web browser at http://localhost:4567.
Set up live-reloading

Middleman comes with the middleman-livereload gem. The gem detects source code changes and automatically reloads the Middleman app. Activate middleman-livereload by uncommenting the following code in config.rb:

# Reload the browser automatically whenever files change
configure :development do
  activate :livereload
end

Restart the middleman server to allow the configuration change to take effect. Now, middleman-demo should automatically reload on change to config.rb and your web browser should automatically refresh when you edit the source/* code.

Customize the site’s appearance

Middleman offers a mature HTML templating system. The source/layouts directory contains layouts, the common HTML surrounding individual pages and shared across your site. middleman-demo uses ERb as its template language, though Middleman supports other options such as HAML and Slim.

Also note that Middleman supports the ability to embed metadata within templates via frontmatter. Frontmatter allows page-specific variables to be embedded via YAML or JSON. These variables are available in a current_page.data namespace. For example, source/index.html.erb contains the following frontmatter specifying a title; it’s available to ERb templates as current_page.data.title:

---
title: Welcome to Middleman
---

Currently, middleman-demo is a default Middleman installation. Let’s customize things a bit. First, remove all the contents of source/stylesheets/all.css to remove the default Middleman styles. Next, edit source/index.html.erb to be the following:

---
title: Welcome to Middleman Demo
---

<h1>Middleman Demo</h1>

When viewing middleman-demo at http://localhost:4567, you’ll now see a largely unstyled HTML document with a single Middleman Demo heading.

Install the middleman-blog plugin

The middleman-blog plugin offers blog functionality to middleman applications. We’ll use middleman-blog in middleman-demo. Add the middleman-blog version 3.5.3 gem dependency to middleman-demo by adding the following to the Gemfile:

gem "middleman-blog", "3.5.3"

Re-install the middleman-demo gem dependencies, which now include middleman-blog:

$ bundle install

Activate middleman-blog and specify a URL pattern at which to serve blog posts by adding the following to config.rb:

activate :blog do |blog|
  blog.prefix = "blog"
  blog.permalink = "{year}/{month}/{day}/{title}.html"
end

Write a quick blog post

Now that all has been configured, let’s write a quick blog post to confirm that middleman-blog works. First, create a directory to house the blog posts:

$ mkdir source/blog

The source/blog directory will house markdown files containing blog post content and any necessary metadata. These markdown files highlight a key feature of Middleman: rather than query a relational database within which content is stored, a Middleman application typically reads data from flat files, simple text files—usually markdown—stored within the site’s source code repository. Create a markdown file for middleman-demo’s first post:

$ touch source/blog/2014-08-20-new-blog.markdown

Next, add the required frontmatter and content to source/blog/2014-08-20-new-blog.markdown:

---
title: New Blog
date: 2014/08/20
tags: middleman, blog
---

Hello world from Middleman!

Features

* Rich templating system
* Built-in helpers
* Easy configuration
* Asset pipeline
* Lots more

Note that the content is authored in markdown, a plain text syntax, which is evaluated by Middleman as HTML. You can also embed HTML directly in the markdown post files.
GitHub’s documentation provides a good overview of markdown.

Next, add the following ERb template code to source/index.html.erb to display a list of blog posts on middleman-demo’s home page:

<ul>
  <% blog.articles.each do |article| %>
    <li>
      <%= link_to article.title, article.path %>
    </li>
  <% end %>
</ul>

Now, when running middleman-demo and visiting http://localhost:4567, a link to the new blog post is listed on middleman-demo’s home page. Clicking the link renders the permalink for the New Blog blog post at blog/2014/08/20/new-blog.html, as is specified in the blog configuration in config.rb.

A few notes on the template code

Note the use of a link_to method. This is a built-in Middleman template helper. Middleman provides template helpers to simplify many common template tasks, such as rendering an anchor tag. In this case, we pass the link_to method two arguments, the intended anchor tag text and the intended href value. In turn, link_to generates the necessary HTML.

Also note the use of a blog variable, whose articles method houses an array of all blog posts. Where did this come from? middleman-demo is an instance of Middleman::Application; blog is a method on this instance. To explore other Middleman::Application methods, open middleman-demo via the built-in Middleman console by entering the following in your terminal:

$ middleman console

To view all the methods on blog, including the aforementioned articles method, enter the following within the console:

2.1.2 :001 > blog.methods

To view all the additional methods, beyond blog, available to the Middleman::Application instance, enter the following within the console:

2.1.2 :001 > self.methods

More can be read about all these methods on Middleman::Application’s rdoc.info class documentation.

Cleaner URLs

Note that the current new blog URL ends in .html. Let’s customize middleman-demo to omit .html from URLs. Add the following to config.rb:

activate :directory_indexes

Now, rather than generating files such as /blog/2014/08/20/new-blog.html, middleman-demo generates files such as /blog/2014/08/20/new-blog/index.html, thus enabling the page to be served by most web servers at a /blog/2014/08/20/new-blog/ path.

Adjusting the templates

Let’s adjust the middleman-demo ERb templates a bit. First, note that <h1>Middleman Demo</h1> only displays on the home page; let’s make it render on all of the site’s pages. Move <h1>Middleman Demo</h1> from source/index.html.erb to source/layouts/layout.erb. Put it just inside the <body> tag:

<body class="<%= page_classes %>">
  <h1>Middleman Demo</h1>
  <%= yield %>
</body>

Next, let’s create a custom blog post template. Create the template file:

$ touch source/layouts/post.erb

Add the following to source/layouts/post.erb to extend the site-wide functionality of source/layouts/layout.erb:

<% wrap_layout :layout do %>
  <h2><%= current_article.title %></h2>
  <p>Posted <%= current_article.date.strftime('%B %e, %Y') %></p>
  <%= yield %>
  <ul>
    <% current_article.tags.each do |tag| %>
      <li><a href="/blog/tags/<%= tag %>/"><%= tag %></a></li>
    <% end %>
  </ul>
<% end %>

Note the use of the wrap_layout ERb helper. The wrap_layout ERb helper takes two arguments. The first is the name of the layout to wrap, in this case :layout. The second argument is a Ruby block; the contents of the block are evaluated within the <%= yield %> call of source/layouts/layout.erb.
Next, instruct middleman-demo to use source/layouts/post.erb in serving blog posts by adding the necessary configuration to config.rb:

page "blog/*", :layout => :post

Now, when restarting the Middleman server and visiting http://localhost:4567/blog/2014/08/20/new-blog/, middleman-demo renders a more comprehensive blog template that includes the post’s title, date published, and tags.

Let’s add a simple template to render a tags page that lists relevant tagged content. First, create the template:

$ touch source/tag.html.erb

And add the necessary ERb to list the relevant posts assigned a given tag:

<h2>Posts tagged <%= tagname %></h2>
<ul>
  <% page_articles.each do |post| %>
    <li>
      <a href="<%= post.url %>"><%= post.title %></a>
    </li>
  <% end %>
</ul>

Specify the blog’s tag template by editing the blog configuration in config.rb:

activate :blog do |blog|
  blog.prefix = 'blog'
  blog.permalink = "{year}/{month}/{day}/{title}.html"
  # tag template:
  blog.tag_template = "tag.html"
end

Edit config.rb to configure middleman-demo’s tag pages to use source/layouts/layout.erb rather than source/layouts/post.erb:

page "blog/tags/*", :layout => :layout

Now, when visiting http://localhost:4567/blog/2014/08/20/new-blog/, you should see a linked list of New Blog’s tags. Clicking a tag should correctly render the tags page.

Part 1 recap

Thus far, middleman-demo serves as a basic Middleman-based blog example. It demonstrates Middleman templating, how to set up the middleman-blog plugin, and how to author markdown-based blog posts in Middleman.

In part 2, we’ll cover migrating content from an existing WordPress blog. We’ll also step through establishing an Amazon S3 bucket, building middleman-demo, and deploying to production. In part 3, we’ll cover how to create automated tests, continuous integration, and automated deployments.

About this author

Mike Ball is a Philadelphia-based software developer specializing in Ruby on Rails and JavaScript. He works for Comcast Interactive Media where he helps build web-based TV and video consumption applications.
HTML, PHP, and Content Posting in Drupal 6

Packt
15 Oct 2009
9 min read
Input Formats and Filters

It is necessary to stipulate the type of content we will be posting, in any given post. This is done through the use of the Input format setting that is displayed when posting content to the site—assuming the user in question has sufficient permissions to post different types of content. In order to control what is and is not allowed, head on over to the Input formats link under Site configuration. This will bring up a list of the currently defined input formats, like this:

At the moment, you might be wondering why we need to go to all this trouble to decide whether people can add certain HTML tags to their content. The answer to this is that because both HTML and PHP are so powerful, it is not hard to subvert even fairly simple abilities for malicious purposes. For example, you might decide to allow users the ability to link to their homepages from their blogs. Using the ability to add a hyperlink to their postings, a malicious user could create a Trojan, virus, or some other harmful content, and link to it from an innocuous and friendly looking piece of HTML like this:

<p>Hi Friends! My <a href="link_to_trojan.exe">homepage</a> is a great place to meet and learn about my interests and hobbies. </p>

This snippet writes out a short paragraph with a link, supposedly to the author's homepage. In reality, the hyperlink reference attribute points to a trojan, link_to_trojan.exe. That's just HTML! PHP can do a lot more damage—to the extent that if you don't have proper security or disaster-recovery policies in place, then it is possible that your site can be rendered useless or destroyed entirely.

Security is the main reason why, as you may have noticed from the previous screenshot, anything other than Filtered HTML is unavailable for use by anyone except the administrator. By default, PHP is not even present, let alone disabled. When thinking about what permissions to allow, it is important to re-iterate the tenet: Never allow users more permissions than they require to complete their intended tasks!

As they stand, you might not find the input formats to your liking, and so Drupal provides some functionality to modify them. Click on the configure link adjacent to the Filtered HTML option, and this will bring up the following page:

The Edit tab provides the option to alter the Name property of the input format; the Roles section in this case cannot be changed, but as you will see when we come around to creating our own input format, roles can be assigned however you wish to allow certain users to make use of an input format, or not. The final section provides a checklist of the types of Filters to apply when using this input format. In the previous screenshot, all have been selected, and this causes the input format to apply the:

HTML corrector – corrects any broken HTML within postings to prevent undesirable results in the rest of your page.
HTML filter – determines whether or not to strip or remove unwanted HTML.
Line break converter – turns standard typed line breaks (that is, whenever a poster presses Enter) into standard HTML.
URL filter – allows recognized links and email addresses to be clickable without having to write the HTML tags manually.

The line break converter is particularly useful for users because it means that they do not have to explicitly enter <br> or <p> HTML tags in order to display new lines or paragraph breaks—this can get tedious by the time you are writing your 400th blog entry.
If this is disabled, unless the user has the ability to add the relevant HTML tags, the content may end up looking like this:

Click on the Configure tab, at the top of the page, in order to begin working with the HTML filter. You should be presented with something like this:

The URL filter option is really there to help protect the formatting and layout of your site. It is possible to have quite long URLs these days, and because URLs do not contain spaces, there is nowhere to naturally split them up. As a result, a browser might do some strange things to cater for the long string and whatever it is; this will make your site look odd. Decide how many characters the longest string should be and enter that number in the space provided. Remember that some content may appear in the sidebars, so you can't let it get too long if they are supposed to be a fixed width.

The HTML filter section lets you specify whether to Strip disallowed tags, or escape them (Escape all tags causes any tags that are present in the post to be displayed as written). Remember that if all the tags are stripped from the content, you should enable the Line break converter so that users can at least paragraph their content properly. Which tags are to be stripped is decided in the Allowed HTML tags section, where a list of all the tags that are to be allowed can be entered—anything else gets handled appropriately.

Selecting Display HTML help forces Drupal to provide HTML help for users posting content—try enabling and disabling this option and browsing to this relative URL in each case to see the difference: filter/tips. There is quite a bit of helpful information on HTML in the long filter tips, so take a moment to read over those. The filter tips can be reached whenever a user expands the Input format section of the content post and clicks on More information about formatting options at the bottom of that section.

Finally, the Spam link deterrent is a useful tool if the site is being used to bombard members with links to unsanctioned (and often unsavory) products. Spammers will use anonymous accounts to post junk (assuming anonymous users are allowed to post content) and enabling this for anonymous posts is an effective way of breaking them.

This is not the end of the story, because we also need to be able to create input formats in the event we require something that the default options can't cater for. For our example, there are several ways in which this can be done, but there are three main criteria that need to be satisfied before we can consider creating the page. We need to be able to:

Upload image files and attach them to the post.
Insert and display the image files within the body of the post.
Use PHP in order to dynamically generate some of the content (this option is really only necessary to demonstrate how to embed PHP in a posting for future reference).

There are several methods for displaying image files within posts. The one we will discuss here does not require us to download and install any contribution modules, such as Img_assist. Instead, we will use HTML directly to achieve this; specifically, we use the <img> tag. Take a look at the previous screenshot that shows the configure page of the Filtered HTML input format. Notice that the <img> tag is not available for use. Let's create our own input format to cater for this, instead of modifying this default format. Before we do, first enable the PHP Filter module under Modules in Site building so that it can easily be used when the time comes.
With that change saved, you will find that there is now an extra option in the Filters section of each input format configuration page. It's not a good idea to enable the PHP evaluator for either of the default options, but adding it to one of our own input formats will be OK to play with. Head on back to the main input formats page under Site configuration (notice that there is an additional input format available, called PHP code) and click on Add input format. This will bring up the same configuration type page we looked at earlier.

It is easy to implement whatever new settings you want, based on how the input format is to be used. For our example, we need the ability to post images and make use of PHP scripts, so make the new input format as follows:

As we will need to make use of some PHP code a bit later on, we have enabled the PHP evaluator option, as well as prevented the use of this format for anyone but ourselves—normally, you would create a format for a group of users who require the modified posting abilities, but in this case, we are simply demonstrating how to create a new input format, so this is fine for now.

PHP should not be enabled for anyone other than yourself, or a highly trusted administrator who needs it to complete his or her work.

Click Save configuration to add this new format to the list, and then click on the Configure tab to work on the HTML filter. The only change required between this input format and the default Filtered HTML, in terms of HTML, is the addition of the <img> and <div> tags, separated by a space, in the Allowed HTML tags list.

As things stand at the moment, you may run into problems with adding PHP code to any content postings. This is because some filters affect the function of others, and to be on the safe side, click on the Rearrange tab and set the PHP evaluator to execute first. Since the PHP evaluator's weight is the lowest, it is treated first, with all the others following suit. It's a safe bet that if you are getting unexpected results when using a certain type of filter, you need to come to this page and change the settings. We'll see a bit more about this in a moment. Now, the PHP evaluator gets dibs on the content and can properly process any PHP.

For the purposes of adding images and PHP to posts (as the primary user), this is all that is needed for now. Once satisfied with the settings, save the changes. Before building the new page, it is probably most useful to have a short discourse on HTML, because it is a requirement if you are to attempt more complex postings.
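To see what the PHP evaluator makes possible once the new input format is selected for a post, consider the body below. It is a made-up snippet purely for illustration (any valid PHP between the <?php ... ?> delimiters would do); it mixes allowed HTML with a script that prints the current date dynamically each time the post is viewed:

<p>Thanks for stopping by!</p>
<?php
  // Evaluated by the PHP filter at view time, not stored as static HTML.
  echo '<div>Today is '.date('l, F j, Y').'.</div>';
?>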
Jailbreaking the iPad - in Ubuntu

Packt
20 Jul 2010
3 min read
(For more resources on Ubuntu, see here.)

What is jailbreaking?

Jailbreaking an iPhone or iPad allows you to run unsigned code by unlocking the root account on the device. Simply, this allows you to install any software you like - without the restriction of having to be in the main Apple app store. Remember, jailbreaking is not SIM unlocking. Jailbreaking voids the Apple-supplied warranty.

What does this mean for developers?

The mass availability of jailbreaking for these devices allows developers to write apps without having to shell out Apple's developer fees. Previously a one-off payment of $300 US, an "official" developer must now pay $100 US each year to keep the right to develop applications.

What jailbreaks are available?

Arguably the most advanced jailbreak available now is called Spirit. Though unlike a few others, which can now hack iOS 4.0, Spirit differs in a few key features. Not only is Spirit the first to be able to jailbreak the iPad, but this jailbreak also allows an "untethered" jailbreak - you won't have to plug it into a computer every boot to "keep" it jailbroken. Support for jailbreaking iOS 4.0 is coming soon for Spirit. There are tutorials on jailbreaking using Spirit, like this one, but they generally skip over the fact that there's a Linux version, and only talk about Windows and/or OS X.

Jailbreaking the iPad

A very simple process, you can now jailbreak the iPad very quickly thanks to excellent device support and drivers in Ubuntu. Please note that from now on, you should only plug in the device to iTunes 9 before 9.2, or better still, just use Rhythmbox or gtkpod to manage your library.

Install git if you haven't already got it:

sudo apt-get install git

Clone the Spirit repository:

git clone http://github.com/posixninja/spirit-linux.git

Install the dev package for libimobiledevice:

sudo apt-get install libimobiledevice-dev

Enter the Spirit directory and build the program:

cd spirit-linux
make

I've noticed that though Ubuntu has excellent Apple device support, and you can mount these devices just fine, the jailbreak program won't detect the device without iFuse. Install this first:

sudo apt-get install ifuse

Now for the fun! Plug in your iPad (you'll see it under the Places menu) and run the jailbreak:

./spirit

You'll see output similar to this:

INFO: Retriving device list
INFO: Opening device
INFO: Creating lockdownd client
INFO: Starting AFC service
INFO: Sending files via AFC.
INFO: Found version iPad1,1_3.2
INFO: Read igor/map.plist
INFO: Sending "install"
INFO: Sending "one.dylib"
INFO: Sending "freeze.tar.xz"
INFO: Sending "bg.jpg"
INFO: Sending files complete
INFO: Creating lockdownd client
INFO: Starting MobileBackup service
INFO: Beginning restore process
INFO: Read resources/overrides.plist
DEBUG: add_file
DEBUG: Data size 922:
DEBUG: add_file
DEBUG: Data size 0:
DEBUG: start_restore
DEBUG: Sending file
DEBUG: Sending file
INFO: Completed restore
INFO: Completed successfully

The device will reboot, and if all went well, you'll see a new app called Cydia on the home screen. This is the app that allows you to install other apps. Open Cydia. Cydia will ask you to choose what kind of user you are. There's no harm in choosing Developer; you'll just see more information. Also, if you choose the bottom level (User), console packages like OpenSSH will be hidden from you. You'll also receive some updates; install them. Interestingly, Cydia uses the deb package format, just like Ubuntu.

That's it! Wasn't that quick?
Using front controllers to create a new page

Packt
28 Nov 2014
22 min read
In this article, by Fabien Serny, author of PrestaShop Module Development, you will learn about controllers and object models. Controllers handle display on the front office and permit us to create new page types. Object models handle all required database requests. We will also see that, sometimes, hooks are not enough and can't change the way PrestaShop works. In these cases, we will use overrides, which permit us to alter the default process of PrestaShop without making changes in the core code. If you need to create a complex module, you will need to use front controllers. First of all, using front controllers will permit you to split the code into several classes (and files) instead of coding all your module actions in the same class. Also, unlike hooks (that handle some of the display in the existing PrestaShop pages), they will allow you to create new pages.

(For more resources related to this topic, see here.)

Creating the front controller

To make this section easier to understand, we will make an improvement on our current module. Instead of displaying all of the comments (there can be many), we will only display the last three comments and a link that redirects to a page containing all the comments of the product.

First of all, we will add a limit to the Db request in the assignProductTabContent method of your module class that retrieves the comments on the product page:

$comments = Db::getInstance()->executeS('
    SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` = '.(int)$id_product.'
    ORDER BY `date_add` DESC
    LIMIT 3');

Now, if you go to a product, you should only see the last three comments. We will now create a controller that will display all comments concerning a specific product. Go to your module's root directory and create the following directory path:

/controllers/front/

Create the file that will contain the controller. You have to choose a simple and explicit name, since the filename will be used in the URL; let's name it comments.php. In this file, create a class and name it, ensuring that you follow the [ModuleName][ControllerFilename]ModuleFrontController convention, which extends the ModuleFrontController class. So in our case, the file will be as follows:

<?php
class MyModCommentsCommentsModuleFrontController extends ModuleFrontController
{
}

The naming convention has been defined by PrestaShop and must be respected. The class names are a bit long, but they enable us to avoid having two identical class names in different modules. Now you just have to set the template file you want to display with the following lines:

class MyModCommentsCommentsModuleFrontController extends ModuleFrontController
{
    public function initContent()
    {
        parent::initContent();
        $this->setTemplate('list.tpl');
    }
}

Next, create a template named list.tpl and place it in views/templates/front/ of your module's directory:

<h1>{l s='Comments' mod='mymodcomments'}</h1>

Now, you can check the result by loading this link on your shop:

/index.php?fc=module&module=mymodcomments&controller=comments

You should see the Comments title displayed. The fc parameter defines the front controller type, the module parameter defines in which module directory the front controller is, and, at last, the controller parameter defines which controller file to load.

Maintaining compatibility with the Friendly URL option

In order to let the visitor access the controller page we created in the preceding section, we will just add a link between the last three comments displayed and the comment form in the displayProductTabContent.tpl template.
To maintain compatibility with the Friendly URL option of PrestaShop, we will use the getModuleLink method. This will generate a URL according to the URL settings (defined in Preferences | SEO & URLs). If the Friendly URL option is enabled, then it will generate a friendly URL (for example, /en/5-tshirts-doctor-who); if not, it will generate a classic URL (for example, /index.php?id_category=5&controller=category&id_lang=1).

This function takes three parameters: the name of the module, the controller filename you want to call, and an array of parameters. The array of parameters must contain all of the data that's needed, which will be used by the controller. In our case, we will need at least the product identifier, id_product, to display only the comments related to the product. We can also add a module_action parameter just in case our controller contains several possible actions.

Here is an example. As you will notice, I created the parameters array directly in the template using the assign Smarty method. From my point of view, it is easier to have the content of the parameters close to the link. However, if you want, you can create this array in your module class and assign it to your template in order to have cleaner code:

<div class="rte">
  {assign var=params value=[
    'module_action' => 'list',
    'id_product' => $smarty.get.id_product
  ]}
  <a href="{$link->getModuleLink('mymodcomments', 'comments', $params)}">
    {l s='See all comments' mod='mymodcomments'}
  </a>
</div>

Now, go to your product page and click on the link; the URL displayed should look something like this:

/index.php?module_action=list&id_product=1&fc=module&module=mymodcomments&controller=comments&id_lang=1

Creating a small action dispatcher

In our case, we won't need to have several possible actions in the comments controller. However, it would be great to create a small dispatcher in our front controller just in case we want to add other actions later. To do so, in controllers/front/comments.php, we will create new methods corresponding to each action. I propose to use the init[Action] naming convention (but this is not mandatory). So in our case, it will be a method named initList:

protected function initList()
{
    $this->setTemplate('list.tpl');
}

Now in the initContent method, we will create an $actions_list array containing all possible actions and associated callbacks:

$actions_list = array('list' => 'initList');

Now, we will retrieve the id_product and module_action parameters in variables. Once complete, we will check whether the id_product parameter is valid and if the action exists by checking in the $actions_list array. If the method exists, we will dynamically call it:

if ($id_product > 0 && isset($actions_list[$module_action]))
    $this->$actions_list[$module_action]();

Here's what your code should look like:

public function initContent()
{
    parent::initContent();
    $id_product = (int)Tools::getValue('id_product');
    $module_action = Tools::getValue('module_action');
    $actions_list = array('list' => 'initList');
    if ($id_product > 0 && isset($actions_list[$module_action]))
        $this->$actions_list[$module_action]();
}

If you did this correctly, nothing should have changed when you refresh the page in your browser, and the Comments title should still be displayed.
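The dispatcher earns its keep as soon as a second action shows up. The initReply action below is purely hypothetical (the module described here has no reply feature, and reply.tpl does not exist); it only illustrates the shape such an extension would take:

protected function initReply()
{
    // Hypothetical second action: render a template where visitors
    // could reply to a comment (template not created in this article).
    $this->setTemplate('reply.tpl');
}

// ...and in initContent, register it alongside the existing action:
$actions_list = array(
    'list' => 'initList',
    'reply' => 'initReply',
);

Nothing else in initContent needs to change; the isset() check already routes any recognized module_action value to its callback.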
Displaying the product name and comments

We will now display the product name (to let the visitor know he or she is on the right page) and the associated comments. First of all, create a public variable, $product, in your controller class, and set it in the initContent method with an instance of the selected product. This way, the product object will be available in every action method:

$this->product = new Product((int)$id_product, false, $this->context->cookie->id_lang);

In the initList method, just before setTemplate, we will make a DB request to get all comments associated with the product and then assign the product object and the comments list to Smarty:

// Get comments
$comments = Db::getInstance()->executeS('
    SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` = '.(int)$this->product->id.'
    ORDER BY `date_add` DESC');

// Assign comments and product object
$this->context->smarty->assign('comments', $comments);
$this->context->smarty->assign('product', $this->product);

Once complete, we will display the product name by changing the h1 title:

<h1>{l s='Comments on product' mod='mymodcomments'} "{$product->name}"</h1>

If you refresh your page, you should now see the product name displayed. I won't explain this part since it's exactly the same HTML code we used in the displayProductTabContent.tpl template. At this point, the comments should appear without the CSS style; do not panic, just go to the next section of this article.

Including CSS and JS media in the controller

As you can see, the comments are now displayed. However, you are probably asking yourself why the CSS style hasn't been applied properly. If you look back at your module class, you will see that it is the hookDisplayProductTab hook in the product page that includes the CSS and JS files. The problem is that we are not on a product page here, so we have to include them on this page. To do so, we will create a method named setMedia in our controller and add the CSS and JS files (as we did in the hookDisplayProductTab hook). It will override the default setMedia method contained in the FrontController class. Since this method includes general CSS and JS files used by PrestaShop, it is very important to call the setMedia parent method in our override:

public function setMedia()
{
    // We call the parent method
    parent::setMedia();

    // Save the module path in a variable
    $this->path = __PS_BASE_URI__.'modules/mymodcomments/';

    // Include the module CSS and JS files needed
    $this->context->controller->addCSS($this->path.'views/css/starrating.css', 'all');
    $this->context->controller->addJS($this->path.'views/js/starrating.js');
    $this->context->controller->addCSS($this->path.'views/css/mymodcomments.css', 'all');
    $this->context->controller->addJS($this->path.'views/js/mymodcomments.js');
}

If you refresh your browser, the comments should now appear well formatted. In an attempt to improve the display, we will just add the date of the comment beside the author's name. Just replace <p>{$comment.firstname} {$comment.lastname|substr:0:1}.</p> in your list.tpl template with this line:

<div>{$comment.firstname} {$comment.lastname|substr:0:1}. <small>{$comment.date_add|substr:0:10}</small></div>

You can also replace the same line in the displayProductTabContent.tpl template if you want. If you want more information on how Smarty modifiers work, such as the substr I used for the date, you can check the official Smarty documentation.
Adding a pagination system

Your controller page is now fully working. However, if one of your products has thousands of comments, the display won't be quick. We will add a pagination system to handle this case.

First of all, in the initList method, we need to set the number of comments per page and find out how many comments are associated with the product:

// Get number of comments
$nb_comments = Db::getInstance()->getValue('
    SELECT COUNT(`id_product`)
    FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` = '.(int)$this->product->id);

// Init
$nb_per_page = 10;

By default, I have set the number per page to 10, but you can set any number you want. The value is stored in a variable so it can easily be changed later if needed. Now we just have to calculate how many pages there will be:

$nb_pages = ceil($nb_comments / $nb_per_page);

Also, set the page the visitor is on:

$page = 1;
if (Tools::getValue('page') != '')
    $page = (int)$_GET['page'];

Now that we have this data, we can generate the SQL limit and use it in the comments DB request so as to display the 10 comments corresponding to the page the visitor is on:

$limit_start = ($page - 1) * $nb_per_page;
$limit_end = $nb_per_page;
$comments = Db::getInstance()->executeS('
    SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` = '.(int)$this->product->id.'
    ORDER BY `date_add` DESC
    LIMIT '.(int)$limit_start.','.(int)$limit_end);

If you refresh your browser, you should only see the last 10 comments displayed. To conclude, we just need to add links to the different pages for navigation. First, assign the page the visitor is on and the total number of pages to Smarty:

$this->context->smarty->assign('page', $page);
$this->context->smarty->assign('nb_pages', $nb_pages);

Then in the list.tpl template, we will display numbers in a list from 1 to the total number of pages. On each number, we will add a link with the getModuleLink method we saw earlier, with an additional parameter, page:

<ul class="pagination">
{for $count=1 to $nb_pages}
  {assign var=params value=['module_action' => 'list', 'id_product' => $smarty.get.id_product, 'page' => $count]}
  <li>
    <a href="{$link->getModuleLink('mymodcomments', 'comments', $params)}">
      <span>{$count}</span>
    </a>
  </li>
{/for}
</ul>

To make the pagination clearer for the visitor, we can use the native CSS class to indicate the page the visitor is on:

{if $page ne $count}
  <li>
    <a href="{$link->getModuleLink('mymodcomments', 'comments', $params)}">
      <span>{$count}</span>
    </a>
  </li>
{else}
  <li class="active current"><span><span>{$count}</span></span></li>
{/if}

Your pagination should now be fully working.
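For reference, here is the initList method with the pagination pieces in place; this is just the snippets above assembled in order, not new functionality:

protected function initList()
{
    // Count the comments attached to the current product
    $nb_comments = Db::getInstance()->getValue('
        SELECT COUNT(`id_product`)
        FROM `'._DB_PREFIX_.'mymod_comment`
        WHERE `id_product` = '.(int)$this->product->id);

    $nb_per_page = 10;
    $nb_pages = ceil($nb_comments / $nb_per_page);

    $page = 1;
    if (Tools::getValue('page') != '')
        $page = (int)$_GET['page'];

    // Fetch only the slice of comments for the requested page
    $limit_start = ($page - 1) * $nb_per_page;
    $limit_end = $nb_per_page;
    $comments = Db::getInstance()->executeS('
        SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
        WHERE `id_product` = '.(int)$this->product->id.'
        ORDER BY `date_add` DESC
        LIMIT '.(int)$limit_start.','.(int)$limit_end);

    $this->context->smarty->assign('comments', $comments);
    $this->context->smarty->assign('product', $this->product);
    $this->context->smarty->assign('page', $page);
    $this->context->smarty->assign('nb_pages', $nb_pages);

    $this->setTemplate('list.tpl');
}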
Creating routes for a module's controller

At the beginning of this article, we chose to use the getModuleLink method to keep compatibility with the Friendly URL option of PrestaShop. Let's enable this option in the SEO & URLs section under Preferences. Now go to your product page and look at the target of the See all comments link; it should have changed from /index.php?module_action=list&id_product=1&fc=module&module=mymodcomments&controller=comments&id_lang=1 to /en/module/mymodcomments/comments?module_action=list&id_product=1.

The result is nice, but it is not really a friendly URL yet. The ISO code at the beginning of the URL appears only if you have enabled several languages; if you have only one language enabled, the ISO code will not appear in your case.

Since PrestaShop 1.5.3, you can create specific routes for your module's controllers. To do so, you have to attach your module to the ModuleRoutes hook.

In your module's install method in mymodcomments.php, add the registerHook method for ModuleRoutes:

// Register hooks
if (!$this->registerHook('displayProductTabContent') ||
    !$this->registerHook('displayBackOfficeHeader') ||
    !$this->registerHook('ModuleRoutes'))
    return false;

Don't forget: you will have to uninstall and reinstall your module if you want it to be attached to this hook. If you don't want to uninstall your module (because you don't want to lose all the comments you filled in), you can go to the Positions section under the Modules section of your back office and hook it manually.

Now we have to create the corresponding hook method in the module's class. This method will return an array with all the routes we want to add. The array is a bit complex to explain, so let me write an example first:

public function hookModuleRoutes()
{
    return array(
        'module-mymodcomments-comments' => array(
            'controller' => 'comments',
            'rule' => 'product-comments{/:module_action}{/:id_product}/page{/:page}',
            'keywords' => array(
                'id_product' => array('regexp' => '[\d]+', 'param' => 'id_product'),
                'page' => array('regexp' => '[\d]+', 'param' => 'page'),
                'module_action' => array('regexp' => '[\w]+', 'param' => 'module_action'),
            ),
            'params' => array(
                'fc' => 'module',
                'module' => 'mymodcomments',
                'controller' => 'comments'
            )
        )
    );
}

The array can contain several routes. The naming convention for the array key of a route is module-[ModuleName]-[ModuleControllerName]. So in our case, the key will be module-mymodcomments-comments. In the array, you have to set the following:

The controller; in our case, it is comments.

The construction of the route (the rule parameter). You can use all the parameters you passed in the getModuleLink method by using the {/:YourParameter} syntax. PrestaShop will automatically add / before each dynamic parameter. In our case, I chose to construct the route this way (but you can change it if you want): product-comments{/:module_action}{/:id_product}/page{/:page}

The keywords array corresponding to the dynamic parameters. For each dynamic parameter, you have to set the regexp that retrieves it from the URL (basically, [\d]+ for integer values and [\w]+ for string values) and the parameter name.

The parameters associated with the route. In the case of a module's front controller, it will always be the same three parameters: the fc parameter set to the fixed value module, the module parameter set to the module name, and the controller parameter set to the filename of the module's controller.

Very important: PrestaShop now expects a page parameter to build the link. To avoid fatal errors, you will have to set the page parameter to 1 in your getModuleLink parameters in the displayProductTabContent.tpl template:

{assign var=params value=['module_action' => 'list', 'id_product' => $smarty.get.id_product, 'page' => 1]}

Once complete, if you go to a product page, the target of the See all comments link should now be:

/en/product-comments/list/1/page/1

It's really better, but we can improve it a little more by setting the name of the product in the URL.
In the assignProductTabContent method of your module, we will load the product object and assign it to Smarty:

$product = new Product((int)$id_product, false, $this->context->cookie->id_lang);
$this->context->smarty->assign('product', $product);

This way, in the displayProductTabContent.tpl template, we will be able to add the product's rewritten link to the parameters of the getModuleLink method (do not forget to add it in the list.tpl template too):

{assign var=params value=['module_action' => 'list', 'product_rewrite' => $product->link_rewrite, 'id_product' => $smarty.get.id_product, 'page' => 1]}

We can now update the rule of the route with the product's link_rewrite variable:

'rule' => 'product-comments{/:module_action}{/:product_rewrite}{/:id_product}/page{/:page}',

Do not forget to add the product_rewrite string in the keywords array of the route:

'product_rewrite' => array('regexp' => '[\w-_]+', 'param' => 'product_rewrite'),

If you refresh your browser, the link should look like this now:

/en/product-comments/list/tshirt-doctor-who/1/page/1

Nice, isn't it?

Installing overrides with modules

As we saw in the introduction of this article, sometimes hooks are not sufficient to meet the needs of developers; hooks can't alter the default process of PrestaShop. We could add code to the core classes; however, this is not recommended, as all those core changes will be erased when PrestaShop is updated using the autoupgrade module (even a manual upgrade would be difficult). That's where overrides take the stage.

Creating the override class

Installing new object model and controller overrides in PrestaShop is very easy. To do so, you have to create an override directory in the root of your module's directory. Then, you just have to place your override files respecting the path of the original file that you want to override. When you install the module, PrestaShop will automatically move the override to the overrides directory of PrestaShop.

In our case, we will override the find method of the /classes/Search.php class to display the grade and the number of comments on the product list. So we just have to create the Search.php file in /modules/mymodcomments/override/classes/Search.php, and fill it with:

<?php
class Search extends SearchCore
{
    public static function find($id_lang, $expr, $page_number = 1,
        $page_size = 1, $order_by = 'position', $order_way = 'desc',
        $ajax = false, $use_cookie = true, Context $context = null)
    {
    }
}

In this method, first of all, we will call the parent method to get the products list and return it:

// Call parent method
$find = parent::find($id_lang, $expr, $page_number, $page_size,
    $order_by, $order_way, $ajax, $use_cookie, $context);

// Return products
return $find;

We want to display the information (grade and number of comments) on the products list. So, between the find method call and the return statement, we will add some lines of code. First, we will check whether $find contains products. The find method can return an empty array when no products match the search. In this case, we don't have to change the way this method works.
We also have to check whether the mymodcomments module is installed (if the override is being used, the module is most likely installed, but as I said, it's just for safety):

if (isset($find['result']) && !empty($find['result']) &&
    Module::isInstalled('mymodcomments'))
{
}

If we enter this condition, we will list the product identifiers returned by the parent find method:

// List id product
$products = $find['result'];
$id_product_list = array();
foreach ($products as $p)
    $id_product_list[] = (int)$p['id_product'];

Next, we will retrieve the grade average and number of comments for the products in the list:

// Get grade average and nb comments for products in list
$grades_comments = Db::getInstance()->executeS('
    SELECT `id_product`, AVG(`grade`) as grade_avg,
        count(`id_mymod_comment`) as nb_comments
    FROM `'._DB_PREFIX_.'mymod_comment`
    WHERE `id_product` IN ('.implode(',', $id_product_list).')
    GROUP BY `id_product`');

Finally, fill in the $products array with the data (grades and comments) corresponding to each product:

// Associate grade and nb comments with product
foreach ($products as $kp => $p)
    foreach ($grades_comments as $gc)
        if ($gc['id_product'] == $p['id_product'])
        {
            $products[$kp]['mymodcomments']['grade_avg'] = round($gc['grade_avg']);
            $products[$kp]['mymodcomments']['nb_comments'] = $gc['nb_comments'];
        }
$find['result'] = $products;

Now, as we saw at the beginning of this section, the overrides of the module are installed when you install the module. So you will have to uninstall and reinstall your module. Once this is done, you can check the override contained in your module; the content of /modules/mymodcomments/override/classes/Search.php should have been copied to /override/classes/Search.php. If an override of the class already exists, PrestaShop will try to merge it by adding the methods you want to override to the existing override class.

Once the override is added by your module, PrestaShop should have regenerated the cache/class_index.php file (which contains the path of every core class and controller), and the path of the Search class should have changed. Open the cache/class_index.php file and search for 'Search'; the content of this array entry should now be:

'Search' => array(
    'path' => 'override/classes/Search.php',
    'type' => 'class',
),

If that is not the case, it probably means the permissions of this file are wrong and PrestaShop could not regenerate it. To fix this, just delete the file manually and refresh any page of your PrestaShop. The file will be regenerated and the new path will appear.

Since you uninstalled and reinstalled the module, all your comments should have been deleted. So take two minutes to fill in one or two comments on a product. Then search for this product. As you must have noticed, nothing has changed. The data is assigned to Smarty, but not used by the template yet.

To avoid deletion of comments each time you uninstall the module, you should comment out the loadSQLFile call in the uninstall method of mymodcomments.php. We will uncomment it once we have finished working with the module.

Editing the template file to display grades on the products list

In a perfect world, you should avoid using overrides. In this case, we could have used the displayProductListReviews hook, but I just wanted to show you a simple example with an override. Moreover, this hook has existed only since PrestaShop 1.6, so it would not work on PrestaShop 1.5.
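For reference, the assembled override ends up looking like the following sketch; this is just the snippets above consolidated into the complete find method:

<?php
class Search extends SearchCore
{
    public static function find($id_lang, $expr, $page_number = 1,
        $page_size = 1, $order_by = 'position', $order_way = 'desc',
        $ajax = false, $use_cookie = true, Context $context = null)
    {
        // Call the core implementation first
        $find = parent::find($id_lang, $expr, $page_number, $page_size,
            $order_by, $order_way, $ajax, $use_cookie, $context);

        if (isset($find['result']) && !empty($find['result']) &&
            Module::isInstalled('mymodcomments'))
        {
            // Collect the identifiers of the products returned by the search
            $products = $find['result'];
            $id_product_list = array();
            foreach ($products as $p)
                $id_product_list[] = (int)$p['id_product'];

            // Fetch grade average and comment count in a single query
            $grades_comments = Db::getInstance()->executeS('
                SELECT `id_product`, AVG(`grade`) as grade_avg,
                    count(`id_mymod_comment`) as nb_comments
                FROM `'._DB_PREFIX_.'mymod_comment`
                WHERE `id_product` IN ('.implode(',', $id_product_list).')
                GROUP BY `id_product`');

            // Attach the figures to the matching products
            foreach ($products as $kp => $p)
                foreach ($grades_comments as $gc)
                    if ($gc['id_product'] == $p['id_product'])
                    {
                        $products[$kp]['mymodcomments']['grade_avg'] = round($gc['grade_avg']);
                        $products[$kp]['mymodcomments']['nb_comments'] = $gc['nb_comments'];
                    }
            $find['result'] = $products;
        }

        return $find;
    }
}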
Now, we will have to edit the product-list.tpl template of the active theme (by default, it is /themes/default-bootstrap/), so the module won't be a turnkey module anymore. A merchant who installs this module will have to edit this template manually to get this feature.

In the product-list.tpl template, just after the short description, check whether the $product.mymodcomments variable exists (to test if there are comments on the product), and then display the grade average and the number of comments:

{if isset($product.mymodcomments)}
<p>
  <b>{l s='Grade:'}</b> {$product.mymodcomments.grade_avg}/5<br/>
  <b>{l s='Number of comments:'}</b> {$product.mymodcomments.nb_comments}
</p>
{/if}

Here is what the products list should look like now:

Creating a new method in a native class

In our case, we have overridden an existing method of a PrestaShop class. But we could also have added a new method to an existing class. For example, we could have added a method named getComments to the Product class:

<?php
class Product extends ProductCore
{
    public function getComments($limit_start, $limit_end = false)
    {
        $limit = (int)$limit_start;
        if ($limit_end)
            $limit = (int)$limit_start.','.(int)$limit_end;
        $comments = Db::getInstance()->executeS('
            SELECT * FROM `'._DB_PREFIX_.'mymod_comment`
            WHERE `id_product` = '.(int)$this->id.'
            ORDER BY `date_add` DESC
            LIMIT '.$limit);
        return $comments;
    }
}

This way, you could easily access the product comments anywhere in the code with just an instance of the Product class.

Summary

This article taught us about the main design patterns of PrestaShop and explained how to use them to construct a well-organized application.

Resources for Article:

Further resources on this subject:

Django 1.2 E-commerce: Generating PDF Reports from Python using ReportLab [Article]
Customizing PrestaShop Theme Part 2 [Article]
Django 1.2 E-commerce: Data Integration [Article]
Setting up a Complete Django E-commerce store in 30 minutes

Packt
21 May 2010
7 min read
In order to demonstrate Django's rapid development potential, we will begin by constructing a simple, but fully-featured, e-commerce store. The goal is to be up and running with a product catalog and products for sale, including a simple payment processing interface, in about half-an-hour. If this seems ambitious, remember that Django offers a lot of built-in shortcuts for the most common web-related development tasks. We will be taking full advantage of these, and there will be side discussions of their general use.

In addition to building our starter storefront, this article aims to demonstrate some other Django tools and techniques. In this article by Jesse Legg, author of Django 1.2 e-commerce, we will:

Create our Django Product model to take advantage of the automatic admin tool
Build a flexible but easy to use categorization system, to better organize our catalog of products
Utilize Django's generic view framework to expose a quick set of views on our catalog data
Finally, create a simple template for selling products through the Google Checkout API

Before we begin, let's take a moment to check our project setup. Our project layout includes two directories: one for files specific to our personal project (settings, URLs, and so on), and the other for our collection of e-commerce Python modules (coleman). This latter location is where the bulk of the code will live. If you have downloaded the source code from the Packt website, the contents of the archive download represent everything in this second location.

Designing a product catalog

The starting point of our e-commerce application is the product catalog. In the real world, businesses may produce multiple catalogs for mutually exclusive or overlapping subsets of their products. Some examples are: fall and spring catalogs, catalogs based on a genre or sub-category of product such as catalogs for differing kinds of music (for example, rock versus classical), and many other possibilities. In some cases a single catalog may suffice, but allowing for multiple catalogs is a simple enhancement that will add flexibility and robustness to our application.

As an example, we will imagine a fictitious food and beverage company, CranStore.com, that specializes in cranberry products: cranberry drinks, food, and desserts. In addition, to promote tourism at their cranberry bog, they sell numerous gift items, including t-shirts, hats, mouse pads, and the like. We will consider this business to illustrate examples as they relate to the online store we are building.

We will begin by defining a catalog model called Catalog. The basic model structure will look like this:

class Catalog(models.Model):
    name = models.CharField(max_length=255)
    slug = models.SlugField(max_length=150)
    publisher = models.CharField(max_length=300)
    description = models.TextField()
    pub_date = models.DateTimeField(default=datetime.now)

This is potentially the simplest model we will create. It contains only five very simple fields. But it is a good starting point for a short discussion about Django model design. Notice that we have not included any relationships to other models here. For example, there is no products ManyToManyField. New Django developers tend to overlook simple design decisions such as the one shown previously, but the ramifications are quite important. The first reason for this design is a purely practical one.
Using Django's built-in admin tool can be a pleasure or a burden, depending on the design of your models. If we were to include a products field in the Catalog design, it would be a ManyToManyField represented in the admin as an enormous multiple-select HTML widget. This is practically useless in cases where there could be thousands of possible selections. If, instead, we attach a ForeignKey to Catalog on a Product model (which we will build shortly), we instantly increase the usability of Django's automatic admin tool. Instead of a select-box where we must shift-click to choose multiple products, we have a much simpler HTML drop-down interface with significantly fewer choices. This should ultimately increase the usability of the admin for our users.

For example, CranStore.com sells lots of t-shirts during the fall when cranberries are ready to harvest and tourism spikes. They may wish to run a special catalog of touristy products on their website during this time. For the rest of the year, they sell a smaller selection of items online. The developers at CranStore create two catalogs: one is named Fall Collection and the other is called Standard Collection. When creating product information, the marketing team can decide which catalog an individual product belongs to by simply selecting it from the product editing page. This is more intuitive than selecting individual products out of a giant list of all products from the catalog admin page.

Secondly, designing the Catalog model this way prevents potential "bloat" from creeping into our models. Imagine that CranStore decides to start printing paper versions of their catalogs and mailing them to a subscriber list. This would be a second potential ManyToManyField on our Catalog model, a field called subscribers. As you can see, this pattern could repeat with each new feature CranStore decides to implement. By keeping models as simple as possible, we prevent all kinds of needless complexity. In addition, we also adhere to a basic Django design principle, that of "loose coupling". At the database level, the tables Django generates will be very similar regardless of where our ManyToManyField lives. Usually the only difference will be in the table name. Thus it generally makes more sense to focus on the practical aspects of Django model design. Django's excellent reverse relationship feature also allows a great deal of flexibility when it comes time to use the ORM to access our data.

Model design is difficult, and planning up-front can pay great dividends later. Ideally, we want to take advantage of the automatic, built-in features that make Django so great. The admin tool is a huge part of this. Anyone who has had to build a CRUD interface by hand so that non-developers can manage content should recognize the power of this feature. In many ways it is Django's "killer app".
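Since the admin tool figures so heavily in this design, it is worth noting how little code it takes to switch on. A minimal sketch, assuming the models live in the coleman package mentioned earlier (the import path is my assumption, not code from the book), using Django 1.2-era syntax:

# admin.py -- hypothetical registration module for our models
from django.contrib import admin
from coleman.models import Catalog, Product  # assumed location of the models

# Expose both models with the default admin options
admin.site.register(Catalog)
admin.site.register(Product)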
Creating the product model

Finally, let's implement our product model. We will start with a very basic set of fields that represent common and shared properties amongst all the products we're likely to sell: things like a picture of the item, its name, a short description, and pricing information.

class Product(models.Model):
    name = models.CharField(max_length=300)
    slug = models.SlugField(max_length=150)
    description = models.TextField()
    photo = models.ImageField(upload_to='product_photo', blank=True)
    manufacturer = models.CharField(max_length=300, blank=True)
    price_in_dollars = models.DecimalField(max_digits=6, decimal_places=2)

Most e-commerce applications will need to capture many additional details about their products. We will add the ability to create arbitrary sets of attributes and add them as details to our products later in this article. For now, let's assume that these six fields are sufficient.

A few notes about this model: first, we have used a DecimalField to represent the product's price. Django makes it relatively simple to implement a custom field, and such a field may be appropriate here. But for now we'll keep it simple and use a plain, built-in DecimalField to represent currency values.

Notice, too, the way we're storing the manufacturer information as a plain CharField. Depending on your application, it may be beneficial to build a Manufacturer model and convert this field to a ForeignKey.

Lastly, you may have realized by now that there is no connection to a Catalog model, either by a ForeignKey or a ManyToManyField. Earlier we discussed the placement of this field in terms of whether it belonged to the Catalog or the Product model and decided, for several reasons, that the Product was the better place. We will be adding a ForeignKey to our Product model, but not directly to the Catalog. In order to support categorization of products within a catalog, we will be creating a new model in the next section and using that as the connection point for our products.
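The article stops short of showing that categorization model here, but to make the ForeignKey discussion concrete, one plausible shape for it is sketched below. The Category model and its field list are my assumption, not the book's code; only the overall pattern (Product pointing at a Category, which points at a Catalog) follows from the text above. Note the Django 1.2-era ForeignKey syntax without an on_delete argument:

# Hypothetical categorization model connecting products to catalogs
class Category(models.Model):
    catalog = models.ForeignKey(Catalog)   # each category belongs to one catalog
    name = models.CharField(max_length=300)
    slug = models.SlugField(max_length=150)
    description = models.TextField(blank=True)

# Product would then point at a Category rather than directly at a Catalog:
# category = models.ForeignKey(Category)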
Configuring Manage Out to DirectAccess Clients

Packt
14 Oct 2013
14 min read
In this article by Jordan Krause, the author of the book Microsoft DirectAccess Best Practices and Troubleshooting, we will have a look at how Manage Out is configured to DirectAccess clients. DirectAccess is obviously a wonderful technology from the user's perspective. There is literally nothing that they have to do to connect to company resources; it just happens automatically whenever they have Internet access. What isn't talked about nearly as often is the fact that DirectAccess is possibly of even greater benefit to the IT department. Because DirectAccess is so seamless and automatic, your Group Policy settings, patches, scripts, and everything that you want to use to manage and manipulate those client machines is always able to run. You no longer have to wait for the user to launch a VPN or come into the office for their computer to be secured with the latest policies. You no longer have to worry about laptops being off the network for weeks at a time, and coming back into the network after having been connected to dozens of public hotspots while someone was on vacation with it. While many of these management functions work right out of the box with a standard DirectAccess configuration, there are some functions that will need a couple of extra steps to get them working properly. That is our topic of discussion for this article. We are going to cover the following topics:

Pulls versus pushes
What does Manage Out have to do with IPv6
Creating a selective ISATAP environment
Setting up client-side firewall rules
RDP to a DirectAccess client
No ISATAP with multisite DirectAccess

Pulls versus pushes

Often when thinking about management functions, we think of them as the software or settings that are being pushed out to the client computers. This is actually not true in many cases. A lot of management tools are initiated on the client side, and so their method of distributing settings and patches is actually a client pull. A pull is a request that has been initiated by the client, and in this case, the server is simply responding to that request. In the DirectAccess world, this kind of request is handled very differently than an actual push, which would be any case where the internal server or resource creates the initial outbound communication with the client, a true outbound initiation of packets.

Pulls typically work just fine over DirectAccess right away. For example, Group Policy processing is initiated by the client. When a laptop decides that it's time for a Group Policy refresh, it reaches out to Active Directory and says "Hey AD, give me my latest stuff". The Domain Controllers then simply reply to that request, and the settings are pulled down successfully. This works all day, every day over DirectAccess. Pushes, on the other hand, require some special considerations. This scenario is what we commonly refer to as DirectAccess Manage Out, and this does not work by default in a stock DirectAccess implementation.
For example, in our previous Group Policy processing scenario, the client computer calls out for Active Directory, and those packets traverse the DirectAccess tunnels using IPv6. When the packets get to the DirectAccess server, it sees that the Domain Controller is IPv4, so it translates those IPv6 packets into IPv4 and sends them on their merry way. Domain Controller receives the said IPv4 packets, and responds in kind back to the DirectAccess server. Since there is now an active thread and translation running on the DirectAccess server for that communication, it knows that it has to take those IPv4 packets coming back (as a response) from the Domain Controller and spin them back into IPv6 before sending across the IPsec tunnels back to the client. This all works wonderfully and there is absolutely no configuration that you need to do to accomplish this behavior. However, what if you wanted to send packets out to a DirectAccess client computer from inside the network? One of the best examples of this is a Helpdesk computer trying to RDP into a DirectAccess-connected computer. To accomplish this, the Helpdesk computer needs to have IPv6 routability to the DirectAccess client computer. Let's walk through the flow of packets to see why this is necessary. First of all, if you have been using DirectAccess for any duration of time, you might have realized by now that the client computers register themselves in DNS with their DirectAccess IP addresses when they connect. This is a normal behavior, but what may not look "normal" to you is that those records they are registering are AAAA (IPv6) records. Remember, all DirectAccess traffic across the internet is IPv6, using one of these three transition technologies to carry the packets: 6to4, Teredo, or IP-HTTPS. Therefore, when the clients connect, whatever transition tunnel is established has an IPv6 address on the adapter (you can see it inside ipconfig /all on the client), and those addresses will register themselves in DNS, assuming your DNS is configured to allow it. When the Helpdesk personnel types CLIENT1 in their RDP client software and clicks on connect, it is going to reach out to DNS and ask, "What is the IP address of CLIENT1?" One of two things is going to happen. If that Helpdesk computer is connected to an IPv4-only network, it is obviously only capable of transmitting IPv4 packets, and DNS will hand them the DirectAccess client computer's A (IPv4) record, from the last time the client was connected inside the office. Routing will fail, of course, because CLIENT1 is no longer sitting on the physical network. The following screenshot is an example of pinging a DirectAccess connected client computer from an IPv4 network: How to resolve this behavior? We need to give that Helpdesk computer some form of IPv6 connectivity on the network. If you have a real, native IPv6 network running already, you can simply tap into it. Each of your internal machines that need this outbound routing, as well as the DirectAccess server or servers, all need to be connected to this network for it to work. However, I find out in the field that almost nobody is running any kind of IPv6 on their internal networks, and they really aren't interested in starting now. This is where the Intra-Site Automatic Tunnel Addressing Protocol, more commonly referred to as ISATAP, comes into play. You can think of ISATAP as a virtual IPv6 cloud that runs on top of your existing IPv4 network. 
It enables computers inside your network, like that Helpdesk machine, to be able to establish an IPv6 connection with an ISATAP router. When this happens, that Helpdesk machine will get a new network adapter, visible via ipconfig / all, named ISATAP, and it will have an IPv6 address. Yes, this does mean that the Helpdesk computer, or any machine that needs outbound communications to the DirectAccess clients, has to be capable of talking IPv6, so this typically means that those machines must be Windows 7, Windows 8, Server 2008, or Server 2012. What if your switches and routers are not capable of IPv6? No problem. Similar to the way that 6to4, Teredo, and IP-HTTPS take IPv6 packets and wrap them inside IPv4 so they can make their way across the IPv4 internet, ISATAP also takes IPv6 packets and encapsulates them inside IPv4 before sending them across the physical network. This means that you can establish this ISATAP IPv6 network on top of your IPv4 network, without needing to make any infrastructure changes at all. So, now I need to go purchase an ISATAP router to make this happen? No, this is the best part. Your DirectAccess server is already an ISATAP router; you simply need to point those internal machines at it. Creating a selective ISATAP environment All of the Windows operating systems over the past few years have ISATAP client functionality built right in. This has been the case since Vista, I believe, but I have yet to encounter anyone using Vista in a corporate environment, so for the sake of our discussion, we are generally talking about Windows 7, Windows 8, Server 2008, and Server 2012. For any of these operating systems, out of the box all you have to do is give it somewhere to resolve the name ISATAP, and it will go ahead and set itself up with a connection to that ISATAP router. So, if you wanted to immediately enable all of your internal machines that were ISATAP capable to suddenly be ISATAP connected, all you would have to do is create a single host record in DNS named ISATAP and point it at the internal IP address of your DirectAccess server. To get that to work properly, you would also have to tweak DNS so that ISATAP was no longer part of the global query block list, but I'm not even going to detail that process because my emphasis here is that you should not set up your environment this way. Unfortunately, some of the step-by-step guides that are available on the web for setting up DirectAccess include this step. Even more unfortunately, if you have ever worked with UAG DirectAccess, you'll remember on the IP address configuration screen that the GUI actually told you to go ahead and set ISATAP up this way. Please do not create a DNS host record named ISATAP! If you have already done so, please consider this article to be a guide on your way out of danger. The primary reason why you should stay away from doing this is because Windows prefers IPv6 over IPv4. Once a computer is setup with connection to an ISATAP router, it receives an IPv6 address which registers itself in DNS, and from that point onward whenever two ISATAP machines communicate with each other, they are using IPv6 over the ISATAP tunnel. This is potentially problematic for a couple of reasons. First, all ISATAP traffic default routes through the ISATAP router for which it is configured, so your DirectAccess server is now essentially the default gateway for all of these internal computers. This can cause performance problems and even network flooding. 
The second reason is that because these packets are now IPv6, even though they are wrapped up inside IPv4, the tools you have inside the network that you might be using to monitor internal traffic are not going to be able to see this traffic, at least not in the same capacity as it would do for normal IPv4 traffic. It is in your best interests that you do not follow this global approach for implementing ISATAP, and instead take the slightly longer road and create what I call a "Selective ISATAP environment", where you have complete control over which machines are connected to the ISATAP network, and which ones are not. Many DirectAccess installs don't require ISATAP at all. Remember, this is only used for those instances where you need true outbound reach to the DirectAccess clients. I recommend installing DirectAccess without ISATAP first, and test all of your management tools. If they work without ISATAP, great! If they don't, then you can create your selective ISATAP environment. To set ourselves up for success, we need to create a simple Active Directory security group, and a GPO. The combination of these things is going to allow us to decisively grant or deny access to the ISATAP routing features on the DirectAccess server. Creating a security group and DNS record First, let's create a new security group in Active Directory. This is just a normal group like any other, typically a global or universal, whichever you prefer. This group is going to contain the computer accounts of the internal computers to which we want to give that outbound reaching capability. So, typically the computer accounts that we will eventually add into this group are Helpdesk computers, SCCM servers, maybe a management "jump box" terminal server, that kind of thing. To keep things intuitive, let's name the group DirectAccess – ISATAP computers or something similar. We will also need a DNS host record created. For obvious reasons we don't want to call this ISATAP, but perhaps something such as Contoso_ISATAP, swapping your name in, of course. This is just a regular DNS A record, and you want to point it at the internal IPv4 address of the DirectAccess server. If you are running a clustered array of DirectAccess servers that are configured for load balancing, then you will need multiple DNS records. All of the records have the same name,Contoso_ISATAP, and you point them at each internal IP address being used by the cluster. So, one gets pointed at the internal Virtual IP (VIP), and one gets pointed at each of the internal Dedicated IPs (DIP). In a two-node cluster, you will have three DNS records for Contoso_ISATAP. Creating the GPO Now go ahead and follow these steps to create a new GPO that is going to contain the ISATAP connection settings: Create a new GPO, name it something such as DirectAccess – ISATAP settings. Place the GPO wherever it makes sense to you, keeping in mind that these settings should only apply to the ISATAP group that we created. One way to manage the distribution of these settings is a strategically placed link. Probably the better way to handle distribution of these settings is to reflect the same approach that the DirectAccess GPOs themselves take. This would mean linking this new GPO at a high level, like the top of the domain, and then using security filtering inside the GPO settings so that it only applies to the group that we just created. This way you ensure that in the end the only computers to receive these settings are the ones that you add to your ISATAP group. 
I typically take the Security Filtering approach, because it closely reflects what DirectAccess itself does with GPO filtering. So, create and link the GPO at a high level, and then inside the GPO properties, go ahead and add the group (and only the group, remove everything else) to the Security Filtering section, like what is shown in the following screenshot: Then move over to the Details tab and set the GPO Status to User configuration settings disabled. Configuring the GPO Now that we have a GPO which is being applied only to our special ISATAP group that we created, let's give it some settings to apply. What we are doing with this GPO is configuring those computers which we want to be ISATAP-connected with the ISATAP server address with which they need to communicate , which is the DNS name that we created for ISATAP. First, edit the GPO and set the ISATAP Router Name by configuring the following setting: Computer Configuration | Policies | Administrative Templates | Network | TCPIP Settings | IPv6 Transition Technologies | ISATAP Router Name = Enabled (and populate your DNS record). Second, in the same location within the GPO, we want to enable your ISATAP state with the following configuration: Computer Configuration | Policies | Administrative Templates | Network | TCPIP Settings | IPv6 Transition Technologies | ISATAP State = Enabled State. Adding machines to the group All of our settings are now squared away, time to test! Start by taking a single computer account of a computer inside your network from which you want to be able to reach out to DirectAccess client computers, and add the computer account to the group that we created. Perhaps pause for a few minutes to ensure Active Directory has a chance to replicate, and then simply restart that computer. The next time it boots, it'll grab those settings from the GPO, and reach out to the DirectAccess server acting as the ISATAP router, and have an ISATAP IPv6 connection. If you generate an ipconfig /all on that internal machine, you will see the ISATAP adapter and address now listed as shown in the following screenshot: And now if you try to ping the name of a client computer that is connected via DirectAccess, you will see that it now resolves to the IPv6 record. Perfect! But wait, why are my pings timing out? Let's explore that next.
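Before we dig into that, note that you can inspect (and even manually set) the ISATAP client configuration from an elevated command prompt. The following netsh syntax is standard on Windows 7/8 and Server 2008/2012; the router name shown is the example DNS record created above, so substitute your own:

rem Check the currently configured ISATAP router and state
netsh interface isatap show router
netsh interface isatap show state

rem What the GPO effectively does on each member of the ISATAP group
netsh interface isatap set router Contoso_ISATAP enabled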
IBM WebSphere MQ commands

Packt
25 Aug 2010
10 min read
After reading this article, you will not be a WebSphere MQ expert, but you will have brought your knowledge of MQ to a level where you can have a sensible conversation with your site's MQ administrator about what the Q replication requirements are.

An introduction to MQ

In a nutshell, WebSphere MQ is an assured delivery mechanism, which consists of queues managed by Queue Managers. We can put messages onto, and retrieve messages from, queues, and the movement of messages between queues is facilitated by components called Channels and Transmission Queues. There are a number of fundamental points that we need to know about WebSphere MQ:

All objects in WebSphere MQ are case sensitive
We cannot read messages from a Remote Queue (only from a Local Queue)
We can only put a message onto a Local Queue (not a Remote Queue)

It does not matter at this stage if you do not understand the above points; all will become clear in the following sections. There are some standards regarding WebSphere MQ object names:

Queue names, processes, and Queue Manager names can have a maximum length of 48 characters
Channel names can have a maximum length of 20 characters
The following characters are allowed in names: A-Z, a-z, 0-9, and the . / _ % symbols
There is no implied structure in a name; dots are there for readability

Now let's move on to look at MQ queues in a little more detail.

MQ queues

MQ queues can be thought of as conduits to transport messages between Queue Managers. There are four different types of MQ queues and one related object. The four different types of queues are: Local Queue (QL), Remote Queue (QR), Transmission Queue (TQ), and Dead Letter Queue, and the related object is a Channel (CH). One of the fundamental processes of WebSphere MQ is the ability to move messages between Queue Managers. Let's take a high-level look at how messages are moved, as shown in the following diagram:

When the application Appl1 wants to send a message to application Appl2, it opens a queue; the local Queue Manager (QM1) determines whether it is a Local Queue or a Remote Queue. When Appl1 issues an MQPUT command to put a message onto the queue, then if the queue is local, the Queue Manager puts the message directly onto that queue. If the queue is a Remote Queue, then the Queue Manager puts the message onto a Transmission Queue. The Transmission Queue sends the message using the Sender Channel on QM1 to the Receiver Channel on the remote Queue Manager (QM2). The Receiver Channel puts the message onto a Local Queue on QM2. Appl2 issues an MQGET command to retrieve the message from this queue.

Now let's move on to look at the queues used by Q replication and in particular, unidirectional replication, as shown in the following diagram. What we want to show here is the relationship between Remote Queues, Transmission Queues, Channels, and Local Queues. As an example, let's look at the path a message will take from Q Capture on QMA to Q Apply on QMB. Note that in this diagram the Listeners are not shown.

Q Capture puts the message onto a remotely-defined queue on QMA (the local Queue Manager for Q Capture). This Remote Queue (CAPA.TO.APPB.SENDQ.REMOTE) is effectively a "place holder" and points to a Local Queue (CAPA.TO.APPB.RECVQ) on QMB, and specifies the Transmission Queue (QMB.XMITQ) it should use to get there. The Transmission Queue has, as part of its definition, the Channel (QMA.TO.QMB) to use.
The Channel QMA.TO.QMB has, as part of its definition, the IP address and listening port number of the remote Queue Manager (note that we do not name the remote Queue Manager in this definition; it is specified in the definition of the Remote Queue). The definition of the unidirectional Replication Queue Map (the circled queue names) is:

SENDQ: CAPA.TO.APPB.SENDQ.REMOTE on the source
RECVQ: CAPA.TO.APPB.RECVQ on the target
ADMINQ: CAPA.ADMINQ.REMOTE on the target

Let's look at the Remote Queue definition for CAPA.TO.APPB.SENDQ.REMOTE, shown next. On the left-hand side are the definitions on QMA, which comprise the Remote Queue, the Transmission Queue, and the Channel definition. The definitions on QMB are on the right-hand side and comprise the Local Queue and the Receiver Channel. Let's break down these definitions to the core values to show the relationship between the different parameters. We define a Remote Queue by matching up the following values in the definitions in the two Queue Managers (for definitions on QMA, QMA is the local system and QMB is the remote system; for definitions on QMB, QMB is the local system and QMA is the remote system):

Remote Queue Manager name
Name of the queue on the remote system
Transmission Queue name
Port number that the remote system is listening on
The IP address of the Remote Queue Manager
Local Queue Manager name
Channel name

Queue names:

QMB: Decide on the Local Queue name on QMB: CAPA.TO.APPB.RECVQ.
QMA: Decide on the Remote Queue name on QMA: CAPA.TO.APPB.SENDQ.REMOTE.

Channels:

QMB: Define a Receiver Channel on QMB, QMA.TO.QMB; make sure the channel type (CHLTYPE) is RCVR. The Channel names on QMA and QMB have to match: QMA.TO.QMB.
QMA: Define a Sender Channel, which takes the messages from the Transmission Queue QMB.XMITQ and which points to the IP address and listening port number of QMB. The Sender Channel name must be QMA.TO.QMB.

Let's move on from unidirectional replication to bidirectional replication. The bidirectional queue diagram is shown next; it is a cut-down version of the full diagram of the WebSphere MQ layer and just shows the queue names and types without the details. The principles in bidirectional replication are the same as for unidirectional replication. There are two Replication Queue Maps: one going from QMA to QMB (as in unidirectional replication) and one going from QMB to QMA.

MQ queue naming standards

The naming of the WebSphere MQ queues is an important part of Q replication setup. It may be that your site already has a naming standard for MQ queues, but if it does not, then here are some thoughts on the subject (WebSphere MQ naming standards were discussed in the Introduction to MQ section):

Queues are related to the Q Capture and Q Apply programs, so it is useful to have that fact reflected in the names of the queues. A Q Capture needs a local Restart Queue, and we use the name CAPA.RESTARTQ.
Each Queue Manager can have a Dead Letter Queue. We use the prefix DEAD.LETTER.QUEUE with a suffix of the Queue Manager name, giving DEAD.LETTER.QUEUE.QMA.
Receive Queues are related to Send Queues. For every Send Queue, we need a Receive Queue. Our Send Queue names are made up of where they are coming from, Q Capture on QMA (CAPA), and where they are going to, Q Apply on QMB (APPB), and we also want to record that it is a Send Queue and that it is a remote definition, so we end up with CAPA.TO.APPB.SENDQ.REMOTE. The corresponding Receive Queue will be called CAPA.TO.APPB.RECVQ.
Transmission Queues should reflect the name of the "to" Queue Manager. Our Transmission Queue on QMA is called QMB.XMITQ, reflecting the Queue Manager that it is going to, and that it is a Transmission Queue. Using this naming convention on QMB, the Transmission Queue is called QMA.XMITQ.
Channels should reflect the names of the "from" and "to" Queue Managers. Our Sender Channel definition on QMA is QMA.TO.QMB, reflecting that it is a channel from QMA to QMB, and the Receiver Channel on QMB is also called QMA.TO.QMB. The Receiver Channel on QMA is called QMB.TO.QMA, for a Sender Channel of the same name on QMB.
A Replication Queue Map definition requires a local Send Queue, a remote Receive Queue, and a remote Administration Queue. The Send Queue is the queue that Q Capture writes to, the Receive Queue is the queue that Q Apply reads from, and the Administration Queue is the queue that Q Apply uses to write messages back to Q Capture.

MQ queues required for different scenarios

This section lists the number of Local and Remote Queues and Channels that are needed for each type of replication scenario. The queues and channels required for unidirectional replication (including replicating to a Stored Procedure) and Event Publishing are shown in the following tables. Note that the queues and channels required for Event Publishing are a subset of those required for unidirectional replication, but creating extra queues and not using them is not a problem.

The queues and channels required for unidirectional replication (including replicating to a Stored Procedure):

QMA (7):
- 3 Local Queues: CAPA.ADMINQ, CAPA.RESTARTQ, DEAD.LETTER.QUEUE.QMA
- 1 Remote Queue: CAPA.TO.APPB.SENDQ.REMOTE
- 1 Transmission Queue: QMB.XMITQ
- 1 Sender Channel: QMA.TO.QMB
- 1 Receiver Channel: QMB.TO.QMA

QMB (7):
- 2 Local Queues: CAPA.TO.APPB.RECVQ, DEAD.LETTER.QUEUE.QMB
- 1 Remote Queue: CAPA.ADMINQ.REMOTE
- 1 Transmission Queue: QMA.XMITQ
- 1 Sender Channel: QMB.TO.QMA
- 1 Receiver Channel: QMA.TO.QMB
- 1 Model Queue: IBMQREP.SPILL.MODELQ

The queues and channels required for Event Publishing:

QMA (7):
- 3 Local Queues: CAPA.ADMINQ, CAPA.RESTARTQ, DEAD.LETTER.QUEUE.QMA
- 1 Remote Queue: CAPA.TO.APPB.SENDQ.REMOTE
- 1 Transmission Queue: QMB.XMITQ
- 1 Sender Channel: QMA.TO.QMB
- 1 Receiver Channel: QMB.TO.QMA

QMB (3):
- 2 Local Queues: CAPA.TO.APPB.RECVQ, DEAD.LETTER.QUEUE.QMB
- 1 Receiver Channel: QMA.TO.QMB

The queues and channels required for bidirectional/P2P two-way replication:

QMA (10):
- 4 Local Queues: CAPA.ADMINQ, CAPA.RESTARTQ, DEAD.LETTER.QUEUE.QMA, CAPB.TO.APPA.RECVQ
- 2 Remote Queues: CAPA.TO.APPB.SENDQ.REMOTE, CAPB.ADMINQ.REMOTE
- 1 Transmission Queue: QMB.XMITQ
- 1 Sender Channel: QMA.TO.QMB
- 1 Receiver Channel: QMB.TO.QMA
- 1 Model Queue: IBMQREP.SPILL.MODELQ

QMB (10):
- 4 Local Queues: CAPB.ADMINQ, CAPB.RESTARTQ, DEAD.LETTER.QUEUE.QMB, CAPA.TO.APPB.RECVQ
- 2 Remote Queues: CAPB.TO.APPA.SENDQ.REMOTE, CAPA.ADMINQ.REMOTE
- 1 Transmission Queue: QMA.XMITQ
- 1 Sender Channel: QMB.TO.QMA
- 1 Receiver Channel: QMA.TO.QMB
- 1 Model Queue: IBMQREP.SPILL.MODELQ

The queues and channels required for P2P three-way replication:

QMA (16):
- 5 Local Queues: CAPA.ADMINQ, CAPA.RESTARTQ, DEAD.LETTER.QUEUE.QMA, CAPB.TO.APPA.RECVQ, CAPC.TO.APPA.RECVQ
- 4 Remote Queues: CAPA.TO.APPB.SENDQ.REMOTE, CAPB.ADMINQ.REMOTE, CAPA.TO.APPC.SENDQ.REMOTE, CAPC.ADMINQ.REMOTE
- 2 Transmission Queues: QMB.XMITQ, QMC.XMITQ
- 2 Sender Channels: QMA.TO.QMC, QMA.TO.QMB
- 2 Receiver Channels: QMC.TO.QMA, QMB.TO.QMA
- 1 Model Queue: IBMQREP.SPILL.MODELQ

QMB (16):
- 5 Local Queues: CAPB.ADMINQ, CAPB.RESTARTQ, DEAD.LETTER.QUEUE.QMB, CAPA.TO.APPB.RECVQ, CAPC.TO.APPB.RECVQ
- 4 Remote Queues: CAPB.TO.APPA.SENDQ.REMOTE, CAPA.ADMINQ.REMOTE, CAPB.TO.APPC.SENDQ.REMOTE, CAPC.ADMINQ.REMOTE
- 2 Transmission Queues: QMA.XMITQ, QMC.XMITQ
- 2 Sender Channels: QMB.TO.QMA, QMB.TO.QMC
- 2 Receiver Channels: QMA.TO.QMB, QMC.TO.QMB
- 1 Model Queue: IBMQREP.SPILL.MODELQ

QMC (16):
- 5 Local Queues: CAPC.ADMINQ, CAPC.RESTARTQ, DEAD.LETTER.QUEUE.QMC, CAPA.TO.APPC.RECVQ, CAPB.TO.APPC.RECVQ
- 4 Remote Queues: CAPC.TO.APPA.SENDQ.REMOTE, CAPA.ADMINQ.REMOTE, CAPC.TO.APPB.SENDQ.REMOTE, CAPB.ADMINQ.REMOTE
- 2 Transmission Queues: QMA.XMITQ, QMB.XMITQ
- 2 Sender Channels: QMC.TO.QMA, QMC.TO.QMB
- 2 Receiver Channels: QMA.TO.QMC, QMB.TO.QMC
- 1 Model Queue: IBMQREP.SPILL.MODELQ
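To make the unidirectional table concrete, the QMA-side objects could be created with MQSC commands along these lines. This is a sketch: the IP address and listener port in CONNAME are placeholders for QMB's actual address, and your MQ administrator may add further attributes:

* Run inside: runmqsc QMA
* Local queues used by Q Capture on QMA
DEFINE QLOCAL('CAPA.ADMINQ') REPLACE
DEFINE QLOCAL('CAPA.RESTARTQ') REPLACE
DEFINE QLOCAL('DEAD.LETTER.QUEUE.QMA') REPLACE
* Transmission queue towards QMB
DEFINE QLOCAL('QMB.XMITQ') USAGE(XMITQ) REPLACE
* Remote queue: the local "place holder" for CAPA.TO.APPB.RECVQ on QMB
DEFINE QREMOTE('CAPA.TO.APPB.SENDQ.REMOTE') RNAME('CAPA.TO.APPB.RECVQ') +
       RQMNAME('QMB') XMITQ('QMB.XMITQ') REPLACE
* Channels; 192.168.1.2(1414) stands in for QMB's address and listener port
DEFINE CHANNEL('QMA.TO.QMB') CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('192.168.1.2(1414)') XMITQ('QMB.XMITQ') REPLACE
DEFINE CHANNEL('QMB.TO.QMA') CHLTYPE(RCVR) TRPTYPE(TCP) REPLACE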
5 things to consider when developing an eCommerce website

Johti Vashisht
11 Apr 2018
7 min read
Online businesses are booming and rightly so; this year it is expected that 18% of all UK retail purchases will occur online. That's partly because eCommerce website development has got easy: almost anyone can do it. But hubris might be your downfall; there are a number of important things to consider before you start building your eCommerce website. This is especially true if you want customers to keep coming back to your site. We've compiled a list of things to keep in mind for when you are ready to build an eCommerce store.

eCommerce website development begins with the right platform and brilliant design

Platform

Before creating your eCommerce website, you need to decide which platform to create the website on. There are a variety of content management systems, including WordPress, Joomla, and Magento. WordPress is a versatile and easy-to-use platform which also supports a large number of plugins, so it may be suitable if you are offering services or only a few products. Platforms such as Magento have been created specifically for eCommerce use. If you are thinking of opening up an online store with many products, then Magento is the best option, as it makes it easier to manage your products.

Design

When designing your website, use a clean, simple design rather than one with too many graphics, and incorporate clear calls to action. Another thing to take into account is whether you want to create your own custom theme or choose a preselected theme and build upon it. Although it can be pricier, a custom theme allows you to add custom functionality to your website that a standard pre-made theme may not have. In contrast, pre-made themes will be much cheaper or in most cases free. If you are choosing a pre-made theme, be sure to check that it is regularly updated and that there are support contact details in case of any queries. Your website design should also be responsive so that your website can be viewed correctly across multiple platforms and operating systems.

Your eCommerce website needs to be secure

A secure website is beneficial for both you and your customers. With a growing number of websites being hacked and data being stolen, security is the one part of a website you cannot skip out on. An SSL (Secure Sockets Layer) certificate is essential for your website: not only does it allow for a secure connection over which personal data can be transmitted, it also provides authentication so that customers know it's safe to make purchases on your website. SSL certificates are mandatory if you collect private information from customers via forms.

HTTPS (Hyper Text Transfer Protocol Secure) is an encrypted standard for website-client communications. For HTTP to become HTTPS, data is wrapped into secure SSL packets before being sent and after being received. As well as securing data, HTTPS may also be used for search ranking purposes. If you utilise HTTPS, you will have a slight boost in ranking over competitor websites that do not.

eCommerce plugins make adding features to your site easier

If you have decided to use WordPress to create your eCommerce website, then there are a number of eCommerce plugins available to help you create your online store. Top eCommerce plugins include WooCommerce, Shopify, Shopp, and Easy Digital Downloads.
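Returning to the HTTPS point above: once a certificate is installed, the usual final step is to force all traffic onto HTTPS. On an Apache host this is commonly done with a rewrite rule along these lines (a generic sketch, not tied to any particular CMS or hosting setup):

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]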
SEO attracts organic traffic to your eCommerce site

If you want potential customers to see your products before those of competitors, then optimising your website pages will help you reach the first page of search results. Undertake keyword research to find the words that potential customers most commonly use to find the products you offer. Google's Keyword Planner is quite helpful in managing your keyword research. You can then add relevant words to your product names and descriptions. Revisit these keywords occasionally to update them and experiment with which keywords work better.

You can improve your rankings with good page titles that include relevant keywords. Although meta descriptions do not improve ranking, it's good to add useful meta descriptions, as a better description may draw more clicks. Also ensure that each product URL mirrors what the product is and isn't unnecessarily long.

Other things to consider when building an eCommerce website

You may wish to consider additional features in order to increase your chance of returning visitors:

Site speed: If your website is slow, it's likely that customers won't return for a repeat purchase if it takes too long for a product to load. They'll simply visit a competitor website that loads much faster. There are a few things you can do to speed up your website, including caching and using in-memory technology for certain things rather than constantly accessing the database. You could also use fast hosting servers to meet traffic requirements. Site speed is also an important SEO consideration.

Guest checkout: 23% of shoppers will abandon their shopping basket if they are forced to register an account. Make it easier for customers to purchase items with a guest checkout. Some customers may not wish to create an account as they may be short of time. Create a smooth, quick transaction process by adding the option of a guest checkout. Once they have completed their checkout, you can ask them if they would like to create an account.

Site search: Utilise search functionality to allow users to search for products, with the ability to filter products through a variety of options (if applicable).

Pain points: Address potential concerns customers may have before purchasing your products by displaying information they may have queries about. This can include delivery options and whether free returns are offered.

Mobile optimization: In 2017, almost 59% of eCommerce sales occurred via mobile. An increasing number of users now shop online using their smartphones, and this trend will most likely grow. That's why optimising your website for mobile is a must.

User-generated reviews and testimonials: Use social proof on your website with user reviews and testimonials. If a potential customer reads customer reviews, they are more likely to purchase a product. However, user-generated reviews can go both ways: a user may also post a negative review, which may not be good for your website or online store.

Related items: Showing related items under a product is useful for customers who are looking for an item but may not have decided what type of that particular product they want. This is also useful for when the main product is out of stock.

FAQs section: Creating an FAQ section with common questions is very useful and saves both the customer and company time, as basic queries can be answered by looking at the FAQ page.

If you're starting out, good luck!
If you're starting out, good luck! Yes, in 2018 eCommerce website development is pretty easy thanks to the likes of Shopify, WooCommerce, and Magento, among others. But as you can see, there is plenty you need to consider. By incorporating most of these points, you will be able to create an eCommerce website that users can navigate easily to find the products or services they are looking for.
Troubleshooting FreeNAS server

Packt
28 Oct 2009
9 min read
Where to Look for Log Information

The first place to head whenever you have a configuration problem with FreeNAS is the related configuration section, to check that it is configured as expected. If, having double-checked the settings, the problem persists, the next port of call is the log and information files in the Diagnostics: section of the web interface.

Keep the Diagnostics Section Expanded: By default, the menu tree in the Diagnostics section of the web interface is collapsed, meaning the menu items aren't visible. To see the menu items, you need to click the word Diagnostics and the tree will expand. During initial setup, and if you are doing lots of troubleshooting, you can save yourself a click by having the Diagnostics section permanently expanded. To set this option, go to System: Advanced and click the Navigation - Keep diagnostics in navigation expanded tick box.

The Diagnostics section has five subsections. The first two are logs and information pages about the status of the FreeNAS server; the other three are networking diagnostic tools and information.

Diagnostics: Logs

This section collates all the different log files generated by the FreeNAS server into one convenient place. There are several tabs, one for each service or log file type. Some of the information can be very technical, especially in the System tab, but with some key information it becomes more readable. The tabs are as follows:

System - When FreeBSD (the underlying OS of FreeNAS) boots, various log entries are recorded here about the hardware of the server, along with various messages about the boot process.
FTP - Shows the activity on the FTP server, including successful and failed logins.
RSYNC - The log information for the RSYNC server is divided into three sections: Server, Client, and Local. Depending on which type of RSYNC operation you are interested in, click the appropriate tab.
SSHD - Log entries from the SSH server, including some limited startup information and records of logins and failed login attempts.
SMARTD - Logs the output of the S.M.A.R.T. daemon.
Daemon - Any other minor system service, such as the built-in HTTP server, the Apple Filing Protocol server, and the Windows networking server (Samba), logs its information to this page.
UPnP - The log information from the FreeNAS UPnP server, called "MediaTomb", is displayed here. The logging can be quite verbose, so careful attention is needed when reading it. Don't be distracted by entries such as "INFO: Config: option not found:", as this is just the server logging that it will use a default value for that particular attribute.
Settings - Allows you to change how the log information is displayed, including the sort order and the number of entries shown.

What is a Daemon? In UNIX speak, a daemon is a system service: a program that runs in the background performing certain tasks. The daemons in FreeNAS don't work with users in an interactive mode (via the monitor, mouse, and keyboard) and as such need a place to log the results (or problems) of their activities. The FreeNAS daemons are launched automatically by FreeBSD when it boots, and some depend on being enabled in the web interface.

Understanding Diagnostics Logs: System

The most complicated of all the log pages is the System log page. Here, FreeBSD logs information about the system, its hardware, and the startup process.
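The per-service tabs are, in essence, filtered views of syslog-style files. As a rough illustration of that idea — not FreeNAS's actual code, whose web interface happens to be written in PHP — here is a minimal sketch that pulls out the entries for one daemon from a log file; the file path and daemon name are assumptions for the example.

<?php
// A rough sketch of what a per-daemon log tab does: read a syslog-style
// file and keep only the lines produced by one service. Not FreeNAS's
// actual implementation; the path and daemon name are illustrative.
function filterLogByDaemon(string $logFile, string $daemon, int $limit = 50): array
{
    $matches = [];
    // Syslog lines look like: "Apr  1 11:06:00 hostname sshd[1234]: message"
    $pattern = '/\b' . preg_quote($daemon, '/') . '(\[\d+\])?:/';
    foreach (file($logFile, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line) {
        if (preg_match($pattern, $line)) {
            $matches[] = $line;
        }
    }
    // Mimic the "Number of log entries to show" setting: keep the newest.
    return array_slice($matches, -$limit);
}

// Example: show the last 50 SSH daemon entries.
foreach (filterLogByDaemon('/var/log/messages', 'sshd') as $entry) {
    echo $entry, "\n";
}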
At first, this page can seem intimidating, but with a little help it can be very useful, particularly in tracking down hardware or driver related problems.

50 Log Entries Might Not be Enough: The default number of log entries shown on the Diagnostics: Logs page is 50. For most situations this will be sufficient, but there are times when it is not enough. For example, in the Diagnostics: Logs: System tab, the total number of log entries made during the boot process is more than 50. If you want to see how much system memory has been recognized by FreeBSD, you won't find it within the standard 50 entries. The solution is to increase the Number of log entries to show parameter on the Diagnostics: Logs: Settings tab.

The best way to learn to read the Diagnostics: Logs: System page is by example. Below are several different log entry examples, covering the CPU, memory, disks, and disk controllers:

kernel: FreeBSD 6.2-RELEASE-p11 #0: Wed Mar 12 18:17:49 CET 2008

This first entry shows the heritage of the FreeNAS server. It is based on FreeBSD, and in this particular case we see that this version of FreeNAS is using FreeBSD 6.2. There are plans (which may have already become reality) to use FreeBSD version 7.0 as the base for FreeNAS.

kernel: CPU: Intel(R) Xeon(TM) CPU 1.70GHz (1680.52-MHz 686-class CPU)

Here, the type of CPU detected by FreeBSD is displayed. In this case, it is an Intel Xeon CPU running at 1.7GHz.

kernel: FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs

If your system has more than one CPU, or is a dual-core machine, you will see an entry in the log file (like the one above) recognizing the second CPU. If your machine has Hyper-Threading technology, the second logical processor will be reported like this: Logical CPUs per core: 2

Apr 1 11:06:00 kernel: real memory = 268435456 (256 MB)
Apr 1 11:06:00 kernel: avail memory = 252907520 (241 MB)

These log entries show how much memory the system has detected. The difference in size between real memory and available memory is the difference between the amount of RAM physically installed in the computer and the amount of memory left over after the FreeBSD kernel is loaded.

kernel: atapci0: <Intel PIIX4 UDMA33 controller> port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0x1050-0x105f at device 7.1 on pci0
kernel: ata0: <ATA channel 0> on atapci0
kernel: ata1: <ATA channel 1> on atapci0

For disks to work on your FreeNAS server, a disk controller is needed; it will be either a standard ATA/IDE controller, a SATA controller, or a SCSI controller. Above are the log entries for a standard ATA controller built into the motherboard. You can see that it is an Intel controller and that two channels have been found (the primary and the secondary).

kernel: atapci1: <SiS 181 SATA150 controller> irq 17 at device 5.0 on pci0
kernel: ata2: <ATA channel 0> on atapci1
kernel: ata3: <ATA channel 1> on atapci1

Like the ATA controller listed a moment ago, SATA controllers are all recognized at boot up. Here is a SiS 181 SATA 150 controller with two channels. They are listed as devices ata2 and ata3, as ata0 and ata1 are used by the standard ATA/IDE controller.

kernel: mpt0: <LSILogic 1030 Ultra4 Adapter> irq 17 at device 16.0 on pci0

Like IDE and SATA controllers, all recognized SCSI controllers are listed in the boot-up system log. Here, the controller is an LSILogic 1030 Ultra4.
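Since the real/avail memory pair is one of the more useful things to dig out of the System log, here is a small hedged example of extracting those figures with a regular expression and computing the kernel's overhead. The log format matches the entries shown above, but the script itself is only an illustration.

<?php
// Illustrative only: pull the real/avail memory figures out of boot log
// lines like "kernel: real memory = 268435456 (256 MB)" and report the
// amount consumed by loading the kernel.
$log = <<<LOG
Apr 1 11:06:00 kernel: real memory = 268435456 (256 MB)
Apr 1 11:06:00 kernel: avail memory = 252907520 (241 MB)
LOG;

$memory = [];
if (preg_match_all('/(real|avail) memory\s*=\s*(\d+)/', $log, $m, PREG_SET_ORDER)) {
    foreach ($m as $match) {
        $memory[$match[1]] = (int) $match[2];   // value in bytes
    }
}

if (isset($memory['real'], $memory['avail'])) {
    $overhead = $memory['real'] - $memory['avail'];
    printf("Kernel and boot overhead: %.1f MB\n", $overhead / 1048576);
    // With the figures above, this prints roughly 14.8 MB.
}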
kernel: ad0: 476940MB <WDC WD5000AAJB-00YRA0 12.01C02> at ata0-master UDMA100
kernel: ad4: 476940MB <Seagate ST3500320AS SD04> at ata2-master SATA150

Once the disk controllers are recognized by the system, FreeBSD can search to see which disks are attached. Above is an example of a Western Digital 500GB hard drive using the standard ATA100 interface at 100MB/s. There is also a 500GB Seagate drive connected using the SATA interface.

acd0: CDROM <TOSHIBA CD-ROM XM-7002B/1005> at ata1 as master UDMA33

When the CD-ROM (which is normally attached to an ATA/IDE controller) is recognized, it will look like the above.

kernel: da0 at ahd0 bus 0 target 0 lun 0
kernel: da0: <MAXTOR ATLAS10K4_73WLS DFL0> Fixed Direct Access SCSI-3 device
kernel: da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit), Tagged Queueing Enabled
kernel: da0: 70149MB (143666192 512 byte sectors: 255H 63S/T 8942C)

SCSI addressing is a little more complicated than that of ATA/IDE. In SCSI land, you have a controller, a channel (bus), a disk (target), and the Logical Unit Number (LUN). The example above shows that a disk (which has been assigned the device name da0) is found on the controller ahd0, on bus 0, as target 0, with LUN 0. SCSI controllers can have multiple buses and multiple targets. Further down, you can see that the disk is a MAXTOR 73GB SCSI-3 disk.

kernel: da0 at umass-sim0 bus 0 target 0 lun 0
kernel: da0: <Verbatim Store 'n' Go 1.30> Removable Direct Access SCSI-2 device
kernel: da0: 40.000MB/s transfers
kernel: da0: 963MB (1974271 512 byte sectors: 64H 32S/T 963C)

If you are using a USB flash disk for storing the configuration information, it will most likely appear in the log file as a type of SCSI disk. The above example shows a 1GB Verbatim Store 'n' Go disk.

kernel: lnc0: <PCNet/PCI Ethernet adapter> irq 18 at device 17.0 on pci0
kernel: lnc0: Ethernet address: 00:0c:29:a5:9a:28

Another important device that needs to work correctly on your system is the network interface card. Like disk controllers and disks, it is logged when FreeBSD recognizes it. Above is an example of an AMD Lance/PCNet-based Ethernet adapter. Each Ethernet card has a unique address known as the Ethernet address or MAC address, made up of six numbers written in colon notation. Once found, FreeBSD queries the card for its MAC address and logs the result; in the above example, it is "00:0c:29:a5:9a:28".

Converting between Device Names and the Real World

In the SCSI example above, the SCSI controller listed is ahd0. The trick to understanding these log entries better is knowing how to interpret the device name ahd0. First of all, ahd0 means it is a device using the ahd driver, and it is the first one in the system (with numbering starting from 0). So what is ahd? The first place to look is further up in the log file, where there should be an entry like:

kernel: ahd0: <Adaptec 39320 Ultra320 SCSI adapter> irq 11 at device 1.0 on pci2

This shows that the particular device is an Adaptec 39320 SCSI-3 controller. You can also find out more about the ahd driver (and all FreeBSD drivers) at: http://www.freebsd.org/releases/6.2R/hardware-i386.html. Search for ahd and you will find which controllers this driver supports (in this case, they are all controllers from Adaptec). If you click on the link provided, you will be taken to a specific help page about the driver. When FreeNAS moves to FreeBSD 7, the relevant web page will be: http://www.freebsd.org/releases/7.0R/hardware.html.
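The driver-plus-unit naming convention is regular enough to split mechanically. As a small hedged illustration (not part of FreeNAS itself), here is a PHP helper that separates the driver name from the unit number; the lookup table covers only the drivers mentioned in this article and is otherwise an assumption.

<?php
// Illustrative helper: split a FreeBSD device name such as "ahd0" or
// "ata2" into its driver and unit number. The description table below
// is partial, covering only drivers discussed above.
function parseDeviceName(string $device): ?array
{
    if (!preg_match('/^([a-z]+)(\d+)$/', $device, $m)) {
        return null;                       // not a driver+unit name
    }
    $drivers = [
        'ata' => 'ATA/SATA channel',
        'ad'  => 'ATA disk',
        'da'  => 'SCSI (or USB mass storage) disk',
        'ahd' => 'Adaptec Ultra320 SCSI controller',
        'mpt' => 'LSI Logic SCSI controller',
        'lnc' => 'AMD Lance/PCNet Ethernet adapter',
    ];
    return [
        'driver'      => $m[1],
        'unit'        => (int) $m[2],      // numbering starts from 0
        'description' => $drivers[$m[1]] ?? 'unknown driver',
    ];
}

print_r(parseDeviceName('ahd0'));
// => driver "ahd", unit 0, "Adaptec Ultra320 SCSI controller"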