
How-To Tutorials - Web Design

132 Articles

Social Bookmarking with WordPress Plugin

Packt
08 Oct 2009
6 min read
You will learn these by creating a Social Bookmarking type of plugin that adds a Digg button to each post on your blog. As you probably know, Digg is a very popular service for promoting interesting content on the Internet. The purpose of a Digg button on your blog is to make it easier for Digg users to vote for your article and also to bring in more visitors to your blog. The plugin we'll create in this article will automatically insert the necessary code to each of your posts. So let's get started with WordPress plugin development!

Plugging in your first plugin

Usually, the first step in plugin creation is coming up with a plugin name. We usually want to use a name that is associated with what the plugin does, so we will call this plugin WP Digg This. WP is a common prefix used to name WordPress plugins. To introduce the plugin to WordPress, we need to create a standard plugin header. This will always be the first piece of code in the plugin file and it is used to identify the plugin to WordPress.

Time for action – Create your first plugin

In this example, we're going to write the code to register the plugin with WordPress, describe what the plugin does for the user, check whether it works on the currently installed version of WordPress, and to activate it.

1. Create a file called wp-digg-this.php in your favorite text editor. It is common practice to use the plugin name as the name for the plugin file, with dashes '-' instead of spaces.

2. Next, add a plugin information header. The format of the header is always the same and you only need to change the relevant information for every plugin:

<?php
/*
Plugin Name: WP Digg This
Version: 0.1
Description: Automatically adds Digg This button to your posts.
Author: Vladimir Prelovac
Author URI: http://www.prelovac.com/vladimir
Plugin URI: http://www.prelovac.com/vladimir/wordpress-plugins/wp-digg-this
*/
?>

3. Now add the code to check the WordPress version:

/* Version check */
global $wp_version;
$exit_msg='WP Digg This requires WordPress 2.5 or newer. <a href="http://codex.wordpress.org/Upgrading_WordPress">Please update!</a>';
if (version_compare($wp_version,"2.5","<"))
{
  exit ($exit_msg);
}
?>

4. Upload your plugin file to the wp-content/plugins folder on your server using your FTP client.

5. Go to your WordPress Plugins admin panel. You should now see your plugin listed among other plugins:

This means we have just completed the necessary steps to display our plugin in WordPress. Our plugin can even be activated now—although it does not do anything useful (yet).

What just happened?

We created a working plugin template by using a plugin information header and the version check code. The plugin header allows the plugin to be identified and displayed properly in the plugins admin panel. The version check code will warn users of our plugin who have older WordPress versions to upgrade their WordPress installation and prevent compatibility problems.

The plugin information header

To identify the plugin to WordPress, we need to include a plugin information header with each plugin. The header is written as a PHP comment and contains several fields with important information. This code alone is enough for the plugin to be registered, displayed in the admin panel and readied for activation. If your future plugin has more than one PHP file, the plugin information should be placed only in your main file, the one which will include() or require() the other plugin PHP files.
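For a multi-file plugin, that main file simply pulls the others in; a minimal sketch (the secondary file name here is hypothetical):

// wp-digg-this.php – the only file that carries the information header
require_once dirname(__FILE__) . '/wp-digg-this-admin.php'; // additional plugin code, no header needed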
Checking WordPress versions

To ensure that our plugin is not activated on incompatible WordPress versions, we will perform a simple WordPress version check at the very beginning of our code. WordPress provides the global variable $wp_version that provides the current WordPress version in standard format. We can then use the PHP function version_compare() to compare this and our required version for the plugin, using the following code:

if (version_compare($wp_version,"2.6","<"))
{
  // do something if WordPress version is lower than 2.6
}

If we want to stop the execution of the plugin upon activation, we can use the exit() function with the error message we want to show. In our case, we want to show the required version information and display the link to the WordPress upgrade site.

$exit_msg='WP Digg This requires WordPress 2.6 or newer. <a href="http://codex.wordpress.org/Upgrading_WordPress">Please update!</a>';
if (version_compare($wp_version,"2.6","<"))
{
  exit ($exit_msg);
}

While being simple, this piece of code is also very effective. With the constant development of WordPress, and newer versions evolving relatively often, you can use version checking to prevent potential incompatibility problems. The version number of your current WordPress installation can be found in the footer text of the admin menu. To begin with, you can use that version in your plugin version check (for example, 2.6). Later, when you learn about WordPress versions and their differences, you'll be able to lower the version requirement to the minimum your plugin will be compatible with. This will allow your plugin to be used on more blogs, as not all blogs always use the latest version of WordPress.

Checking the plugin

You can go ahead and activate the plugin. The plugin will be activated but will do nothing at this moment.

Time for action – Testing the version check

1. Deactivate the plugin and change the version check code to a higher version. For example, replace 2.6 with 5.0:

if (version_compare($wp_version,"5.0","<"))

2. Re-upload the plugin and try to activate it again. You will see a WordPress error and a message from the plugin:

What just happened?

The version check fails and the plugin exits with our predefined error message. The same thing will happen to a user trying to use your plugin with an outdated WordPress installation, requiring them to update to a newer version.

Have a go hero

We created a basic plugin that you can now customize. Change the plugin description to include HTML formatting (add bold or links to the description). Test your plugin to see what happens if you have two plugins with the same name (upload a copy of the file under a different name).
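Putting the pieces together, the plugin file at this stage is little more than the header plus the version check; a sketch of the whole wp-digg-this.php, using the 2.5 requirement from the first listing:

<?php
/*
Plugin Name: WP Digg This
Version: 0.1
Description: Automatically adds Digg This button to your posts.
Author: Vladimir Prelovac
Author URI: http://www.prelovac.com/vladimir
Plugin URI: http://www.prelovac.com/vladimir/wordpress-plugins/wp-digg-this
*/

/* Version check */
global $wp_version;
$exit_msg = 'WP Digg This requires WordPress 2.5 or newer. '
          . '<a href="http://codex.wordpress.org/Upgrading_WordPress">Please update!</a>';
if (version_compare($wp_version, "2.5", "<")) {
    exit($exit_msg);
}

// The code that actually inserts the Digg button into posts comes in later steps.
?>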


Django Debugging Overview

Packt
28 Apr 2010
9 min read
Django debug settings

Django has a number of settings that control the collection and presentation of debug information. The primary one is named DEBUG; it broadly controls whether the server operates in development (if DEBUG is True) or production mode.

In development mode, the end-user is expected to be a site developer. Thus, if an error arises during processing of a request, it is useful to include specific technical information about the error in the response sent to the web browser. This is not useful in production mode, when the user is expected to be simply a general site user.

This section describes three Django settings that are useful for debugging during development. Additional settings are used during production to control what errors should be reported, and where error reports should be sent. These additional settings will be discussed in the section on handling problems in production.

The DEBUG and TEMPLATE_DEBUG settings

DEBUG is the main debug setting. One of the most obvious effects of setting this to True is that Django will generate fancy error page responses in the case of serious code problems, such as exceptions raised during processing of a request. If TEMPLATE_DEBUG is also True, and the exception raised is related to a template error, then the fancy error page will also include information about where in the template the error occurred.

The default value for both of these settings is False, but the settings.py file created by manage.py startproject turns both of them on by including these lines at the top of the file:

DEBUG = True
TEMPLATE_DEBUG = DEBUG

Note that setting TEMPLATE_DEBUG to True when DEBUG is False isn't useful. The additional information collected with TEMPLATE_DEBUG turned on will never be displayed if the fancy error pages, controlled by the DEBUG setting, are not displayed. Similarly, setting TEMPLATE_DEBUG to False when DEBUG is True isn't very useful. In this case, for template errors, the fancy debug page will be lacking helpful information. Thus, it makes sense to keep these settings tied to each other, as previously shown.

Details on the fancy error pages and when they are generated will be covered in the next section. Besides generating these special pages, turning DEBUG on has several other effects. Specifically, when DEBUG is on:

• A record is kept of all queries sent to the database. Details of what is recorded and how to access it will be covered in a subsequent section.
• For the MySQL database backend, warnings issued by the database will be turned into Python Exceptions. These MySQL warnings may indicate a serious problem, but a warning (which only results in a message printed to stderr) may pass unnoticed. Since most development is done with DEBUG turned on, raising exceptions for MySQL warnings then ensures that the developer is aware of the possible issue.
• The admin application performs extensive validation of the configuration of all registered models and raises an ImproperlyConfigured exception on the first attempt to access any admin page if an error is found in the configuration. This extensive validation is fairly expensive and not something you'd generally want done during production server start-up, when the admin configuration likely has not changed since the last start-up. When running with DEBUG on, though, it is possible that the admin configuration has changed, and thus it is useful and worth the cost to do the explicit validation and provide a specific error message about what is wrong if a problem is detected.
• Finally, there are several places in Django code where an error will occur while DEBUG is on, and the generated response will contain specific information about the cause of the error, whereas when DEBUG is off the generated response will be a generic error page.

The TEMPLATE_STRING_IF_INVALID setting

A third setting that can be useful for debugging during development is TEMPLATE_STRING_IF_INVALID. The default value for this setting is the empty string. This setting is used to control what gets inserted into a template in place of a reference to an invalid (for example, non-existent in the template context) variable. The default value of an empty string results in nothing visible taking the place of such invalid references, which can make them hard to notice. Setting TEMPLATE_STRING_IF_INVALID to some value can make tracking down such invalid references easier.

However, some code that ships with Django (the admin application, in particular) relies on the default behavior of invalid references being replaced with an empty string. Running code like this with a non-empty TEMPLATE_STRING_IF_INVALID setting can produce unexpected results, so this setting is only useful when you are specifically trying to track down something like a misspelled template variable in code that always ensures that variables, even empty ones, are set in the template context.

Debug error pages

With DEBUG on, Django generates fancy debug error pages in two circumstances:

• When a django.http.Http404 exception is raised
• When any other exception is raised and not handled by the regular view processing code

In the latter case, the debug page contains a tremendous amount of information about the error, the request that caused it, and the environment at the time it occurred. The debug pages for Http404 exceptions are considerably simpler.

To see examples of the Http404 debug pages, consider the survey_detail view:

def survey_detail(request, pk):
    survey = get_object_or_404(Survey, pk=pk)
    today = datetime.date.today()
    if survey.closes < today:
        return display_completed_survey(request, survey)
    elif survey.opens > today:
        raise Http404
    else:
        return display_active_survey(request, survey)

There are two cases where this view may raise an Http404 exception: when the requested survey is not found in the database, and when it is found but has not yet opened. Thus, we can see the debug 404 page by attempting to access the survey detail for a survey that does not exist, say survey number 24. The result will be as follows:

Notice there is a message in the middle of the page that describes the cause of the page not found response: No Survey matches the given query. This message was generated automatically by the get_object_or_404 function. By contrast, the bare raise Http404 in the case where the survey is found but not yet open does not look like it will have any descriptive message. To confirm this, add a survey that has an opens date in the future, and try to access its detail page. The result will be something like the following:

That is not a very helpful debug page, since it lacks any information about what was being searched for and why it could not be displayed. To make this page more useful, include a message when raising the Http404 exception.
For example:

raise Http404("%s does not open until %s; it is only %s" % (survey.title, survey.opens, today))

Then an attempt to access this page will be a little more helpful:

Note that the error message supplied with the Http404 exception is only displayed on the debug 404 page; it would not appear on a standard 404 page. So you can make such messages as descriptive as you like and not worry that they will leak private or sensitive information to general users.

Another thing to note is that a debug 404 page is only generated when an Http404 exception is raised. If you manually construct an HttpResponse with a 404 status code, it will be returned, not the debug 404 page. Consider this code:

return HttpResponse("%s does not open until %s; it is only %s" % (survey.title, survey.opens, today), status=404)

If that code were used in place of the raise Http404 variant, then the browser will simply display the passed message:

Without the prominent Page not found message and distinctive error page formatting, this page isn't even obviously an error report. Note also that some browsers by default will replace the server-provided content with a supposedly "friendly" error page that tends to be even less informative. Thus, it is both easier and more useful to use the Http404 exception instead of manually building HttpResponse objects with status code 404.

A final example of the debug 404 page that is very useful is the one that is generated when URL resolution fails. For example, if we add an extra space before the survey number in the URL, the debug 404 page generated will be as follows:

The message on this page includes all of the information necessary to figure out why URL resolution failed. It includes the current URL, the name of the base URLConf used for resolution, and all patterns that were tried, in order, for matching. If you do any significant amount of Django application programming, it's highly likely that at some time this page will appear and you will be convinced that one of the listed patterns should match the given URL. You would be wrong. Do not waste energy trying to figure out how Django could be so broken. Rather, trust the error message, and focus your energies on figuring out why the pattern you think should match doesn't in fact match. Look carefully at each element of the pattern and compare it to the actual element in the current URL: there will be something that doesn't match.

In this case, you might think the third listed pattern should match the current URL. The first element in the pattern is the capture of the primary key value, and the actual URL value does contain a number that could be a primary key. However, the capture is done using the pattern \d+. An attempt to match this against the actual URL characters—a space followed by 2—fails because \d only matches numeric digits and the space character is not a numeric digit. There will always be something like this to explain why the URL resolution failed.

For now, we will leave the subject of debug pages and learn about accessing the history of database queries that is maintained when DEBUG is on.
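As a preview of that query history: when DEBUG is True, Django records each query on django.db.connection.queries as a list of dictionaries holding the SQL text and its timing. A minimal sketch (the Survey import path is assumed to match this article's example project):

from django.db import connection
from survey.models import Survey  # assumed location of the article's Survey model

list(Survey.objects.all())        # run any ORM query
for query in connection.queries:  # populated only while DEBUG is True
    print(query['time'])
    print(query['sql'])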


Creating mobile friendly themes

Packt
15 Mar 2013
3 min read
Getting ready

Before we start working on the code, we'll need to copy the basic gallery example code from the previous example and rename the folder and files to be prefixed with mobile. After this is done, our folder and file structure should look like the following screenshot:

How to do it...

1. We'll start off with our galleria.mobile.js theme JavaScript file:

(function($) {
    Galleria.addTheme({
        name: 'mobile',
        author: 'Galleria How-to',
        css: 'galleria.mobile.css',
        defaults: {
            transition: 'fade',
            swipe: true,
            responsive: true
        },
        init: function(options) {
        }
    });
}(jQuery));

The only difference here from our basic theme example is that we've enabled the swipe parameter for navigating images and the responsive parameter so we can use different styles for different device sizes.

2. Then, we'll provide the additional CSS file using media queries to match only smartphones. Add the following rules to the already present code in the galleria.mobile.css file:

/* for smartphones */
@media only screen and (max-width : 480px){
    .galleria-stage,
    .galleria-thumbnails-container,
    .galleria-image-nav,
    .galleria-info {
        width: 320px;
    }
    #galleria{
        height: 300px;
    }
    .galleria-stage {
        max-height: 410px;
    }
}

Here, we're targeting any device size with a width less than 480 pixels. This should match smartphones in landscape and portrait mode. These styles will override the default styles when the width of the browser is less than 480 pixels.

3. Then, we wire it up just like the previous theme example. Modify the galleria.mobile-example.html file to include the following code snippet for bootstrapping the gallery:

<script>
$(document).ready(function(){
    Galleria.loadTheme('galleria.mobile.js');
    Galleria.run('#galleria');
});
</script>

We should now have a gallery that scales well for smaller device sizes, as shown in the following screenshot:

How it works...

The responsive option tells Galleria to actively detect screen size changes and to redraw the gallery when they are detected. Then, our responsive CSS rules style the gallery differently for different device sizes. You can also provide additional rules so the gallery is styled differently when in portrait versus landscape mode. Galleria will detect when the device screen orientation has changed and apply the new styles accordingly.

Media queries

A very good list of media queries that can be used to target different devices for responsive web design is available at http://css-tricks.com/snippets/css/media-queries-for-standard-devices/.

Testing for mobile

The easiest way to test the mobile theme is to simply resize the browser window to the size of the mobile device. This simulates the mobile device screen size and allows the use of standard web development tools that modern browsers provide.

Integrating with existing sites

In order for this to work effectively with existing websites, the existing styles will also have to play nicely with mobile devices. This means that an existing site, if trying to integrate a mobile gallery, will also need to have its own styles scale correctly to mobile devices. If this is not the case and the mobile devices switch to zoom-like navigation mode for the site, the mobile gallery styles won't ever kick in.

Summary

In this article we have seen how to create mobile friendly themes using responsive web design.
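As a follow-up to the orientation note in the How it works... section, portrait and landscape can be distinguished with orientation media queries; a hedged sketch (the selectors follow the Galleria classes used above, the widths are only illustrative):

/* portrait phones */
@media only screen and (max-width: 480px) and (orientation: portrait) {
    .galleria-stage { width: 300px; }
}

/* landscape phones */
@media only screen and (max-width: 480px) and (orientation: landscape) {
    .galleria-stage { width: 460px; }
}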
Resources for Article:

Further resources on this subject:
• jQuery Mobile: Organizing Information with List Views [Article]
• An Introduction to Rhomobile [Article]
• Creating and configuring a basic mobile application [Article]


Getting Started with HTML5 Modernizr

Packt
30 Jan 2013
4 min read
Detect and design with features, not User Agents (browsers)

What if you could build your website based on features instead of for the individual browser idiosyncrasies by manufacturer and version, making your website not just backward compatible but also forward compatible? You could quite potentially build a fully backward and forward compatible experience using a single code base across the entire UA spectrum. What I mean by this is instead of baking in an MSIE 7.0 version, an MSIE 8.0 version, a Firefox version, and so on of your website, and then using JavaScript to listen for, or sniff out, the browser version in play, it would be much simpler to instead build a single version of your website that supports all of the older generation, latest generation, and in many cases even future generation technologies, such as a video API, box-shadow, first-of-type, and more.

Think of your website as a full-fledged cable television network broadcasting over 130 channels, and your users as customers that sign up for only the most basic package available, of only 15 channels. Any time that they upgrade their cable (browser) package to one offering additional channels (features), they can begin enjoying them immediately because you have already been broadcasting to each one of those channels the entire time.

What happens now is that a proverbial line is drawn in the sand, and the site is built on the assumption that a particular set of features will exist and are thus supported. If not, fallbacks are in place to allow a smooth degradation of the experience as usual, but more importantly the site is built to adopt features that the browser will eventually have.

Modernizr can detect CSS features, such as @font-face, box-shadow, and CSS gradients. It can also detect HTML5 elements, such as canvas, localstorage, and application cache. In all it can detect over 40 features for you, the developer. Another term commonly used to describe this technique is progressive enhancement.

When the time finally comes that the user decides to upgrade their browser, the new features that the more recent browser version brings with it, for example text-shadow, will automatically be detected and picked up by your website, to be leveraged by your site with no extra work or code from you when they do. Without any additional work on your part, any text that is assigned text-shadow attributes will turn on at the flick of a switch so that user's experience will smoothly, and progressively, be enhanced.

What is Modernizr? More importantly, why should you use it?

At its foundation, Modernizr is a feature-detection library powered by none other than JavaScript. Here is an example of conditionally adding CSS classes based on the browser, also known as the User Agent. When the browser parses this HTML document and finds a match, that class will be conditionally added to the page:

[code listing 1 – browser-conditional classes – not reproduced in this excerpt]

Now that the browser version has been found, the developer can use CSS to alter the page based on the version of the browser that is used to parse the page. In the following example, IE 7, IE 8, and IE 9 all use a different method for a drop shadow attribute on an anchor element:

[code listing 2 – per-browser drop shadow CSS – not reproduced in this excerpt]

The problem with the conditional method of applying styles is that not only does it require more code, but it also leaves a burden on the developer to know what browser version is capable of a given feature, in this case box-shadow. Here is the same example using Modernizr.
Note how Modernizr has done the heavy lifting for you, irrespective of whether or not the box-shadow feature is supported by the browser:

[code listing 3 – the Modernizr version of the drop shadow – not reproduced in this excerpt]
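The three listings shipped as images and are missing from this text-only version; the sketch below illustrates the contrast being described. The markup and styles are illustrative, but the boxshadow / no-boxshadow class names are the ones Modernizr itself adds to the html element when its detection runs.

<!-- Listings 1 and 2 in spirit: sniff the browser, then style per version -->
<!--[if IE 7]><html class="ie7"><![endif]-->
<!--[if IE 8]><html class="ie8"><![endif]-->
<!--[if IE 9]><html class="ie9"><![endif]-->
<style>
  .ie7 a.button { /* filter-based shadow workaround for IE 7 */ }
  .ie8 a.button { /* different workaround for IE 8 */ }
  .ie9 a.button { box-shadow: 2px 2px 4px #888; }

  /* Listing 3 in spirit: branch on the detected feature, not the browser */
  .boxshadow a.button    { box-shadow: 2px 2px 4px #888; }
  .no-boxshadow a.button { border: 1px solid #888; /* fallback */ }
</style>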


Performing Table and Database Operations in phpMyAdmin 3.3.x for Effective MySQL Management

Packt
12 Oct 2010
4 min read
Mastering phpMyAdmin 3.3.x for Effective MySQL Management: a complete guide to get started with phpMyAdmin 3.3 and master its features.

• The best introduction to phpMyAdmin available
• Written by the project leader of phpMyAdmin, and improved over several editions
• A step-by-step tutorial for manipulating data with phpMyAdmin
• Learn to do things with your MySQL database and phpMyAdmin that you didn't know were possible!

Introduction

Various links that enable table operations have been put together on the Operations subpage of the Table view. Here is an overview of this subpage:

Maintaining a table

During its lifetime, a table repeatedly gets modified and is therefore continually growing and shrinking. In addition, outages may occur on the server, leaving some tables in a damaged state. Using the Operations subpage, we can perform various operations, which are listed next. However, not every operation is available for every storage engine:

• Check table: Scans all rows to verify that deleted links are correct. A checksum is also calculated to verify the integrity of the keys. If everything is alright, we will obtain a message stating OK or Table is already up to date; if any other message shows up, it's time to repair this table (see the third bullet).
• Analyze table: Analyzes and stores the key distribution; this will be used on subsequent JOIN operations to determine the order in which the tables should be joined. This operation should be performed periodically (in case data has changed in the table), in order to improve JOIN efficiency.
• Repair table: Repairs any corrupted data for tables in the MyISAM and ARCHIVE engines. Note that a table might be so corrupted that we cannot even go into Table view for it! In such a case, refer to the Multi-table operations section for the procedure to repair it.
• Optimize table: This is useful when the table contains overheads. After massive deletions of rows or length changes for VARCHAR fields, lost bytes remain in the table. phpMyAdmin warns us in various places (for example, in the Structure view) if it feels the table should be optimized. This operation reclaims unused space in the table. In the case of MySQL 5.x, the relevant tables that can be optimized use the MyISAM, InnoDB, and ARCHIVE engines.
• Flush table: This must be done when there have been many connection errors and the MySQL server blocks further connections. Flushing will clear some internal caches and allow normal operations to resume.
• Defragment table: Random insertions or deletions in an InnoDB table fragment its index. The table should be periodically defragmented for faster data retrieval. This operation causes MySQL to rebuild the table, and only applies to InnoDB.

The operations are based on the available underlying MySQL queries—phpMyAdmin only calls those queries.

Changing table attributes

Table attributes are the various properties of a table. This section discusses the settings for some of them.

Table storage engine

The first attribute that we can change is called Storage Engine. This controls the whole behavior of the table—its location (on-disk or in-memory), the index structure, and whether it supports transactions and foreign keys. The drop-down list varies depending on the storage engines supported by our MySQL server. Changing a table's storage engine may be a long operation if the number of rows is large.
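Since phpMyAdmin only calls the underlying MySQL statements for these maintenance operations, the same work can be done from the SQL tab or the mysql client; a sketch against a hypothetical book table:

CHECK TABLE book;
ANALYZE TABLE book;
REPAIR TABLE book;   -- MyISAM and ARCHIVE tables only
OPTIMIZE TABLE book;
FLUSH TABLE book;
-- Defragmenting an InnoDB table amounts to rebuilding it:
ALTER TABLE book ENGINE=InnoDB;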
Table comments

This allows us to enter comments for the table. These comments will be shown at appropriate places—for example, in the navigation panel, next to the table name in the Table view, and in the export file. Here is what the navigation panel looks like when the $cfg['ShowTooltip'] parameter is set to its default value of TRUE:

The default value (FALSE) of $cfg['ShowTooltipAliasDB'] and $cfg['ShowTooltipAliasTB'] produces the behavior we saw earlier—the true database and table names are displayed in the navigation panel and in the Database view for the Structure subpage. Comments appear when the cursor is moved over a table name. If one of these parameters is set to TRUE, the corresponding item (database names for DB and table names for TB) will be shown as a tooltip instead of the names. This time, the mouseover box shows the true name for the item. This is convenient when the real table names are not meaningful.

There is another possibility for $cfg['ShowTooltipAliasTB']—the 'nested' value. Here is what happens if we use this feature:

• The true table name is displayed in the navigation panel
• The table comment (for example, project__) is interpreted as the project name and is displayed as it is
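These display options all live in config.inc.php; a sketch of the relevant lines (the TRUE/FALSE values shown are the defaults described above, with the 'nested' variant commented out):

// config.inc.php (excerpt)
$cfg['ShowTooltip'] = TRUE;            // show table comments as tooltips in the navigation panel
$cfg['ShowTooltipAliasDB'] = FALSE;    // TRUE: show the database comment in place of its name
$cfg['ShowTooltipAliasTB'] = FALSE;    // TRUE: show the table comment in place of its name
// $cfg['ShowTooltipAliasTB'] = 'nested'; // the comment (for example, project__) is treated as a project name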


Importing Structure and Data in phpMyAdmin 3.3.x for MySQL Management

Packt
12 Oct 2010
11 min read
In general, an exported file can be imported either to the same database it came from or to any other database; the XML format is an exception to this and a workaround is given in the XML section later in this chapter. Also, a file generated from an older phpMyAdmin version should have no problem being imported by the current version, but the difference between the MySQL version at the time of export and the one at the time of import might play a bigger role regarding compatibility. It's difficult to evaluate how future MySQL releases will change the language's syntax, which could result in import challenges.

The import feature can be accessed from several panels:

• The Import menu available from the homepage, the Database view, or the Table view
• The Import files menu offered inside the Query window

An import file may contain the DELIMITER keyword. This enables phpMyAdmin to mimic the mysql command-line interpreter. The DELIMITER separator is used to delineate the part of the file containing a stored procedure, as these procedures can themselves contain semicolons. The default values for the Import interface are defined in $cfg['Import'].

Before examining the actual import dialog, let's discuss some limits issues.

Limits for the transfer

When we import, the source file is usually on our client machine and therefore must travel to the server via HTTP. This transfer takes time and uses resources that may be limited in the web server's PHP configuration. Instead of using HTTP, we can upload our file to the server by using a protocol such as FTP, as described in the Reading files from a web server upload directory section. This method circumvents the web server's PHP upload limits.

Time limits

First, let's consider the time limit. In config.inc.php, the $cfg['ExecTimeLimit'] configuration directive assigns, by default, a maximum execution time of 300 seconds (five minutes) for any phpMyAdmin script, including the scripts that process data after the file has been uploaded. A value of 0 removes the limit, and in theory, gives us infinite time to complete the import operation. If the PHP server is running in safe mode, modifying $cfg['ExecTimeLimit'] will have no effect. This is because the limits set in php.ini or the user-related web server configuration file (such as .htaccess or the virtual host configuration files) take precedence over this parameter.

Of course, the time it effectively takes depends on two key factors:

• Web server load
• MySQL server load

The time taken by the file, as it travels between the client and the server, does not count as execution time because the PHP script only starts to execute after the file has been received on the server. Therefore, the $cfg['ExecTimeLimit'] parameter has an impact only on the time used to process data (like decompression or sending it to the MySQL server).

Other limits

The system administrator can use the php.ini file or the web server's virtual host configuration file to control uploads on the server. The upload_max_filesize parameter specifies the upper limit or maximum file size that can be uploaded via HTTP. This one is obvious, but another less obvious parameter is post_max_size. As HTTP uploading is done via the POST method, this parameter may limit our transfers. For more details about the POST method, please refer to http://en.wikipedia.org/wiki/Http#Request_methods.

The memory_limit parameter is provided to prevent web server child processes from grabbing too much of the server's memory—phpMyAdmin runs inside a child process.
Thus, the handling of normal file uploads, especially compressed dumps, can be compromised by giving this parameter a small value. Here, no preferred value can be recommended; the value depends on the size of uploaded data we want to handle and on the size of the physical memory. The memory limit can also be tuned via the $cfg['MemoryLimit'] parameter in config.inc.php, as seen in Chapter 6, Exporting Structure and Data (Backup).

Finally, file uploads must be allowed by setting file_uploads to On; otherwise, phpMyAdmin won't even show the Location of the textfile dialog. It would be useless to display this dialog as the connection would be refused later by the PHP component of the web server.

Handling big export files

If the file is too big, there are ways in which we can resolve the situation. If the original data is still accessible via phpMyAdmin, we could use phpMyAdmin to generate smaller CSV export files, choosing the Dump n rows starting at record # n dialog. If this were not possible, we could use a spreadsheet program or a text editor to split the file into smaller sections. Another possibility is to use the upload directory mechanism, which accesses the directory defined in $cfg['UploadDir'].

In recent phpMyAdmin versions, the Partial import feature can also solve this file size problem. By selecting the Allow interrupt... checkbox, the import process will interrupt itself if it detects that it's close to the time limit. We can also specify a number of queries to skip from the start, in case we successfully import a number of rows and wish to continue from that point.

Uploading into a temporary directory

On a server, a PHP security feature called open_basedir (which limits the files that can be opened by PHP to the specified directory tree) can impede the upload mechanism. In this case, or if uploads are problematic for any other reason, the $cfg['TempDir'] parameter can be set with the value of a temporary directory. This is probably a subdirectory of phpMyAdmin's main directory, into which the web server is allowed to put the uploaded file.

Importing SQL files

Any file containing MySQL statements can be imported via this mechanism. This format is more commonly used for backup/restore purposes. The relevant dialog is available in the Database view or the Table view, via the Import subpage, or in the Query window.

There is no relation between the currently-selected table (here author) and the actual contents of the SQL file that will be imported. All of the contents of the SQL file will be imported, and it's those contents that determine which tables or databases are affected. However, if the imported file does not contain any SQL statements to select a database, all statements in the imported file will be executed on the currently selected database.

Let's try an import exercise. First, we make sure that we have a current SQL export of the book table (as explained in Chapter 6, Exporting Structure and Data (Backup)). This export file must contain the structure and the data. Then we drop the book table—yes, really! We could also simply rename it. (See Chapter 9, Performing Table and Database Operations, for the procedure.)

Now it's time to import the file back. We should be on the Import subpage, where we can see the Location of the text file dialog. We just have to hit the Browse button and choose our file. phpMyAdmin is able to detect which compression method (if any) has been applied to the file.
Depending on the phpMyAdmin version, and the extensions that are available in the PHP component of the web server, there is variation in the formats that the program can decompress. However, to import successfully, phpMyAdmin must be informed of the character set of the file to be imported. The default value is utf8. However, if we know that the import file was created with another character set, we should specify it here.

A SQL compatibility mode selector is available at import time. This mode should be adjusted to match the actual data that we are about to import, according to the type of the server where the data was previously exported. Another option, Do not use AUTO_INCREMENT for zero values, is selected by default. If we have a value of zero in a primary key and we want it to stay zero instead of being auto-incremented, we should use this option.

To start the import, we click on Go. The import procedure continues and we receive a message: Import has been successfully finished, 2 queries executed. We can browse our newly-created tables to confirm the success of the import operation. The file could be imported for testing in a different database or even on another MySQL server.

Importing CSV files

In this section, we will examine how to import CSV files. There are two possible methods—CSV and CSV using LOAD DATA. The first method is implemented internally by phpMyAdmin and is the recommended one for its simplicity. With the second method, phpMyAdmin receives the file to be loaded and passes it to MySQL. In theory, this method should be faster. However, it has more requirements due to MySQL itself (see the Requirements subsection of the CSV using LOAD DATA section).

Differences between SQL and CSV formats

There are some differences between the SQL and CSV formats. The CSV file format contains data only, so we must already have an existing table in place. This table does not need to have the same structure as the original table (from which the data comes); the Column names dialog enables us to choose which columns are affected in the target table. Because the table must exist prior to the import, the CSV import dialog is available only from the Import subpage in the Table view, and not in the Database view.

Exporting a test file

Before trying an import, let's generate an author.csv export file from the author table. We use the default values in the CSV export options. We can then Empty the author table—we should avoid dropping this table because we still need the table structure.

CSV

From the author table menu, we select Import and then CSV:

We can influence the behavior of the import in a number of ways. By default, importing does not modify existing data (based on primary or unique keys). However, the Replace table data with file option instructs phpMyAdmin to use the REPLACE statement instead of the INSERT statement, so that existing rows are replaced with the imported data. Using Ignore duplicate rows, INSERT IGNORE statements are generated. These cause MySQL to ignore any duplicate key problems during insertion. A duplicate key from the import file does not replace existing data, and the procedure continues for the next line of CSV data.

We can also specify the character that terminates each field, the character that encloses data, and the character that escapes the enclosing character. Usually, this is \. For example, for a double quote enclosing character, if the data field contains a double quote, it must be expressed as "some data \" some other data".
For Lines terminated by, recent versions of phpMyAdmin offer the auto choice, which should be tried first as it detects the end-of-line character automatically. We can also specify manually which characters terminate the lines. The usual choice is \n for UNIX-based systems, \r\n for DOS or Windows systems, and \r for Mac-based systems (up to Mac OS 9). If in doubt, we can use a hexadecimal file editor on our client computer (not part of phpMyAdmin) to examine the exact codes.

By default, phpMyAdmin expects a CSV file with the same number of fields and the same field order as the target table. However, this can be changed by entering a comma-separated list of column names in Column names, respecting the source file format. For example, let's say our source file contains only the author ID and the author name information:

"1","John Smith"
"2","Maria Sunshine"

We'd have to put id, name in Column names in order to match the source file.

When we click on Go, the import is executed and we receive a confirmation. We might also see the actual INSERT queries generated if the total size of the file is not too big.

Import has been successfully finished, 2 queries executed.
INSERT INTO `author` VALUES ('1', 'John Smith', '+01 445 789-1234')# 1 row(s) affected.
INSERT INTO `author` VALUES ('2', 'Maria Sunshine', '333-3333')# 1 row(s) affected.
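For comparison, the CSV using LOAD DATA method hands the file straight to MySQL, which ends up running a statement along these lines (a sketch only, using the author.csv example and the field, enclosure, and escape characters discussed above):

LOAD DATA LOCAL INFILE 'author.csv'
    INTO TABLE `author`
    FIELDS TERMINATED BY ',' ENCLOSED BY '"' ESCAPED BY '\\'
    LINES TERMINATED BY '\n'
    (id, name);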

Common design patterns and how to prototype them

Packt
15 Mar 2013
7 min read
When you tackle a design challenge and decide to re-use a solution that was used in the past, you probably have a design pattern. Design patterns are commonly used concepts that plead to be re-used. They save time, they are well recognized by a majority of users, they provide their intended functionality, and they always work (well, assuming you selected the right design pattern and used it in the right place). In this article, we will prototype one of the commonly used design patterns: the module tabs pattern.

Once you have created a design pattern prototype, you can copy and paste it into an Axure project. A better alternative is to create a custom widgets library and load it into your Axure environment, making it handy for every new prototype you make.

Module tabs

The module tabs pattern allows the user to browse through a series of tabs without refreshing the page.

Why use it?

The module tabs design pattern is commonly used in the following situations:

• When we have a group of closely related content, and we do not want to expose it to the user all at once (in order to decrease cognitive load, because we have limited space, or maybe in order to avoid a second navigation bar).
• When we do not care for (or maybe even prefer) a use case where a visitor interacts only with one pane at a time, thereby limiting his capability to easily compare information or process data simultaneously.

Hipmunk.com is using it

A module tabs design is used in www.hipmunk.com for a quick toggle between Flights and Hotels. By default, the user is exposed to Hipmunk's prime use case, which is the flight search.

Prototyping it

This is a classic use of the Axure dynamic panels. We will use a mixture of button-shaped widgets along with a dynamic panel. A module tabs design is comprised of a set of clickable tabs, located horizontally on top of a dynamic content area. Tabs can also be located vertically on the left side of the content area. We will start with creating a content area having three states:

1. Drag a Rectangle-Dropshadow widget from the Better Defaults library into the wireframe and resize it to fit a large enough content area. This will be our background image for all content states.
2. Now drag a Dynamic Panel widget, locate it on top of the rectangle, and resize it to fit the blank area.
3. Label the dynamic panel with a descriptive name (the Label field is in the Widget Properties pane). In this example we will use the label: content-area dPanel.
4. Examine your dynamic panel on the Dynamic Panel Manager pane, and add two more states under it. Since we are in the travel business, you can name them as flight, hotel, and car (these states are going to include each tab's associated content).
5. Put some text in each of the three dynamic panel states to easily designate them upon state change (later on you can replace this text with some real content). In the screenshot that follows, you can see the content-area dPanel with its three states.

Next, we will add three tab buttons to control these three states:

1. We are going to experiment a little bit with styles and aesthetics, so let's hide the dynamic panel object from view by clicking on the small, blue box next to its name on the Dynamic Panel Manager pane. Hiding a dynamic panel has no functional implication; it will still be generated in the prototype.
2. Drag one basic Rectangle widget, and place it above our large dropshadow rectangle. Resize the widget so that it will have the right proportions.
In the following screenshot, you can see the new rectangle widget location. Also you can see what happens when we hide the dynamic panel from view:

When a tab is selected (becomes active), it is important to make it look like it is part of the active area. We can achieve this by giving it the same color as the active state background, as well as eliminating any border line separating them. But let's start with reshaping the rectangle widget, making it look like a tab:

1. Right-click on the Rectangle widget and edit its shape (click on Edit button shape and select a shape). Choose the rounded top shape. This shape has no bottom border line and that's why we like it.
2. Notice the small yellow triangle on top of the rounded top. Try to drag it to the left. You will see that we can easily control its rounded corners' angle, making it look much more like a tab button (see the screenshot that follows).
3. Now that we have a nice looking tab button we can move forward. By default, the tab should have a gray color, which designates an unselected state. So, select the widget and click on the Fill color button on the Axure toolbar to set its default color to gray.
4. Once the tab is selected, it needs to be painted white, so that it easily blends with the content area background color under it. To do that, right-click on the tab and edit its selected style (Edit button shape | Edit selected style); the fill color should be set to white (select the preview checkbox to verify that).
5. We are almost done. Duplicate the tab and place three identical widgets above the content area. Move them one pixel down so that they overlap and hide the rectangle border line underneath them.

In the screenshot that follows, you can see how the three tabs are placed on top of the white rectangle in such a way that they hide the rectangle's border line. Examine the split square in the upper-left corner of each tab widget you created. You can preview the rollover style and selected state style of the widget by hovering over and clicking on it. Try it!

The last thing to do is to add interactions. We would like to set a tab to the selected state upon mouse click, and to change the state of content-area dPanel accordingly:

1. Put a name on each tab (Flight, Hotel, and Car), and give each tab a descriptive label.
2. Set the OnClick event for each tab to perform two actions:
   • Set Widget(s) to selected state: Setting the widget after it was clicked to be in a selected style.
   • Set Panel state(s) to state(s): Setting the dynamic panel state to the relevant content (Flight, Hotel, or Car).
3. Since we would like to have only one selected widget at a time, we should put the three of them in a selection group. Do that by selecting all the three widgets, right-clicking, and choosing the menu option Assign Selection Group.
4. Each time this page is loaded, the Flight tab needs to be selected. To do that, we will configure the OnPageLoad event to set the Flight tab to be selected (find it at the pane located under the wireframe). By the way, there is no need to set the dynamic panel state to flight as well, since it is already located on top of the three states in the Dynamic Panel Manager pane, making it the default state.

In the screenshot that follows, you can see the Page Interactions pane located under the wireframe area. You can also see that the flight state is the one on top of the list, which makes it the dynamic panel's default view.

The menu navigation widget cannot be used as an alternative for the tab buttons design explained here.
The reason is that the navigation menu does not support the Selection Group option.

Summary

In this article, we learned about one of the three commonly used UI design patterns, Module Tabs, and learned how to prototype it using the key features of Axure. We can now work more efficiently with Axure and will be able to deliver detailed designs much faster.

Resources for Article:

Further resources on this subject:
• Using Prototype Property in JavaScript [Article]
• Creating Your First Module in Drupal 7 Module Development [Article]
• Axure RP 6 Prototyping Essentials: Advanced Interactions [Article]


The Django Debug Toolbar

Packt
20 Apr 2010
9 min read
The debug toolbar has a far more advanced way of displaying the information than simply embedding it in HTML comments. The capabilities are best shown by example, so we will immediately proceed with installing the toolbar.

Installing the Django Debug Toolbar

The toolbar can be found on the Python package index site: https://pypi.python.org/pypi/django-debug-toolbar. Once installed, activating the debug toolbar in a Django project is accomplished with the addition of just a couple of settings.

First, the debug toolbar middleware, debug_toolbar.middleware.DebugToolbarMiddleware, must be added to the MIDDLEWARE_CLASSES setting. The documentation for the toolbar notes that it should be placed after any other middleware that encodes the response content, so it is best to place it last in the middleware sequence.

Second, the debug_toolbar application needs to be added to INSTALLED_APPS. The debug_toolbar application uses Django templates to render its information, thus it needs to be listed in INSTALLED_APPS so that its templates will be found by the application template loader.

Third, the debug toolbar requires that the requesting IP address be listed in INTERNAL_IPS.

Finally, the debug toolbar is displayed only when DEBUG is True. We've been running with debug turned on, so again we don't have to make any changes here. Note also that the debug toolbar allows you to customize under what conditions the debug toolbar is displayed. It's possible, then, to set things up so that the toolbar will be displayed for requesting IP addresses not in INTERNAL_IPS or when debug is not turned on, but for our purposes the default configuration is fine so we will not change anything.

One thing that is not required is for the application itself to use a RequestContext in order for things such as the SQL query information to be available in the toolbar. The debug toolbar runs as middleware, and thus is not dependent on the application using a RequestContext in order for it to generate its information. Thus, the changes made to the survey views to specify RequestContexts on render_to_response calls would not have been needed if we started off first with the Django Debug Toolbar.

Debug toolbar appearance

Once the debug toolbar is added to the middleware and installed applications settings, we can see what it looks like by simply visiting any page in the survey application. Let's start with the home page. The returned page should now look something like this:

Note this screenshot shows the appearance of the 0.8.0 version of the debug toolbar. Earlier versions looked considerably different, so if your results do not look like this you may be using a different version than 0.8.0. The version that you have will most likely be newer than what was available when this was written, and there may be additional toolbar panels or functions that are not covered here.

As you can see, the debug toolbar appears on the right-hand side of the browser window. It consists of a series of panels that can be individually enabled or disabled by changing the toolbar configuration. The ones shown here are the ones that are enabled by default.

Before taking a closer look at some of the individual panels, notice that the toolbar contains an option to hide it at the top. If Hide is selected, the toolbar reduces itself to a small tab-like indication to show that it is present:

This can be very useful for cases where the expanded version of the toolbar obscures application content on the page.
All of the information provided by the toolbar is still accessible after clicking again on the DjDT tab; it is just out of the way for the moment.

Most of the panels will provide detailed information when they are clicked. A few also provide summary information in the main toolbar display. As of debug toolbar version 0.8.0, the first panel listed, Django Version, only provides summary information. There is no more detailed information available by clicking on it. As you can see in the screenshot, Django 1.1.1 is the version in use here. Note that the current latest source version of the debug toolbar already provides more information for this panel than the 0.8.0 release. Since 0.8.0, this panel has been renamed to Versions, and can be clicked to provide more details. These additional details include version information for the toolbar itself and for any other installed Django applications that provide version information.

The other three panels that show summary information are the Time, SQL, and Logging panels. Thus, we can see at a glance from the first appearance of the page that 60 milliseconds of CPU time were used to produce this page (111 milliseconds total elapsed time), that the page required four queries, which took 1.95 milliseconds, and that zero messages were logged during the request.

In the following sections, we will dig into exactly what information is provided by each of the panels when clicked. We'll start first with the SQL panel, since it is one of the most interesting and provides the same information (in addition to a lot more).

The SQL panel

If we click on the SQL section of the debug toolbar, the page will change to:

At a glance, this is a much nicer display of the SQL queries for the page than what we came up with earlier. The queries themselves are highlighted so that SQL keywords stand out, making them easier to read. Also, since they are not embedded inside an HTML comment, their content does not need to be altered in any way—there was no need to change the content of the query containing the double dash in order to avoid it causing display problems. (Now would probably be a good time to remove that added query, before we forget why we added it.)

Notice also that the times listed for each query are more specific than what was available in Django's default query history. The debug toolbar replaces Django's query recording with its own, and provides timings in units of milliseconds instead of seconds.

The display also includes a graphical representation of how long each query took, in the form of horizontal bars that appear above each query. This representation makes it easy to see when there are one or more queries that are much more expensive than the others. In fact, if a query takes an excessive amount of time, its bar will be colored red. In this case, there is not a great deal of difference in the query times, and none took particularly long, so all the bars are of similar length, and are colored gray.

Digging deeper, some of the information we had to manually figure out earlier in this article is just a click away on this SQL query display. Specifically, the answer to the question of what line of our code triggered a particular SQL query to be issued. Each of the displayed queries has a Toggle Stacktrace option, which when clicked will show the stack trace associated with the query:

Here we can see that all queries are made by the home method in the survey views.py file.
Note that the toolbar filters out levels in the stack trace that are within Django itself, which explains why each of these has only one level shown. The first query is triggered by Line 61, which contains the filter call added to test what will happen if a query containing two dashes in a row was logged. The remaining queries are all attributed to Line 66, which is the last line of the render_to_response call in the home view. These queries, as we figured out earlier, are all made during the rendering of the template. (Your line numbers may vary from those shown here, depending on where in the file various functions were placed.)

Finally, this SQL query display makes available information that we had not even gotten around to wanting yet. Under the Action column are links to SELECT, EXPLAIN, and PROFILE each query. Clicking on the SELECT link shows what the database returns when the query is actually executed. For example:

Similarly, clicking on EXPLAIN and PROFILE displays what the database reports when asked to explain or profile the selected query, respectively. The exact display, and how to interpret the results, will differ from database to database. (In fact, the PROFILE option is not available with all databases—it happens to be supported by the database in use here, MySQL.) Interpreting the results from EXPLAIN and PROFILE is beyond the scope of what's covered here, but it is useful to know that if you ever need to dig deep into the performance characteristics of a query, the debug toolbar makes it easy to do so.

We've now gotten a couple of pages deep into the SQL query display. How do we get back to the actual application page? Clicking on the circled >> at the upper-right of the main page display will return to the previous SQL query page, and the circled >> will turn into a circled X. Clicking the circled X on any panel detail page closes the details and returns to displaying the application data. Alternatively, clicking again on the panel area on the toolbar for the currently displayed panel will have the same effect as clicking on the circled symbol in the display area. Finally, if you prefer using the keyboard to the mouse, pressing Esc has the same effect as clicking the circled symbol.

Now that we have completely explored the SQL panel, let's take a brief look at each of the other panels provided by the debug toolbar.

The Time panel

Clicking on the Time panel brings up more detailed information on where time was spent during production of the page:

The total CPU time is split between user and system time, the total elapsed (wall clock) time is listed, and the number of voluntary and involuntary context switches are displayed. For a page that is taking too long to generate, these additional details about where the time is being spent can help point towards a cause.

Note that the detailed information provided by this panel comes from the Python resource module. This is a Unix-specific Python module that is not available on non-Unix-type systems. Thus on Windows, for example, the debug toolbar time panel will only show summary information, and no further details will be available.
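Pulling the installation steps from the start of this article into one place, the settings.py additions amount to roughly the following (a sketch for the Django 1.1 / toolbar 0.8.x setup described here; the middleware path is the one given above and the IP address is whatever machine your browser runs on):

DEBUG = True

MIDDLEWARE_CLASSES = (
    # ... the project's existing middleware ...
    'debug_toolbar.middleware.DebugToolbarMiddleware',  # last, after any content-encoding middleware
)

INSTALLED_APPS = (
    # ... the project's existing applications ...
    'debug_toolbar',
)

INTERNAL_IPS = ('127.0.0.1',)  # the requesting IP address must be listed here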
User Extensions and Add-ons in Selenium 1.0 Testing Tools

Packt
26 Nov 2010
6 min read
Important preliminary points If you are creating an extension that can be used by all, make sure that it is stored in a central place, like a version control system. This will prevent any potential issues in the future when others in your team start to use it. User extensions Imagine that you wanted to use a snippet of code that is used in a number of different tests. You could use: type | locator | javascript{ .... } However, if you had a bug in the JavaScript you would need to go through all the tests that reused this snippet of code. This, as we know from software development, is not good practice and is normally corrected with a refactoring of the code. In Selenium, we can create our own function that can then be used throughout the tests. User extensions are stored in a separate file that we will tell Selenium IDE or Selenium RC to use. Inside there the new function will be written in JavaScript. Because Selenium's core is developed in JavaScript, creating an extension follows the standard rules for prototypal languages. To create an extension, we create a function in the following design pattern. Selenium.prototype.doFunctionName = function(){ . . . } The "do" in front of the function name tells Selenium that this function can be called as a command for a step instead of an internal or private function. Now that we understand this, let's see this in action. Time for action – installing a user extension Now that you have a need for a user extension, let's have a look at installing an extension into Selenium IDE. This will make sure that we can use these functions in future Time for action sections throughout this article. Open your favorite text editor. Create an empty method with the following text: Selenium.prototype.doNothing = function(){ . . . } Start Selenium IDE. Click on the Options menu and then click on Options. Place the path of the user-extension.js file in the textbox labeled Selenium IDE extensions. Click on OK. Restart Selenium IDE. Start typing in the Command textbox and your new command will be available, as seen in the next screenshot: What just happened? We have just seen how to create our first basic extension command and how to get this going in Selenium IDE. You will notice that you had to restart Selenium IDE for the changes to take effect. Selenium has a process that finds all the command functions available to it when it starts up, and does a few things to it to make sure that Selenium can use them without any issues. Now that we understand how to create and install an extension command let's see what else we can do with it. In the next Time for action, we are going to have a look at creating a randomizer command that will store the result in a variable that we can use later in the test. Time for action – using Selenium variables in extensions Imagine that you are testing something that requires some form of random number entered into a textbox. You have a number of tests that require you to create a random number for the test so you can decide that you are going to create a user extension and then store the result in a variable. To do this we will need to pass in arguments to our function that we saw earlier. The value in the target box will be passed in as the first argument and the value textbox will be the second argument. We will use this in a number of different examples throughout this article. Let's now create this extension. Open your favorite text editor and open the user-extension.js file you created earlier. 
We are going to create a function called storeRandom. The function will look like the following: Selenium.prototype.doStoreRandom = function(variableName){ random = Math.floor(Math.random()*10000000); storedVars[variableName] = random; } Save the file. Restart Selenium IDE. Create a new step with storeRandom and the variable that will hold the value will be called random. Create a step to echo the value in the random variable. What just happened? In the previous example, we saw how we can create an extension function that allows us to use variables that can be used throughout the rest of the test. It uses the storedVars dictionary that we saw in the previous chapter. As everything that comes from the Selenium IDE is interpreted as a string, we just needed to put the variable as the key in storedVars. It is then translated and will look like storedVars['random'] so that we can use it later. As with normal Selenium commands, if you run the command a number of times, it will overwrite the value that is stored within that variable, as we can see in the previous screenshot. Now that we know how to create an extension command that computes something and then stores the results in a variable, let's have a look at using that information with a locator. Time for action – using locators in extensions Imagine that you need to calculate today's date and then type that into a textbox. To do that you can use the type | locator | javascript{...} format, but sometimes it's just neater to have a command that looks like typeTodaysDate | locator. We do this by creating an extension and then calling the relevant Selenium command in the same way that we are creating our functions. To tell it to type in a locator, use: this.doType(locator,text); The this in front of the command text is to make sure that it used the doType function inside of the Selenium object and not one that may be in scope from the user extensions. Let's see this in action: Use your favorite text editor to edit the user extensions that you were using in the previous examples. Create a new function called doTypeTodaysDate with the following snippet: Selenium.prototype.doTypeTodaysDate = function(locator){ var dates = new Date(); var day = dates.getDate(); if (day < 10){ day = '0' + day; } month = dates.getMonth() + 1; if (month < 10){ month = '0' + month; } var year = dates.getFullYear(); var prettyDay = day + '/' + month + '/' + year; this.doType(locator, prettyDay); } Save the file and restart Selenium IDE. Create a step in a test to type this in a textbox. Run your script. It should look similar to the next screenshot: What just happened? We have just seen that we can create extension commands that use locators. This means that we can create commands to simplify tests as in the previous example where we created our own Type command to always type today's date in the dd/mm/yyyy format. We also saw that we can call other commands from within our new command by calling its original function in Selenium. The original function has do in front of it. Now that we have seen how we can use basic locators and variables, let's have a look at how we can access the page using browserbot from within an extension method.
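As a quick recap before moving on, the steps described above might look something like this in the IDE's command table (a sketch for illustration only; the locator in the last row is a made-up example, not one from the article):

    storeRandom     | random        |
    echo            | ${random}     |
    typeTodaysDate  | id=date-field |

The ${random} syntax is how a value previously placed in storedVars is referenced from a later step.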
SOAP and PHP 5

Packt
22 Oct 2009
16 min read
SOAP

SOAP, formerly known as Simple Object Access Protocol (until the acronym was dropped in version 1.2), came around shortly after XML-RPC was released. It was created by a group of developers with backing from Microsoft. Interestingly, the creator of XML-RPC, David Winer, was also one of the primary contributors to SOAP. Winer released XML-RPC before SOAP, when it became apparent to him that though SOAP was still a long way from being completed, there was an immediate need for some sort of web service protocol.

Like XML-RPC, SOAP is an XML-based web service protocol. SOAP, however, addresses a lot of the shortcomings of XML-RPC, adding among other things user-defined data types, better character set support, and rudimentary security. It is, quite simply, a more powerful and flexible protocol than REST or XML-RPC.

Unfortunately, sacrifices come with that power. SOAP is a much more complex and rigid protocol. For example, even though SOAP can stand alone, it is much more useful when you use another XML-based standard, called Web Services Description Language (WSDL), in conjunction with it. Therefore, in order to be proficient with SOAP, you should also be proficient with WSDL.

The most-levied criticism of SOAP is that it is overly complex. Indeed, SOAP is not simple. It is long and verbose. You need to know how namespaces work in XML. SOAP can rely heavily on other standards. This is true for most implementations of SOAP, including Microsoft Live Search, which we will be looking at. The most common external specification used by a SOAP-based service is WSDL to describe its available services, and that, in turn, usually relies on XML Schema Data (XSD) to describe its data types. In order to "know" SOAP, it would be extremely useful to have some knowledge of WSDL and XSD. This will allow one to figure out how to use the majority of SOAP services.

We are going to take a "need to know" approach when looking at SOAP. Microsoft Live Search's SOAP API uses WSDL and XSD, so we will take a look at SOAP with the other two in mind. We will limit our discussion to how to gather the information about the web service that you, as a web service consumer, would need, and how to write SOAP requests using PHP 5 against it. Even though this article will just introduce you to the core necessities of SOAP, there is a lot of information and detail. SOAP is very meticulous and you have to keep track of a fair amount of things. Do not be discouraged, take notes if you have to, and be patient.

All three of SOAP, WSDL, and XSD are maintained by the W3C, and all three specifications are available for your perusal. The official SOAP specification is located at http://www.w3.org/TR/soap/. The WSDL specification is located at http://www.w3.org/TR/wsdl. Finally, the recommended XSD specification can be found at http://www.w3.org/XML/Schema.

Web Services Description Language (WSDL) with XML Schema Data (XSD)

Out of all the drawbacks of XML-RPC and REST, there is one that is prominent: both of these protocols rely heavily on good documentation by the service provider in order to use them. Lacking this, you really do not know what operations are available to you, what parameters you need to pass in order to use them, and what you should expect to get back. Even worse, an XML-RPC or REST service may be poorly or inaccurately documented and give you inaccurate or unexpected results.
SOAP addresses this by relying on another XML standard called WSDL to set the rules on which web service methods are available, how parameters should be passed, and what data type might be returned. A service's WSDL document, basically, is an XML version of the documentation. If a SOAP-based service is bound to a WSDL document, and most of them are, requests and responses must adhere to the rules set in the WSDL document, otherwise a fault will occur. WSDL is an acronym for a technical language. When referring to a specific web service's WSDL document, people commonly refer to the document as "the WSDL" even though that is grammatically incorrect. Being XML-based, this allows clients to automatically discover everything about the functionality of the web service. Human-readable documentation is technically not required for a SOAP service that uses a WSDL document, though it is still highly recommended. Let's take a look at the structure of a WSDL document and how we can use it to figure out what is available to us in a SOAP-based web service. Out of all three specifications that we're going to look at in relationship to SOAP, WSDL is the most ethereal. Both supporters and detractors often call writing WSDL documents a black art. As we go through this, I will stress the main points and just briefly note other uses or exceptions. Basic WSDL Structure Beginning with a root definitions element, WSDL documents follow this basic structure:     <definitions>        <types>        …        </types>        <message>        …        </message>        <portType>        …        </portType>        <binding>        …        </binding>    </definitions> As you can see, in addition to the definitions element, there are four main sections to a WSDL document: types, message, portType, and binding. Let's take a look at these in further detail. Google used to provide a SOAP service for their web search engine. However, this service is now deprecated, and no new developer API keys are given out. This is unfortunate because the service was simple enough to learn SOAP quickly, but complex enough to get a thorough exposure to SOAP. Luckily, the service itself is still working and the WSDL is still available. As we go through WSDL elements, we will look at the Google SOAP Search WSDL and Microsoft Live Search API WSDL documents for examples. These are available at http://api.google.com/GoogleSearch.wsdl and http://soap.search.msn.com/webservices.asmx?wsdl respectively. definitions Element This is the root element of a WSDL document. If the WSDL relies on other specifications, their namespace declarations would be made here. Let's take a look at Google's WSDL's definition tag:     <definitions name="GoogleSearch"        targetNamespace="urn:GoogleSearch"                                                > The more common ones you'll run across are xsd for schema namespace, wsdl for the WSDL framework itself, and soap and soapenc for SOAP bindings. As these namespaces refer to W3C standards, you will run across them regardless of the web service implementation. Note that some searches use an equally common prefix, xs, for XML Schema. tns is another common namespace. It means "this namespace" and is a convention used to refer to the WSDL itself. types Element In a WSDL document, data types used by requests and responses need to be explicitly declared and defined. The textbook answer that you'll find is that the types element is where this is done. In theory, this is true. In practice, this is mostly true. 
The types element is used only for special data types. To achieve platform neutrality, WSDL defaults to, and most implementations use, XSD to describe its data types. In XSD, many basic data types are already included and do not need to be declared. Common built-in XSD data types include time, date, boolean, string, base64Binary, float, double, integer, and byte. For a complete list, see the recommendation on XSD data types at http://www.w3.org/TR/xmlschema-2/.

If the web service utilizes nothing more than these built-in data types, there is no need to have special data types, and thus types will be empty. So, the data types will just be referred to later, when we define the parameters. There are three occasions where data types would be defined here:

If you want a special data type that is based on a built-in data type. Most commonly this is a built-in whose value is restricted in some way. These are known as simple types.
If the data type is an object, it is known as a complex type in XSD, and must be declared.
An array, which can be described as a hybrid of the former two.

Let's take a look at some examples of what we will encounter in the types element.

Simple Type

Sometimes, you need to restrict or refine a value of a built-in data type. For example, in a hospital's patient database, it would be ludicrous to have the length of a field called Age be more than three digits. To add such a restriction in the SOAP world, you would have to define Age here in the types section as a new type.

Simple types must be based on an existing built-in type. They cannot have children or properties like complex types. Generally, a simple type is defined with the simpleType element, the name as an attribute, followed by the restriction or definition. If the simple type is a restriction, the built-in data type that it is based on is defined in the base attribute of the restriction element. For example, a restriction for an age can look like this:

    <xsd:simpleType name="Age">
        <xsd:restriction base="xsd:integer">
            <xsd:totalDigits value="3" />
        </xsd:restriction>
    </xsd:simpleType>

Children elements of restriction define what is acceptable for the value. totalDigits is used to restrict a value based on the character length. A table of common restrictions follows; each entry lists the restriction, what it does, and the types it applies to:

enumeration: Specifies a list of acceptable values. Applies to all types except boolean.
fractionDigits: Defines the number of decimal places allowed. Applies to integers.
length: Defines the exact number of characters allowed. Applies to strings and all binaries.
maxExclusive/maxInclusive: Defines the maximum value allowed. If Exclusive is used, the value cannot be equal to the definition; if Inclusive, it can be equal to, but not greater than, this definition. Applies to all numeric types and dates.
minLength/maxLength: Defines the minimum and maximum number of characters or list items allowed. Applies to strings and all binaries.
minExclusive/minInclusive: Defines the minimum value allowed. If Exclusive is used, the value cannot be equal to the definition; if Inclusive, it can be equal to, but not less than, this definition. Applies to all numeric types and dates.
pattern: A regular expression defining the allowed values. Applies to all types.
totalDigits: Defines the maximum number of digits allowed. Applies to integers.
whiteSpace: Defines how tabs, spaces, and line breaks are handled. Can be preserve (no changes), replace (tabs and line breaks are converted to spaces), or collapse (multiple spaces, tabs, and line breaks are converted to one space). Applies to strings and all binaries.
A practical example of a restriction can be found in the MSN Search Web Service WSDL. Look at the section that defines SafeSearchOptions:

    <xsd:simpleType name="SafeSearchOptions">
        <xsd:restriction base="xsd:string">
            <xsd:enumeration value="Moderate" />
            <xsd:enumeration value="Strict" />
            <xsd:enumeration value="Off" />
        </xsd:restriction>
    </xsd:simpleType>

In this example, the SafeSearchOptions data type is based on a string data type. Unlike a regular string, however, the value that SafeSearchOptions takes is restricted by the restriction element, in this case by the several enumeration elements that follow. SafeSearchOptions can only be what is given in this enumeration list. That is, SafeSearchOptions can only have a value of "Moderate", "Strict", or "Off".

Restrictions are not the only reason to use a simple type. There can also be two other elements in place of restrictions. The first is a list. If an element is a list, it means that the value passed to it is a list of space-separated values. A list is defined with the list element followed by an attribute named itemType, which defines the allowed data type. For example, this defines a simple type named listOfValues whose items are all integers:

    <xsd:simpleType name="listOfValues">
        <xsd:list itemType="xsd:integer" />
    </xsd:simpleType>

The second is a union. Unions are basically a combination of two or more restrictions. This gives you a greater ability to fine-tune the allowed value. Back to our age example: if our service was for a hospital's pediatrics ward that admits only those under 18 years old, we can restrict the value with a union.

    <xsd:simpleType name="Age">
        <xsd:union>
            <xsd:simpleType>
                <xsd:restriction base="decimal">
                    <xsd:minInclusive value="0" />
                </xsd:restriction>
            </xsd:simpleType>
            <xsd:simpleType>
                <xsd:restriction base="decimal">
                    <xsd:maxExclusive value="18" />
                </xsd:restriction>
            </xsd:simpleType>
        </xsd:union>
    </xsd:simpleType>

Finally, it is important to note that while simple types are, especially in the case of WSDLs, used mainly in the definition of elements, they can be used anywhere that requires the definition of a number. For example, you may sometimes see an attribute being defined and a simple type structure being used to restrict its value.

Complex Type

Generically, a complex type is anything that can have multiple elements or attributes. This is opposed to a simple type, which can have only one element. A complex type is represented by the element complexType in the WSDL. The most common use for complex types is as a carrier for objects in SOAP transactions. In other words, to pass an object to a SOAP service, it needs to be serialized into an XSD complex type in the message. The purpose of a complexType element is to explicitly define what other data types make up the complex type.
Let's take a look at a piece of Google's WSDL for an example:     <xsd:complexType name="ResultElement">        <xsd:all>            <xsd:element name="summary" type="xsd:string"/>            <xsd:element name="URL" type="xsd:string"/>            <xsd:element name="snippet" type="xsd:string"/>            <xsd:element name="title" type="xsd:string"/>            <xsd:element name="cachedSize" type="xsd:string"/>            <xsd:element name=                        "relatedInformationPresent" type="xsd:boolean"/>            <xsd:element name="hostName" type="xsd:string"/>            <xsd:element name=                        "directoryCategory" type="typens:DirectoryCategory"/>            <xsd:element name="directoryTitle" type="xsd:string"/>        </xsd:all>    </xsd:complexType> First thing to notice is how the xsd: namespace is used throughout types. This denotes that these elements and attributes are part of the XSD specification. In this example, a data type called ResultElement is defined. We don't exactly know what it is used for right now, but we know that it exists. An element tag denotes complex type's equivalent to an object property. The first property of it is summary, and the type attribute tells us that it is a string, as are most properties of ResultElement. One exception is relatedInformationPresent, which is a Boolean. Another exception is directoryCategory. This has a data type of DirectoryCategory. The namespace used in the type attribute is typens. This tells us that it is not an XSD data type. To find out what it is, we'll have to look for the namespace declaration that declared typens.
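Before leaving the WSDL for now, it is worth noting that PHP 5 can do a lot of this discovery work for you. As a rough sketch (my own example, not from the article), the built-in SoapClient class will parse a WSDL and report the operations and data types it declares:

    <?php
    // Illustrative only: ask PHP's SoapClient what a WSDL makes available.
    $client = new SoapClient('http://api.google.com/GoogleSearch.wsdl');
    print_r($client->__getFunctions()); // operations the service exposes
    print_r($client->__getTypes());     // declared types, e.g. ResultElement
    ?>

This is handy for cross-checking your reading of the types and message sections against what the client library actually sees.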
Different strategies to make responsive websites

Packt
04 Oct 2013
9 min read
(For more resources related to this topic, see here.) The Goldilocks approach In 2011, and in response to the dilemma of building several iterations of the same website by targeting every single device, the web-design agency, Design by Front, came out with an official set of guidelines many designers were already adhering to. In essence, the Goldilocks approach states that rather than rearranging our layouts for every single device, we shouldn't be afraid of margins on the left and right of our designs. There's a blurb about sizing around the width of our body text (which they state should be around 66 characters per line, or 33 em's wide), but the important part is that they completely destroyed the train of thought that every single device needed to be explicitly targeted—effectively saving designers countless hours of time. This approach became so prevalent that most CSS frameworks, including Twitter Bootstrap 2, adopted it without realizing that it had a name. So how does this work exactly? You can see a demo at http://goldilocksapproach.com/demo; but for all you bathroom readers out there, you basically wrap your entire site in an element (or just target the body selector if it doesn't break anything else) and set the width of that element to something smaller than the width of the screen while applying a margin: auto. The highlighted element is the body tag. You can see the standard and huge margins on each side of it on larger desktop monitors. As you contract the viewport to a generic tablet-portrait size, you can see the width of the body is decreased dramatically, creating margins on each side again. They also do a little bit of rearranging by dropping the sidebar below the headline. As you contract the viewport more to a phone size, you'll notice that the body of the page occupies the full width of the page now, with just some small margins on each side to keep text from butting up against the viewport edges. Okay, so what are the advantages and disadvantages? Well, one advantage is it's incredibly easy to do. You literally create a wrapping element and every time the width of the viewport touches the edges of that element, you make that element smaller and tweak a few things. But, the huge advantage is that you aren't targeting every single device, so you only have to write a small amount of code to make your site responsive. The downside is that you're wasting a lot of screen real-estate with all those margins. For the sake of practice, create a new folder called Goldilocks. Inside that folder create a goldilocks.html and goldilocks.css file. Put the following code in your goldilocks.html file: <!DOCTYPE html> <html> <head> <title>The Goldilocks Approach</title> <link rel="stylesheet" href="goldilocks.css"> </head> <body> <div id="wrap"> <header> <h1>The Goldilocks Approach</h1> </header> <section> <aside>Sidebar</aside> <article> <header> <h2>Hello World</h2> <p> Lorem ipsum... </p> </header> </article> </section> </div> </body> </html> We're creating an incredibly simple page with a header, sidebar, and content area to demonstrate how the Goldilocks approach works. 
In your goldilocks.css file, put the following code: * { margin: 0; padding: 0; background: rgba(0,0,0,.05); font: 13px/21px Arial, sans-serif; } h1, h2 { line-height: 1.2; } h1 { font-size: 30px; } h2 { font-size: 20px; } #wrap { width: 900px; margin: auto; } section { overflow: hidden; } aside { float: left; margin-right: 20px; width: 280px; } article { float: left; width: 600px; } @media (max-width: 900px) { #wrap { width: 500px; } aside { width: 180px; } article { width: 300px; } } @media (max-width: 500px) { #wrap { width: 96%; margin: 0 2%; } aside, article { width: 100%; margin-top: 10px; } } Did you notice how the width of the #wrap element becomes the max-width of the media query? After you save and refresh your page, you'll be able to expand/contract to your heart's content and enjoy your responsive website built with the Goldilocks approach. Look at you! You just made a site that will serve any device with only a few media queries. The fewer media queries you can get away with, the better! Here's what it should look like: The preceding screenshot shows your Goldilocks page at desktop width. At tablet size, it looks like the following: On a mobile site, you should see something like the following screenshot: The Goldilocks approach is great for websites that are graphic heavy as you can convert just three mockups to layouts and have completely custom, graphic-rich websites that work on almost any device. It's nice if you are of the type who enjoys spending a lot of time in Photoshop and don't mind putting in the extra work of recreating a lot of code for a more textured website with a lot of attention to detail. The Fluid approach Loss of real estate and a substantial amount of extra work for slightly prettier (and heavier) websites is a problem that most of us don't want to deal with. We still want beautiful sites, and luckily with pure CSS, we can replicate a huge amount of elements in flexible code. A common, real-world example of replacing images with CSS is to use CSS to create buttons. Where Goldilocks looks at your viewport as a container for smaller, usually pixel-based containers, the Fluid approach looks at your viewport as a 100 percent large container. If every element inside the viewport adds up to around 100 percent, you've effectively used the real estate you were given. Duplicate your goldilocks.html file, then rename it to fluid.html. Replace the mentions of "Goldilocks" with "Fluid": <!DOCTYPE html> <html> <head> <title>The Fluid Approach</title> <link rel="stylesheet" href="fluid.css"> </head> <body> <div id="wrap"> <header> <h1>The Fluid Approach</h1> </header> <section> <aside>Sidebar</aside> <article> <header> <h2>Hello World</h2> </header> <p> Lorem ipsum... </p> </article> </section> </div> </body> </html> We're just duplicating our very simple header, sidebar, and article layout. Create a fluid.css file and put the following code in it: * { margin: 0; padding: 0; background: rgba(0,0,0,.05); font: 13px/21px Arial, sans-serif; } aside { float: left; width: 24%; margin-right: 1%; } article { float: left; width: 74%; margin-left: 1%; } Wow! That's a lot less code already. Save and refresh your browser, then expand/contract your viewport. Did you notice how we're using all available space? Did you notice how we didn't even have to use media queries and it's already responsive? Percentages are pretty cool. 
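Earlier, replacing images with pure CSS was mentioned in passing, with buttons as the classic example. A small hedged sketch of that idea (my own example, not from the article):

    /* Illustrative only: an image-free button drawn entirely with CSS */
    .button {
        display: inline-block;
        padding: 10px 20px;
        color: #fff;
        background: #2a9fd6;
        border-radius: 4px;
        box-shadow: 0 2px 2px rgba(0,0,0,0.3);
    }
    .button:hover {
        background: #217cac;
    }

Because the button is generated by the browser rather than downloaded as an image, it scales cleanly at any viewport size, which is exactly the property a fluid layout needs.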
Your first fluid, responsive web design

We have a few problems though:

On large monitors, when that layout is full of text, every paragraph will fit on one line. That's horrible for readability.
Text and other elements butt up against the edges of the design.
The sidebar and article, although responsive, don't look great on smaller devices. They're too small.

Luckily, these are all pretty easy fixes. First, let's make sure the layout of our content doesn't stretch to 100 percent of the width of the viewport when we're looking at it in larger resolutions. To do this, we use a CSS property called max-width. Append the following code to your fluid.css file:

    #wrap { max-width: 980px; margin: auto; }

What do you think max-width does? Save and refresh, expand and contract. You'll notice that the wrapping div is now centered in the screen at 980 px width, but what happens when you go below 980 px? It simply converts to 100 percent width. This isn't the only way you'll use max-width, but we'll learn a bit more in the Gotchas and best practices section.

Our second problem was that the elements were butting up against the edges of the screen. This is an easy enough fix. You can either wrap everything in another element with specified margins on the left and right, or simply add some padding to our #wrap element as follows:

    #wrap { max-width: 980px; margin: auto; padding: 0 20px; }

Now our text and other elements no longer touch the edges of the viewport.

Finally, we need to rearrange the layout for smaller devices, so our sidebar and article aren't so tiny. To do this, we'll have to use a media query and simply unassign the properties we defined in our original CSS:

    @media (max-width: 600px) { aside, article { float: none; width: 100%; margin: 10px 0; } }

We're removing the float because it's unnecessary, giving these elements a width of 100 percent, and removing the left and right margins while adding some margins on the top and bottom so that we can differentiate the elements. This act of moving elements on top of each other is known as stacking. Simple enough, right? We were able to make a really nice, real-world, responsive, fluid layout in just 28 lines of CSS. On smaller devices, we stack content areas to help with readability/usability:

It's up to you how you want to design your websites. If you're a huge fan of lush graphics and don't mind doing extra work or wasting real estate, then use Goldilocks. I used Goldilocks for years until I noticed a beautiful site with only one breakpoint (width-based media query), then I switched to Fluid and haven't looked back. It's entirely up to you. I'd suggest you make a few websites using Goldilocks, get a bit annoyed at the extra effort, then try out Fluid and see if it fits. In the next section we'll talk about a somewhat new debate about whether we should be designing for larger or smaller devices first.

Summary

In this article, we have taken a look at how to build a responsive website using the Goldilocks approach and the Fluid approach.

Resources for Article:

Further resources on this subject:
Creating a Web Page for Displaying Data from SQL Server 2008 [Article]
The architecture of JavaScriptMVC [Article]
Setting up a single-width column system (Simple) [Article]
WordPress: Customizing Content Display

Packt
14 Dec 2011
8 min read
(For more resources on WordPress, see here.) At its most basic, a simple implementation of the loop could work like this: <?php if (have_posts()) : while (have_posts()) : the_post(); ?><?php the_title(); ?><?php the_content(); ?><?php endwhile; else: ?><p>Sorry, no posts matched your criteria.'</p><?php endif; ?> In the real world, however, the WordPress loop is rarely that simple. This is one of those concepts best explained by referring to a real world example, so open up the index.php file of your system's TwentyEleven theme. Look for the following lines of code: <?php if ( have_posts() ) : ?><?php twentyeleven_content_nav( 'nav-above' ); ?><?php /* Start the Loop */ ?><?php while ( have_posts() ) : the_post(); ?><?php get_template_part( 'content', get_post_format()); ?><?php endwhile; ?><?php twentyeleven_content_nav( 'nav-below' ); ?><?php else : ?><article id="post-0" class="post no-results not-found"><header class="entry-header"><h1 class="entry-title"><?php _e( 'Nothing Found','twentyeleven' ); ?></h1></header><!-- .entry-header --><div class="entry-content">3<p><?php _e( 'Apologies, but no results were foundfor the requested archive. Perhaps searching will helpfind a related post.', 'twentyeleven' ); ?></p><?php get_search_form(); ?></div><!-- .entry-content --></article><!-- #post-0 --><?php endif; ?> Most of the extra stuff seen in the loop from TwentyEleven is there to add in additional page elements, including content navigation; there's also some code to control what happens if there are no posts to display. The nature of the WordPress loops means that theme authors can add in what they want to display and thereby customize and control the output of their site. As you would expect, the WordPress Codex includes an extensive discussion of the WordPress loop. Visit http://codex.wordpress.org/The_Loop. Accessing posts within the WordPress loop In this recipe, we look at how to create a custom template that includes your own implementation of the WordPress loop. Getting ready All you need to execute this recipe is your favorite code editor and access to the WordPress files on your server. You will also need a theme template file, which we will use to hold our modified WordPress loop. How to do it Let's assume you have created a custom template. Inside of that template you will want to include the WordPress loop. Follow these steps to add the loop, along with a little customization: Access the active theme files on your WordPress installation. Find a template file and open it for editing. If you're not sure which one to use, try the index.php file. Add to the file the following line of code, which will start the loop: <?php if ( have_posts() ) : while ( have_posts() ) : the_post();?> Next, let's display the post title, wrapped in an h2 tag: <h2><?php the_title() ?></h2> Let's also add a link to all posts by this author. Add this code immediately below the previous line: <?php the_author_posts_link() ?> For the post content, let's wrap it in a div for easy styling: <div class="thecontent"><?php the_content(); ?></div> Next, let's terminate the loop and add some code to display a message if there were no posts to display: <?php endwhile; else: ?><p>Oops! There are no posts to display.</p> Finally, let's put a complete stop to the loop by ending the if statement that began the code in step number 3, above: <?php endif; ?> Save the file. That's all there is to it. 
Your code should look like this: ?php if ( have_posts() ) : while ( have_posts() ) : the_post(); ?><h2><?php the_title() ?></h2><?php the_author_posts_link() ?><div class="thecontent"><?php the_content(); ?></div><?php endwhile; else: ?><p>Oops! There are no posts to display.</p><?php endif; ?> How it works... This basic piece of code first checks if there are posts in your site. If the answer is yes, the loop will repeat until every post title and their contents are displayed on the page. The post title is displayed using the_title(). The author's name and link are added with the_ author_posts_link() function. The content is displayed with the_content() function and styled by the div named thecontent. Finally, if there are no posts to display, the code will display the message Oops! There are no posts to display. There's more... In the preceding code you saw the use of two template tags: the_author_posts_link and the_content. These are just two examples of the many template tags available in WordPress. The tags make your life easier by reducing an entire function to just a short phrase. You can find a full list of the template tags at: http://codex.wordpress.org/ Template_Tags. The template tags can be broken down into a number of categories: General tags: The tags in this category cover general page elements common to most templates Author tags: Tags related to author information Bookmark tags: The tag to list bookmarks Category tags: Category, tag, and item description-related Comment tags: Tags covering the comment elements Link tags: Tags for links and permalinks Post tags: Tags for posts, excerpts, titles, and attachments Post Thumbnail tags: Tags that relate to the post thumbnails Navigation Menu tags: Tags for the nav menu and menu tree Retrieving posts from a specific category There are times when you might wish to display only those posts that belong to a specific category, for example, perhaps you want to show only the featured posts. With a small modification to the WordPress loop, it's easy to grab only those posts you want to display. In this recipe we introduce query_posts(),which can be used to control which posts are displayed by the loop. Getting ready To execute this recipe, you will need a code editor and access to the WordPress files on your server. You will also need a theme template file, which we will use to hold our modified WordPress loop. To keep this recipe short and to the point, we use the loop we created in the preceding recipe. How to do it... Let's create a situation where the loop shows only those posts assigned to the Featured category. To do this, you will need to work through two different processes. First, you need to find the category ID number of the Featured category. To do this, follow these steps: Log in to the Dashboard of your WordPress site. Click on the Posts menu. Click on the Categories option. Click on the category named Featured. Look at the address bar of your browser and you will notice that part of the string looks something like this:&tag_ID=9. On the site where we are working, the Featured category has the ID of 9. Category IDs vary from site to site. The ID used in this recipe may not be the same as the ID for your site! Next, we need to add a query to our loop that will extract only those posts that belong to the Featured category, that is, to those posts that belong to the category with the ID of 9. Follow these steps: Open the file that contains the loop. We'll use the same file we created in the preceding recipe. 
Find the loop:

    <?php if ( have_posts() ) : while ( have_posts() ) : the_post(); ?>
    <h2><?php the_title() ?></h2>
    <?php the_author_posts_link() ?>
    <div class="thecontent"><?php the_content(); ?></div>
    <?php endwhile; else: ?>
    <p>Oops! There are no posts to display.</p>
    <?php endif; ?>

Add the following line of code immediately above the loop:

    <?php query_posts($query_string.'&cat=9'); ?>

Save the file. That's all there is to it. If you visit your site, you will now see that the page displays only the posts that belong to the category with the ID of 9.

How it works...

The query_posts() function modifies the default loop. When used with the cat parameter, it allows you to specify one or more categories that you want to use as filters for the posts. For example:

query_posts($query_string.'&cat=5');: Get posts from the category with ID 5 only
query_posts($query_string.'&cat=5,6,9');: Get posts from the categories with IDs 5, 6, and 9
query_posts($query_string.'&cat=-3');: Get posts from all categories, except those from the category with ID 3

For more information, visit the WordPress Codex page on query_posts: http://codex.wordpress.org/Function_Reference/query_posts
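One caveat worth adding, not covered in the recipe above: because query_posts() alters the main query for the page, it is generally considered good practice to restore the original query once your customized loop has finished. A hedged sketch of how that might look:

    <?php
    // Illustrative only: restore the main query after a query_posts() loop.
    query_posts($query_string . '&cat=9');
    if (have_posts()) : while (have_posts()) : the_post(); ?>
        <h2><?php the_title() ?></h2>
    <?php endwhile; endif;
    wp_reset_query(); // undo the query_posts() modification
    ?>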
Getting Started with Modernizr

Packt
31 Jan 2013
4 min read
(For more resources on this subject, see here.)

Detect and design with features, not User agents (browsers)

What if you could build your website based on features instead of for the individual browser idiosyncrasies by manufacturer and version, making your website not just backward compatible but also forward compatible? You could quite potentially build a fully backward and forward compatible experience using a single code base across the entire UA spectrum. What I mean by this is instead of baking in an MSIE 7.0 version, an MSIE 8.0 version, a Firefox version, and so on of your website, and then using JavaScript to listen for, or sniff out, the browser version in play, it would be much simpler to instead build a single version of your website that supports all of the older generation, latest generation, and in many cases even future generation technologies, such as a video API, box-shadow, and first-of-type.

Think of your website as a full-fledged cable television network broadcasting over 130 channels, and your users as customers that sign up for only the most basic package available, of only 15 channels. Any time that they upgrade their cable (browser) package to one offering additional channels (features), they can begin enjoying them immediately because you have already been broadcasting to each one of those channels the entire time.

What happens now is that a proverbial line is drawn in the sand, and the site is built on the assumption that a particular set of features will exist and are thus supported. If not, fallbacks are in place to allow a smooth degradation of the experience as usual, but more importantly the site is built to adopt features that the browser will eventually have. Modernizr can detect CSS features, such as @font-face, box-shadow, and CSS gradients. It can also detect HTML5 elements, such as canvas, localstorage, and application cache. In all it can detect over 40 features for you, the developer. Another term commonly used to describe this technique is "progressive enhancement".

When the time finally comes that the user decides to upgrade their browser, the new features that the more recent browser version brings with it, for example text-shadow, will automatically be detected and picked up by your website, to be leveraged by your site with no extra work or code from you when they do. Without any additional work on your part, any text that is assigned text-shadow attributes will turn on at the flick of a switch so that the user's experience will smoothly and progressively be enhanced.

What is Modernizr? More importantly, why should you use it?

At its foundation, Modernizr is a feature-detection library powered by none other than JavaScript. Here is an example of conditionally adding CSS classes based on the browser, also known as the User Agent. When the browser parses this HTML document and finds a match, that class will be conditionally added to the page. Now that the browser version has been found, the developer can use CSS to alter the page based on the version of the browser that is used to parse the page. In the following example, IE 7, IE 8, and IE 9 all use a different method for a drop shadow attribute on an anchor element:

    /* IE7 Conditional class using UA sniffing */
    .lt-ie7 a{
        display: block;
        float: left;
        background: url( drop-shadow.gif );
    }
    .lt-ie8 a{
        display: inline-block;
        background: url( drop-shadow.png );
    }
    .lt-ie9 a{
        display: inline-block;
        box-shadow: 10px 5px 5px rgba(0,0,0,0.5);
    }

The problem with the conditional method of applying styles is that not only does it require more code, but it also leaves a burden on the developer to know what browser version is capable of a given feature, in this case box-shadow. Here is the same example using Modernizr. Note how Modernizr has done the heavy lifting for you, irrespective of whether or not the box-shadow feature is supported by the browser:

    /* Box shadow using Modernizr CSS feature detected classes */
    .box-shadow a{
        box-shadow: 10px 5px 5px rgba(0,0,0,0.5);
    }
    .no-box-shadow a{
        background: url( drop-shadow.gif );
    }

The Modernizr namespace

The Modernizr JavaScript library in your web page's header is a lightweight feature library that will test your browser for the features that it may or may not support, and store those results in a JavaScript namespace aptly named Modernizr. You can then use those test results as you build your website. From this point on, everything you need to know about what features your user's browser can and cannot support can be checked for in two separate places. Going back to the cable television analogy, you now know what channels (features) your user does and does not have, and can act accordingly.
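The CSS classes are only one of those two places; the same test results are also available as boolean properties on the Modernizr JavaScript object. A minimal sketch (my own example, assuming the box-shadow test is included in your Modernizr build):

    // Illustrative only: querying a feature test result in JavaScript
    if (Modernizr.boxshadow) {
        // the browser passed the box-shadow test; rely on CSS shadows
    } else {
        // provide a fallback, for example an image-based drop shadow
    }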
CSS3 – Selectors, Typography, Color Modes, and New Features

Packt
24 Aug 2015
9 min read
In this article by Ben Frain, the author of Responsive Web Design with HTML5 and CSS3 Second Edition, we'll cover the following topics: What are pseudo classes The :last-child selector The nth-child selectors The nth rules nth-based selection in responsive web design (For more resources related to this topic, see here.) CSS3 gives us more power to select elements based upon where they sit in the structure of the DOM. Let's consider a common design treatment; we're working on the navigation bar for a larger viewport and we want to have all but the last link over on the left. Historically, we would have needed to solve this problem by adding a class name to the last link so that we could select it, like this: <nav class="nav-Wrapper"> <a href="/home" class="nav-Link">Home</a> <a href="/About" class="nav-Link">About</a> <a href="/Films" class="nav-Link">Films</a> <a href="/Forum" class="nav-Link">Forum</a> <a href="/Contact-Us" class="nav-Link nav-LinkLast">Contact Us</a> </nav> This in itself can be problematic. For example, sometimes, just getting a content management system to add a class to a final list item can be frustratingly difficult. Thankfully, in those eventualities, it's no longer a concern. We can solve this problem and many more with CSS3 structural pseudo-classes. The :last-child selector CSS 2.1 already had a selector applicable for the first item in a list: div:first-child { /* Styles */ } However, CSS3 adds a selector that can also match the last: div:last-child { /* Styles */ } Let's look how that selector could fix our prior problem: @media (min-width: 60rem) { .nav-Wrapper { display: flex; } .nav-Link:last-child { margin-left: auto; } } There are also useful selectors for when something is the only item: :only-child and the only item of a type: :only-of-type. The nth-child selectors The nth-child selectors let us solve even more difficult problems. With the same markup as before, let's consider how nth-child selectors allow us to select any link(s) within the list. Firstly, what about selecting every other list item? We could select the odd ones like this: .nav-Link:nth-child(odd) { /* Styles */ } Or, if you wanted to select the even ones: .nav-Link:nth-child(even) { /* Styles */ } Understanding what nth rules do For the uninitiated, nth-based selectors can look pretty intimidating. However, once you've mastered the logic and syntax you'll be amazed what you can do with them. Let's take a look. CSS3 gives us incredible flexibility with a few nth-based rules: nth-child(n) nth-last-child(n) nth-of-type(n) nth-last-of-type(n) We've seen that we can use (odd) or (even) values already in an nth-based expression but the (n) parameter can be used in another couple of ways: As an integer; for example, :nth-child(2) would select the 
second item As a numeric expression; for example, :nth-child(3n+1) would start at 1 and then select every third element The integer based property is easy enough to understand, just enter the element number you want to select. The numeric expression version of the selector is the part that can be a little baffling for mere mortals. If math is easy for you, I apologize for this next section. For everyone else, let's break it down. Breaking down the math Let's consider 10 spans on a page (you can play about with these by looking at example_05-05): <span></span> <span></span> <span></span> <span></span> <span></span> <span></span> <span></span> <span></span> <span></span> <span></span> By default they will be styled like this: span { height: 2rem; width: 2rem; background-color: blue; display: inline-block; } As you might imagine, this gives us 10 squares in a line: OK, let's look at how we can select different ones with nth-based selections. For practicality, when considering the expression within the parenthesis, I start from the right. So, for example, if I want to figure out what (2n+3) will select, I start with the right-most number (the three here indicates the third item from the left) and know it will select every second element from that point on. So adding this rule: span:nth-child(2n+3) { color: #f90; border-radius: 50%; } Results in this in the browser: As you can see, our nth selector targets the third list item and then every subsequent second one after that too (if there were 100 list items, it would continue selecting every second one). How about selecting everything from the second item onwards? Well, although you could write :nth-child(1n+2), you don't actually need the first number 1 as unless otherwise stated, n is equal to 1. We can therefore just write :nth-child(n+2). Likewise, if we wanted to select every third element, rather than write :nth-child(3n+3), we can just write :nth-child(3n) as every third item would begin at the third item anyway, without needing to explicitly state it. The expression can also use negative numbers, for example, :nth-child(3n-2) starts at -2 and then selects every third item. You can also change the direction. By default, once the first part of the selection is found, the subsequent ones go down the elements in the DOM (and therefore from left to right in our example). However, you can reverse that with a minus. For example: span:nth-child(-2n+3) { background-color: #f90; border-radius: 50%; } This example finds the third item again, but then goes in the opposite direction to select every two elements (up the DOM tree and therefore from right to left in our example): Hopefully, the nth-based expressions are making perfect sense now? The nth-child and nth-last-child differ in that the nth-last-child variant works from the opposite end of the document tree. For example, :nth-last-child(-n+3) starts at 3 from the end and then selects all the items after it. Here's what that rule gives us in the browser: Finally, let's consider :nth-of-type and :nth-last-of-type. While the previous examples count any children regardless of type (always remember the nth-child selector targets all children at the same DOM level, regardless of classes), :nth-of-type and :nth-last-of-type let you be specific about the type of item you want to select. 
Consider the following markup (example_05-06): <span class="span-class"></span> <span class="span-class"></span> <span class="span-class"></span> <span class="span-class"></span> <span class="span-class"></span> <div class="span-class"></div> <div class="span-class"></div> <div class="span-class"></div> <div class="span-class"></div> <div class="span-class"></div> If we used the selector: .span-class:nth-of-type(-2n+3) { background-color: #f90; border-radius: 50%; } Even though all the elements have the same span-class, we will only actually be targeting the span elements (as they are the first type selected). Here is what gets selected: We will see how CSS4 selectors can solve this issue shortly. CSS3 doesn't count like JavaScript and jQuery! If you're used to using JavaScript and jQuery you'll know that it counts from 0 upwards (zero index based). For example, if selecting an element in JavaScript or jQuery, an integer value of 1 would actually be the second element. CSS3 however, starts at 1 so that a value of 1 is the first item it matches. nth-based selection in responsive web designs Just to close out this little section I want to illustrate a real life responsive web design problem and how we can use nth-based selection to solve it. Remember the horizontal scrolling panel from example_05-02? Let's consider how that might look in a situation where horizontal scrolling isn't possible. So, using the same markup, let's turn the top 10 grossing films of 2014 into a grid. For some viewports the grid will only be two items wide, as the viewport increases we show three items and at larger sizes still we show four. Here is the problem though. Regardless of the viewport size, we want to prevent any items on the bottom row having a border on the bottom. You can view this code at example_05-09. Here is how it looks with four items wide: See that pesky border below the bottom two items? That's what we need to remove. However, I want a robust solution so that if there were another item on the bottom row, the border would also be removed on that too. Now, because there are a different number of items on each row at different viewports, we will also need to change the nth-based selection at different viewports. For the sake of brevity, I'll show you the selection that matches four items per row (the larger of the viewports). You can view the code sample to see the amended selection at the different viewports. @media (min-width: 55rem) { .Item { width: 25%; } /* Get me every fourth item and of those, only ones that are in the last four items */ .Item:nth-child(4n+1):nth-last-child(-n+4), /* Now get me every one after that same collection too. */ .Item:nth-child(4n+1):nth-last-child(-n+4) ~ .Item { border-bottom: 0; } } You'll notice here that we are chaining the nth-based pseudo-class selectors. It's important to understand that the first doesn't filter the selection for the next, rather the element has to match each of the selections. For our preceding example, the first element has to be the first item of four and also be one of the last four. Nice! Thanks to nth-based selections we have a defensive set of rules to remove the bottom border regardless of the viewport size or number of items we are showing. Summary In this article, we've learned what are structural pseudo-classes. We've also learned what nth rules do. We have also showed the nth-based selection in responsive web design. 
Resources for Article:   Further resources on this subject: CSS3 Animation[article] A look into responsive design frameworks[article] Linking Dynamic Content from External Websites[article]
Nesting, Extend, Placeholders, and Mixins

Packt
05 Jun 2013
9 min read
(For more resources related to this topic, see here.)

Styling a site with Sass and Compass

Personally, while abstract examples are fine, I find it difficult to fully understand concepts until I put them into practice. With that in mind, at this point I'm going to amend the markup in our project's index.html file, so we have some markup to flex our growing Sass and Compass muscles on. We're going to build up a basic home page for this book. Hopefully, the basic layout and structure will be common enough to be of help. If you want to see how things turn out, you can point your browser at http://sassandcompass.com, as that's what we'll be building.

The markup for the home page is fairly simple. It consists of a header with links, navigation made up of list items, images and content, and a footer area. Pasting the home page markup here will span a few pages (and be extremely dull to read). Don't get too hung up on the specifics of the markup. Let me be clear: the actual code, selectors used, and the finished webpage that we'll create are not important. The Sass and Compass techniques and tools we use to create them are. You can download the code from the book's page at http://packtpub.com and also from http://sassandcompass.com

At this point, here's how the page looks in the browser:

Wow, what a looker! Thankfully, with Sass and Compass we're going to knock this into shape in no time at all. Notice that in the source code there are semantically named classes in the markup. Some people dislike this practice, but I have found that it makes it easier to create more modular and maintainable code. A full discussion on the reasoning behind using classes versus styling elements themselves is a little beyond the scope of this book. However, a good book on this topic is SMACSS by Jonathan Snook (http://smacss.com). It's not essential to adhere slavishly to the conventions he describes, but it's a great start in thinking about how to structure and write style sheets in general.

First, let's open the _normalize.scss partial file and amend the default styles for links (changing the default text-underline to a dotted bottom border) and remove the padding and margin for the ul tags. Now that I think about this, before we get knee-deep in code, a little more organization is called for.

Separating the layout from visuals

Before getting into nesting, @extend, placeholders, and mixins, it makes sense to create some partial files for organizing the styles. Rather than create a partial Sass file for each structural area (the header, footer, and navigation), and lump all the relevant visual styles inside, we can structure the code in a slightly more abstract manner. There is no right or wrong way to split up Sass files. However, it's worth looking at mature and respected projects such as Twitter Bootstrap (https://github.com/twitter/bootstrap) and Foundation from Zurb (https://github.com/zurb/foundation) to see how they organize their code.

We'll create a partial file called _base.scss that will contain some base styles:

code

Then a partial file called _layout.scss. That will only contain rules pertaining to visual layout and positioning of the main page areas (the previously mentioned header, footer, aside, section, and navigation). Here are the initial styles being added into the _layout.scss partial file:

code

There is nothing particular to Sass there, it's all standard CSS.
Debug help in the browser
Thanks to the popularity of Sass, there are now experimental features in browsers to make debugging Sass even easier. When inspecting an element with developer tools, the source file and line number are provided, making it easier to find the offending selector. For Chrome, here's a step-by-step explanation: http://benfra.in/1z1. Alternatively, if using Firefox, check out the FireSass extension: https://addons.mozilla.org/en-us/firefox/addon/firesass-for-firebug/

Let's create another partial file called _modules.scss. This will contain all the modular pieces of code. This means that, should we need to move the .testimonial section (which should be a modular section) in the source code, it's not necessary to move it from one partial file to another (which would be the case if the partial files were named according to their existing layout). The _modules.scss file will probably end up being the largest file in the Sass project, as it will contain all the aesthetics for the items on the page. However, the hope is that these modular pieces of code will be flexible enough to look good, regardless of the structural constraints defined in the _layout.scss partial file.

If working on a very large project, it may be worth splitting the modules into sub-folders. Just create a sub-folder and import the file. For example, if there was a partial called _callouts.scss in a sub-folder called modules, it could be imported using the following code:

[code listing]

Here are the contents of the amended styles.scss file:

[code listing]

A quick look at the HTML in the browser shows the basic layout styles applied:

[image]

With that done, let's look at how Sass's ability to nest styles can help us make modules of code.

What nesting is and how it facilitates modules of code
Nesting provides the ability to nest styles within one another. This provides a handy way to write mini blocks of modular code.

Nesting syntax
With a normal CSS style declaration, there is a selector, then an opening curly brace, then any number of property and value pairs, followed by a closing curly brace. For example:

[code listing]

Where Sass differs is that, before the closing curly brace, you can nest another rule within. What do I mean by that? Consider this example:

[code listing]

In the previous code, a color has been set for the link tag using a variable. Then the hover and focus states have been nested within the main anchor style with a different color set. Furthermore, styles for the visited and active states have been nested with an alternate color defined. To reference a variable, simply write the dollar sign ($) and then the name of the variable, for example, $variable. When compiled, this produces the following CSS:

[code listing]

Notice that Sass is also smart enough to know that the hex value #7FFF00 is actually the color called chartreuse, and when it has generated the CSS it has converted it to the CSS color name.

How can this be used practically? Let's look at the markup for the list of navigation links on the home page. Each currently looks like this:

[code listing]

I've omitted the additional list items and closing tags in the previous code for the sake of brevity. From a structural point of view, each list item contains an anchor link with a <b> tag and a <span> tag within. Here are the initial nested Sass styles that have been added into the _modules.scss partial to control that section:

[code listing]

Notice that the rules are being nested inside the class .chapter-summary.
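To give a feel for the pattern before unpicking it, here is a rough sketch of the sort of markup and nested styles being described. The .chapter-summary class and the <b> and <span> structure come from the text, but the link text, colors, and property values are placeholders of my own:

<li class="chapter-summary">
    <a href="#">
        <b>Chapter 1</b>
        <span>Getting started with Sass and Compass</span>
    </a>
</li>

The nested styles in _modules.scss might then look something like this:

.chapter-summary {
    a {
        display: block;
        padding: 10px;
        color: #7FFF00;

        &:hover {
            color: #558000;
        }

        &:visited {
            color: #66cc00;
        }

        b {
            display: block;
            font-weight: bold;
        }

        span {
            display: block;
            font-size: 0.75em;
        }
    }
}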
Nesting everything inside .chapter-summary in this way limits the scope of the nested styles, so they only apply to elements within a list item that has the class .chapter-summary. However, remember that if the styles can be re-used elsewhere, it isn't necessarily best practice to nest them, as they may become too specific.

The nesting of the selectors in this manner partially mirrors the structure of the HTML, so it's easy to see that the styles nested within only apply to elements that are similarly structured in the markup. We're starting with the outermost element that needs to be styled (the <li class="chapter-summary">) and then nesting the first element within that. In this instance, it's the anchor tag. Then further styles have been nested for the hover and visited states, along with the <b> and <span> tags.

I tend to add focus and active states at the end of a project, but that's a personal thing. If you're likely to forget them, by all means add them at the same time you add hover.

Here's the generated CSS from that block:

[code listing]

The nested styles are generated with a degree of specificity that limits their scope; they will only apply to elements within a <li class="chapter-summary">.

It's possible to nest all manner of styles within one another. Classes, IDs, pseudo-classes; Sass doesn't much care. For example, let's suppose the <li> elements need to have a dotted border, except for the last one. Let's take advantage of the last-child CSS pseudo-class in combination with nesting and add this section:

[code listing]

The parent selector
Notice that any pseudo-selector that needs to be nested in Sass is prefixed with the ampersand symbol (&), then a colon (:). The ampersand symbol has a special meaning when nesting: it always references the parent selector. It's perfect for defining pseudo-classes, as it effectively says "the parent plus this pseudo-element". However, there are other ways it can work too, for example, nesting a few related selectors and expressing different relationships between them:

[code listing]

This actually results in the following CSS:

[code listing]

We have effectively reversed the selectors while nesting by using the & parent selector.

Chaining selectors
It's also possible to create chained selectors using the parent selector. Consider this:

[code listing]

That will compile to this:

[code listing]

That will only select an element that has both HTML classes, namely selector-one and selector-two.

The parent selector is perfect for pseudo-selectors, and at odd times it's necessary to chain selectors, but beyond that I couldn't find much practical use for it until I saw this set of slides by Sass guru Brandon Mathis: http://speakerdeck.com/u/imathis/p/sass-compass-the-future-of-stylesheets-now. In this, he illustrates how to use the parent selector for nesting a Modernizr-relevant rule.
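As a rough sketch of that pattern, assume Modernizr is adding feature classes such as csstransforms or no-csstransforms to the html element (the exact class names depend on the detects included in your Modernizr build, and .promo is a hypothetical module of mine):

.promo {
    transform: rotate(-2deg);

    // When transforms aren't supported, Modernizr puts no-csstransforms on
    // the html element; the parent selector scopes a fallback rule to it
    .no-csstransforms & {
        border: 2px dashed #999;
    }
}

That compiles to roughly the following CSS:

.promo {
    transform: rotate(-2deg);
}

.no-csstransforms .promo {
    border: 2px dashed #999;
}

The appeal is that the fallback lives right next to the rule it relates to in the Sass, while the compiled selector is still scoped from the html-level class downwards.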