
How-To Tutorials - CMS and E-Commerce

830 Articles

HTML, PHP, and Content Posting in Drupal 6

Packt
15 Oct 2009
9 min read
Input Formats and Filters

For any given post, it is necessary to stipulate the type of content we will be posting. This is done through the Input format setting that is displayed when posting content to the site—assuming the user in question has sufficient permissions to post different types of content. In order to control what is and is not allowed, head on over to the Input formats link under Site configuration. This will bring up a list of the currently defined input formats, like this:

At the moment, you might be wondering why we need to go to all this trouble to decide whether people can add certain HTML tags to their content. The answer is that because both HTML and PHP are so powerful, it is not hard to subvert even fairly simple abilities for malicious purposes. For example, you might decide to allow users the ability to link to their homepages from their blogs. Using the ability to add a hyperlink to their postings, a malicious user could create a Trojan, virus, or some other harmful content, and link to it from an innocuous and friendly looking piece of HTML like this:

<p>Hi Friends! My <a href="link_to_trojan.exe">homepage</a> is a great place to meet and learn about my interests and hobbies. </p>

This snippet writes out a short paragraph with a link, supposedly to the author's homepage. In reality, the hyperlink reference attribute points to a Trojan, link_to_trojan.exe. That's just HTML! PHP can do a lot more damage—to the extent that if you don't have proper security or disaster-recovery policies in place, it is possible that your site can be rendered useless or destroyed entirely.

Security is the main reason why, as you may have noticed from the previous screenshot, anything other than Filtered HTML is unavailable for use by anyone except the administrator. By default, PHP is not even present as an option, let alone enabled. When thinking about what permissions to allow, it is important to reiterate the tenet: Never allow users more permissions than they require to complete their intended tasks!

As they stand, you might not find the input formats to your liking, and so Drupal provides some functionality to modify them. Click on the configure link adjacent to the Filtered HTML option, and this will bring up the following page:

The Edit tab provides the option to alter the Name property of the input format. The Roles section in this case cannot be changed, but as you will see when we come around to creating our own input format, roles can be assigned however you wish, to allow certain users to make use of an input format, or not. The final section provides a checklist of the types of Filters to apply when using this input format. In the previous screenshot, all have been selected, and this causes the input format to apply the:

- HTML corrector – corrects any broken HTML within postings to prevent undesirable results in the rest of your page.
- HTML filter – determines whether or not to strip or remove unwanted HTML.
- Line break converter – turns standard typed line breaks (that is, whenever a poster presses Enter) into standard HTML.
- URL filter – allows recognized links and email addresses to be clickable without having to write the HTML tags manually.

The line break converter is particularly useful for users because it means that they do not have to explicitly enter <br> or <p> HTML tags in order to display new lines or paragraph breaks—this can get tedious by the time you are writing your 400th blog entry.
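As a rough sketch of what the line break converter does (the exact markup Drupal generates may differ slightly), typed input such as:

First line.
Second line.

comes out approximately as:

<p>First line.<br />Second line.</p>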
If this is disabled, unless the user has the ability to add the relevant HTML tags, the content may end up looking like this:

Click on the Configure tab, at the top of the page, in order to begin working with the HTML filter. You should be presented with something like this:

The URL filter option is really there to help protect the formatting and layout of your site. It is possible to have quite long URLs these days, and because URLs do not contain spaces, there is nowhere to naturally split them up. As a result, a browser might do some strange things to cater for the long string, and whatever it does will make your site look odd. Decide how many characters the longest string should be and enter that number in the space provided. Remember that some content may appear in the sidebars, so you can't let strings get too long if the sidebars are supposed to be a fixed width.

The HTML filter section lets you specify whether to Strip disallowed tags, or escape them (Escape all tags causes any tags that are present in the post to be displayed as written). Remember that if all the tags are stripped from the content, you should enable the Line break converter so that users can at least break their content into paragraphs properly. Which tags are to be stripped is decided in the Allowed HTML tags section, where a list of all the tags that are to be allowed can be entered—anything else gets handled appropriately.

Selecting Display HTML help forces Drupal to provide HTML help for users posting content—try enabling and disabling this option and browsing to this relative URL in each case to see the difference: filter/tips. There is quite a bit of helpful information on HTML in the long filter tips, so take a moment to read over those. The filter tips can be reached whenever a user expands the Input format section of the content post and clicks on More information about formatting options at the bottom of that section.

Finally, the Spam link deterrent is a useful tool if the site is being used to bombard members with links to unsanctioned (and often unsavory) products. Spammers will use anonymous accounts to post junk (assuming anonymous users are allowed to post content), and enabling this deterrent for anonymous posts is an effective way of stopping them.

This is not the end of the story, because we also need to be able to create input formats in the event we require something that the default options can't cater for. For our example, there are several ways in which this can be done, but there are three main criteria that need to be satisfied before we can consider creating the page. We need to be able to:

- Upload image files and attach them to the post.
- Insert and display the image files within the body of the post.
- Use PHP in order to dynamically generate some of the content (this option is really only necessary to demonstrate how to embed PHP in a posting for future reference).

There are several methods for displaying image files within posts. The one we will discuss here does not require us to download and install any contribution modules, such as Img_assist. Instead, we will use HTML directly to achieve this; specifically, we use the <img> tag. Take a look at the previous screenshot that shows the configure page of the Filtered HTML input format. Notice that the <img> tag is not available for use. Let's create our own input format to cater for this, instead of modifying this default format. Before we do, first enable the PHP Filter module under Modules in Site building so that it can easily be used when the time comes.
With that change saved, you will find that there is now an extra option in the Filters section of each input format configuration page: It's not a good idea to enable the PHP evaluator for either of the default options, but adding it to one of our own input formats is fine to play with. Head on back to the main input formats page under Site configuration (notice that there is an additional input format available, called PHP code) and click on Add input format. This will bring up the same configuration type page we looked at earlier. It is easy to implement whatever new settings you want, based on how the input format is to be used. For our example, we need the ability to post images and make use of PHP scripts, so make the new input format as follows:

As we will need to make use of some PHP code a bit later on, we have enabled the PHP evaluator option, as well as prevented the use of this format by anyone but ourselves—normally, you would create a format for a group of users who require the modified posting abilities, but in this case, we are simply demonstrating how to create a new input format, so this is fine for now. PHP should not be enabled for anyone other than yourself, or a highly trusted administrator who needs it to complete his or her work.

Click Save configuration to add this new format to the list, and then click on the Configure tab to work on the HTML filter. The only change required between this input format and the default Filtered HTML, in terms of HTML, is the addition of the <img> and <div> tags, separated by a space, in the Allowed HTML tags list, as follows:

As things stand at the moment, you may run into problems with adding PHP code to any content postings. This is because some filters affect the function of others, so to be on the safe side, click on the Rearrange tab and set the PHP evaluator to execute first: Since the PHP evaluator's weight is the lowest, it is treated first, with all the others following suit. It's a safe bet that if you are getting unexpected results when using a certain type of filter, you need to come to this page and change the settings. We'll see a bit more about this in a moment. Now, the PHP evaluator gets dibs on the content and can properly process any PHP.

For the purposes of adding images and PHP to posts (as the primary user), this is all that is needed for now. Once satisfied with the settings, save the changes. Before building the new page, it is probably most useful to have a short discourse on HTML, because it is a requirement if you are to attempt more complex postings.
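To give a feel for what the new input format allows, here is a minimal sketch of a post body that would pass our filter and exercise the PHP evaluator (the image path is a placeholder, and the exact output depends on your site):

<div><img src="/files/photo.png" alt="An attached image" /></div>
<p>Today is <?php print date('l, j F Y'); ?>.</p>

With the PHP evaluator running first, the date() call is executed and its result is embedded in the page; the <div> and <img> tags then survive the HTML filter because we added them to the allowed list.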

FreeRADIUS: Working with Authentication Methods

Packt
08 Sep 2011
6 min read
Authentication is a process where we establish if someone is who he or she claims to be. The most common way is by a unique username and password. This article by Dirk van der Walt, author of FreeRADIUS Beginner's Guide, teaches authentication methods and how they work. Extensible Authentication Protocol (EAP) is covered later in a dedicated article. In this article we shall:

- Discuss the PAP, CHAP, and MS-CHAP authentication protocols
- See when and how authentication is done in FreeRADIUS
- Explore ways to store passwords
- Look at other authentication methods

Authentication protocols

This section will give you background on three common authentication protocols. These protocols involve the supply of a username and password. The radtest program uses the Password Authentication Protocol (PAP) by default when testing authentication. PAP is not the only authentication protocol but probably the most generic and widely used. Authentication protocols you should know about are PAP, CHAP, and MS-CHAP. Each of these protocols involves a username and password. The next article, on the Extensible Authentication Protocol (EAP), will introduce us to more authentication protocols.

An authentication protocol is typically used on the data link layer that connects the client with the NAS. The network layer will only be established after the authentication is successful. The NAS acts as a broker to forward the requests from the user to the RADIUS server. The data link layer and network layer are layers inside the Open Systems Interconnect model (OSI model). A discussion of this model is almost guaranteed to be found in any book on networking: http://en.wikipedia.org/wiki/OSI_model

PAP

PAP was one of the first protocols used to facilitate the supply of a username and password when making point-to-point connections. With PAP, the NAS takes the PAP ID and password and sends them in an Access-Request packet as the User-Name and User-Password. PAP is simpler compared to CHAP and MS-CHAP because the NAS simply hands the RADIUS server a username and password, which are then checked. This username and password come directly from the user through the NAS to the server in a single action.

Although PAP transmits passwords in clear text, using it should not always be frowned upon. This password is only in clear text between the user and the NAS. The user's password will be encrypted when the NAS forwards the request to the RADIUS server. If PAP is used inside a secure tunnel, it is as secure as the tunnel. This is similar to when your credit card details are tunnelled inside an HTTPS connection and delivered to a secure web server. HTTPS stands for Hypertext Transfer Protocol Secure and is a web standard that uses Secure Socket Layer/Transport Layer Security (SSL/TLS) to create a secure channel over an insecure network. Once this secure channel is established, we can transfer sensitive data, like credit card details, through it. HTTPS is used daily to secure many millions of transactions over the Internet. See the following schematic of a typical captive portal configuration.

The following table shows the RADIUS AVPs involved in a PAP request. As you can see, the value of User-Password is encrypted between the NAS and the RADIUS server. Transporting the user's password from the user to the NAS may be a security risk if it can be captured by a third party.

CHAP

CHAP stands for Challenge-Handshake Authentication Protocol and was designed as an improvement to PAP.
It prevents you from transmitting a cleartext password. CHAP was created in the days when dial-up modems were popular and the concern about PAP's cleartext passwords was high. After a link is established to the NAS, the NAS generates a random challenge and sends it to the user. The user then responds to this challenge by returning a one-way hash calculated on an identifier (sent along with the challenge), the challenge, and the user's password. The user's response is then used by the NAS to create an Access-Request packet, which is sent to the RADIUS server. Depending on the reply from the RADIUS server, the NAS will return CHAP Success or CHAP Failure to the user.

The NAS can also request at random intervals that the authentication process be repeated by sending a new challenge to the user. This is another reason why it is considered more secure than PAP. One major drawback of CHAP is that although the password is transmitted encrypted, the password source has to be in clear text for FreeRADIUS to perform password verification. The FreeRADIUS FAQ discusses the dangers of transmitting a cleartext password compared to storing all the passwords in clear text on the server. The following table shows the RADIUS AVPs involved in a CHAP request:

MS-CHAP

MS-CHAP is a challenge-handshake authentication protocol created by Microsoft. There are two versions, MS-CHAP version 1 and MS-CHAP version 2. The challenge sent by the NAS is identical in format to the standard CHAP challenge packet. This includes an identifier and an arbitrary challenge. The response from the user is also identical in format to the standard CHAP response packet. The only difference is the format of the Value field. The Value field is sub-formatted to contain MS-CHAP-specific fields. One of the fields (NT-Response) contains the username and password in a very specific encrypted format. The reply from the user will be used by the NAS to create an Access-Request packet, which is sent to the RADIUS server. Depending on the reply from the RADIUS server, the NAS will return a Success Packet or Failure Packet to the user.

The RADIUS server is not involved with the sending out of the challenge. If you sniff the RADIUS traffic between a NAS and a RADIUS server, you can confirm that there is only an Access-Request followed by an Access-Accept or Access-Reject. The sending out of a challenge to the user and receiving a response from her or him is between the NAS and the user.

MS-CHAP also has some enhancements that are not part of CHAP, like the user's ability to change his or her password or the inclusion of more descriptive error messages. The protocol is tightly integrated with the LAN Manager and NT Password hashes. FreeRADIUS will convert a user's cleartext password to an LM-Password and an NT-Password in order to determine if the password hash that came out of the MS-CHAP request is correct. Although there are known weaknesses with MS-CHAP, it remains widely used and very popular. Never say never. If your current requirement for the RADIUS deployment does not include the use of MS-CHAP, rather cater for the possibility that one day you may use it. The most popular EAP protocol makes use of MS-CHAP. EAP is crucial in Wi-Fi authentication. Because MS-CHAP is vendor-specific, VSAs instead of AVPs are part of the Access-Request between the NAS and RADIUS server. This is used together with the User-Name AVP. Now that we know more about the authentication protocols, let's see how FreeRADIUS handles them.
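A quick way to see these protocols in action is radtest, mentioned earlier, which defaults to PAP but can be switched with its -t flag in FreeRADIUS 2.x. A sketch, assuming a local server with the stock testing123 client secret and a test user bob:

radtest bob passwd 127.0.0.1 0 testing123            # PAP (the default)
radtest -t chap bob passwd 127.0.0.1 0 testing123    # CHAP
radtest -t mschap bob passwd 127.0.0.1 0 testing123  # MS-CHAP

Running these while watching the server's debug output (radiusd -X) should show a User-Password AVP for PAP, a CHAP-Password AVP for CHAP, and Microsoft VSAs for MS-CHAP.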

How to configure MSDTC and the firewall for the distributed WCF service

Packt
21 Jun 2010
4 min read
Understanding the distributed transaction support of a WCF service

As we have seen, distributed transaction support of a WCF service depends on the binding of the service, the operation contract attribute, the operation implementation behavior, and the client applications. The following table shows some possible combinations of WCF-distributed transaction support:

| Binding permits transaction flow | Client flows transaction | Service contract opts in transaction | Service operation requires transaction scope | Possible result |
|---|---|---|---|---|
| True | Yes | Allowed or Mandatory | True | Service executes under the flowed-in transaction |
| True or False | No | Allowed | True | Service creates and executes within a new transaction |
| True | Yes or No | Allowed | False | Service executes without a transaction |
| True or False | No | Mandatory | True or False | SOAP exception |
| True | Yes | NotAllowed | True or False | SOAP exception |

Testing the distributed transaction support of the WCF service

Now that we have changed the service to support distributed transactions and let the client propagate the transaction to the service, we will test this. We will propagate a transaction from the client to the service, test the multiple database support of the WCF service, and discuss the Distributed Transaction Coordinator and firewall settings for the distributed transaction support of the WCF service.

Configuring the Distributed Transaction Coordinator

In a subsequent section, we will call two services to update two databases on two different computers. As these two updates are wrapped within one distributed transaction, Microsoft Distributed Transaction Coordinator (MSDTC) will be activated to manage this distributed transaction. If MSDTC is not started or configured properly, the distributed transaction will not be successful. In this section, we will explain how to configure MSDTC on both machines. You can follow these steps to configure MSDTC on your local and remote machines:

1. Open Component Services from Control Panel | Administrative Tools.
2. In the Component Services window, expand Component Services, then Computers, and then right-click on My Computer. Select Properties from the context menu.
3. On the My Computer Properties window, click on the MSDTC tab. If this machine is running Windows XP, click on the Security Configuration button. If this machine is running Windows 7, verify that Use local coordinator is checked and then close the My Computer Properties window.
4. Expand the Distributed Transaction Coordinator under the My Computer node, right-click on Local DTC, select Properties from the context menu, and then from the Local DTC Properties window, click on the Security tab. You should now see the Security Configuration for DTC on this machine. Set it as in the following screenshot.

Remember you have to make these changes for both your local and remote machines. You have to restart the MSDTC service after you have changed your MSDTC settings, for the changes to take effect. Also, to simplify our example, we have chosen the No Authentication Required option. You should be aware that not needing authentication is a serious security issue in production. For more information about WCF security, you can go to the MSDN WCF security website at this address: MSDN Library.

Configuring the firewall

Even though the Distributed Transaction Coordinator has been enabled, the distributed transaction may still fail if the firewall is turned on and hasn't been set up properly for MSDTC. To set up the firewall for MSDTC, follow these steps:

1. Open the Windows Firewall window from the Control Panel.
2. If the firewall is not turned on, you can skip this section.
3. Go to the Allow a program or feature through Windows Firewall window (for Windows XP, you need to allow exceptions and go to the Exceptions tab on the Windows Firewall window).
4. Add Distributed Transaction Coordinator to the program list (windows\system32\msdtc.exe) if it is not already on the list. Make sure the checkbox before this item is checked.

Again, you need to change your firewall setting for both your local and remote machines. Now the firewall will allow msdtc.exe to go through, so our next test won't fail due to firewall restrictions. You may have to restart IIS after you have changed your firewall settings. In some cases you may also have to stop and then restart your firewall for the changes to take effect.
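The combinations in the earlier table map directly onto a few attributes and a binding setting in code. The following is a minimal sketch, not the sample project's actual contract (names such as IOrderService are placeholders):

using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    // "Service contract opts in transaction": Allowed (or Mandatory)
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void UpdateOrder(int orderId);
}

public class OrderService : IOrderService
{
    // "Service operation requires transaction scope": True
    [OperationBehavior(TransactionScopeRequired = true)]
    public void UpdateOrder(int orderId)
    {
        // Database work done here enlists in the flowed-in transaction,
        // or in a new transaction if the client did not flow one.
    }
}

"Binding permits transaction flow" corresponds to setting transactionFlow="true" on a binding such as wsHttpBinding in the configuration file.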

Manipulating jQuery tables

Packt
24 Oct 2009
20 min read
In this article by Karl Swedberg and Jonathan Chaffer, we will use an online bookstore as our model website, but the techniques we cook up can be applied to a wide variety of other sites as well, from weblogs to portfolios, from market-facing business sites to corporate intranets. In this article, we will use jQuery to apply techniques for increasing the readability, usability, and visual appeal of tables, though we are not dealing with tables used for layout and design. In fact, as the web standards movement has become more pervasive in the last few years, table-based layout has increasingly been abandoned in favor of CSS‑based designs. Although tables were often employed as a somewhat necessary stopgap measure in the 1990s to create multi-column and other complex layouts, they were never intended to be used in that way, whereas CSS is a technology expressly created for presentation. But this is not the place for an extended discussion on the proper role of tables. Suffice it to say that in this article we will explore ways to display and interact with tables used as semantically marked up containers of tabular data. For a closer look at applying semantic, accessible HTML to tables, a good place to start is Roger Johansson's blog entry, Bring on the Tables, at www.456bereastreet.com/archive/200410/bring_on_the_tables/. Some of the techniques we apply to tables in this article can be found in plug‑ins such as Christian Bach's Table Sorter. For more information, visit the jQuery Plug‑in Repository at http://jQuery.com/plugins.

Sorting

One of the most common tasks performed with tabular data is sorting. In a large table, being able to rearrange the information that we're looking for is invaluable. Unfortunately, this helpful operation is one of the trickiest to put into action. We can achieve the goal of sorting in two ways, namely Server-Side Sorting and JavaScript Sorting.

Server-Side Sorting

A common solution for data sorting is to perform it on the server side. Data in tables often comes from a database, which means that the code that pulls it out of the database can request it in a given sort order (using, for example, the SQL language's ORDER BY clause). If we have server-side code at our disposal, it is straightforward to begin with a reasonable default sort order. Sorting is most useful when the user can determine the sort order. A common idiom is to make the headers of sortable columns into links. These links can go to the current page, but with a query string appended indicating the column to sort by:

<table id="my-data">
  <tr>
    <th class="name"><a href="index.php?sort=name">Name</a></th>
    <th class="date"><a href="index.php?sort=date">Date</a></th>
  </tr>
  ...
</table>

The server can react to the query string parameter by returning the database contents in a different order.

Preventing Page Refreshes

This setup is simple, but requires a page refresh for each sort operation. As we have seen, jQuery allows us to eliminate such page refreshes by using AJAX methods. If we have the column headers set up as links as before, we can add jQuery code to change those links into AJAX requests:

$(document).ready(function() {
  $('#my-data .name a').click(function() {
    $('#my-data').load('index.php?sort=name&type=ajax');
    return false;
  });
  $('#my-data .date a').click(function() {
    $('#my-data').load('index.php?sort=date&type=ajax');
    return false;
  });
});

Now when the anchors are clicked, jQuery sends an AJAX request to the server for the same page.
We add an additional parameter to the query string so that the server can determine that an AJAX request is being made. The server code can be written to send back only the table itself, and not the surrounding page, when this parameter is present. This way we can take the response and insert it in place of the table. This is an example of progressive enhancement. The page works perfectly well without any JavaScript at all, as the links for server-side sorting are still present. When JavaScript is present, however, the AJAX hijacks the page request and allows the sort to occur without a full page load.

JavaScript Sorting

There are times, though, when we either don't want to wait for server responses when sorting, or don't have a server-side scripting language available to us. A viable alternative in this case is to perform the sorting entirely on the browser using JavaScript client-side scripting. For example, suppose we have a table listing books, along with their authors, release dates, and prices:

<table class="sortable">
  <thead>
    <tr>
      <th></th>
      <th>Title</th>
      <th>Author(s)</th>
      <th>Publish&nbsp;Date</th>
      <th>Price</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>
        <img src="../covers/small/1847192386.png" width="49"
             height="61" alt="Building Websites with Joomla! 1.5 Beta 1" />
      </td>
      <td>Building Websites with Joomla! 1.5 Beta 1</td>
      <td>Hagen Graf</td>
      <td>Feb 2007</td>
      <td>$40.49</td>
    </tr>
    <tr>
      <td><img src="../covers/small/1904811620.png" width="49"
               height="61" alt="Learning Mambo: A Step-by-Step Tutorial
               to Building Your Website" /></td>
      <td>Learning Mambo: A Step-by-Step Tutorial to Building Your
          Website</td>
      <td>Douglas Paterson</td>
      <td>Dec 2006</td>
      <td>$40.49</td>
    </tr>
    ...
  </tbody>
</table>

We'd like to turn the table headers into buttons that sort by their respective columns. Let us look into ways of doing this.

Row Grouping Tags

Note our use of the <thead> and <tbody> tags to segment the data into row groupings. Many HTML authors omit these implied tags, but they can prove useful in supplying us with more convenient CSS selectors to use. For example, suppose we wish to apply typical even/odd row striping to this table, but only to the body of the table:

$(document).ready(function() {
  $('table.sortable tbody tr:odd').addClass('odd');
  $('table.sortable tbody tr:even').addClass('even');
});

This will add alternating colors to the table, but leave the header untouched:

Basic Alphabetical Sorting

Now let's perform a sort on the Title column of the table. We'll need a class on the table header cell so that we can select it properly:

<thead>
  <tr>
    <th></th>
    <th class="sort-alpha">Title</th>
    <th>Author(s)</th>
    <th>Publish&nbsp;Date</th>
    <th>Price</th>
  </tr>
</thead>

To perform the actual sort, we can use JavaScript's built-in .sort() method. It does an in‑place sort on an array, and can take a function as an argument. This function compares two items in the array and should return a positive or negative number depending on the result.
Our initial sort routine looks like this:

$(document).ready(function() {
  $('table.sortable').each(function() {
    var $table = $(this);
    $('th', $table).each(function(column) {
      if ($(this).is('.sort-alpha')) {
        $(this).addClass('clickable').hover(function() {
          $(this).addClass('hover');
        }, function() {
          $(this).removeClass('hover');
        }).click(function() {
          var rows = $table.find('tbody > tr').get();
          rows.sort(function(a, b) {
            var keyA = $(a).children('td').eq(column).text().toUpperCase();
            var keyB = $(b).children('td').eq(column).text().toUpperCase();
            if (keyA < keyB) return -1;
            if (keyA > keyB) return 1;
            return 0;
          });
          $.each(rows, function(index, row) {
            $table.children('tbody').append(row);
          });
        });
      }
    });
  });
});

The first thing to note is our use of the .each() method to make iteration explicit. Even though we could bind a click handler to all headers with the sort-alpha class just by calling $('table.sortable th.sort-alpha').click(), this wouldn't allow us to easily capture a crucial bit of information—the column index of the clicked header. Because .each() passes the iteration index into its callback function, we can use it to find the relevant cell in each row of the data later. Once we have found the header cell, we retrieve an array of all of the data rows. This is a great example of how .get() is useful in transforming a jQuery object into an array of DOM nodes; even though jQuery objects act like arrays in many respects, they don't have any of the native array methods available, such as .sort().

With .sort() at our disposal, the rest is fairly straightforward. The rows are sorted by comparing the textual contents of the relevant table cell. We know which cell to look at because we captured the column index in the enclosing .each() call. We convert the text to uppercase because string comparisons in JavaScript are case-sensitive and we wish our sort to be case-insensitive. Finally, with the array sorted, we loop through the rows and reinsert them into the table. Since .append() does not clone nodes, this moves them rather than copying them. Our table is now sorted.

This is an example of progressive enhancement's counterpart, graceful degradation. Unlike with the AJAX solution discussed earlier, we cannot make the sort work without JavaScript, as we are assuming the server has no scripting language available to it in this case. The JavaScript is required for the sort to work, so by adding the "clickable" class only through code, we make sure not to indicate with the interface that sorting is even possible unless the script can run. The page degrades into one that is still functional, albeit without sorting available. We have moved the actual rows around, hence our alternating row colors are now out of whack: we need to reapply the row colors after the sort is performed.
We can do this by pulling the coloring code out into a function that we call when needed:

$(document).ready(function() {
  var alternateRowColors = function($table) {
    $('tbody tr:odd', $table).removeClass('even').addClass('odd');
    $('tbody tr:even', $table).removeClass('odd').addClass('even');
  };

  $('table.sortable').each(function() {
    var $table = $(this);
    alternateRowColors($table);
    $('th', $table).each(function(column) {
      if ($(this).is('.sort-alpha')) {
        $(this).addClass('clickable').hover(function() {
          $(this).addClass('hover');
        }, function() {
          $(this).removeClass('hover');
        }).click(function() {
          var rows = $table.find('tbody > tr').get();
          rows.sort(function(a, b) {
            var keyA = $(a).children('td').eq(column).text().toUpperCase();
            var keyB = $(b).children('td').eq(column).text().toUpperCase();
            if (keyA < keyB) return -1;
            if (keyA > keyB) return 1;
            return 0;
          });
          $.each(rows, function(index, row) {
            $table.children('tbody').append(row);
          });
          alternateRowColors($table);
        });
      }
    });
  });
});

This corrects the row coloring after the fact, fixing our issue.

The Power of Plug-ins

The alternateRowColors() function that we wrote is a perfect candidate to become a jQuery plug-in. In fact, any operation that we wish to apply to a set of DOM elements can easily be expressed as a plug-in. In this case, we only need to modify our existing function a little bit:

jQuery.fn.alternateRowColors = function() {
  $('tbody tr:odd', this).removeClass('even').addClass('odd');
  $('tbody tr:even', this).removeClass('odd').addClass('even');
  return this;
};

We have made three important changes to the function. It is defined as a new property of jQuery.fn rather than as a standalone function. This registers the function as a plug-in method. We use the keyword this as a replacement for our $table parameter. Within a plug-in method, this refers to the jQuery object that is being acted upon. Finally, we return this at the end of the function. The return value makes our new method chainable. More information on writing jQuery plug-ins can be found in Chapter 10 of our book Learning jQuery. There we will discuss making a plug-in ready for public consumption, as opposed to the small example here that is only to be used by our own code. With our new plug-in defined, we can call $table.alternateRowColors(), which is a more natural jQuery syntax, instead of alternateRowColors($table).

Performance Concerns

Our code works, but is quite slow. The culprit is the comparator function, which is performing a fair amount of work. This comparator will be called many times during the course of a sort, which means that every extra moment it spends on processing will be magnified. The actual sort algorithm used by JavaScript is not defined by the standard. It may be a simple sort like a bubble sort (worst case of Θ(n²) in computational complexity terms) or a more sophisticated approach like quick sort (which is Θ(n log n) on average). In either case, doubling the number of items increases the number of times the comparator function is called by more than double. The remedy for our slow comparator is to pre-compute the keys for the comparison.
We begin with the slow sort function:

rows.sort(function(a, b) {
  keyA = $(a).children('td').eq(column).text().toUpperCase();
  keyB = $(b).children('td').eq(column).text().toUpperCase();
  if (keyA < keyB) return -1;
  if (keyA > keyB) return 1;
  return 0;
});
$.each(rows, function(index, row) {
  $table.children('tbody').append(row);
});

We can pull out the key computation and do that in a separate loop:

$.each(rows, function(index, row) {
  row.sortKey = $(row).children('td').eq(column).text().toUpperCase();
});
rows.sort(function(a, b) {
  if (a.sortKey < b.sortKey) return -1;
  if (a.sortKey > b.sortKey) return 1;
  return 0;
});
$.each(rows, function(index, row) {
  $table.children('tbody').append(row);
  row.sortKey = null;
});

In the new loop, we are doing all of the expensive work and storing the result in a new property. This kind of property, attached to a DOM element but not a normal DOM attribute, is called an expando. This is a convenient place to store the key since we need one per table row element. Now we can examine this attribute within the comparator function, and our sort is markedly faster. We set the expando property to null after we're done with it to clean up after ourselves. This is not necessary in this case, but is a good habit to establish because expando properties left lying around can be the cause of memory leaks. For more information, see Appendix C.

Finessing the Sort Keys

Now we want to apply the same kind of sorting behavior to the Author(s) column of our table. By adding the sort-alpha class to its table header cell, the Author(s) column can be sorted with our existing code. But ideally authors should be sorted by last name, not first. Since some books have multiple authors, and some authors have middle names or initials listed, we need outside guidance to determine what part of the text to use as our sort key. We can supply this guidance by wrapping the relevant part of the cell in a <span> tag:

<tr>
  <td>
    <img src="../covers/small/1847192386.png" width="49" height="61"
         alt="Building Websites with Joomla! 1.5 Beta 1" /></td>
  <td>Building Websites with Joomla! 1.5 Beta 1</td>
  <td>Hagen <span class="sort-key">Graf</span></td>
  <td>Feb 2007</td>
  <td>$40.49</td>
</tr>
<tr>
  <td>
    <img src="../covers/small/1904811620.png" width="49" height="61"
         alt="Learning Mambo: A Step-by-Step Tutorial to Building
              Your Website" /></td>
  <td>
    Learning Mambo: A Step-by-Step Tutorial to Building Your Website
  </td>
  <td>Douglas <span class="sort-key">Paterson</span></td>
  <td>Dec 2006</td>
  <td>$40.49</td>
</tr>
<tr>
  <td>
    <img src="../covers/small/1904811299.png" width="49" height="61"
         alt="Moodle E-Learning Course Development" /></td>
  <td>Moodle E-Learning Course Development</td>
  <td>William <span class="sort-key">Rice</span></td>
  <td>May 2006</td>
  <td>$35.99</td>
</tr>

Now we have to modify our sorting code to take this tag into account, without disturbing the existing behavior for the Title column, which is working well.
By prepending the marked sort key to the key we have previously calculated, we can sort first on the last name if it is called out, but on the whole string as a fallback:

$.each(rows, function(index, row) {
  var $cell = $(row).children('td').eq(column);
  row.sortKey = $cell.find('.sort-key').text().toUpperCase()
                + ' ' + $cell.text().toUpperCase();
});

Sorting by the Author(s) column now uses the last name. If two last names are identical, the sort uses the entire string as a tiebreaker for positioning.
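The same pre-computed-key idea extends to columns that shouldn't sort alphabetically. As a sketch (the sort-numeric class and the parsing here are our own illustration, not part of this excerpt), the Price column could be handled by stripping the currency symbol and comparing numbers:

$.each(rows, function(index, row) {
  var text = $(row).children('td').eq(column).text();
  // "$40.49" becomes 40.49 once the leading non-numeric characters go
  row.sortKey = parseFloat(text.replace(/^[^\d.]*/, ''));
});
rows.sort(function(a, b) {
  return a.sortKey - b.sortKey;
});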

Clojure for Domain-specific Languages - Design Concepts with Clojure

Packt
13 Dec 2013
3 min read
Every function is a little program

When I first started getting deep into Clojure development, my friend Tom Marble taught me a very good lesson with a single sentence. I'm not sure if he's the originator of this idea, but he told me to think of writing functions as though "every function is a small program". I'm not really sure what I thought about functions before I heard this, but it all made sense the very moment he told me this. Why write a function as if it were its own program? Because both a function and a program are created to handle a specific set of problems, and this method of thinking allows us to break down our problems into a simpler group of problems. Each set of problems might only need a very limited collection of functions to solve them, so to make a function that fits only a single problem isn't really any different from writing a small program to get the very same result. Some might even call this the Unix philosophy, in the sense that you're trying to build small, extendable, simple, and modular code.

A pure function

What are the benefits of a program-like function? There are many benefits to this approach of development, but the two clear advantages are that the debugging process can be simplified with the decoupling of tasks, and that this approach can make our code more modular. This approach also allows us to better build pure functions. A pure function isn't dependent on any variable outside the function. Anything other than the arguments passed to the function can't be realized by a pure function. Because our program will cause side effects as a result of execution, not all of our functions can be truly pure. This doesn't mean we should forget about trying to develop program-like functions. Our code inherently becomes more modular because pure functions can survive on their own. This is key when needing to build flexible, extendable, and reusable code components.

Floor to roof development

Also known as bottom-up development, this is the concept of building basic low-level pieces of a program and then combining them to build the whole program. This approach leads to more reusable code that can be more easily tested, because each part of the program acts as an individual building block and doesn't require a large portion of the program to be completed to run a test.

Each function only does one thing

When a function is written to perform a specific task, that function shouldn't do anything unrelated to the original problem it's needed to solve. For example, if you were to write a function named parse-xml, the function should be able to act as a program that can only parse XML data. If the example function does anything else other than parse lines of XML input, it is probably badly designed and will cause confusion when trying to debug errors in our programs. This practice will help us keep our functions to a more reasonable size and can also help simplify the debugging process.
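To make the purity distinction concrete, here is a small sketch (the names and the VAT example are ours, not from the book):

;; Pure: the result depends only on the arguments.
(defn add-vat [price rate]
  (* price (+ 1 rate)))

;; Impure: reads state outside the function and performs I/O.
(def ^:dynamic *rate* 0.2)

(defn add-vat! [price]
  (println "calculating...")   ; side effect
  (* price (+ 1 *rate*)))      ; hidden dependency on *rate*

(add-vat 100 0.2) always returns 120.0, while add-vat! can return different results for the same argument, which is exactly what makes pure functions easier to test and reuse.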

EAV model

Packt
10 Aug 2015
11 min read
In this article by Allan MacGregor, author of the book Magento PHP Developer's Guide - Second Edition, we cover details about EAV models, their usefulness in retrieving data, and the advantages they provide to merchants and developers. EAV stands for entity, attribute, and value and is probably the most difficult concept for new Magento developers to grasp. While the EAV concept is not unique to Magento, it is rarely implemented on modern systems. Additionally, the Magento implementation is not a simple one.

What is EAV?

In order to understand what EAV is and what its role within Magento is, we need to break down the parts of the EAV model:

- Entity: This represents the data items (objects) inside Magento: products, customers, categories, and orders. Each entity is stored in the database with a unique ID.
- Attribute: These are our object properties. Instead of having one column per attribute on the product table, attributes are stored on separate sets of tables.
- Value: As the name implies, it is simply the value linked to a particular attribute.

This data model is the secret behind Magento's flexibility and power, allowing entities to add and remove new properties without having to make any changes to the code, templates, or the database schema. This model can be seen as a vertical way of growing our database (new attributes add more rows), while the traditional model involves a horizontal growth pattern (new attributes add more columns), which would result in a schema redesign every time new attributes are added. The EAV model not only allows for the fast evolution of our database, but is also more effective because it only works with non-empty attributes, avoiding the need to reserve additional space in the database for null values.

If you are interested in exploring and learning more about the Magento database structure, I highly recommend visiting www.magereverse.com.

Adding a new product attribute is as simple as going to the Magento backend and specifying the new attribute type, be it color, size, brand, or anything else. The opposite is true as well, and we can get rid of unused attributes on our product or customer models. For more information on managing attributes, visit http://www.magentocommerce.com/knowledge-base/entry/how-do-attributes-work-in-magento.

The Magento Community Edition currently has eight different types of EAV objects:

- Customer
- Customer Address
- Products
- Product Categories
- Orders
- Invoices
- Credit Memos
- Shipments

The Magento Enterprise Edition has one additional type called RMA item, which is part of the Return Merchandise Authorization (RMA) system.

All this flexibility and power is not free; there is a price to pay. Implementing the EAV model results in having our entity data distributed across a large number of tables. For example, just the Product Model is distributed across around 40 different tables. The following diagram only shows a few of the tables involved in saving the information of Magento products:

Other major downsides of EAV are the loss of performance while retrieving large collections of EAV objects and an increase in database query complexity. As the data is more fragmented (stored in more tables), selecting a single record involves several joins. One way Magento works around this downside of EAV is by making use of indexes and flat tables. For example, Magento can save all the product information into the flat_catalog table for easier and faster access.
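To make the vertical-versus-horizontal contrast concrete, here is a hedged SQL sketch; the table and column names are invented for illustration and are not Magento's actual schema:

-- Traditional (horizontal) model: one column per attribute;
-- adding an attribute means ALTER TABLE and a schema redesign.
CREATE TABLE product_flat (
    entity_id INT PRIMARY KEY,
    sku   VARCHAR(64),
    name  VARCHAR(255),
    color VARCHAR(32)
);

-- EAV (vertical) model: adding an attribute only adds rows.
INSERT INTO eav_attribute (attribute_code) VALUES ('color');
INSERT INTO product_value (entity_id, attribute_id, value)
VALUES (1, 101, 'red');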
Let's continue using Magento products as our example and manually build the query to retrieve a single product. If you have phpMyAdmin or MySQL Workbench installed on your development environment, you can experiment with the following queries. Each can be downloaded from the phpMyAdmin website at http://www.phpmyadmin.net/ and the MySQL Workbench website at http://www.mysql.com/products/workbench/.

The first table that we need to use is the catalog_product_entity table. We can consider this our main product EAV table since it contains the main entity records for our products. Let's query the table by running the following SQL query:

SELECT * FROM `catalog_product_entity`;

The table contains the following fields:

- entity_id: This is our product's unique identifier, used internally by Magento.
- entity_type_id: Magento has several different types of EAV models. Products, customers, and orders are just some of them. Identifying each of these by type allows Magento to retrieve the attributes and values from the appropriate tables.
- attribute_set_id: Product attributes can be grouped locally into attribute sets. Attribute sets allow even further flexibility in the product structure, as products are not forced to use all available attributes.
- type_id: There are several different types of products in Magento: simple, configurable, bundled, downloadable, and grouped products, each with unique settings and functionality.
- sku: This stands for Stock Keeping Unit and is a number or code used to identify each unique product or item for sale in a store. This is a user-defined value.
- has_options: This is used to identify if a product has custom options.
- required_options: This is used to identify if any of the custom options are required.
- created_at: This is the row creation date.
- updated_at: This is the last time the row was modified.

Now we have a basic understanding of the product entity table. Each record represents a single product in our Magento store, but we don't have much information about that product beyond the SKU and the product type. So, where are the attributes stored? And how does Magento know the difference between a product attribute and a customer attribute? For this, we need to take a look into the eav_attribute table by running the following SQL query:

SELECT * FROM `eav_attribute`;

As a result, we will not only see the product attributes, but also the attributes corresponding to the customer model, order model, and so on. Fortunately, we already have a key to filter the attributes from this table. Let's run the following query:

SELECT * FROM `eav_attribute` WHERE entity_type_id = 4;

This query tells the database to only retrieve the attributes where the entity_type_id column is equal to the product entity_type_id (4). Before moving on, let's analyze the most important fields inside the eav_attribute table:

- attribute_id: This is the unique identifier for each attribute and the primary key of the table.
- entity_type_id: This relates each attribute to a specific EAV model type.
- attribute_code: This is the name or key of our attribute and is used to generate the getters and setters for our magic methods.
- backend_model: This manages loading and storing data into the database.
- backend_type: This specifies the type of value stored in the backend (database).
- backend_table: This is used to specify if the attribute should be stored in a special table instead of the default EAV table.
- frontend_model: This handles the rendering of the attribute element in a web browser.
- frontend_input: Similar to the frontend model, the frontend input specifies the type of input field the web browser should render.
- frontend_label: This is the label/name of the attribute as it should be rendered by the browser.
- source_model: These are used to populate an attribute with possible values. Magento comes with several predefined source models for countries, yes or no values, regions, and so on.

Retrieving the data

At this point, we have successfully retrieved a product entity and the specific attributes that apply to that entity. Now it's time to start retrieving the actual values. In order to simplify the example (and the query) a little, we will only try to retrieve the name attribute of our products. How do we know which table our attribute values are stored in? Well, thankfully, Magento follows a naming convention for its tables. If we inspect our database structure, we will notice that there are several tables using the catalog_product_entity prefix:

- catalog_product_entity
- catalog_product_entity_datetime
- catalog_product_entity_decimal
- catalog_product_entity_int
- catalog_product_entity_text
- catalog_product_entity_varchar
- catalog_product_entity_gallery
- catalog_product_entity_media_gallery
- catalog_product_entity_tier_price

Wait! How do we know which is the right table to query for our name attribute values? If you were paying attention, I already gave you the answer. Remember that the eav_attribute table had a column called backend_type? Magento EAV stores each attribute in a different table based on the backend type of that attribute. If we want to confirm the backend type of our name attribute, we can do so by running the following code:

SELECT * FROM `eav_attribute` WHERE `entity_type_id` = 4 AND `attribute_code` = 'name';

As a result, we should see that the backend type is varchar and that the values for this attribute are stored in the catalog_product_entity_varchar table. Let's inspect this table. The catalog_product_entity_varchar table is formed by only 6 columns:

- value_id: This is the attribute value's unique identifier and the primary key.
- entity_type_id: This is the entity type ID to which this value belongs.
- attribute_id: This is the foreign key that relates the value to our eav_attribute table.
- store_id: This is the foreign key matching an attribute value with a store view.
- entity_id: This is the foreign key relating to the corresponding entity table; in this case, catalog_product_entity.
- value: This is the actual value that we want to retrieve.

Depending on the attribute configuration, we can have it as a global value, meaning it applies across all store views, or as a value per store view. Now that we finally have all the tables that we need to retrieve the product information, we can build our query:

SELECT p.entity_id AS product_id, var.value AS product_name, p.sku AS product_sku
FROM catalog_product_entity p, eav_attribute eav, catalog_product_entity_varchar var
WHERE p.entity_type_id = eav.entity_type_id
  AND var.entity_id = p.entity_id
  AND eav.attribute_code = 'name'
  AND eav.attribute_id = var.attribute_id

From our query, we should see a result set with three columns: product_id, product_name, and product_sku. So let's step back for a second. In order to get product names with SKUs using raw SQL, we had to write a five-line query, and we only retrieved two values from our products, from one single EAV value table; if we wanted to retrieve a numeric field such as price, or a longer text value, we would need to join additional value tables.
If we didn't have an ORM in place, maintaining Magento would be almost impossible. Fortunately, we do have an ORM in place, and most likely, you will never need to deal with raw SQL to work with Magento. That said, let's see how we can retrieve the same product information by using the Magento ORM:

1. Our first step is going to be to instantiate a product collection:
$collection = Mage::getModel('catalog/product')->getCollection();
2. Then we will specifically tell Magento to select the name attribute:
$collection->addAttributeToSelect('name');
3. Then, we will ask it to sort the collection by name:
$collection->setOrder('name', 'asc');
4. Finally, we will tell Magento to load the collection:
$collection->load();

The end result is a collection of all products in the store sorted by name. We can inspect the actual SQL query by running the following code:

echo $collection->getSelect()->__toString();

In just three lines of code, we are telling Magento to grab all the products in the store, to specifically select the name, and finally to order the products by name. The last line, $collection->getSelect()->__toString();, allows us to see the actual query that Magento is executing on our behalf. The actual query being generated by Magento is as follows:

SELECT `e`.*, IF(at_name.value_id > 0, at_name.value, at_name_default.value) AS `name`
FROM `catalog_product_entity` AS `e`
LEFT JOIN `catalog_product_entity_varchar` AS `at_name_default`
  ON (`at_name_default`.`entity_id` = `e`.`entity_id`)
  AND (`at_name_default`.`attribute_id` = '65')
  AND `at_name_default`.`store_id` = 0
LEFT JOIN `catalog_product_entity_varchar` AS `at_name`
  ON (`at_name`.`entity_id` = `e`.`entity_id`)
  AND (`at_name`.`attribute_id` = '65')
  AND (`at_name`.`store_id` = 1)
ORDER BY `name` ASC

As we can see, the ORM and the EAV models are wonderful tools that not only put a lot of power and flexibility in the hands of developers, but also do it in a way that is comprehensive and easy to use.

Summary

In this article, we learned about EAV models and how they are structured to provide Magento with data flexibility and extensibility that both merchants and developers can take advantage of.

Understanding WebSockets and Server-sent Events in Detail

Packt
29 Oct 2013
10 min read
Encoders and decoders in the Java API for WebSockets

As seen in the previous chapter, the class-level annotation @ServerEndpoint indicates that a Java class is a WebSocket endpoint at runtime. The value attribute is used to specify a URI mapping for the endpoint. Additionally, the user can add encoder and decoder attributes to encode application objects into WebSocket messages and WebSocket messages into application objects. The following table summarizes the @ServerEndpoint annotation and its attributes:

| Attribute | Description |
|---|---|
| (class-level) | @ServerEndpoint signifies that the Java class is a WebSockets server endpoint. |
| value | The value is the URI with a leading '/'. |
| encoders | Contains a list of Java classes that act as encoders for the endpoint. The classes must implement the Encoder interface. |
| decoders | Contains a list of Java classes that act as decoders for the endpoint. The classes must implement the Decoder interface. |
| configurator | Allows the developer to plug in their implementation of ServerEndpoint.Configurator that is used when configuring the server endpoint. |
| subprotocols | Contains a list of sub-protocols that the endpoint can support. |

In this section we shall look at providing encoder and decoder implementations for our WebSockets endpoint. The preceding diagram shows how encoders will take an application object and convert it to a WebSockets message. Decoders will take a WebSockets message and convert it to an application object. Here is a simple example where a client sends a WebSockets message to a WebSockets Java endpoint that is annotated with @ServerEndpoint and decorated with encoder and decoder classes. The decoder will decode the WebSockets message and send back the same message to the client. The encoder will convert the message to a WebSockets message. This sample is also included in the code bundle for the book. Here is the code to define the ServerEndpoint with values for encoders and decoders:

@ServerEndpoint(value="/book", encoders={MyEncoder.class}, decoders={MyDecoder.class})
public class BookCollection {
    @OnMessage
    public void onMessage(Book book, Session session) {
        try {
            session.getBasicRemote().sendObject(book);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    @OnOpen
    public void onOpen(Session session) {
        System.out.println("Opening socket" + session.getBasicRemote());
    }

    @OnClose
    public void onClose(Session session) {
        System.out.println("Closing socket" + session.getBasicRemote());
    }
}

In the preceding code snippet, you can see the class BookCollection is annotated with @ServerEndpoint. The value="/book" attribute provides the URI mapping for the endpoint. The @ServerEndpoint also takes the encoders and decoders to be used during the WebSocket transmission. Once a WebSocket connection has been established, a session is created and the method annotated with @OnOpen will be called. When the WebSocket endpoint receives a message, the method annotated with @OnMessage will be called. In our sample, the method simply sends the book object using Session.getBasicRemote(), which will get a reference to the RemoteEndpoint and send the message synchronously. Encoders can be used to convert a custom user-defined object into a text message, TextStream, BinaryStream, or BinaryMessage format.
An implementation of an encoder class for text messages is as follows:

public class MyEncoder implements Encoder.Text<Book> {
    @Override
    public String encode(Book book) throws EncodeException {
        return book.getJson().toString();
    }
}

As shown in the preceding code, the encoder class implements Encoder.Text<Book>. The overridden encode method converts a Book and sends it as a JSON string. (More on the JSON APIs is covered in detail in the next chapter.)

Decoders can be used to decode WebSockets messages into custom user-defined objects. They can decode the Text, TextStream, Binary, or BinaryStream format. Here is the code for a decoder class:

public class MyDecoder implements Decoder.Text<Book> {
    @Override
    public Book decode(String string) throws DecodeException {
        javax.json.JsonObject jsonObject =
            javax.json.Json.createReader(new StringReader(string)).readObject();
        return new Book(jsonObject);
    }

    @Override
    public boolean willDecode(String string) {
        try {
            javax.json.Json.createReader(new StringReader(string)).readObject();
            return true;
        } catch (Exception ex) {
        }
        return false;
    }
}

In the preceding code snippet, Decoder.Text needs two methods to be overridden. The willDecode() method checks whether the decoder can handle this message and decode it. The decode() method decodes the string into an object of type Book by using the JSON-P API javax.json.Json.createReader(). The following code snippet shows the user-defined class Book:

public class Book {
    public Book() {}

    JsonObject jsonObject;

    public Book(JsonObject json) {
        this.jsonObject = json;
    }

    public JsonObject getJson() {
        return jsonObject;
    }

    public void setJson(JsonObject json) {
        this.jsonObject = json;
    }

    public Book(String message) {
        jsonObject = Json.createReader(new StringReader(message)).readObject();
    }

    public String toString() {
        StringWriter writer = new StringWriter();
        Json.createWriter(writer).write(jsonObject);
        return writer.toString();
    }
}

The Book class is a user-defined class that wraps the JSON object sent by the client. Here is an example of how the JSON details are sent to the WebSockets endpoint from JavaScript:

var json = JSON.stringify({
    "name": "Java 7 JAX-WS Web Services",
    "author": "Deepak Vohra",
    "isbn": "123456789"
});

function addBook() {
    websocket.send(json);
}

The client sends the message using websocket.send(), which causes the onMessage() method of BookCollection.java to be invoked. BookCollection.java will return the same book to the client. In the process, the decoder will decode the WebSockets message when it is received. To send back the same Book object, the encoder will first encode the Book object into a WebSockets message and send it to the client.

The Java WebSocket Client API

The chapter WebSockets and Server-sent Events covered the Java WebSockets client API. Any POJO can be transformed into a WebSockets client by annotating it with @ClientEndpoint. Additionally, the user can add encoders and decoders attributes to the @ClientEndpoint annotation to encode application objects into WebSockets messages and decode WebSockets messages into application objects. The following list shows the @ClientEndpoint annotation and its attributes:

@ClientEndpoint: This class-level annotation signifies that the Java class is a WebSockets client that will connect to a WebSockets server endpoint.
value: The URI with a leading '/'.
encoders: A list of Java classes that act as encoders for the endpoint. The classes must implement the Encoder interface.
decoders: A list of Java classes that act as decoders for the endpoint. The classes must implement the Decoder interface.
configurator: Allows the developer to plug in their implementation of ClientEndpointConfig.Configurator, which is used when configuring the client endpoint.
subprotocols: A list of subprotocols that the endpoint can support.

Sending different kinds of message data: blob/binary

Using JavaScript we can traditionally send JSON or XML as strings. However, HTML5 allows applications to work with binary data to improve performance. WebSockets supports two kinds of binary data:

Binary Large Objects (blob)
arraybuffer

A WebSocket can work with only one of the formats at any given time. Using the binaryType property of a WebSocket, you can switch between using blob or arraybuffer:

websocket.binaryType = "blob";
// receive some blob data

websocket.binaryType = "arraybuffer";
// now receive ArrayBuffer data

The following code snippets show how to display images sent by a server using WebSockets. First, set the binaryType property of the websocket to arraybuffer:

websocket.binaryType = 'arraybuffer';

Then handle the incoming binary message:

websocket.onmessage = function(msg) {
    var arrayBuffer = msg.data;
    var bytes = new Uint8Array(arrayBuffer);
    var image = document.getElementById('image');
    image.src = 'data:image/png;base64,' + encode(bytes);
}

When onmessage is called, arrayBuffer is initialized to msg.data. The Uint8Array type represents an array of 8-bit unsigned integers. The image.src value is set inline using the data URI scheme. (Note that encode() here stands for a user-supplied helper that Base64-encodes the byte array; it is not part of the WebSocket API.)

Security and WebSockets

WebSockets are secured using the web container security model. A WebSockets developer can declare whether access to the WebSocket server endpoint needs to be authenticated, who can access it, or whether it needs an encrypted connection. A WebSockets endpoint that is mapped to a ws:// URI is protected in the deployment descriptor by the http:// URI with the same hostname, port, and path, since the initial handshake comes in over HTTP. So, WebSockets developers can assign an authentication scheme, user roles, and a transport guarantee to any WebSockets endpoints. We will take the same sample we saw in the chapter WebSockets and Server-sent Events and make it a secure WebSockets application. Here is the web.xml for a secure WebSocket endpoint:

<web-app version="3.0"
         xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">
  <security-constraint>
    <web-resource-collection>
      <web-resource-name>BookCollection</web-resource-name>
      <url-pattern>/index.jsp</url-pattern>
      <http-method>PUT</http-method>
      <http-method>POST</http-method>
      <http-method>DELETE</http-method>
      <http-method>GET</http-method>
    </web-resource-collection>
    <user-data-constraint>
      <description>SSL</description>
      <transport-guarantee>CONFIDENTIAL</transport-guarantee>
    </user-data-constraint>
  </security-constraint>
</web-app>

As you can see in the preceding snippet, we used <transport-guarantee>CONFIDENTIAL</transport-guarantee>. The Java EE specification, implemented by application servers, provides different levels of transport guarantee on the communication between clients and the application server.
The three levels are:

Data Confidentiality (CONFIDENTIAL): We use this level to guarantee that all communication between client and server goes through the SSL layer, and connections won't be accepted over a non-secure channel.
Data Integrity (INTEGRAL): We can use this level when full encryption is not required but we want our data to be transmitted to and from a client in such a way that, if anyone changed the data, we could detect the change.
Any type of connection (NONE): We can use this level to force the container to accept connections on both HTTP and HTTPS.

The following steps should be followed to set up SSL and run our sample as a secure WebSockets application deployed in GlassFish:

1. Generate the server certificate:

keytool -genkey -alias server-alias -keyalg RSA -keypass changeit -storepass changeit -keystore keystore.jks

2. Export the generated server certificate in keystore.jks into the file server.cer:

keytool -export -alias server-alias -storepass changeit -file server.cer -keystore keystore.jks

3. Create the trust-store file cacerts.jks and add the server certificate to the trust store:

keytool -import -v -trustcacerts -alias server-alias -file server.cer -keystore cacerts.jks -keypass changeit -storepass changeit

4. Change the following JVM options so that they point to the location and name of the new keystore. Add this in domain.xml under java-config:

<jvm-options>-Djavax.net.ssl.keyStore=${com.sun.aas.instanceRoot}/config/keystore.jks</jvm-options>
<jvm-options>-Djavax.net.ssl.trustStore=${com.sun.aas.instanceRoot}/config/cacerts.jks</jvm-options>

5. Restart GlassFish.

If you go to https://localhost:8181/helloworld-ws/, you can see the secure WebSocket application. To inspect the headers under Chrome Developer Tools, open the Chrome browser, click on View and then on Developer Tools, click on Network, select book under the element name, and click on Frames. As you can see in the preceding screenshot, since the application is secured using SSL, the WebSockets URI contains wss://, which means WebSockets over SSL.

So far we have seen the encoders and decoders for WebSockets messages. We also covered how to send binary data using WebSockets. Additionally, we demonstrated a sample of how to secure a WebSockets-based application. We shall now cover the best practices for WebSocket-based applications.
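As a final aside on the client API described above, here is a minimal, self-contained client sketch. It is not from the book's code bundle: the ws://localhost:8080/helloworld-ws/book URI is a placeholder for wherever the BookCollection endpoint is deployed, and it simply reuses the MyEncoder, MyDecoder, and Book classes shown earlier.

import java.net.URI;
import javax.websocket.ClientEndpoint;
import javax.websocket.ContainerProvider;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.WebSocketContainer;

@ClientEndpoint(encoders = {MyEncoder.class}, decoders = {MyDecoder.class})
public class BookClient {

    @OnMessage
    public void onMessage(Book book, Session session) {
        // The decoder has already turned the incoming text frame into a Book
        System.out.println("Received book: " + book);
    }

    public static void main(String[] args) throws Exception {
        WebSocketContainer container = ContainerProvider.getWebSocketContainer();
        // Placeholder URI; adjust it to wherever the server endpoint is deployed
        Session session = container.connectToServer(BookClient.class,
                URI.create("ws://localhost:8080/helloworld-ws/book"));
        // The encoder converts the Book object into a WebSockets text message
        session.getBasicRemote().sendObject(
                new Book("{\"name\":\"Java 7 JAX-WS Web Services\"}"));
    }
}

As with the server endpoint, the annotated methods are discovered by the container; thanks to the encoders and decoders attributes, the application code only ever deals with Book objects, never raw strings.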

Interacting with GNU Octave: Operators

Packt
20 Jun 2011
6 min read
GNU Octave Beginner's Guide: Become a proficient Octave user by learning this high-level scientific numerical tool from the ground up. The reader will benefit from the previous article on GNU Octave Variables.

Basic arithmetic

Octave offers easy ways to perform different arithmetic operations. This ranges from simple addition and multiplication to very complicated linear algebra. In this section, we will go through the most basic arithmetic operations, such as addition, subtraction, multiplication, and left and right division. In general, we should think of these operations in the framework of linear algebra and not in terms of arithmetic of simple scalars.

Addition and subtraction

We begin with addition.

Time for action – doing addition and subtraction operations

I have lost track of the variables! Let us start afresh and clear all variables first:

octave:66> clear

(Check with whos to see if we cleared everything.) Now, we define four variables in a single command line(!):

octave:67> a = 2; b=[1 2 3]; c=[1; 2; 3]; A=[1 2 3; 4 5 6];

Note that there is an important difference between the variables b and c; namely, b is a row vector, whereas c is a column vector. Let us jump into it and try to add the different variables. This is done using the + character:

octave:68> a+a
ans = 4
octave:69> a+b
ans =
   3   4   5
octave:70> b+b
ans =
   2   4   6
octave:71> b+c
error: operator +: nonconformant arguments (op1 is 1x3, op2 is 3x1)

It is often convenient to enter multiple commands on the same line. Try to test the difference in separating the commands with commas and semicolons.

What just happened?

The output from Command 68 should be clear; we add the scalar a to itself. In Command 69, we see that the + operator simply adds the scalar a to each element in the b row vector. This is named element-wise addition. It also works if we add a scalar to a matrix or a higher-dimensional array. Now, if + is applied between two vectors, it will add the elements together element-wise if and only if the two vectors have the same size, that is, they have the same number of rows and columns. This is also what we would expect from basic linear algebra. From Commands 70 and 71, we see that b+b is valid, but b+c is not, because b is a row vector and c is a column vector—they do not have the same size. In the last case, Octave produces an error message stating the problem. This would also be a problem if we tried to add, say, b with A:

octave:72> b+A
error: operator +: nonconformant arguments (op1 is 1x3, op2 is 2x3)

From the above examples, we see that adding a scalar to a vector or a matrix is a special case. It is allowed even though the dimensions do not match! When adding and subtracting vectors and matrices, the sizes must be the same. Not surprisingly, subtraction is done using the - operator. The same rules apply here; for example:

octave:73> b-b
ans =
   0   0   0

is fine, but:

octave:74> b-c
error: operator -: nonconformant arguments (op1 is 1x3, op2 is 3x1)

produces an error.

Matrix multiplication

The * operator is used for matrix multiplication. Recall from linear algebra that we cannot multiply any two matrices. Furthermore, matrix multiplication is not commutative. For example, consider a 2 x 2 matrix A and a 2 x 3 matrix B: the matrix product AB is defined, but BA is not. If A has size n x k and B has size k x m, the matrix product AB will be a matrix of size n x m. From this, we know that the number of columns of the "left" matrix must match the number of rows of the "right" matrix. We may think of this as (n x k)(k x m) = n x m.
In the example above, the matrix product AB therefore results in a 2 x 3 matrix.

Time for action – doing multiplication operations

Let us try to perform some of the same operations for multiplication as we did for addition:

octave:75> a*a
ans = 4
octave:76> a*b
ans =
   2   4   6
octave:77> b*b
error: operator *: nonconformant arguments (op1 is 1x3, op2 is 1x3)
octave:78> b*c
ans = 14

What just happened?

From Command 75, we see that * multiplies two scalar variables just like standard multiplication. In agreement with linear algebra, we can also multiply a scalar by each element in a vector, as shown by the output from Command 76. Command 77 produces an error—recall that b is a row vector, which Octave also interprets as a 1 x 3 matrix, so we try to perform the matrix multiplication (1 x 3)(1 x 3), which is not valid. In Command 78, on the other hand, we have (1 x 3)(3 x 1), since c is a column vector, yielding a matrix of size 1 x 1, that is, a scalar. This is, of course, just the dot product between b and c. Let us try an additional example and perform the matrix multiplication between A and B discussed above. First, we need to instantiate the two matrices, and then we multiply them:

octave:79> A=[1 2; 3 4]; B=[1 2 3; 4 5 6];
octave:80> A*B
ans =
    9   12   15
   19   26   33
octave:81> B*A
error: operator *: nonconformant arguments (op1 is 2x3, op2 is 2x2)

Seems like Octave knows linear algebra!

Element-by-element, power, and transpose operations

If the sizes of two arrays are the same, Octave provides a convenient way to multiply the elements element-wise. For example, for B:

octave:82> B.*B
ans =
    1    4    9
   16   25   36

Notice that the period (full stop) character precedes the multiplication operator. The period character can also be used in connection with other operators. For example:

octave:83> B.+B
ans =
    2    4    6
    8   10   12

which is the same as the command B+B. If we wish to raise each element in B to the power 2.1, we use the element-wise power operator .^:

octave:84> B.^2.1
ans =
    1.0000    4.2871   10.0451
   18.3792   29.3655   43.0643

You can perform the element-wise power operation on two matrices as well (if they are of the same size, of course):

octave:85> B.^B
ans =
       1       4      27
     256    3125   46656

Be careful not to confuse .^ with ^: for matrices, ^ denotes the matrix power (repeated matrix multiplication), which is only defined for square matrices, so you still need .^ to raise each element of an array to a power. Transposing a vector or matrix is done via the ' operator. To transpose B, we simply type:

octave:86> B'
ans =
   1   4
   2   5
   3   6

Strictly, the ' operator is a complex conjugate transpose operator. We can see this in the following example:

octave:87> B = [1 2; 3 4] + I.*eye(2)
B =
   1 + 1i   2 + 0i
   3 + 0i   4 + 1i
octave:88> B'
ans =
   1 - 1i   3 - 0i
   2 - 0i   4 - 1i

Note that in Command 87, we have used the .* operator to multiply the imaginary unit with all the elements in the identity matrix produced by eye(2). Finally, note that the command transpose(B) or the operator .' will transpose the matrix, but not complex conjugate the elements.
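The section opened by mentioning left and right division, which the examples above did not demonstrate; here is a brief sketch of both (the prompt numbers simply continue the session, and the values are easy to verify by hand). For scalars, a/b is ordinary division and a\b divides the other way around; for matrices, A\b solves the linear system Ax = b:

octave:89> 2/4
ans = 0.5000
octave:90> 2\4
ans = 2
octave:91> A = [1 2; 3 4]; b = [5; 6];
octave:92> x = A\b
x =
  -4.0000
   4.5000

Using A\b is both faster and numerically safer than computing inv(A)*b explicitly.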

Top Features You Need to Know About – Responsive Web Design

Packt
10 Oct 2013
3 min read
Responsive web design

Nowadays, almost everyone has a smartphone or tablet in hand; this article prepares you to adapt your portfolio to this new reality. Acknowledging that, today, there are tablets that are also phones and some laptops that are also tablets, we use an approach known as device-agnostic: instead of giving devices names such as mobile, tablet, or desktop, we refer to them as small, medium, or large. With this approach, we can cover a vast array of gadgets, from smartphones, tablets, laptops, and desktops to the displays on refrigerators, cars, watches, and so on.

Photoshop

Within the pages of this article, you will find two Photoshop templates that I prepared for you. The first is small.psd, which you may use to prepare your layouts for smartphones, small tablets, and even, to a certain extent, displays on a refrigerator. The second is medium.psd, which can be used for tablets, netbooks, or even displays in cars. I used these templates to lay out all the sizes of our website (portfolio) that we will work on in this article, as you can see in the following screenshot. One of the principal elements of responsive web design is the flexible grid, and what I did with the Photoshop layout was to mimic those grids, which we will use later. With time, this will become easier and it won't be necessary to lay out every version of every page but, for now, it is good to understand how things happen.

Code

Now that we have a preview of how the small version will look, it's time to code it. The first thing we will need is the fluid version of the 960.gs grid, which you can download from https://raw.github.com/bauhouse/fluid960gs/master/css/grid.css and save as 960_fluid.css in the css folder. After that, let's create two more files in this folder, small.css and medium.css. We will use these files to maintain organized versions of our portfolio (a sketch of typical media-query scaffolding for these files appears at the end of this article). Lastly, let's link the files to our HTML document as follows:

<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width">
  <title>Portfolio</title>
  <link href="css/reset.css" rel="stylesheet" type="text/css">
  <link href="css/960_fluid.css" rel="stylesheet" type="text/css">
  <link href="css/main.css" rel="stylesheet" type="text/css">
  <link href="css/medium.css" rel="stylesheet" type="text/css">
  <link href="css/small.css" rel="stylesheet" type="text/css">
</head>

If you reload your browser now, you should see the portfolio stretching across the whole browser window. This occurs because the grid is now fluid. To fix the width at, at most, 960 pixels, we need to insert the following lines at the beginning of the main.css file:

/* grid ================================ */
.container_12 {
  max-width: 960px;
  margin: 0 auto;
}

Once you reload the browser and resize the window, you will see that the display is overly stretched and broken. In order to fix this, keeping in mind the layout we did in Photoshop, we can use the small version and medium version.

Summary

In this article we saw how to prepare our desktop-only portfolio using Photoshop, and the method used to fix the broken and overly stretched display.

Resources for Article:

Further resources on this subject:
Web Design Principles in Inkscape
Building HTML5 Pages from Scratch
HTML5 Presentations - creating our initial presentation
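Here is the promised sketch of what typically goes inside small.css and medium.css. It is purely illustrative: the breakpoint values (959px and 767px) are assumptions for the example, not values taken from the original article.

/* medium.css — tablet/netbook-class viewports (assumed breakpoint) */
@media only screen and (max-width: 959px) {
  /* medium-layout overrides of main.css go here */
}

/* small.css — smartphone-class viewports (assumed breakpoint) */
@media only screen and (max-width: 767px) {
  /* small-layout overrides go here */
}

Because the stylesheets are linked unconditionally in the <head>, the @media blocks are what prevent the small and medium rules from leaking into the large layout.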

Issues and Wikis in GitLab

Packt
20 Nov 2013
6 min read
(For more resources related to this topic, see here.)

Issues

The built-in features for issue tracking and documentation will be very beneficial to you, especially if you're working on extensive software projects: ones with many components, or those that need to be supported in multiple versions at once, for example, stable, testing, and unstable. In this article, we will have a closer look at the formats that are supported for issues and wiki pages (in particular, Markdown), the elements that can be referenced from within these, and how issues can be organized. Furthermore, we will go through the process of assigning issues to team members, and keeping documentation in wiki pages, which can also be edited locally. Lastly, we will see how the RSS feeds generated by GitLab can keep your team in a closer loop around the projects they work on. The metadata covered in this article may seem trivial, but many famous software projects have gained traction due to their extensive and well-written documentation, which initially was done by core developers. It enables your users to do the same with their projects, even if only internally; it opens up a much more efficient collaboration.

GitLab-flavored Markdown

GitLab comes with a Markdown formatting parser that is fairly similar to GitHub's, which makes it very easy to adapt and migrate. Many standalone editors also support this format, such as Mou (http://mouapp.com/) for Mac or MarkdownPad (http://markdownpad.com/) for Windows. On Linux, editors with a split view, such as ReText (http://sourceforge.net/projects/retext/) or the more Zen-writing UberWriter (http://uberwriter.wolfvollprecht.de/), are available. For the popular Vim editor, multiple Markdown plugins are up for grabs on a number of GitHub repositories; one of them is Vim Markdown (https://github.com/tpope/vim-markdown) by Tim Pope. Lastly, I'd like to mention that you don't need a dedicated editor for Markdown, because Markdown files are plain text; the mentioned editors simply enhance the view through syntax highlighting and preview modes.

About Markdown

Markdown was originally written by John Gruber, and has since evolved into various flavors. The intention of this very lightweight markup language is to have a source that is easy to edit and can be transformed into meaningful HTML to be displayed on the Web. Different variations of Markdown have made it into a majority of very successful software projects as the default language; readme files, documentation, and even blogging engines adopt it. In Markdown, text styles can be applied, links placed, and images inserted. If Markdown, by default, does not support what you are currently trying to do, you can insert plain HTML, which will not be altered by the Markdown parser.

Referring to elements inside GitLab

When working with source code, it can be important to refer to a line of code, a file, or other things when discussing something. Because many development teams are nowadays spread throughout the world, GitLab adapts to that and makes it easy to refer to and reference many things directly from comments, wiki pages, or issues. Some things like files or lines can be referenced via links, because GitLab has unique links to the branches of a repository; others are more directly accessible.
The following items (basically, prefixed strings or IDs) can be referenced through shortcodes:

commit messages
comments
wall posts
issues
merge requests
milestones
wiki pages

To reference items, use the following shortcodes inside any field that supports Markdown or RDoc on the web interface (a short example follows at the end of this article):

@foo for team members
#123 for issues
!123 for merge requests
$123 for snippets
1234567 for commits

Issues, knowing what needs to be done

An issue is a text message of variable length describing a bug in the code, an improvement to be made, or something else that should be done or discussed. By commenting on the issue, developers or project leaders can respond to this request or statement. The meta information attached to an issue can be very valuable to the team, because developers can be assigned to an issue, and it can be tagged or labeled with keywords that describe the content or area to which it belongs. Furthermore, you can also set the milestone in which this fix or feature should be included. In the following screenshot, you can see the interface for issues:

Creating issues

By navigating to the Issues tab of a repository in the web interface, you can easily create new issues. Their title should be brief and precise, because a more elaborate description area is available. The description area supports the GitLab-flavored Markdown, as mentioned previously. Upon creation, you can choose a milestone and a user to assign the issue to, but you can also leave these fields unset, possibly to let your developers choose for themselves what they want to work on and when. Before they begin their work, they can assign the issues to themselves. In the following screenshot, you can see what the issue creation form looks like:

Working with labels or tags

Labels are tags used to organize issues by topic and severity. Creating labels is as easy as inserting them, separated by commas, into the respective field while creating an issue. Currently, in Version 5.2, certain keywords trigger a certain background color on the label. Labels like critical or bug turn red, feature turns green, and other labels are blue by default. The following screenshot shows what a list of labeled features looks like:

After the creation of a label, it will be listed under the Labels tab within the Issues page, with a link that lists all the issues that have been given the same label. Filtering by label, assigned user, or milestone is also possible from the list of issues within each project's overview.

Summary

In this article, we have had a look at the project management side of things. You can now make use of the built-in possibilities to distribute tasks across team members through issues, keep track of what still needs to be done, and enable observers to point out bugs.

Resources for Article:

Further resources on this subject:
Using Gerrit with GitHub [Article]
The architecture of JavaScriptMVC [Article]
Using the OSGi Bundle Repository in OSGi and Apache Felix 3.0 [Article]
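Here is the promised example of the shortcodes in use. The user name and the issue and merge request numbers are invented for the illustration; a typical issue description written in GitLab-flavored Markdown might look like this:

# Steps to reproduce

1. Check out the `testing` branch
2. Run the import script

This regression first appeared in commit 1234567 and is related
to issue #12. @jane, could you have a look before merge request
!7 gets accepted?

When rendered, GitLab turns each shortcode into a link to the referenced commit, issue, user, or merge request.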
Creating Blog Content in WordPress

Packt
25 Nov 2013
18 min read
(For more resources related to this topic, see here.)

Posting on your blog

The central activity you'll be doing with your blog is adding posts. A post is like an article in a magazine; it's got a title, content, and an author (in this case you, though WordPress allows multiple authors to contribute to a blog). If a blog is like an online diary, every post is an entry in that diary. A blog post also has a lot of other information attached to it, such as a date, excerpt, tags, and categories. In this section, you will learn how to create a new post and what kind of information to attach to it.

Adding a simple post

Let's review the process of adding a simple post to your blog. Whenever you want to add content or carry out a maintenance process on your WordPress website, you have to start by logging in to the WP Admin (WordPress Administration panel) of your site. To get to the admin panel, just point your web browser to http://yoursite.com/wp-admin. Remember that if you have installed WordPress in a subfolder (for example, blog), your URL has to include the subfolder (that is, http://yoursite.com/blog/wp-admin). When you first log in to the WP Admin, you'll be at the Dashboard. The Dashboard has a lot of information on it, so don't worry about that right now. The quickest way to get to the Add New Post page at any time is to click on + New and then the Post link in the top bar. This is the Add New Post page. To add a new post to your site quickly, all you have to do is:

1. Type a title into the text field under Add New Post (for example, Making Lasagne).
2. Type the text of your post in the content box. Note that the default view is Visual, but you actually have a choice of the Text view as well.
3. Click on the Publish button, which is at the far right. Note that you can choose to save a draft or preview your post as well.

Once you click on the Publish button, you have to wait while WordPress performs its magic. You'll see yourself still on the Edit Post screen, but now a message will have appeared telling you that your post was published, along with a View post link. If you view the front page of your site, you'll see that your new post has been added at the top (newest posts are always at the top).

Common post options

Now that we've reviewed the basics of adding a post, let's investigate some of the other options on the Add New Post and Edit Post pages. In this section we'll look at the most commonly used options, and in the next section we'll look at the more advanced options.

Categories and tags

Categories and tags are two types of information that you can add to a blog post. We use them to organize the information in your blog by topic and content (rather than just by, say, date), and to help visitors find what they are looking for on your blog. Categories are primarily used for structural organizing. They can be hierarchical, meaning a category can be a parent of another category. A relatively busy blog will probably have at least 10 categories, but probably not more than 15 or 20. Each post in such a blog is likely to have from one up to maybe four categories assigned to it. For example, a blog about food and cooking might have these categories: Cooking Adventures, In The Media, Ingredients, Opinion, Recipes Found, Recipes Invented, and Restaurants. Of course, the numbers mentioned are just suggestions; you can create and assign as many categories as you like. The way you structure your categories is entirely up to you as well.
There are no true rules regarding this in the WordPress world, just guidelines like these. Tags are primarily used as shorthand for describing the topics covered in a particular blog post. A relatively busy blog will have anywhere from 15 to even 100 tags in use. Each post in such a blog is likely to have 3 to 10 tags assigned to it. For example, a post on the food blog about a recipe for butternut squash soup may have these tags: soup, vegetarian, autumn, hot, and easy. Again, you can create and assign as many tags as you like. Let's add a new post to the blog. After you give it a title and content, let's add tags and categories. To add tags, just type your list of tags into the Tags box on the right, separated by commas, and then click on the Add button. The tags you just typed in will appear below the text field with little x buttons next to them. You can click on an x button to delete a tag. Once you've used some tags in your blog, you'll be able to click on the Choose from the most used tags link in this box so that you can easily re-use tags. Categories work a bit differently than tags. Once you get your blog going, you'll usually just check the boxes next to existing categories in the Categories box. In this case, as we don't have any existing categories, we'll have to add one or two. In the Categories box on the right, click on the + Add New Category link. Type your category into the text field, and click on the Add New Category button. Your new category will show up in the list, already checked. If in the future you want to add a category that needs a parent category, select — Parent Category — from the pull-down menu before clicking on the Add New Category button. If you want to manage more details about your categories (move them around, rename them, assign parent categories, or assign descriptive text), you can do so on the Categories page. Click on the Publish button, and you're done (you can instead choose to schedule a post; we'll explore that in detail in a few pages). When you look at the front page of your site, you'll see your new post on top, your new category in the sidebar, and the tags and category (that you chose for your post) listed under the post itself.

Images in your posts

Almost every good blog post needs an image! An image will give the reader an instant idea of what the post is about, and it will draw people's attention as well. WordPress makes it easy to add an image to your post, control default image sizes, make minor edits to that image, and designate a featured image for your post.

Adding an image to a post

Luckily, WordPress makes adding images to your content very easy. Let's add an image to the post we just created. You can click on Edit underneath your post on the front page of your site to get there quickly. Alternatively, go back to the WP Admin, open Posts in the main menu, and then click on the post's title. To add an image to a post, you'll first need to have that image on your computer, or know the exact URL pointing to the image if it's already online. Before you get ready to upload an image, make sure that your image is optimized for the Web. Huge files will upload slowly and slow down the viewing of your site. Just to give you a good example here, I'm using a photo of my own so I don't have to worry about any copyright issues (always make sure to use only images that you have the right to use; copyright infringement online is a serious problem, to say the least).
I know the photo is on the desktop of my computer. Once you have a picture on your computer and know where it is, carry out the following steps to add the photo to your blog post:

1. Click on the Add Media button, which is right above the content box and below the title box.
2. The box that appears allows you to do a number of different things regarding the media you want to include in your post. The most user-friendly feature here, however, is the drag-and-drop support. Just drag the image from your desktop and drop it into the center area of the page, labeled Drop files anywhere to upload.
3. Immediately after dropping the image, the uploader bar will show the progress of the operation, and when it's done, you'll be able to do some final tuning up.

The fields that are important right now are Title, Alt Text, Alignment, Link To, and Size. Title is a description of the image. Alt Text is a phrase that will appear instead of the image in case the file goes missing or any other problems present themselves. Alignment tells the image whether to have text wrap around it and whether it should be aligned right, left, or center. Link To instructs WordPress whether or not to link the image to anything (a common solution is to select None). Size is the size of the image. Once you have all of the above filled out, click on Insert into post. The box will disappear, and your image will show up in the post, right where your cursor was prior to clicking on the Add Media button, on the edit page itself (in the visual editor, that is; if you're using the text editor, the HTML code of the image will be displayed instead). Now, click on the Update button, and go and look at the front page of your site again. There's your image!

Controlling default image sizes

You may be wondering about those image sizes. What if you want bigger or smaller thumbnails? Whenever you upload an image, WordPress creates three versions of that image for you. You can set the pixel dimensions of those three versions by opening Settings in the main menu, and then clicking on Media. This takes you to the Media Settings page. Here you can specify the size of the uploaded images for:

Thumbnail size
Medium size
Large size

If you change the dimensions on this page and click on the Save Changes button, only images you upload in the future will be affected. Images you've already uploaded to the site will have had their thumbnail, medium, and large versions created already using the old settings. It's a good idea to decide early on what you want your three media sizes to be, so you can set them and have them applied to all images right from the start. Another thing about uploading images is the whole craze with HiDPI displays, also called Retina displays. Currently, WordPress is in a kind of transitional phase with images and being in tune with modern display technology; the Retina-ready functionality was introduced quite recently, in WordPress 3.5. In short, if you want to make your images Retina-compatible (meaning that they look good on iPads and other devices with HiDPI screens), you should upload the images at twice the dimensions you plan to display them at. For example, if you want your image to be presented as 800 pixels wide and 600 pixels high, upload it as 1,600 pixels wide and 1,200 pixels high. WordPress will manage to display it properly anyway, and whoever visits your site from a modern device will see a high-definition version of the image.
In future versions, WordPress will surely provide a more managed way of handling Retina-compatible images.

Editing an uploaded image

As of WordPress 2.9, you can make minor edits to images you've uploaded. In fact, every image that has been previously uploaded to WordPress can be edited. In order to do this, go to the Media Library by clicking on the Media button in the main sidebar. What you'll see is a standard WordPress listing (similar to the one we saw while working with posts) presenting all media files and allowing you to edit each one. When you click on the Edit link and then the Edit Image button on the subsequent screen, you'll enter the Edit Media section. Here, you can perform a number of operations to make your image just perfect. As it turns out, WordPress does a good enough job with simple image tuning that you don't really need expensive software such as Photoshop for this. Among the possibilities you'll find cropping, rotating, and flipping vertically and horizontally. For example, you can use your mouse to draw a box as I have done in the preceding image. On the right, in the box marked Image Crop, you'll see the pixel dimensions of your selection. Click on the Crop icon (top left), then the Thumbnail radio button (on the right), and then Save (just below your photo). You now have a new thumbnail! Of course, you can adjust any other version of your image just by making a different selection prior to hitting the Save button. Play around a little and you will become familiar with the details.

Designating a featured image

As of WordPress 2.9, you can designate a single image that represents your post. This is referred to as the featured image. Some themes will make use of this, and some will not. The default theme, the one we've been using, is named Twenty Thirteen, and it uses the featured image right above the post on the front page. Depending on the theme you're using, its behavior with featured images can vary, but in general, every modern theme supports them in one way or another. In order to set a featured image, go to the Edit Post screen. In the sidebar you'll see a box labeled Featured Image. Just click on the Set featured image link. After doing so, you'll see a pop-up window very similar to the one we used while uploading images. Here, you can either upload a completely new image or select an existing image by clicking on it. All you have to do now is click on the Set featured image button in the bottom right corner. After completing the operation, you can finally see what your new image looks like on the front page. Also, keep in mind that WordPress uses featured images in multiple places, not only the front page; much of this behavior depends on your current theme.

Using the visual editor versus the text editor

WordPress comes with a visual editor, otherwise known as a WYSIWYG editor (pronounced wissy-wig, which stands for What You See Is What You Get). This is the default editor for typing and editing your posts. If you're comfortable with HTML, you may prefer to write and edit your posts using the text editor, which is particularly useful if you want to add special content or styling. To switch from the rich text editor to the text editor, click on the Text tab next to the Visual tab at the top of the content box. You'll see your post in all its raw HTML glory, and you'll get a new set of buttons that lets you quickly bold and italicize text, as well as add link code, image code, and so on.
You can make changes and swap back and forth between the tabs to see the result. Even though the text editor allows you to use some HTML elements, it does not offer fully fledged HTML support. For instance, using the <p> tags is not necessary in the text editor, as they will be stripped by default. In order to create a new paragraph in the text editor, all you have to do is press the Enter key twice. That being said, the text editor is currently the only way to use HTML tables in WordPress (within posts and pages). You can place your table content inside the <table><tr><td> tags and WordPress won't alter it in any way, effectively allowing you to create the exact table you want (an example table follows at the end of this article). Another thing the text editor is commonly used for is introducing custom HTML parameters in the <img /> and <a> tags, and also custom CSS classes in other popular tags. Some content creators actually prefer working with the text editor rather than the visual editor, because it gives them much more control and more certainty regarding the way their content is going to be presented on the frontend.

Lead and body

One of the many interesting publishing features WordPress has to offer is the concept of the lead and the body of the post. This may sound like a strange thing, but it's actually quite simple. When you're publishing a new post, you don't necessarily want to display its whole contents right away on the front page. A much more user-friendly approach is to display only the lead, and then display the complete post under its individual URL. Achieving this in WordPress is very simple. All you have to do is use the Insert More Tag button available in the visual editor (or the more button in the text editor). Simply place your cursor exactly where you want to break your post (the text before the cursor will become the lead) and then click on the Insert More Tag button. An alternative way of using this tag is to switch to the text editor and input the tag manually, which is <!--more-->. Both approaches produce the same result. Clicking on the main Update button will save the changes. On the front page, most WordPress themes display such posts by presenting the lead along with a Continue reading link, and then the whole post (both the lead and the rest of the post) is displayed under the post's individual URL.

Drafts, pending articles, timestamps, and managing posts

There are four additional, simple but common, items I'd like to cover in this section: drafts, pending articles, timestamps, and managing posts.

Drafts

WordPress gives you the option to save a draft of your post so that you don't have to publish it right away but can still save your work. If you've started writing a post and want to save a draft, just click on the Save Draft button at the right (in the Publish box), instead of the Publish button. Even if you don't click on the Save Draft button, WordPress will attempt to save a draft of your post for you, about once a minute. You'll see this in the area just below the content box. The text will say Saving Draft... and then show the time of the last draft saved. At this point, after a manual save or an autosave, you can leave the Edit Post page and do other things. You'll be able to access all of your draft posts from the Dashboard or from the Edit Posts page. In essence, drafts are meant to hold your "work in progress", meaning all the articles that haven't been finished yet, or haven't even been started, and everything in between.
Pending articles

Pending articles is a functionality that's going to be a lot more helpful to people working on multi-author blogs rather than single-author blogs. The thing is that in a bigger publishing structure, there are individuals responsible for different areas of the publishing process. WordPress, being a quality tool, supports such a structure by providing a way to save articles as Pending Review. In an editor-author relationship, if an editor sees a post marked as Pending Review, they know that they should have a look at it and prepare it for final publication. That's it for the theory; now, how to do it. While creating a new post, click on the Edit link right next to the Status: Draft label. Right after doing so, you'll be presented with a new drop-down menu from which you can select Pending Review, and then click on the OK button. Now just click on the Save as Pending button that will appear in place of the old Save Draft button, and you have a shiny new article that's pending review.

Timestamps

WordPress will also let you alter the timestamp of your post. This is useful if you are writing a post today that you wish you'd published yesterday, or if you're writing a post in advance and don't want it to show up until the right day. By default, the timestamp will be set to the moment you publish your post. To change it, find the Publish box and click on the Edit link (next to the calendar icon and Publish immediately), and fields will show up with the current date and time for you to change. Change the details, click on the OK button, and then click on Publish to publish your post (or save a draft).

Managing posts

If you want to see a list of your posts so that you can easily skim and manage them, just go to the Edit Posts page in the WP Admin by navigating to Posts in the main menu. As with every management page in the WP Admin, there are many things you can do on this page.
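As promised in the text editor section above, here is a small example of the kind of HTML table you can paste into the Text tab. The content is invented purely for illustration, and WordPress will leave the markup untouched:

<table>
  <tr>
    <th>Recipe</th>
    <th>Difficulty</th>
  </tr>
  <tr>
    <td>Butternut squash soup</td>
    <td>Easy</td>
  </tr>
  <tr>
    <td>Lasagne</td>
    <td>Moderate</td>
  </tr>
</table>

Remember to stay in the Text tab while editing such a post; switching to the Visual tab and back can reflow the markup.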

Supporting hypervisors by OpenNebula

Packt
25 May 2012
7 min read
(For more resources on Open Source, see here.)

A host is a server that has the ability to run virtual machines using a special software component called a hypervisor that is managed by the OpenNebula frontend. The hosts do not all need to have a homogeneous configuration; it is possible to use different hypervisors on different GNU/Linux distributions in a single OpenNebula cluster. Using different hypervisors in your infrastructure is not just a technical exercise, but assures you greater flexibility and reliability. A few examples where having multiple hypervisors would prove to be beneficial are as follows:

A bug in the current release of hypervisor A does not permit the installation of a virtual machine with a particular legacy OS (let's say, for example, Windows 2000 Service Pack 4), but you can execute it with hypervisor B without any problem.
You have a production infrastructure that is running a closed source, free-to-use hypervisor, and during the next year the software house developing that hypervisor will request a license payment or declare bankruptcy due to an economic crisis.

The current version of OpenNebula will give you great flexibility regarding hypervisor usage, since it natively supports KVM/Xen (which are open source) and VMware ESXi. In the future it will probably support both VirtualBox (Oracle) and Hyper-V (Microsoft).

Configuring hosts

The first thing to do before starting with the installation of a particular hypervisor on a host is to perform some general configuration steps. They are as follows:

1. Create a dedicated oneadmin UNIX account (which should have sudo privileges for executing particular tasks, for example iptables/ebtables and any network hooks that we have configured).
2. Make sure the frontend's and hosts' hostnames can be resolved by a local DNS or a shared /etc/hosts file.
3. Allow the oneadmin account on the frontend to connect remotely through SSH to the oneadmin account on the hosts without a password.
4. Configure the shared network bridge that will be used by VMs to reach the physical network.

The oneadmin account and passwordless login

Every host should have a oneadmin UNIX account that will be used by the OpenNebula frontend to connect and execute commands. If you did not create it during the operating system installation, create a oneadmin user on the host by using the following command:

youruser@host1 $ sudo adduser oneadmin

You can configure any password you like (even blank) because we are going to set up a passwordless login from the frontend:

oneadmin@front-end $ ssh-copy-id oneadmin@host1

Now if you connect from the oneadmin account on the frontend to the oneadmin account on the host, you should get the shell prompt without entering any password, using the following command:

oneadmin@front-end $ ssh oneadmin@host1

Uniformity of the oneadmin UID number

Later, we will learn about the possible storage solutions available with OpenNebula. However, keep in mind that if we are going to set up shared storage, we need to make sure that the UID number of the oneadmin user is homogeneous between the frontend and every other host. In other words, check with the id command that the oneadmin UID is the same both on the frontend and on the hosts (a quick check is sketched at the end of this article).

Verifying the SSH host fingerprints

The first time you connect to a remote SSH server from a particular host, the SSH client will show you the fingerprint of the remote server and ask for your permission to continue, with the following message:

The authenticity of host host01 (192.168.254.2) can't be established.
RSA key fingerprint is 5a:65:0f:6f:21:bb:fd:6a:4a:68:cd:72:58:5c:fb:9f.
Are you sure you want to continue connecting (yes/no)?

Knowing the fingerprint of the remote SSH key and saving it to the local SSH client fingerprint cache (saved in ~/.ssh/known_hosts) should be good enough to prevent man-in-the-middle attacks. For this reason, you need to connect from the oneadmin user on the frontend to every host in order to save the fingerprints of the remote hosts in the oneadmin known_hosts file for the first time. Not doing this will prevent OpenNebula from connecting to the remote hosts. In large environments, this requirement may be a slow-down when configuring new hosts. However, it is possible to bypass this check by instructing the SSH client used by OpenNebula not to verify the remote SSH key, adding the following to ~/.ssh/config:

Host *
    StrictHostKeyChecking no

If you do not have a local DNS (or you cannot/do not want to set one up), you can manually manage the /etc/hosts file on every host, using the following IP addresses:

127.0.0.1 localhost
192.168.66.90 on-front
192.168.66.97 kvm01
192.168.66.98 xen01
192.168.66.99 esx01

Now you should be able to remotely connect from one node to another with your hostname, using the following command:

$ ssh oneadmin@kvm01

Configuring a simple DNS with dnsmasq

If you do not have a local DNS and manually managing the plain hosts file on every host does not excite you, you can try to install and configure dnsmasq. It is a lightweight, easy-to-configure DNS forwarder (optionally, DHCP and TFTP can be provided within it) that serves a small-scale network well. The OpenNebula frontend may be a good place to install it. For an Ubuntu/Debian installation use the following command:

$ sudo apt-get install dnsmasq

The default configuration should be fine. You just need to make sure that the /etc/resolv.conf configuration details look similar to the following:

# dnsmasq
nameserver 127.0.0.1
# another local DNS
nameserver 192.168.0.1
# ISP or public DNS
nameserver 208.67.220.220
nameserver 208.67.222.222

The /etc/hosts configuration details will look similar to the following:

127.0.0.1 localhost
192.168.66.90 on-front
192.168.66.97 kvm01
192.168.66.98 xen01
192.168.66.99 esx01

Configure any other hostname here in the hosts file on the frontend running dnsmasq. Configure the /etc/resolv.conf configuration details on the other hosts using the following code:

# ip where dnsmasq is installed
nameserver 192.168.0.2

Now you should be able to remotely connect from one node to another using your plain hostname, with the following command:

$ ssh oneadmin@kvm01

When you add new hosts, simply add them to /etc/hosts on the frontend and they will automatically work on every other host, thanks to dnsmasq.

Configuring sudo

To give administrative privileges to the oneadmin account on the hosts, add it to the sudo or admin group, depending on your /etc/sudoers configuration, using the following code:

# /etc/sudoers
Defaults env_reset
root ALL=(ALL) ALL
%sudo ALL=NOPASSWD: ALL

With this simple sudo configuration, every user in the sudo group can execute any command with root privileges, without being required to enter their user password before each command. Now add the oneadmin user to the sudo group with the following command:

$ sudo adduser oneadmin sudo

Giving full administrative privileges to the oneadmin account might be considered inappropriate by most security-focused people.
However, I can assure you that if you are taking your first steps with OpenNebula, having full administrative privileges could save some headaches. This is a suggested configuration, but it is not required to run OpenNebula.

Configuring network bridges

Every host should have its bridges configured with the same name. Check the following /etc/network/interfaces code as an example:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
iface eth0 inet manual

auto lan0
iface lan0 inet static
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    address 192.168.66.97
    netmask 255.255.255.0
    gateway 192.168.66.1
    dns-nameservers 192.168.66.1

You can have as many bridges as you need, bound or not bound to a physical network. By eliminating the bridge_ports parameter you get a purely virtual network for your VMs, but remember that without a physical network, different VMs on different hosts cannot communicate with each other.
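Here is the quick UID-uniformity check promised earlier. It is only a sketch: the hostnames follow this article's examples, and the UID value shown is arbitrary; what matters is that the numbers match on the frontend and on every host:

oneadmin@front-end $ id -u oneadmin
1001
oneadmin@front-end $ ssh oneadmin@kvm01 id -u oneadmin
1001

If the two numbers differ, files written to shared storage by one oneadmin account will not be owned by the other, which is a common source of hard-to-debug permission errors.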

GNU Octave: data analysis examples

Packt
28 Jun 2011
7 min read
Loading data files

When performing a statistical analysis of a particular problem, you often have some data stored in a file. You can save your variables (or the entire workspace) using different file formats and then load them back in again. Octave can, of course, also load data from files generated by other programs. There are certain restrictions when you do this, which we will discuss here. In the following, we will only consider ASCII files, that is, readable text files. When you load data from an ASCII file using the load command, the data is treated as a two-dimensional array. We can then think of the data as a matrix, where lines represent the matrix rows and columns the matrix columns. For this matrix to be well defined, the data must be organized such that all the rows have the same number of columns (and therefore the columns the same number of rows). For example, the content of a file called series.dat can be a plain table of numbers with four rows and three columns. Next we load this into Octave's workspace:

octave:1> load -ascii series.dat;

whereby the data is stored in the variable named series. In fact, Octave is capable of loading the data even if you do not specify the ASCII format. The number of rows and columns is then:

octave:2> size(series)
ans =
   4   3

I prefer the file extension .dat but, again, this is optional and can be anything you wish, say .txt, .ascii, .data, or nothing at all. In the data files you can have:

Octave comments
Data blocks separated by blank lines (or equivalent empty rows)
Tabs or single and multiple spaces for number separation

Thus, the following data file will successfully load into Octave:

# First block
1 232 334
2 245 334
3 456 342
4 555 321

# Second block
1 231 334
2 244 334
3 450 341
4 557 327

The resulting variable is a matrix with 8 rows and 3 columns. If you know the number of blocks or the block sizes, you can then separate the blocked data. Now, the following data, stored in the file bad.dat, will not load into Octave's workspace:

1 232.1 334
2 245.2
3 456.23
4 555.6

because line 1 has three columns whereas lines 2-4 have two columns. If you try to load this file, Octave will complain:

octave:3> load -ascii bad.dat
error: load: bad.dat: inconsistent number of columns near line 2
error: load: unable to extract matrix size from file 'bad.dat'

Simple descriptive statistics

Consider an Octave function mcintgr and its vectorized version mcintgrv. This function can evaluate the integral of a mathematical function f over some interval [a, b] where the function is positive. The Octave function is based on the Monte Carlo method, and the return value, that is, the integral, is therefore a stochastic variable. When we calculate a given integral, we should, as a minimum, present the result as a mean or another appropriate measure of a central value together with an associated statistical uncertainty. This is true for any other stochastic variable, whether it is the height of the pupils in a class, the length of a plant's leaves, and so on. In this section, we will use Octave for the most simple statistical description of stochastic variables.

Histogram and moments

Let us calculate the integral of sin(x) from 0 to pi (the integral given in Equation (5.9)) one thousand times, using the vectorized version of the Monte Carlo integrator:

octave:4> for i=1:1000
> s(i) = mcintgrv("sin", 0, pi, 1000);
> endfor

The array s now contains a sequence of numbers which we know are approximately 2.
Before we make any quantitative statistical description, it is always a good idea to first plot a histogram of the data, as this gives an approximation to the true underlying probability distribution of the variable s. The easiest way to do this is by using Octave's hist function, which can be called using:

octave:5> hist(s, 30, 1)

The first argument, s, to hist is the stochastic variable, the second is the number of bins that s should be grouped into (here we have used 30), and the third argument gives the sum of the heights of the histogram (here we set it to 1). The histogram is shown in the figure below. If hist is called via the command hist(s), s is grouped into ten bins, and the sum of the heights of the histogram equals the number of data points, numel(s).

From the figure, we see that mcintgrv produces a sequence of random numbers that appear to be normally (or Gaussian) distributed with a mean of 2. This is what we expected. It then makes good sense to describe the variable via the sample mean, defined as:

\bar{s} = \frac{1}{N} \sum_{i=1}^{N} s_i

where N is the number of samples (here 1000) and s_i is the i'th data point, as well as the sample variance, given by:

\sigma^2 = \frac{1}{N-1} \sum_{i=1}^{N} (s_i - \bar{s})^2

The variance is a measure of the distribution width and therefore an estimate of the statistical uncertainty of the mean value. Sometimes, one uses the standard deviation instead of the variance. The standard deviation is simply the square root of the variance, \sigma = \sqrt{\sigma^2}.

To calculate the sample mean, sample variance, and the standard deviation in Octave, you use:

octave:6> mean(s)
ans = 1.9999
octave:7> var(s)
ans = 0.002028
octave:8> std(s)
ans = 0.044976

In the statistical description of the data, we can also include the skewness, which measures the symmetry of the underlying distribution around the mean. If it is positive, it is an indication that the distribution has a long tail stretching towards positive values with respect to the mean. If it is negative, it has a long negative tail. The skewness is often defined as:

\text{skewness} = \frac{1}{N\sigma^3} \sum_{i=1}^{N} (s_i - \bar{s})^3

We can calculate this in Octave via:

octave:9> skewness(s)
ans = -0.15495

This result is a bit surprising, because we would assume from the histogram that the data set represents numbers picked from a normal distribution, which is symmetric around the mean and therefore has zero skewness. It illustrates an important point—be careful when using the skewness as a direct measure of the distribution's symmetry—you need a very large data set to get a good estimate.

You can also calculate the kurtosis, which measures the flatness of the sample distribution compared to a normal distribution. A negative kurtosis indicates a relatively flat distribution around the mean, and a positive kurtosis indicates that the sample distribution has a sharp peak around the mean. The kurtosis is defined by the following:

\text{kurtosis} = \frac{1}{N\sigma^4} \sum_{i=1}^{N} (s_i - \bar{s})^4 - 3

It can be calculated by the kurtosis function:

octave:10> kurtosis(s)
ans = -0.02310

The kurtosis has the same problem as the skewness—you need a very large sample size to obtain a good estimate.

Sample moments

As you may know, the sample mean, variance, skewness, and kurtosis are examples of sample moments. The mean is related to the first moment, the variance to the second moment, and so forth. Now, the moments are not uniquely defined. One can, for example, define the k'th absolute sample moment p_k^a and the k'th central sample moment p_k^c as:

p_k^a = \frac{1}{N} \sum_{i=1}^{N} s_i^k , \qquad p_k^c = \frac{1}{N} \sum_{i=1}^{N} (s_i - \bar{s})^k

Notice that the first absolute moment is simply the sample mean, but the first central sample moment is zero.
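Before turning to Octave's moment function, it can be instructive to verify the built-in statistics against the definitions above. A short sketch, not from the book:

N  = numel(s);
mu = sum(s) / N;                 % sample mean; agrees with mean(s)
v  = sum((s - mu).^2) / (N - 1); % sample variance; agrees with var(s)
sd = sqrt(v);                    % standard deviation; agrees with std(s)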
In Octave, you can easily retrieve the sample moments using the moment function. For example, to calculate the second central sample moment, you use:

octave:11> moment(s, 2, 'c')
ans = 0.002022

Here the first input argument is the sample data, the second defines the order of the moment, and the third argument specifies whether we want the central moment 'c' or the absolute moment 'a', which is the default. Compare the output with the output from Command 7—why is it not the same? (Hint: var normalizes by N-1, whereas the central moment normalizes by N.)
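To confirm the hint, rescale the second central moment by N/(N-1); up to rounding in the printed output, this reproduces the sample variance:

N = numel(s);
moment(s, 2, 'c') * N / (N - 1)  % should agree with var(s)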

Shipping and Tax Calculations with PHP 5 Ecommerce

Packt
20 Jan 2010
8 min read
Shipping

Shipping is a very important aspect of an e-commerce system; without it, customers will not accurately know the cost of their order. The only situation where we wouldn't want to include shipping costs is where we always offer free shipping. However, in that situation, we could either add provisions to ignore shipping costs, or we could set all values to zero and remove references to shipping costs from the user interface.

Shipping methods

The first requirement for calculating shipping costs is a shipping method. We may wish to offer a number of different shipping methods to our customers, such as standard shipping, next-day shipping, international shipping, and so on. The system will require a default shipping method, so that when the customer visits their basket, they see shipping costs calculated based on the default method. There should be a suitable drop-down list on the basket page containing the list of shipping methods; when this is changed, the costs in the basket should be updated to reflect the selected method.

We should store the following details for each shipping method:

- An ID number
- A name for the shipping method
- Whether the shipping method is active, indicating if it should be selectable by customers
- Whether the shipping method is the default method for the store
- A default shipping cost, which would:
  - Be pre-populated in a suitable field when creating new products; when the product is created through the administration interface, we would store the shipping cost for the product with the product.
  - Automatically be assigned to existing products when a new shipping method is created for a store that already contains products.

This could be suitably stored in our database as the following:

Field        | Type                                 | Description
ID           | Integer, primary key, auto increment | ID number for the shipping method
Name         | Varchar                              | The name of the shipping method
Active       | Boolean                              | Indicates if the shipping method is active
Is_default   | Boolean                              | Indicates if this is the store's default shipping method
Default_cost | Float                                | The default cost for products for this shipping method

This can be represented in the database using the following SQL:

CREATE TABLE `shipping_methods` (
  `ID` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `name` VARCHAR( 50 ) NOT NULL,
  `active` BOOL NOT NULL,
  `is_default` BOOL NOT NULL,
  `default_cost` DOUBLE NOT NULL,
  INDEX ( `active`, `is_default` )
) ENGINE = INNODB COMMENT = 'Shipping methods';

Shipping costs

There are several different ways to calculate the costs of shipping products to customers:

- We could associate a cost with each product for each shipping method we have in our store
- We could associate costs for each shipping method with ranges of weights, and charge the customer either based on the weight-based shipping cost of each product combined, or based on the combined weight of the order
- We could base the cost on the customer's delivery address

The exact methods used, and the way they are used, depend on the exact nature of the store, as there are implications to these methods. If we were to use location-based shipping cost calculations, then the customer would not be aware of the total cost of their order until they entered their delivery address. There are a few ways this can be avoided: the system could assume a default delivery location and associated costs, and then update the customer's delivery cost at a later stage.
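In that spirit, the basket page could simply pre-select the store's default method when the customer first arrives. A minimal sketch of such a lookup; the function name and the use of PDO are assumptions for illustration, not the book's framework code:

<?php
// Fetch the store's active default shipping method so the basket page
// can pre-select it and show initial shipping costs.
function getDefaultShippingMethod(PDO $db)
{
    $stmt = $db->query(
        'SELECT ID, name, default_cost
           FROM shipping_methods
          WHERE active = 1 AND is_default = 1
          LIMIT 1'
    );
    $method = $stmt->fetch(PDO::FETCH_ASSOC);
    return ($method === false) ? null : $method;
}
?>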
Alternatively, if we enabled delivery methods for different locations or countries, we could associate the appropriate costs with these methods, although this does of course rely on the customer selecting the correct shipping method for their order; appropriate notifications to the customer would be required to ensure they do select the correct one.

For this article we will implement:

- Weight-based shipping costs: Here the cost of shipping is based on the weight of the products.
- Product-based shipping costs: Here the cost of shipping is set on a per-product basis for each product in the customer's basket.

We will also discuss location-based shipping costs, and look at how we might implement them. To account for international or long-distance shipping, we will use varying shipping methods; perhaps we could use:

- Shipping within state X
- Shipping outside of state X
- International shipping (this could be broken down per continent if we wanted, without imposing on the customer too much)

Product-based shipping costs

Product-based shipping costs simply require each product to have a shipping cost associated with it for each shipping method in the store. As discussed earlier, when a new method is added to an existing store, a default value will initially be used, so in theory the administrator only needs to alter products whose shipping costs shouldn't be the default cost; when creating new products, the relevant text box for the shipping cost for that method will have the default cost pre-populated.

To facilitate these costs, we need a new table in our database storing:

- Product IDs
- Shipping method IDs
- Shipping costs

The following SQL represents this table in our database:

CREATE TABLE `shipping_costs_product` (
  `shipping_id` int(11) NOT NULL,
  `product_id` int(11) NOT NULL,
  `cost` float NOT NULL,
  PRIMARY KEY (`shipping_id`, `product_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

Weight-based shipping costs

Depending on the store being operated from our framework, we may need to base shipping costs on the weights of products. If a particular courier for a particular shipping method charges based on weight, then there isn't any point in creating costs for each product for that shipping method. Our framework can calculate the shipping costs based on the weight ranges and costs for the method, and the weight of the product. Within our database we would need to store:

- The shipping method in question
- A lower bound for the product weight, so we know which cost to apply to a product (a lookup query is sketched below)
- A cost associated with anything between this and the next weight bound

The table below illustrates these fields in our database:

Field        | Type                                 | Description
ID           | Integer, primary key, auto increment | A unique reference for the weight range
Shipping_id  | Integer                              | The shipping method the range applies to
Lower_weight | Float                                | For working out which products this weight range cost applies to
Cost         | Float                                | The shipping cost for a product of this weight

The following SQL represents this table:

CREATE TABLE `shipping_costs_weight` (
  `ID` int(11) NOT NULL auto_increment,
  `shipping_id` int(11) NOT NULL,
  `lower_weight` float NOT NULL,
  `cost` float NOT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 AUTO_INCREMENT=1;
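To determine which weight range applies to a given product, one possible lookup (a sketch; the named placeholders are illustrative) takes the row with the greatest lower bound that does not exceed the product's weight:

SELECT cost
  FROM shipping_costs_weight
 WHERE shipping_id = :shipping_id
   AND lower_weight <= :product_weight
 ORDER BY lower_weight DESC
 LIMIT 1;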
To think about: Location-based shipping costs

One thing we should still think about is location-based shipping costs, and how we might implement them. There are two primary ways in which we can do this:

- Assign shipping costs or cost surpluses/reductions to delivery addresses (either countries or states) and shipping methods
- Calculate costs using third-party service APIs

These two methods share one issue, which is why we are not going to implement them: the costs are calculated late in the checkout process. We want our customers to be well informed and aware of all of their costs as early as possible. As mentioned earlier, however, we could get round this by assuming a default delivery location and providing customers with a guideline shipping cost, which would be subject to change based on their delivery address. Alternatively, we could allow customers to select their delivery region from a drop-down list on the main "shopping basket" page. This way they would know the costs right away.

Regional shipping costs

We could look at storing:

- Shipping method IDs
- Region types (states or countries)
- Region values (an ID corresponding to a list of states or countries)
- A priority (in some cases, we may need to consider only the state delivery costs, and not the country costs; in other cases, it may be the other way around)
- The associated cost changes (this could be a positive or negative value to be added to a product's delivery cost, as calculated by the other shipping systems already)

By doing this, we can then combine the delivery address with the products and look up a price alteration, which is applied to the product's already-calculated delivery cost. Ideally, we would use all of the shipping cost calculation systems discussed, to make something as flexible as possible, based on the needs of a particular product, a particular shipping method or courier, or a particular store or business. (One possible table definition for these regional cost changes is sketched at the end of this section.)

Third-party APIs

The most accurate method of charging delivery costs, encompassing weights and delivery addresses, is via APIs provided by the couriers themselves, such as UPS. The following web pages may be of reference:

http://www.ups.com/onlinetools
http://answers.google.com/answers/threadview/id/429083.html

Using such an API means our shipping costs would be accurate, assuming our weight values were correct for our products, and we would not overcharge or undercharge customers for shipping. One additional consideration is that third-party APIs may also require the dimensions of products, if their costs are based on product sizes.
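Following the pattern of the earlier tables, the regional cost changes described above could be stored as follows. This is a sketch only; the table and column names are assumptions, not part of the article's schema:

CREATE TABLE `shipping_costs_regional` (
  `ID` int(11) NOT NULL auto_increment,
  `shipping_id` int(11) NOT NULL,
  `region_type` enum('state','country') NOT NULL,
  `region_id` int(11) NOT NULL,
  `priority` int(11) NOT NULL,
  `cost_change` float NOT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;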


Managing Users with PHP-Nuke

Packt
09 Mar 2010
19 min read
PHP-Nuke is about web communities, and communities need members. PHP-Nuke enables visitors to your site to create and maintain their own user account, and add their personal details. This is usually required for them to post their own new stories, make comments, or contribute to discussions in the forums. Those annoying little tasks like managing lost passwords are also taken care of for you by PHP-Nuke.

User accounts can be created in two ways:

- By the super user (that's you)
- By the user registering on your site

The second method involves a confirmation email sent to the user's email account. This email contains a link for them to click and confirm their registration to activate their account (this needs to be done within 24 hours or the registration expires).

Once a visitor is registered on your site, the gates to the full glory of your site will be thrown wide open. Visitors, or users as you could now call them, will be able to contribute to discussions on forums, add comments on posted stories, even add their own new stories, as well as access parts of the site that are off-limits to the 'riff-raff' unregistered visitor.

Ingredients of a User

Every user requires a certain amount of information to uniquely identify them in PHP-Nuke. There are the usual three things required of every user:

- A nickname: This is an alias or username, if you like. This identifies who the user is, and is their online identity in PHP-Nuke.
- A password: This is required to verify that the user is who they claim to be.
- A valid email address: This is where the confirmation email is sent.

Once the user account is created, the user is of course able to modify their details, and also view the details of other users. Information such as the URL of the user's own website, messenger IDs (MSN, AIM, and others), their location, and interests are also part of the user 'profile', but are not compulsory. By default, the real email address of any user is never made public, both for security and to prevent harvesting by spammers. Users can specify a 'fake email' address, possibly in spam-obfuscated form (for example, address_at_mydomain.com), which will be displayed to other users, although this is not required. A user's privacy is always protected.

Setting Up a New User

User management starts by clicking the Users icon in the Modules Administration menu. Clicking on this icon brings you to the User's Administration panel. This panel consists of two mini-panels, Edit User and Add a New User, whose use is given away by their titles. We'll start by setting up a new user. Our user will imaginatively be called testuser.

Time For Action—Setting Up a New User Manually

1. If you're not at the User's Administration panel, click on the Users icon in the Modules Administration menu.
2. In the Add a New User panel, enter testuser into the Nickname field.
3. Enter Test User into the Name field.
4. Enter your own email address into the Email field.
5. Scroll down to the Password field. Enter testuser as the password.
6. Click the Add User button.

When the page reloads, you will be taken straight back to the administration homepage.

What Just Happened?

We created a new user. For this simple user, we only specified the required fields Nickname, Email, and Password, and provided a single piece of personal information, Name. Failing to specify the required fields will mean that the user is not set up, and you will be prompted to go back and add the missing fields.
No email notification is sent to the user when the user is set up in this way, and no confirmation of the registration is required. As soon as you click Add User, provided all the required fields have been entered, the user is ready to go.

Editing the details of a user is equally easy, but you do have to know their nickname. Simply enter this into the Nickname field of the Edit User panel, select Modify from the drop-down box, and click Ok! If you have taken a sudden dislike to a particular user, enter their nickname into the Nickname field, select Delete from the drop-down box, click Ok!, and they are gone forever (the account, not the person).

Subscribing a User

Once a user has been created, you have the option to subscribe this user. We mentioned the idea of Subscribed Users in earlier articles; it's a mechanism for restricting module access to specific groups of people, such as fee-paying customers. There is only one group of Subscribed Users in PHP-Nuke at present, so once a user has a subscription, they are able to access any module restricted to Subscribed Users only.

The option to subscribe a user is not available when you create the user manually, as we did above. To find the option, you have to edit the user's details. This is done by entering their username into the Edit User panel, selecting Modify from the drop-down box, and clicking on the Ok! button. The subscription options are near the bottom of the user details, underneath the newsletter option. The Subscribe User option does not refer to 'subscribing to' the newsletter; you sign up the user or remove them from your newsletter mailing list with the Newsletter option. The Subscribe User option makes the user into one of the site's elite, a Subscribed User.

If you subscribe the user, then you must specify the Subscription Period. This is the length of time that the user remains subscribed, and ranges from 1 year to 10 years, in yearly increments. If you leave the Subscription Period at None, then the user will not be subscribed. Once a user has been subscribed, you can change their subscription details from the same panel: you can unsubscribe the user, or extend their subscription period. To shorten the subscription period, you would have to unsubscribe the user, subscribe them again, and then set the new period. Subscribed users are reminded of the passing of time and the impending expiry of their subscriptions when they visit the Your Account module—we'll further explore this module later in the article.

Time For Action—Registering as a User

This time we'll register to create a user account as a normal visitor would. We'll call the user account userdude. If you do not have your mail server set up, then you will just have to follow the text and screenshots for now. The confirmation email sent by PHP-Nuke is a key part of the registration process, and includes a special link for the visitor to click to activate their account. Don't worry though; when your site is live on your web hosting account, you will undoubtedly be able to access a mail server.

1. If you are still logged in as the super user, log out by clicking the Logout icon in either of the administration menus, or click the Logout link in the Administration block.
2. If you are still logged in as testuser, log out by clicking on the Your Account link in the Modules block, then click the Logout/Exit link in the navigation bar that appears. Alternatively, you can enter the logout URL directly: http://localhost/nuke/modules.php?name=Your_Account&op=logout. You will be redirected to the site homepage.
3. Now click the Your Account link in the Modules block.
4. Click the New User Registration link. This brings you to the New User Registration panel.
5. Enter the Nickname of userdude.
6. Enter your own email address into the Email field.
7. We are going to use userdude for the password as well as the nickname. If you think of another password at this point, enter it instead. Then put the password into the Re-type password field as well.
8. Click the New User button. You will come to the final step of the registration process.
9. Click the Finish button.
10. Open up your email client, and log in to check your mail. You should find a mail with the subject New User Account Activation waiting for you. It will be from the email address you specified in the Administrator Email field in the Site Configuration Menu. The body of that email will look something like this:

Welcome to the Dinosaur Portal

You or someone else has used your email account (myaddress@packtpub.com) to register an account at the Dinosaur Portal. To finish the registration process you should visit the following link in the next 24 hours to activate your user account, otherwise the information will be automatically deleted by the system and you should apply again:

http://thedinosaurportal.com/modules.php?name=Your_Account&op=activate&username=userdude&check_num=64ad845758d7f8f572b12800f60842ba

Following is the member information:

- Nickname: userdude
- Password: userdude

11. Click the link in the email, or copy the link and paste it into your browser, and you will be taken to the New User Activation page, where you will see a message of the form: "userdude: Your account has been activated. Please login from this link using your assigned Nickname and Password." Clicking on this link takes you back to the User Registration/Login page of the Your Account module, and you can use your nickname and password to log in.

What Just Happened?

You just created a new user account. The page for logging in is the homepage of the Your Account module. We'll talk more about this module in a minute; as you could guess, it handles everything to do with 'your' user account. If the visitor is not logged in, they are presented with the login panel when they visit the Your Account module page. From here they can enter their nickname and password to log in, or click the New User Registration link to register a new user account, as we did.

For visitors who have forgotten their password, clicking on the Lost your Password? link will take them to a screen where they can enter their nickname, and an email will be sent to their registered email address containing a confirmation code, a random-looking 10-digit string; with this code they can have their password changed. A new, random password is generated and emailed to them. PHP-Nuke never stores raw passwords in its database, so it can never reveal any password. With the new password, the user can log in and change their password to something easier to remember.

The registration process for the user is straightforward; they only require a nickname, a valid email address, and a password.
There are certain rules, however, that are followed by PHP-Nuke:

- Only one occurrence of an email address is allowed on the system; if someone uses an email address that belongs to another user account, that address will be rejected, and the user will have to choose another.
- Only one occurrence of a particular nickname is allowed as well; the system will check the uniqueness of the nickname before creating the account.

After the visitor clicks Finish on the final step, the user account is created. Following that, the confirmation email is sent to the email address. If the email address specified is invalid, or not the visitor's email address, then that visitor will have to create their account with a new email address. If the user doesn't mind being embarrassed, they can contact the site administrator, or wait 24 hours for the account to be deleted from the list of 'waiting to be activated' accounts, and then try again.

You will notice that the link to activate the account contains the URL of your PHP-Nuke site:

http://thedinosaurportal.com/modules.php?name=Your_Account&op=activate&username=userdude&check_num=64ad845758d7f8f572b12800f60842ba

It is very important that you have configured your Site URL option correctly in the Web Site Configuration menu (we saw this in Article 4). If you haven't done that, then the activation link will point to the wrong site! The check_num part of the URL is what identifies the unregistered visitor to the system. When the visitor registers their details, PHP-Nuke stores them in the database along with the check_num value. When the visitor visits the above link, PHP-Nuke checks the value of check_num against the values stored in the database, and if it finds a match, it moves that visitor's details to the proper users table in the database, and removes them from the table of visitors waiting to confirm their registration.
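Conceptually, the activation handler does something like the following sketch. The table and column names here are assumptions for illustration only, not PHP-Nuke's actual schema or code:

<?php
// Promote a pending registration to a real account when the visitor
// follows the activation link containing their check_num.
$check = isset($_GET['check_num']) ? $_GET['check_num'] : '';
$db = new PDO('mysql:host=localhost;dbname=nuke', 'dbuser', 'dbpass');

$stmt = $db->prepare(
    'SELECT nickname, email, password FROM users_pending WHERE check_num = ?');
$stmt->execute(array($check));
$row = $stmt->fetch(PDO::FETCH_ASSOC);

if ($row) {
    // Move the visitor's details into the proper users table...
    $ins = $db->prepare(
        'INSERT INTO users (nickname, email, password) VALUES (?, ?, ?)');
    $ins->execute(array($row['nickname'], $row['email'], $row['password']));
    // ...and remove them from the waiting list.
    $del = $db->prepare('DELETE FROM users_pending WHERE check_num = ?');
    $del->execute(array($check));
    echo 'Your account has been activated.';
} else {
    echo 'Invalid or expired activation code.';
}
?>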
That's all there is to creating user accounts. It is possible to turn off registration, so that only the administrator can create accounts. If you feel the need for this, you can read more about it in the PHP-Nuke HOWTO:

http://www.karakas-online.de/EN-Book/disable-registration.html

That section of the PHP-Nuke HOWTO also has a number of other user account hacks that you can make use of.

Graphical Code for User Registration

PHP-Nuke enables you to add a security code to the login or registration pages on the site. The security code is a small graphic with some digits, and is shown under the password fields, along with a textbox for the visitor to type in the digits from the graphic. The point of this device is to prevent automated registrations; without typing the correct digits into the Type Security Code field, the submission will not be accepted. The digits displayed in the image are not part of the page HTML, and the only way for the digits to be read is to actually see them when they are displayed on a monitor.

Use of the security code is controlled by a setting in the file config.php in the root of your PHP-Nuke installation. (This was the file in which we made some database settings in Article 2.) The setting to change is the value of the $gfx_chk variable. By default, it looks like this in the file, which means that the security code is not used:

$gfx_chk = 0;

The config.php file itself has a description of the values for this variable, as seen in the table:

Value | Effect on the Security Code
0     | Security code is never used.
1     | Security code only appears on the administrator's login page (admin.php).
2     | Security code only appears on the normal user login page.
3     | Security code only appears for new user registrations.
4     | Security code appears for user login and new user registrations.

Thus, to have the security code appear only at the administrator login, you would set $gfx_chk to 1 and then save the config.php file:

$gfx_chk = 1;

For the graphical code to function properly, the GD extension will need to work properly with PHP on the web server. The GD extension takes care of drawing the graphics, and if this isn't functioning for whatever reason (possibly it's not installed), then the graphic will not be displayed properly, and it will be impossible to determine the security code. In that case, you will have to change the setting in config.php to remove the graphical code. If you are running your site on a web hosting account and the graphical security code is not being displayed when it should, then you should contact your host's technical support to find out if there is a problem with the GD extension.

You can tell if the GD extension is installed by using the phpinfo() PHP function. Open a text editor and enter the following code:

<?php phpinfo(); ?>

Save this file as phpinfo.php in the web server root (xampp\htdocs). When you navigate to that page in your browser, a number of PHP settings are displayed, including the status of the GD extension. If you do not see a table of GD settings on the page, or if it does not say enabled next to GD Support, then contact your host's technical support. The XAMPP package we install in Appendix A has GD installed and working.
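As a quicker alternative to scanning the full phpinfo() output, you can also ask PHP directly whether GD is loaded. A small sketch, not from the book:

<?php
// Report whether the GD extension is available to PHP.
if (extension_loaded('gd')) {
    $info = gd_info();
    echo 'GD is available: ' . $info['GD Version'];
} else {
    echo 'GD is NOT available - the graphical security code will not work.';
}
?>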
Seeing Who's Who

Log in to your site as the super user and activate the Members List module (deactivated by default). After activation, there will be an additional option available in the Modules block, the Members List module, which provides anyone able to view this module with a list of the registered users. Clicking on a username will bring up a view of that user's profile. This is only a view of the user profile; it is not an editable form.

You will notice the word Forum in the profile screen. The user profile displayed here is actually the user profile from the Forums module (note also that the Forums module needs to be activated for this screen to be seen). You will also notice that the name of the site is wrong—it says MySite.com, which is not the value we set for our site name. This is because the Forums module has its own set of configuration settings. We will see how to set these in Article 8. Also note that the Members List module takes information from the Forums module configuration settings.

The Forums module is a complete application—phpBB, one of the best pieces of free, open-source forum software around—integrated into PHP-Nuke. One aspect of the integration is the shared user account—the user account you create for the PHP-Nuke site also functions as a user account on the forums. As a user, it is possible to work with your details in two places in PHP-Nuke—from the Your Account module and also from within the Forums module. Although there are two views of the information, and two places to edit your details, there is still only one user account. At the moment, the Your Account module offers more user details than are found in the Forums module, such as newsletter subscription information.

The integration between the PHP-Nuke user account and the user account for the Forums module has gradually become tighter over the versions of PHP-Nuke, and they are likely to 'converge' further in future versions of PHP-Nuke. Once a user account is created, and the user has logged in, a whole new world opens up to them.

The Your Account Module

The Your Account module is a visitor's space. The visitor is guided around their space by a graphical navigation bar. Before we look at each of these links, let's mention what else is on the front page of the Your Account module:

- My Headlines: The user can view a list of headlines from an RSS news feed of another site. The user can select one of the headline sites that we saw in the previous article, or enter the URL of the site directly.
- Broadcast Public Message: The user can enter the text of a public message to be shown to all current visitors of the site. We'll look at this in a moment.

These two features are not always displayed; their display is controlled by options in the Web Site Configuration menu that we'll see in a moment. However, the user is always able to see their Last 10 Comments posted and their Last 10 News Submissions on this page.

Returning to our discussion of the links in the navigation bar of the Your Account module, we've already seen what the Logout/Exit link does; it logs the visitor out. The Themes link takes the visitor to a page from where they can choose one from the list of themes installed on the site. We'll look at the Comments link in detail in the next article; it leads to options for viewing and posting comments on stories. Note that when you are logged in as the super user, the Your Account module displays another panel called Administration Functions. This panel allows you to modify certain details of that user. We will talk about these in the next article and meet them in their natural context.

Editing the User Profile

The Your Info link takes the user to their user profile. We saw some of the options here when we looked at creating the user manually. These options are generally for personal details (name, email, and so on), newsletter subscription, private message options, and forum configuration, among others. The options themselves are straightforward. A number of options in the user profile correspond to forum profile options, and don't particularly affect the user outside of the Forums module. After making any changes to a user profile, the Save Changes button needs to be clicked to save these changes. Note that the Save Changes button is not the button at the very bottom of the user details page—the Save Changes button is above the Avatar Control Panel. The button at the bottom of the form is marked Submit, and is only active when the options in the Avatar Control Panel are enabled.

The Avatar Control Panel, seen at the bottom of the user profile, contains an interesting option. An avatar is a small graphic representing you as an online character. You can choose a graphic from the already existing library by clicking on the Show Gallery button next to the Select Avatar from gallery option. Clicking on this button brings up a selection of little images for the user to choose from. Simply click on the required image and it will be assigned to the user profile. Clicking the Back to Profile link will return you to the Your Info page.

The library of images you just saw can be found in the modules\Forums\images\avatars\gallery folder of your PHP-Nuke installation. If you want, you can add more images here, but make sure your image is a GIF file, and that it isn't more than 80 pixels wide or 80 pixels high.
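If you script the addition of new gallery images, a small helper along these lines can enforce those rules. This is a sketch, not part of PHP-Nuke; the function name and sample path are illustrative:

<?php
// Check that a candidate avatar follows the gallery rules:
// a GIF file, at most 80 pixels wide and 80 pixels high.
function isValidAvatar($path)
{
    $info = getimagesize($path);
    if ($info === false) {
        return false; // not a readable image at all
    }
    list($width, $height, $type) = $info;
    return $type === IMAGETYPE_GIF && $width <= 80 && $height <= 80;
}

var_dump(isValidAvatar('modules/Forums/images/avatars/gallery/myavatar.gif'));
?>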
Your Account Configuration

The Your Home link provides some options for configuring Your Account further. From this panel, the number of news stories displayed on the homepage of the site can be controlled. Remember, this setting only applies to you—and only when you are logged in.