
How-To Tutorials - Web Development

1802 Articles

Adapting to User Devices Using Mobile Web Technology

Packt · 23 Oct 2009 · 10 min read
Luigi's Pizza On The Run mobile shop is working well now, and he wants to adapt it to different mobile devices. Let's look at the following:

- Understanding the Lowest Common Denominator method
- Finding and comparing features of different mobile devices
- Deciding whether or not to adapt
- Adapting and progressively enhancing the POTR application using the Wireless Abstraction Library
- Detecting device capabilities
- Evaluating tools that can aid in adaptation
- Moving your blog to the mobile web

By the end of this article, you will have a strong foundation in adapting to different devices.

What is Adaptation?

Adaptation, sometimes called multiserving, means delivering content as per each user device's capabilities. If the visiting device is an old phone supporting only WML, you will show a WML page with Wireless Bitmap (wbmp) images. If it is a newer XHTML MP-compliant device, you will deliver an XHTML MP version, customized according to the screen size of the device. If the user is on iMode in Japan, you will show a Compact HTML (cHTML) version that's more forgiving than XHTML. This way, users get the best experience possible on their device.

Do I Need Adaptation?

I am sure most of you are wondering why you would want to create so many different versions of your mobile site. Isn't following the XHTML MP standard enough? On the Web, you could make sure that you followed XHTML and the site would work in all browsers. The browser-specific quirks are limited and fixes are easy. However, in the mobile world, you have thousands of devices using hundreds of different browsers. You need adaptation precisely for that reason! If you want to serve all users well, you need to worry about adaptation. WML devices will give up if they encounter a <b> tag within an <a> tag. Some XHTML MP browsers will not be able to process a form if it is within a table, but a table within a form will work just fine. If your target audience is limited, and you know that they are going to use a limited range of browsers, you can live without adaptation.

Can't I Just Use Common Capabilities and Ignore the Rest?

You can. By finding the Lowest Common Denominator (LCD) of the capabilities of target devices, you can design a site that will work reasonably well on all devices. Devices with better capabilities than the LCD will see a version that may not be very beautiful, but things will work just as well.

How to Determine the LCD?

If you are looking for something more than the W3C DDC guidelines, you may be interested in finding out the capabilities of different devices to decide on your own what features you want to use in your application. There is a nice tool that allows you to search on device capabilities and compare them side by side. Take a look at the following screenshot showing mDevInf (http://mdevinf.sourceforge.net/) in action, showing image formats supported on a generic iMode device. You can search for devices and compare them, and then come to a conclusion about the features you want to use.

This is all good, but when you want to cater to a wider mobile audience, you must consider adaptation. You don't want to fight with browser quirks and silly compatibility issues. You want to focus on delivering a good solution. Adaptation can help you there.

OK, So How do I Adapt?

You have three options to adapt:

- Design alternative CSS: this will control the display of elements and images. This is the easiest method. You can detect the device and link an appropriate CSS file.
- Create multiple versions of pages: redirect the user to a device-specific version.
  This is called "alteration". This way you get the most control over what is shown to each device.
- Automatic adaptation: create content in one format and use a tool to generate device-specific versions. This is the most elegant method.

Let us rebuild the pizza selection page on POTR to learn how we can detect the device and implement automatic adaptation.

Fancy Pizza Selection

Luigi has been asking us to put up photographs of his delicious pizzas on the mobile site, but we didn't do that so far to save bandwidth for users. Let us now go ahead and add images to the pizza selection page. We want to show larger images to devices that can support them. Review the code shown below. It's an abridged version of the actual code.

```php
<?php include_once("wall_prepend.php"); ?>
<wall:document><wall:xmlpidtd />
<wall:head>
  <wall:title>Pizza On The Run</wall:title>
  <link href="assets/mobile.css" type="text/css" rel="stylesheet" />
</wall:head>
<wall:body>
<?php
echo '<wall:h2>Customize Your Pizza #'.$currentPizza.':</wall:h2>
<wall:form enable_wml="false" action="index.php" method="POST">
<fieldset>
<wall:input type="hidden" name="action" value="order" />';

// If we did not get the total number of pizzas to order,
// let the user select
if ($_REQUEST["numPizza"] == -1) {
    echo 'Pizzas to Order: <wall:select name="numPizza">';
    for($i=1; $i<=9; $i++) {
        echo '<wall:option value="'.$i.'">'.$i.'</wall:option>';
    }
    echo '</wall:select><wall:br/>';
} else {
    echo '<wall:input type="hidden" name="numPizza"
        value="'.$_REQUEST["numPizza"].'" />';
}

echo '<wall:h3>Select the pizza</wall:h3>';
// Select the pizza
$checked = 'checked="checked"';
foreach($products as $product) {
    // Show a product image based on the device size
    echo '<wall:img src="assets/pizza_'.$product["id"].'_120x80.jpg"
            alt="'.$product["name"].'">
        <wall:alternate_img src="assets/pizza_'.$product["id"].'_300x200.jpg"
            test="'.($wall->getCapa('resolution_width') >= 200).'" />
        <wall:alternate_img nopicture="true"
            test="'.(!$wall->getCapa('jpg')).'" />
        </wall:img><wall:br />';
    echo '<wall:input type="radio" name="pizza['.$currentPizza.']"
        value="'.$product["id"].'" '.$checked.'/>';
    echo '<strong>'.$product["name"].' ($'.$product["price"].')</strong> - ';
    echo $product["description"].'<wall:br/>';
    $checked = '';
}
echo '<wall:input type="submit" class="button" name="option" value="Next" />
</fieldset></wall:form>';
?>
<p><wall:a href="?action=home">Home</wall:a> -
<wall:caller tel="+18007687669">+1-800-POTRNOW</wall:caller></p>
</wall:body>
</wall:html>
```

What are Those <wall:*> Tags?

All those <wall:*> tags are at the heart of adaptation. The Wireless Abstraction Library (WALL) is an open-source tag library that transforms the WALL tags into WML, XHTML, or cHTML code. For example, iMode devices use the <br> tag and simply ignore <br />. WALL will ensure that cHTML devices get a <br> tag and XHTML devices get a <br /> tag. You can find a very good tutorial and extensive reference material on WALL at http://wurfl.sourceforge.net/java/wall.php, and you can download WALL and many other tools from that site too. WALL4PHP, a PHP port of WALL, is available from http://wall.laacz.lv/. That's what we are using for POTR.

Let's Make Sense of This Code!

What are the critical elements of this code? Most of it is very similar to standard XHTML MP. The biggest difference is that tags have a "wall:" prefix. Let us look at some important pieces:

- The wall_prepend.php file at the beginning loads the WALL class, detects the user's browser, and loads its capabilities.
- You can use the $wall object in your code later to check device capabilities, etc.
- <wall:document> tells the WALL parser to start the document code. <wall:xmlpidtd /> will insert the XHTML/WML/cHTML prolog as required. This solves part of the headache in adaptation.
- The next few lines define the page title and meta tags. Code that is not in <wall:*> tags is sent to the browser as is.
- The heading tag will render as bold text on a WML device. You can use many standard tags with WALL; just prefix them with "wall:".
- We do not want to enable WML support in the form. It requires a few more changes in the document structure, and we don't want it to get complex for this example! If you want to support forms on WML devices, you can enable it in the <wall:form> tag.
- The img and alternate_img tags are a cool feature of WALL. You can specify the default image in the img tag, and then specify alternative images based on any condition you like. One of these images will be picked up at run time. WALL can even skip displaying the image altogether if the nopicture test evaluates to true. In our code, we show a 120x80 pixel image by default, and show a larger image if the device resolution is at least 200 pixels wide. As the image is a JPG, we skip showing the image if the device cannot support JPG images. The alternate_img tag also supports showing some icons available natively on the phone. You can refer to the WALL reference for more on this.
- Adapting the phone call link is dead simple. Just use the <wall:caller> tag. Specify the number to call in the tel attribute, and you are done. You can also specify what to display if the phone does not support phone links in the alt attribute.

When you load the URL in your browser, WALL will do all the heavy lifting and show a mouth-watering pizza (a larger mouth-watering pizza if you have a large screen!).

Can I Use All XHTML Tags?

WALL supports many XHTML tags. It has some additional tags to ease menu display and invoke phone calls. You can use <wall:block> instead of <p> or <div> tags because it will degrade well, and yet allow you to specify a CSS class and id. WALL does not have tags for tables, though it can use tables to generate menus. Here's a list of WALL tags you can use: a, alternate_img, b, block, body, br, caller, cell, cool_menu, cool_menu_css, document, font, form, h1, h2, h3, h4, h5, h6, head, hr, i, img, input, load_capabilities, marquee, menu, menu_css, option, select, title, wurfl_device_id, xmlpidtd. Complete listings of the attributes available with each tag, and their meanings, are available from http://wurfl.sourceforge.net/java/refguide.php.

Will This Work Well for WML?

WALL can generate WML. WML itself has limited capabilities, so you will be restricted in the markup that you can use. You have to enclose content in <wall:block> tags and test rigorously to ensure full WML support. WML handles user input in a different way, and we can't use radio buttons or checkboxes in forms. A workaround is to change radio buttons to a menu and pass values using the GET method. Another is to convert them to a select drop-down. We are not building WML capability into POTR yet. WALL is still useful for us, as it can support cHTML devices and will automatically take care of XHTML implementation variations in different browsers. It can even generate some cool menus for us! Take a look at the following screenshot.
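
If you prefer to branch on device capabilities in plain PHP rather than through the tag library, the same getCapa() checks that drive the alternate_img tests above can be used directly. Here is a minimal sketch, assuming wall_prepend.php has detected the device and exposed the $wall object as in the listing above:

```php
<?php
// Minimal sketch: pick an image variant from WALL's device capabilities.
// Assumes wall_prepend.php has detected the device and created $wall.
include_once("wall_prepend.php");

function pizza_image($wall, $productId) {
    // Skip images entirely on devices that cannot render JPG.
    if (!$wall->getCapa('jpg')) {
        return null;
    }
    // Wide screens get the 300x200 variant, everything else 120x80.
    $size = ($wall->getCapa('resolution_width') >= 200) ? '300x200' : '120x80';
    return "assets/pizza_{$productId}_{$size}.jpg";
}

$src = pizza_image($wall, 1);
if ($src !== null) {
    echo '<img src="'.htmlspecialchars($src).'" alt="Pizza" />';
}
?>
```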


Digitally Signing and Verifying Messages in Web Services (Part 1)

Packt · 22 Oct 2009 · 8 min read
Confidentiality and integrity are two critical components of web services. While confidentiality can be ensured by means of encryption, the encrypted data can still be overwritten, and the integrity of the message can be compromised. So it is equally important to protect the integrity of the message; digital signatures help us do just that.

Overview of Digital Signatures

In the web services scenario, XML messages are exchanged between the client application and the web services. Certain messages contain critical business information, and therefore the integrity of the message should be ensured. Ensuring the integrity of a message is not a new concept; it has been around for a long time. The concept is to make sure that the data was not tampered with while in transit between the sender and the receiver. Consider, for example, that Alice and Bob are exchanging emails that are critical to business. Alice wants to make sure that Bob receives the correct email that she sent, and that no one else tampered with or modified the email in between. In order to ensure the integrity of the message, Alice digitally signs the message using her private key, and when Bob receives the message, he will check to make sure that the signature is still valid before he can trust or read the email.

What is this digital signature? And how does it prove that no one else tampered with the data? When a message is digitally signed, it basically follows these steps:

1. Create a digest value of the message (a unique string value for the message, using a SHA1 or MD5 algorithm).
2. Encrypt the digest value using the private key, known only to the sender.
3. Exchange the message along with the encrypted digest value.

MD5 and SHA1 are message digest algorithms used to calculate the digest value. The digest or hash value is nothing but a non-reversible unique string for any given data; the digest value will change even if a space is added or removed. SHA1 produces a 160-bit digest value, while MD5 produces a 128-bit value.

When Bob receives the message, his first task is to validate the signature. Validation of the signature goes through a sequence of steps:

1. Create a digest value of the message again using the same algorithm.
2. Decrypt the received digest value using the public key of Alice (obtained out of band or as part of the message, etc.).
3. Validate to make sure that the decrypted digest value matches the one just computed from the message.

Since the public key is known or exchanged along with the message, Bob can check the validity of the certificate itself. Digital certificates are issued by a trusted party such as VeriSign. When a certificate is compromised, you can revoke the certificate, which will invalidate the public key. Once the signature is verified, Bob can trust that the message was not tampered with by anyone else. He can also validate the certificate to make sure that it is not expired or revoked, and to ensure that no one actually tampered with the private key of Alice.
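
These steps map directly onto standard crypto APIs. As an illustration outside this article's Oracle WSM setup, here is the same digest, sign, and verify round trip using PHP's OpenSSL extension (the key file names and passphrases are hypothetical):

```php
<?php
// Sketch of the digest -> sign -> verify flow described above.
// alice_key.pem / alice_cert.pem are hypothetical file names.
$message = 'Critical business payload';

// Alice signs: openssl_sign hashes the message (SHA1 here) and
// encrypts the digest with her private key in one step.
$privateKey = openssl_pkey_get_private('file://alice_key.pem', 'key-passphrase');
openssl_sign($message, $signature, $privateKey, OPENSSL_ALGO_SHA1);

// Bob verifies: recompute the digest and check it against the
// signature using Alice's public key (taken from her certificate).
$publicKey = openssl_pkey_get_public('file://alice_cert.pem');
$ok = openssl_verify($message, $signature, $publicKey, OPENSSL_ALGO_SHA1);
echo $ok === 1 ? "Signature valid\n" : "Signature INVALID\n";

// Even a one-character change breaks the digest, as noted above.
echo openssl_verify($message.' ', $signature, $publicKey, OPENSSL_ALGO_SHA1) === 1
    ? "tampered message verified?!\n" : "tampered message rejected\n";
?>
```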
Digital Signatures in Web Services

In the last section, we learnt about digital signatures. Since web services are all about interoperability, digital-signature-related information is represented in an industry-standard format called XML Signature (standardized by the W3C). The following are the key data elements that are represented in an interoperable manner by XML Signature:

- What data (what part of the SOAP message) is digitally signed
- What hash algorithm (MD5 or SHA1) is used to create the digest value
- What signature algorithm is used
- Information about the certificate or key

In the next section, we will describe how Oracle Web Services Manager can help generate and verify signatures in web services.

Signature Generation Using Oracle WSM

Oracle Web Services Manager can centrally manage security policy, including digital signature generation. One of the greatest advantages of using Oracle WSM to digitally sign messages is that the policy information and the digital certificate information are centrally stored and managed. An organization can have many web services, some of which might exchange business-critical information and require that the messages be digitally signed. Oracle WSM plays a key role when different web services have different requirements for signing the message, or when it is required to take certain actions before or after signing the message. Oracle WSM can be used to configure the signature at each web service level, and that reduces the burden of deploying certificates across multiple systems. In this section, we will discuss how to digitally sign the response message of a web service using Oracle WSM.

Sign Message Policy Step

As a quick refresher: in Oracle WSM, each web service is registered within a gateway or an agent, and a policy is attached to each web service. The policy steps are divided mainly into a request pipeline template and a response pipeline template, where different policies can be applied for request or response message processing. In this section, I will describe how to configure the policy for a response pipeline template to digitally sign the response message. It is assumed that the web service is registered within a gateway; a detailed example will be described later in this article. In the response pipeline, we can add a policy step called Sign Message to digitally sign the message. In order to digitally sign a message, the key components that are required are:

- The private key store
- The private key password
- The part of the SOAP message that is being signed
- The signature algorithm being used

The following screenshot describes the Sign Message policy step with certain values populated. In the previous screenshot, the values that are populated are:

- Keystore location: the location where the private key file is located.
- Keystore type: whether it is PKCS12 or JKS.
- Keystore password: the password to the keystore.
- Signer's private-key alias: the alias to gain access to the private key from the keystore.
- Signer's private-key password: the password to access the private key.
- Signed Content: whether the BODY or the envelope of the SOAP message should be signed.

The above information is part of a policy that is attached to the time service, which will sign the response message. As per the information shown in the screenshot, the BODY of the SOAP message response will be digitally signed using SHA1 as the digest algorithm and a PKCS12 key store.
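
Oracle WSM opens that keystore itself; nothing needs to be coded. Still, to get a feel for what the configured values amount to, here is a hedged PHP sketch of opening a PKCS12 store and extracting the signing credentials (the file name and passwords are hypothetical):

```php
<?php
// Sketch: open a PKCS12 keystore and extract the signing credentials,
// the same ingredients the Sign Message step is configured with.
// signer.p12 / 'store-password' are hypothetical.
$p12 = file_get_contents('signer.p12');
if (openssl_pkcs12_read($p12, $creds, 'store-password')) {
    $privateKey  = $creds['pkey'];  // signer's private key
    $certificate = $creds['cert'];  // matching X.509 certificate
    // $privateKey can now be fed to openssl_sign(), as sketched earlier.
} else {
    echo "Could not open keystore (wrong password or corrupt file)\n";
}
?>
```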
Once the message is signed, the SOAP message will look like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope soap:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <soap:Header>
    <wsse:Security soap:mustUnderstand="1">
      <wsse:BinarySecurityToken
          ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3"
          EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary"
          wsu:Id="_VLL9yEsi09I9f5ihwae2lQ22">SecurityTOkenoKE2ZA==</wsse:BinarySecurityToken>
      <dsig:Signature>
        <dsig:SignedInfo>
          <dsig:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
          <dsig:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
          <dsig:Reference URI="#ishUwYWW2AAthrxhlpv1CA22">
            <dsig:Transforms>
              <dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
            </dsig:Transforms>
            <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
            <dsig:DigestValue>ynuqANuYM3qzdTnGOLT7SMxWHY=</dsig:DigestValue>
          </dsig:Reference>
          <dsig:Reference URI="#UljvWiL8yjedImz6zy0pHQ22">
            <dsig:Transforms>
              <dsig:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
            </dsig:Transforms>
            <dsig:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
            <dsig:DigestValue>9ZebvrbVYLiPv1BaVLDaLJVhwo=</dsig:DigestValue>
          </dsig:Reference>
        </dsig:SignedInfo>
        <dsig:SignatureValue>QqmUUZDLNeLpAEFXndiBLk</dsig:SignatureValue>
        <dsig:KeyInfo>
          <wsse:SecurityTokenReference wsu:Id="_7vjdWs1ABULkiLeE7Y4lAg22">
            <wsse:Reference URI="#_VLL9yEsi09I9f5ihwae2lQ22"/>
          </wsse:SecurityTokenReference>
        </dsig:KeyInfo>
      </dsig:Signature>
      <wsu:Timestamp wsu:Id="UljvWiL8yjedImz6zy0pHQ22">
        <wsu:Created>2007-11-16T15:13:48Z</wsu:Created>
      </wsu:Timestamp>
    </wsse:Security>
  </soap:Header>
  <soap:Body wsu:Id="ishUwYWW2AAthrxhlpv1CA22">
    <n:getTimeResponse>
      <Result xsi:type="xsd:string">10:13 AM</Result>
    </n:getTimeResponse>
  </soap:Body>
</soap:Envelope>
```
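
If you want to poke at a signed response like this from client code, the XML-DSig namespace is all you need to locate the signature parts. A small hedged PHP sketch follows (the file name is hypothetical; full verification, with canonicalization and digest checks, is what the Verify Signature step in part 2 performs):

```php
<?php
// Sketch: list the signed references and digest values in a response
// like the one above. The file name is hypothetical.
$soapXml = file_get_contents('signed-response.xml');

$dom = new DOMDocument();
$dom->loadXML($soapXml);

$dsigNs = 'http://www.w3.org/2000/09/xmldsig#';
foreach ($dom->getElementsByTagNameNS($dsigNs, 'Reference') as $ref) {
    $uri    = $ref->getAttribute('URI');
    $digest = $ref->getElementsByTagNameNS($dsigNs, 'DigestValue')
                  ->item(0)->textContent;
    echo "Signed part {$uri} has digest {$digest}\n";
}
?>
```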


Digitally Signing and Verifying Messages in Web Services (Part 2)

Packt · 22 Oct 2009 · 4 min read
Signature Verification by Oracle WSM

Oracle Web Services Manager can validate the signature in the incoming (i.e. request) SOAP message. By using Oracle WSM to validate the signature, organizations can centralize policy enforcement and also public key management. As organizations deploy more web services that are accessed by other divisions and business partners, managing the signature verification process can become tedious, as the certificate information must be maintained for each new consumer. Oracle WSM can address such issues by centralizing those operations. This section will describe how to configure an Oracle WSM policy to validate the signature of the SOAP request message.

In order to view the policy, you can click on Policy Management and then Manage Policies. This will bring you to the screen with the gateway information and a hyperlink for policies (see the following screen capture). You can then click on Policies to see all the policies, and you will see the VerifyAndSign policy too, which is created by default. A default policy is attached to the service. We can now click Edit to edit the policy. When you click Edit, you will see the policy steps as shown in the following screenshot.

In this section, we want to configure the Request pipeline to validate the signature of the incoming SOAP message. In order to validate the signature, click Add Step Below to add the Verify Signature policy step as shown in the following screenshot. Once you click OK, the verify signature policy step is added, but that policy step must still be configured. If you click on the Configure button on the verify signature policy step, it will take you to the screen where you can configure the verify signature policy information, as shown in the following screen capture. In the previous screenshot, I configured the Verify Signature policy step with:

- The location of the key store
- The key store type as PKCS12
- The password of the key store
- The public key alias in the key store
- Remove Signatures set to true, to remove the digital signature after the signature validation
- Enforce Signing set to true, to make sure that the incoming requests are signed

In order to generate a PKCS12 key store from a certificate that is already installed in Microsoft Certificate Services, you should first export the certificate (with or without the private key), then import that certificate into Firefox (Advanced option), and then export it back to PKCS12.

Once the verify signature policy has been configured and saved (Commit Policy), the policy will enforce that any request for the time service with the particular service ID be digitally signed.
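
Incidentally, the Firefox export route above is only one way to produce a PKCS12 store. If the certificate and private key are available as PEM files, PHP's OpenSSL extension can package them directly; a minimal sketch, with hypothetical file names and passwords:

```php
<?php
// Sketch: build a PKCS12 keystore from a PEM certificate and key,
// an alternative to the Firefox export route described above.
// service_cert.pem / service_key.pem are hypothetical file names.
$cert = file_get_contents('service_cert.pem');
$key  = openssl_pkey_get_private('file://service_key.pem', 'key-passphrase');

// $p12 receives the binary keystore; the last argument is its password.
if (openssl_pkcs12_export($cert, $p12, $key, 'store-password')) {
    file_put_contents('service.p12', $p12);
    echo "Wrote service.p12\n";
}
?>
```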
Signature Generation by Oracle WSM

In the last section, we discussed how to digitally sign a web service request from a Microsoft .NET application and how to validate the signature with Oracle WSM. In this section, we will discuss how to digitally sign the web service response message. In the earlier section, we discussed how to register the service and how to attach the verify signature policy step to the request pipeline. In order to digitally sign the response message, the response pipeline of the policy should be modified to include the Sign Message policy step. The policy with the request pipeline already configured to verify the signature is shown in the first screenshot below.

Now we have to add the step in the Response pipeline to actually sign the response message. In order to add the policy step, click on Add Step Below and then select the Sign Message policy step. Once the Sign Message policy step is added, it can then be configured, as shown in the following screenshot, to include the appropriate key store location for the private key used to digitally sign the message. In the previous figure, the location of the key store that has the private key, along with the keystore password, alias, and the part of the message to be signed, are specified. Once the policy is created, it would look like the following screenshot. In the previous screenshot, the Response pipeline has two log steps: one to log the message before digitally signing, and one to log the message after digitally signing. In this sample, we are using the same WSEQuickStartServer certificate to sign the message. Once the policy is saved, the response message will be digitally signed. The client application (Microsoft .NET) can be configured to validate the signature.


Customer Management in Joomla! and VirtueMart

Packt · 22 Oct 2009 · 7 min read
Note that all VirtueMart customers must be registered with Joomla!. However, not all Joomla! users need to be VirtueMart customers. Within the first few sections of this article, you will get a clear picture of user management in Joomla! and VirtueMart.

Customer management

Customer management in VirtueMart includes registering customers to the VirtueMart shop, assigning them to user groups for appropriate permission levels, managing fields in the registration form, viewing and editing customer information, and managing the user groups. Let's dive into these activities in the following sections.

Registration/Authentication of customers

Joomla! has a very strong user registration and authentication system. One core component in Joomla! is com_users, which manages user registration and authentication. However, VirtueMart needs some extra information for customers. VirtueMart collects this information through its own customer registration process, and stores the information in separate tables in the database. The extra information required by VirtueMart is stored in a table named jos_vm_user_info, which is related to the jos_users table by the user id field. Usually, when a user registers on the Joomla! site, they also register with VirtueMart. This depends on some global settings. In the following sections, we are going to learn how to enable user registration and authentication for VirtueMart.
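
To make that table relationship concrete, here is a rough illustration of fetching a customer's combined record. The join columns and address fields are assumed from a stock VirtueMart schema, not stated in this article:

```php
<?php
// Sketch: fetch a customer's Joomla! account plus the extra
// VirtueMart fields. Assumes the default jos_ table prefix and
// that jos_vm_user_info.user_id references jos_users.id.
$db = new mysqli('localhost', 'joomla', 'secret', 'joomla_db');

$sql = "SELECT u.username, u.email, i.address_1, i.city, i.country
        FROM jos_users AS u
        JOIN jos_vm_user_info AS i ON i.user_id = u.id
        WHERE u.id = ?";
$stmt = $db->prepare($sql);
$userId = 62; // hypothetical customer id
$stmt->bind_param('i', $userId);
$stmt->execute();
$customer = $stmt->get_result()->fetch_assoc();
print_r($customer);
?>
```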
Revisiting registration settings

We configure the registration settings from VirtueMart's administration panel, on the Admin | Configuration | Global screen. There is a section titled User Registration Settings, which defines how user registration will be handled. Ensure that your VirtueMart shop has been configured as shown in the screenshot above. The first field to configure is the User Registration Type. Selecting Normal Account Creation in this field creates both a Joomla! and a VirtueMart account during user registration. For our example shop, we will be using this setting. Joomla!'s new user activation should be disabled when we are using VirtueMart. That means the Joomla! New account activation necessary? field should read No.

Enabling the VirtueMart login module

There is a default module in Joomla! which is used for user registration and login. When we are using this default Login Form (the mod_login module), it does not collect the information required by VirtueMart, and does not create customers in VirtueMart. By default, when published, the mod_login module looks like the following screenshot. As you see, registered users can log in to Joomla! through this form, recover their forgotten password by clicking on the Forgot your password? link, and create a new user account by clicking on the Create an account link. When a user clicks on the Create an account link, they get the form as shown in the following screenshot.

We see that normal registration in Joomla! only requires four pieces of information: Name, Username, Email, and Password. It does not collect information needed by VirtueMart, such as the billing and shipping address, to be a customer. Therefore, we need to disable the mod_login module and enable the mod_virtuemart_login module. We have already learned how to enable and disable a module in Joomla!, and how to install modules. By default, the mod_virtuemart_login module's title is VirtueMart Login. You may prefer to show this title as Login only. In that case, click on the VirtueMart Login link in the Module Name column. This brings up the Module:[Edit] screen. In the Title field, type Login (or any other text you want to show as the title of this module). Make sure the module is enabled and its position is set to left or right. Click on the Save icon to save your settings. Now, browse to your site's front page (for example, http://localhost/bdosn/), and you will see the login form as shown in the following screenshot.

As you can see, this module has the same functionality as we saw in the mod_login module of Joomla!. Let us test account creation in this module. Click on the Register link. It brings up the following screen. The registration form has three main sections: Customer Information, Bill To Information, and Send Registration. At the end, there is the Send Registration button for submitting the form data. In the Customer Information section, type your email address, the desired username, and password. Confirm the password by typing it again in the Confirm password field. In the Bill To Information section, type the address details where bills are to be sent. In the entire form, required fields are marked with an asterisk (*). You must provide information for these required fields. In the Send Registration section, you need to agree to the Terms of Service. Click on the Terms of Service link to read it. Then, check the I agree to the Terms of Service checkbox and click on the Send Registration button to submit the form data.

If you have provided all of the required information and submitted a unique email address, the registration will be successful. On successful completion of registration, you get the following screen notification, and will be logged in to the shop automatically. If you scroll down to the Login module, you will see that you are logged in and greeted by the store. You also see the User Menu on this screen. Both the User Menu and the Login modules contain a Logout button. Click on either of these buttons to log out from the Joomla! site.

In fact, the links in the User Menu module are for Joomla! only. Let us try the Your Details link. Click on the Your Details link, and you will see the information shown in the following screenshot. As you see in the screenshot above, you can change your full name, email, password, frontend language, and time zone. You cannot view any information regarding the billing address, or other customer information. In fact, this information is for regular Joomla! users. We can only get full customer information by clicking on the Account Maintenance link in the Login module. Let us try it. Click on the Account Maintenance link, and it shows the following screen.

The Account Maintenance screen has three sections: Account Information, Shipping Information, and Order Information. Click on the Account Information link to see what happens. It shows the Customer Information and Bill To Information, which were entered during user registration. The last section on this screen is the Bank Information, from where the customer can add bank account information. This section looks like the following screenshot. As you can see, from the Bank Account Info section, customers can enter their bank account information, including the account holder's name, account number, bank's sorting code number, bank's name, account type, and IBAN (International Bank Account Number). Entering this information is important when you are using a Bank Account Debit payment method.
Now, let us go back to the Account Maintenance screen and see the other sections. Click on the Shipping Information link. There is one default shipping address, which is the same as the billing address. Customers can create additional shipping addresses. To create a new shipping address, click on the Add Address link, which shows the address form.


Customizing the Default Expression Engine (v1.6.7) Website Template

Packt · 22 Oct 2009 · 5 min read
Introduction

The blogging revolution has changed the nature of the internet, and as more and more individuals and organizations choose the blog format as their preferred web solution (i.e. content-driven websites which are regularly updated by one or more users), the question of which blogging application to use becomes an increasingly important one. I have tested all the popular blogging solutions, and the one I have found to be the most impressive for designers with a little CSS and XHTML know-how is Expression Engine. After spending a little time on the Expression Engine support forums, I have noticed that a high number of the support questions relate to solving CSS problems within EE. This demonstrates that many designers who choose EE are used to working from within Adobe Dreamweaver (or other WYSIWYG applications) and, although there are no compatibility issues between the two systems, it is clear that they are making the transition from graphics-based web design to lovely CSS-based web design.

When you are installing Expression Engine 1.6.7 for the first time, you are asked to choose a theme from a drop-down menu, which you may expect would offer you a satisfactory selection of pre-installed themes to choose from. Sadly, this is not the case in Expression Engine (EE) version 1.6.7, and if you have not downloaded and manually saved a theme inside the right folder within the EE system file structure, you will be given only one theme to choose from: the dreaded "default weblog theme". Once you complete the installation, no other themes can be imported or installed, so you can either restart the installation, opt to select the default weblog theme, or start from a blank canvas (sorry about that). The good news is that the next release of EE (v2.0) ships with a very nice default template, but the bad news is that you will have to wait a few months longer to get a copy, and you will need to renew your license to upgrade for a fee. Even then, you will probably want to modify the new and improved default template in EE 2.0, or you may, depending on your needs, opt to choose another EE template altogether so that your website does not look like a thousand other websites based on the same template (not good).

This article will demonstrate how to use Cascading Style Sheets (CSS) to improve the layout of the default EE website/blog template, and how to keep the structure (XHTML) and the presentation (CSS) entirely separate, which is best practice in web development. This article is intended as a practical guide for intermediate-level web designers wanting to work with CSS more effectively within Expression Engine, and to create better websites which adhere to current W3C web standards. It will not attempt to teach you the fundamentals of programming with CSS and XHTML, or how to install, use, or develop a website with EE. It will demonstrate how to use CSS to effectively take control of the appearance of any EE template. If you are new to EE, then it is recommended that you consult the book "Building a Website with Expression Engine 1.6.7", published by Packt Publishing, and visit www.expressionengine.com to consult the EE user guide. If you get stuck at any time when using Expression Engine, you can visit the EE support forums via the main EE site to get help with EE, XHTML, and CSS issues; for regular updates, the EllisLab EE feed within EE is an excellent source of news from the EE community.

The trouble with templates

Let's open up EE's default "weblog" template in Firefox.
I have the very useful "developer's toolbar" add-on installed. You can see the many options which are available lined up across the bottom of Firefox's toolbar. When you select "Outline > Outline Current Element", Firefox renders an outline around the block of the element, and is set by default to display the name of the selected element. This add-on features many other timesaving and task-facilitating functions, which range from DOM tools for JavaScript development to nifty layout tools like displaying a ruler inside the Firefox window. The default template is a useful guide to some basic EE tags embedded into the XHTML, but the CSS should render a cleaner, simpler-to-customize design, so let's make some much-needed changes. I will not be looking into the EE tags in this article, because EE tags are very powerful and are beyond its scope.

Inside the template module, create a new template group and call it "site_extended". Template groups in EE organize your templates into virtual folders. We will make a copy of the existing template group and templates, so all the changes we make are non-destructive. Choose "do not duplicate any existing template groups", but select "Make the index template in this group your site's home page?" and press submit. Easy. Next, create a new template, call it "site_extended_css", and duplicate the site/site_css template. This powerful feature instructs EE to clone an existing template with a new name and location. Now let's create a copy of the default site weblog and call it "index_extended". Select "duplicate an existing template", and choose "site/index" from the options drop-down list. The first part of the location, "site"/, is the template group, and site/"index" is the actual template. Now your template management tab should look like the following screenshot. Notice that the index template has a red star next to it.


Categories and Attributes in Magento: Part 2

Packt · 22 Oct 2009 · 7 min read
Time for action: Creating Attributes

In this section, we will create an Attribute Set for our store. First, we will create Attributes. Then, we will create the set.

Before you begin

Because Attributes are the main tool for describing your Products, it is important to make the best use of them. Plan which Attributes you want to use. What aspects or characteristics of your Products will a customer want to search for? Make those Attributes. What aspects of your Products will a customer want to choose? Make these Attributes, too. Attributes are organized into Attribute Sets. Each set is a collection of Attributes. You should create different sets to describe the different types of Products that you want to sell.

In our coffee store, we will create two Attribute Sets: one for Single Origin coffees and one for Blends. They will differ in only one way. For Single Origin coffees, we will have an Attribute showing the country or region where the coffee is grown. We will not have this Attribute for Blends, because the coffees used in a blend can come from all over the world. Our sets will look like the following:

| Single Origin Attribute set | Blended Attribute set |
| --------------------------- | --------------------- |
| Name                        | Name                  |
| Description                 | Description           |
| Image                       | Image                 |
| Grind                       | Grind                 |
| Roast                       | Roast                 |
| Origin                      |                       |
| SKU                         | SKU                   |
| Price                       | Price                 |
| Size                        | Size                  |

Now, let's create the Attributes and put them into sets. The result of the following directions will be several new Attributes and two new Attribute Sets.

If you haven't already, log in to your site's backend, which we call the Administrative Panel. Select Catalog | Attributes | Manage Attributes. A list of all the Attributes is displayed. These attributes have been created for you. Some of these Attributes (such as color, cost, and description) are visible to your customers. Other Attributes affect the display of a Product, but your customers will never see them. For example, custom_design can be used to specify the name of a custom layout, which will be applied to a Product's page. Your customers will never see the name of the custom layout. We will add our own attributes to this list.

Click the Add New Attribute button. The New Product Attribute page displays. There are two tabs on this page: Properties and Manage Label / Options. You are in the Properties tab. The Attribute Properties section contains settings that only the Magento administrator (you) will see. These settings are values that you will use when working with the Attribute. The Frontend Properties section contains settings that affect how this Attribute will be presented to your shoppers. We will cover each setting on this page.

Attribute Code is the name of the Attribute. Your customers will never see this value. You will use it when managing the Attribute. Refer back to the list of Attributes that appeared in Step 2; the Attribute identifier appears in the first column, labelled Attribute Code. The Attribute Code must contain only lowercase letters, numbers, and the underscore character. And, it must begin with a letter.
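
Those naming rules condense to a one-line pattern. As a hedged aside, this helper is purely illustrative and not part of Magento's codebase:

```php
<?php
// Sketch: validate an Attribute Code against the rules above:
// lowercase letters, numbers, and underscores only, starting with a letter.
function isValidAttributeCode(string $code): bool {
    return (bool) preg_match('/^[a-z][a-z0-9_]*$/', $code);
}

var_dump(isValidAttributeCode('roast'));     // true
var_dump(isValidAttributeCode('Roast'));     // false: uppercase letter
var_dump(isValidAttributeCode('2nd_grind')); // false: starts with a digit
?>
```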
The Scope of this Attribute can be set as Store View, Website, or Global. For now, you can leave it set to the default, Store View. The other values become useful when you use one Magento installation to create multiple stores or multiple websites. That is beyond the scope of this quick-start guide.

After you assign an Attribute Set to a Product, you will fill in values for the Attributes. For example, suppose you assign a set that contains the attributes color, description, price, and image. You will then need to enter the color, description, price, and image for that Product. Notice that each of the Attributes in that set is a different kind of data. For color, you would probably want to use a drop-down list to make selecting the right color quick and easy. This would also avoid using different terms for the same color, such as "Red" and "Magenta". For description, you would probably want a freeform text field. For price, you would probably want a field that accepts only numbers, and that requires two decimal places. And for image, you would want a field that enables you to upload a picture. The field Catalog Input Type for Store Owner enables you to select the kind of data that this Attribute will hold.

In our example, we are creating an Attribute called roast. When we assign this value to a Product, we want to select a single value for this field from a list of choices. So, we will select Dropdown. If you select Dropdown or Multiple Select for this field, then under the Manage Label/Options tab, you will need to enter the list of choices (the list of values) for this field.

If you select Yes for Unique Value, then no two products can have the same value for this Attribute. For example, if I made roast a unique Attribute, that means only one kind of coffee in my store could be a Light roast, only one kind could be a French roast, only one could be Espresso, and so on. For an Attribute such as roast, this wouldn't make much sense. However, if this Attribute were the SKU of the Product, then I might want to make it unique. That would prevent me from entering the same SKU number for two different Products.

If you select Yes for Values Required, then you must select or enter a value for this Attribute. You will not be able to save a Product with this Attribute if you leave it blank. In the case of roast, it makes sense to require a value. Our customers would not buy a coffee without knowing what kind of roast it has.

Input Validation for Store Owner causes Magento to check the value entered for an Attribute, and confirm that it is the right kind of data. When entering a value for this Attribute, if you do not enter the kind of data selected, then Magento gives you a warning message.

The Apply To field determines which Product Types can have this Attribute applied to them. Remember that the three Product Types in Magento are Simple, Grouped, and Configurable. Recall that in our coffee store, if a type of coffee comes in only one roast, then it would be a Simple Product. And, if the customer gets to choose the roast, it would be a Configurable Product. So we want to select at least Simple Product and Configurable Product for the Apply To field.

But what about Grouped Product? We might sell several different types of coffee in one package, which would make it a Grouped Product. For example, we might sell a Grouped Product that consists of a pound of Hawaiian Kona and a pound of Jamaican Blue Mountain. We could call this group something like "Island Coffees". If we applied the Attribute roast to this Grouped Product, then both types of coffee would be required to have the same roast. However, if Kona is better with a lighter roast and Blue Mountain is better with a darker roast, then we don't want them to have the same roast. So in our coffee store, we will not apply the Attribute roast to Grouped Products. When we sell coffees in special groupings, we will select the roast for each coffee.
You will need to decide which Product Types each Attribute can be applied to. If you are the store owner and the only one using your site, you will know which Attributes should be applied to which Products, so you can safely choose All Product Types for this setting.

Blogger: Improving Your Blog with Google Analytics and Search Engine Optimization

Packt · 22 Oct 2009 · 7 min read
If you've ever wondered how people find your website, or how to generate more traffic, then this article tells you more about your visitors. Knowing where they come from, what posts they like, how long they stay, and other site metrics are all valuable pieces of information to have as a blogger. You would expect to pay for such a deep look into the underbelly of your blog, but Google wants to give it to you for free. Why for free? The better your site does, the more likely you are to pay for AdWords or use other Google tools. The Google Analytics online statistics application is a delicious carrot to encourage content-rich sites and better ad revenue for everyone involved.

You also want people to find your blog when they perform a search about your topic. The painful truth is that search engines have to find your blog first before it will show up in their results. There are thousands of new blogs being created every day. If you want people to be able to find your blog in the increasingly crowded blogosphere, optimizing your blog for search engines will improve the odds.

Improving Your Blog with Google Analytics

Analytics gives you an overwhelming amount of data to use for measuring the success of your site and ads. Once you've had time to analyze that data, you will want to take action to improve the performance of your blog and ads. We'll now look at how Analytics can help you make decisions about the design and content of your site.

Analyzing Navigation

The Navigation section of the Content Overview report reveals how your visitors actually navigate your blog. Visitors move around a site in ways we can't predict. Seeing how they actually navigate a site, and where they entered it, are powerful tools we can use to diagnose where we need to improve our blog.

Exploring the Navigation Summary

The Navigation Summary shows you the path people take through your site, including how they get there and where they go. We can see from the following graphical representation that our visitors entered the site through the main page of the blog most of the time. After reaching that page, over half the time, they went to other pages within the site.

Entrance Paths

We can see the path visitors take to enter our blog using the Entrance Paths report. It will show us where they entered our site, which pages they looked at, and the last page they viewed before exiting. Visitors don't always enter by the main page of a site, especially if they find the site using search engines or trackbacks. The following screenshot displays a typical entrance path. The visitor comes to the site home page, and then goes to the full page of one of the posts. It looks like our visitors are highly attracted to the recipe posts. Georgia may want to feature more posts about recipes that tie in with her available inventory.

Optimizing your Landing Page

The Landing Page reports tell you where your visitors are coming from, and whether they have used keywords to find you. You have a choice between viewing the sources visitors used to get to your blog, or the keywords. Knowing the sources will give you guidance on the areas where you should focus your marketing or advertising efforts.

Examining Entrance Sources

You can quickly see how visitors are finding your site, whether through a direct link, a search engine, locally from Blogger, or from social networking applications such as Twitter.com.
In the Entrance Sources graph shown in the following screenshot, we can see that the largest number of people are coming to the blog using a direct link. Blogger is also responsible for a large share of our visitors, over 37%. There is even a visitor drawn to the blog from Twitter.com, where Georgia has an account.

Discovering Entrance Keywords

When visitors arrive at your site using keywords, the words they use will show up on the report. If they are using words in a pattern that does not match your site content, you may see a high bounce rate. You can use this report to redesign your landing page to better represent the purpose of your site through the words and phrases that you use.

Interpreting Click Patterns

When visitors visit your site, they show their attraction to links and interactive content by clicking on them. Click Patterns are the representation of all those mouse clicks over a set time period. Using the Site Overlay reporting feature, you can visually see the mouse clicks represented in a graphical pattern. Much like colored pins stuck on a wall chart, they will quickly reveal which areas of your site visitors clicked on the most, and which links they avoided.

Understanding Site Overlay

Site Overlay shows the number of clicks for your site by laying them transparently in a graphical format on top of your site. Details with the number of clicks and goal tracking information pop up in a little box when you hover over a click graphic with your mouse. At the top of the screen are options that control the display of the Site Overlay. Clicking the Hide Overlay link will hide the overlay from view. The Displaying drop-down list lets you choose how to view mouse clicks on the page, or goals. The date range is the last item displayed. The graphical bars shown on top of the page content indicate where visitors clicked, and how many of them did so. You can quickly see what areas of the page interest your visitors the most. Based on the page clicks you see, you will have an idea of the content and advertising that is most interesting to your visitors. Yes, Site Overlay will show both the content areas of the page the visitors clicked on and the advertisement areas. It will also help you see which links are tied to goals, and whether they are enticing your visitors to click.

Optimizing Your Blog for Search Engines

We are going to take our earlier checklists and use them as guides on where to make changes to our blog. When the changes are complete, the blog will be more attractive to search engines and visitors. We will start with changes we can make "on-site", and then progress to ways we can improve search engine results with "off-site" improvements.

Optimizing On-site

The most crucial improvements we identified earlier were around the blog settings, template, and content. We will start with the easiest fixes, then dive into the template to correct validation issues. Let's begin with the settings in our Blogger blog.

Seeding the Blog Title and Description with Keywords

When you created your blog, did you take a moment to think about what words potential visitors were likely to type in when searching for your blog? Using keywords in the title and description of your blog gives potential visitors a preview and explanation of the topics they can expect to encounter in your blog. This information is also what will display in search results when potential visitors perform a search.
Updating the Blog Title and Description

It's never too late to seed your blog title and description with keywords. We will edit the blog title and description to optimize them for search engines. Log in to your blog and navigate to Settings | Basic. We are going to replace the current title text with a phrase that more closely fits the blog. Type Organic Fruit for All into the Title field. Now, we are going to change the description of the blog. Type Organic Fruit Recipes, seasonal tips, and guides to healthy living into the description field. Scroll down to the bottom of the screen and click the Save Settings button. You can enter up to 500 characters of descriptive text.

What Just Happened?

When we changed the title and description of our blog in the Basic Settings section, Blogger saved the changes and updated the template information as well. Now, when search engines crawl our blog, they will see richer descriptions of our blog in the blog title and blog description. The next optimization task is to verify that search engines can index our blog.


Obtaining Alfresco Web Content Management (WCM)

Packt · 22 Oct 2009 · 11 min read
You must obtain and install an additional download to enable Alfresco WCM functionality. The download includes a new Spring bean configuration file and a standalone Tomcat instance pre-configured with JARs and server settings that allow a separate Tomcat instance (called the virtualization server) to run web applications stored in Alfresco WCM web folders. This capability is used when content managers "preview" an asset or a website. Just as with the core Alfresco server, you can either build the WCM distribution from source or obtain a binary distribution.

Step-by-Step: Installing Alfresco WCM

If you are building from source, the source code for Alfresco WCM is included with the source code for the rest of the product. Once the source code is checked out, all you have to do is run the distribute Ant task as follows:

ant -f continuous.xml distribute

After several minutes, the WCM distribution will be placed in the build/dist directory of your source code's root directory. Alternatively, if you are using binaries, download the binary distribution of the Alfresco WCM extension. Where you get it depends on whether you are running Labs or Enterprise. The Labs version is available for download from http://www.alfresco.com. The Enterprise version can be downloaded from the customer or partner site using the credentials provided by your Alfresco representative. Regardless of whether you chose source or binary, you should now have an Alfresco WCM archive. For example, the Labs edition for Linux is named alfresco-labs-wcm-3b.tar.gz. To complete the installation, follow these steps:

1. Expand the archive into any directory that makes sense to you. For example, on my machine I use /usr/local/bin/alfresco-labs-3.0-wcm.
2. Copy the wcm-bootstrap-context.xml file to the Alfresco server's extension directory ($TOMCAT_HOME/shared/classes/alfresco/extension).
3. Edit the startup script (virtual_alf.sh) to ensure that the APPSERVER variable is pointing to the virtual-tomcat directory in the location to which you expanded the archive. Using the example from the previous step, the APPSERVER variable would be: APPSERVER=/usr/local/bin/alfresco-labs-3.0-wcm/virtual-tomcat
4. Start the virtual server by running: ./virtual_alf.sh start
5. Start the Alfresco server (or restart it if it was already running).

You now have Alfresco with Alfresco WCM up and running. You'll test it out in the next section, but you can do a smoke test by logging in to the web client and confirming that you see the Web Projects folder under Company Home.

Creating Web Projects

A web project is a collection of assets, settings, and deployment targets that make up a website or a part of a website. Web projects are stored in web project folders, which are regular folders with a bunch of web project metadata. The number of web project folders you use to represent a site, and whether multiple sites are contained within a single web project folder, is completely up to you. There is no "right way" that works for everybody. Permissions are one factor. The ability to set permissions stops at the website. Therefore, if you have multiple groups that maintain a site and are concerned about the ability of one to change the other's files, your only remedy is to split the site across web project folders. Web form and workflow sharing is another thing to think about. As you'll soon learn, workflows and web forms are defined globally, and then selectively chosen and configured by each site.
Once made available to a web project, they are available to the entire web project. For example, you can't restrict the use of a web form to only a subset of the users of a particular site. SomeCo has chosen the approach of using one web project folder to manage the entire SomeCo.com website.

Step-by-Step: Creating the SomeCo Web Project

The first thing you need to do is create a new web project folder for the SomeCo website. Initially, you don't need to worry about web forms, deployment targets, or workflows. The goal is simply to create the web project and import the contents of the website. To create the initial SomeCo web project, follow these steps:

1. Log in as admin.
2. Go to Web Projects under Company Home.
3. Click Create, and then Create Web Project.
4. Specify the name of the web project as SomeCo Corporate Site.
5. Specify the DNS name as someco-site.
6. Click Next for the remaining steps, taking all defaults. You'll come back later and configure some of these settings.
7. On the summary page, click Finish. You now have a web project folder for the SomeCo corporate site.
8. Click SomeCo Corporate Site. You should see one Staging Sandbox and one User Sandbox.
9. Click the Browse Website button for the User Sandbox. Now you can import SomeCo's existing website into the web project folder.
10. Click Create, and then Bulk Import.
11. Navigate to the "web-site" project in your Eclipse workspace. Assuming you've already run Ant for this project, there should be a ZIP file in the build folder called someco-web-site.zip. Select the file. Alfresco will import the ZIP into your User Sandbox.

What Just Happened

You just created a new web project folder for SomeCo's corporate website. But upon creation of a web project folder, there is no website to manage. This is a big disappointment for some people. The most crestfallen are those who didn't realize that Alfresco is a "decoupled" content management system—it has no frontend framework and no "default" website like "coupled" content management systems such as Drupal. This will change in the 3.0 releases as Alfresco introduces its new set of clients. But for now, it's up to you to give Alfresco a website to manage.

You just happened to have a start on the SomeCo website sitting in your Eclipse workspace. Alfresco knows how to import WAR and ZIP files, which is a convenient way to migrate the website into Alfresco for the first time. Because web project sandboxes are mountable via CIFS, simply copying the website into the sandbox via CIFS is another way to go. The difference between the two approaches is that the WAR/ZIP import can only happen once: the import action complains if an archive contains nodes that already exist in the repository.

If you haven't already done so, take a look at the contents of your sandbox. You should see index.html in the root of your User Sandbox and a someco folder that contains additional folders for CSS, images, JavaScript, and so on. The HTML file in the root is the same index.html file you deployed to the Alfresco web application in order to implement the AJAX ratings widget.

Click the preview icon. (Am I the only one who thinks it looks eerily similar to the Turkish nazar talisman used to ward off the "evil eye"?) You should see the index page in a new tab or window. The list of Whitepapers won't be displayed. That's because the page is running in the context of the virtualization server, which is a different domain than your Alfresco server. It is therefore subject to the cross-domain restriction, which will be addressed later.
Playing Nicely in the Sandbox

Go back to the root of your web project folder. The link in the breadcrumb trail is likely to be the fastest way to navigate back. Click the Browse Website link in the Staging Sandbox. It's empty. If you were to invite another user to this website, his/her sandbox would be empty as well.

Sandboxes are used to isolate the changes each content owner makes, while still providing him/her the full context of the website. The Staging Sandbox represents your live website. Or, in source code control terms, it is the HEAD of your site. It is assumed that whatever is in the Staging Sandbox can be safely deployed to the live website at any time. It is currently empty because you have not yet submitted any content to staging. Let's go ahead and do that now.

If you click the Modified Items link in the User Sandbox, you'll see the index.html file and the someco folder. You could submit these individually, but you want everything to go to staging, so click Submit All. Provide a label and a description such as "initial population" and click OK. It is safe to ignore the warning that a suitable workflow was not found; that's expected, because you haven't configured a workflow for this web project yet.

Now the files have been submitted to staging. Here are some things to notice:

- If you click the Preview Website link in the Staging Sandbox, you'll see the website just as you did in the User Sandbox earlier.
- If you browse the website in the Staging Sandbox, you'll see the same files currently shown when you browse the website in your User Sandbox.
- A snapshot of the site was automatically taken when the files were committed and is listed under Recent Snapshots.

Inviting Users

To get a feel for how sandboxes work, invite one or more users to the web project (Actions, Invite Web Project Users). The following table describes the out-of-the-box web project roles and what each can do:

Content Contributor
- Create and submit new content, but cannot edit or delete existing content

Content Reviewer
- Create, edit, and submit new content, but cannot delete existing content

Content Collaborator
- See all sandboxes, but only have full control over their own
- Create, edit, and submit new content, but cannot delete existing content
- Edit web project settings

Content Manager
- See and modify content in all sandboxes; exert full control over all content
- See and deploy snapshots and manage deployment reports
- Edit web project settings
- Invite new users to the web project
- Delete the web project and individual sandboxes

You'll notice that each new user gets his/her own sandbox, and that the sandbox automatically contains everything that is currently in staging. If a user makes a change in his/her sandbox, it is only visible within that sandbox until the change is committed to staging. Once that is done, everyone else sees the change immediately. Unlike some content management and source code control systems, there is no need for other users to do an "update" or a "get latest" to copy the latest changes from staging into their sandboxes.

It is important to note that Alfresco will not merge conflicts. When a user makes a change to a file in his/her sandbox, the file is locked in all other sandboxes to prevent conflicts. If you were to customize Alfresco to disable locking, the last change would win; Alfresco would not warn you of the conflict.

The Alfresco admin user and any user with Content Manager access can see (and work within) all User Sandboxes. Everyone else sees only their own sandboxes.
Mounting Sandboxes via CIFS

All sandboxes are individually mountable via CIFS. In fact, in staging, each snapshot is individually mountable. This gives content owners the flexibility to continue managing content in their sandbox using the tools they are familiar with. The procedure for mounting a sandbox is identical to that of mounting the regular repository via CIFS, except that you use "AVM" as the mount point instead of "Alfresco".

One difference between mounting the AVM repository through CIFS and mounting the DM repository is that the AVM repository directory structure is more complicated. For example, the path to the root of admin's sandbox in the SomeCo site is:

    /someco-site--admin/HEAD/DATA/www/avm_webapps/ROOT

The first part of the path, someco-site, is the DNS name you assigned when you set up the web project. The admin string indicates which User Sandbox we are looking at. If you wanted to mount the Staging Sandbox, the first part of the path would be someco-site without --admin. The next part of the path, HEAD, specifies the latest-and-greatest version of the website. Alternatively, you could mount a specific snapshot like this:

    /someco-site--admin/VERSION/v2/DATA/www/avm_webapps/ROOT

As you might expect, the normal permissions apply. Users who aren't able to see another user's sandbox in the web client won't be able to do so through CIFS.

Professional Plone Development: Foreword by Alexander Limi

Packt
22 Oct 2009
9 min read
Foreword by Alexander Limi, co-founder of Plone

It's always fascinating how life throws you a loop now and then that changes your future in a profound way—and you don't realize it at the time. As I sit here, almost six years after the Plone project started, it seems like a good time to reflect on how the last years changed everything, and on some of the background of why you are holding this book in your hands—because the story of the Plone community is at least as remarkable as the software itself.

It all started out in a very classic way—I had just discovered Zope and Python, and wanted to build a simple web application to teach myself how they worked. This was back in 1999, when Zope was still a new, unproven technology, and had more than a few rough spots. I have never been a programmer, but Python made it all seem so simple that I couldn't resist trying to build a simple web application with it. After reading what I could find of documentation at the time, I couldn't quite figure it out—so I ended up in the online Zope chat rooms to see if I could get any help with building my web application.

Little did I know that what happened that evening would change my life in a significant way. I met Alan Runyan online, and after trying to assist me, we ended up talking about music instead. We also reached the conclusion that I should focus on what I was passionate about—instead of coding, I wanted to build great user interfaces and make things easy to use. Alan wanted to provide the plumbing to make the system work. For some reason, it just clicked at that point, and we collaborated online and obsessed over the details of the system for months.

External factors were probably decisive here too: I was without a job, and my girlfriend had left me a few months prior; Alan had just given up his job as a Java programmer at a failed dot-com company and decided to start his own company doing Python instead—so we both ended up pouring every waking hour into the project, and moving at a break-neck pace towards getting the initial version out.

We ended up getting a release ready just before the EuroPython Conference in 2002, and this was actually the first time I met Alan in person. We had been working on Plone for the past year using just email and IRC chat—two technologies that are still cornerstones of Plone project communication. I still remember the delight of discovering that we had excellent communication in person as well.

What happened next was somewhat surreal for people new to this whole thing: we were sitting in the audience at the "State of Zope" talk held by Paul Everitt. He got to the part of his talk where he called attention to people and projects that he was especially impressed with. When he called out our names and talked about how much he liked Plone—which at this point was still mostly the effort of a handful of people—it made us feel like we were really onto something. This was our defining moment.

For those of you who don't know Paul, he is one of the founders of Zope Corporation, and he would go on to become our most tireless and hard-working supporter. He got involved in all the important steps that would follow—he put a solid legal and marketing story in place and helped create the Plone Foundation—and did some great storytelling along the way. There is no way to properly express how much Paul has meant to us personally—and to Plone—five years later. His role was crucial in the story of Plone's success, and the project would not be where it is now without him.
Looking back, it sounds a bit like the classic romanticized start-up stories of Silicon Valley, except that we didn't start a company together. We chose to start two separate companies—in hindsight, a very good decision.

It never ceases to amaze me how much of an impact the project has had since. We are now an open-source community of hundreds of companies doing Plone development, training, and support. In just the past month, large companies like Novell and Akamai—as well as government agencies like the CIA, and NGOs like Oxfam—have revealed that they are using Plone for their web content management, and more will follow. The Plone Network site, plone.net, lists over 150 companies that offer Plone services, and the entire ecosystem is estimated to have revenues in the hundreds of millions of US dollars annually. This year's Plone Conference in Naples, Italy is expected to draw over 300 developers and users from around the world. Not bad for a system that was conceived and created by a handful of people standing on the shoulders of the giants of the Zope and Python communities.

But the real story here is about an amazing community of people—individuals and organizations, large and small—all coming together to create the best content management system on the planet. We meet in the most unlikely locations—from ancient castles and mountain-tops in Austria, to the archipelagos and fjords of Norway, the sandy beaches of Brazil, and the busy corporate offices of Google in Silicon Valley. These events are at the core of the Plone experience, and developers nurture deep friendships within the community. I can say without a doubt that these are the smartest, kindest, most amazing people I have ever had the pleasure to work with.

One of those people is Martin Aspeli, whose book you are reading right now. Even though we're originally from the same country, we didn't meet that way. Martin was at the time—and still is—living in London. He had contributed some code to one of our community projects a few months prior, and suggested that we should meet up when he was visiting his parents in Oslo, Norway. It was a cold and dark winter evening when we met at the train station—and we ended up talking about how to improve Plone and the community process at a nearby café. I knew there and then that Martin would become an important part of the Plone project.

Fast-forward a few years, and Martin has risen to become one of Plone's most important and respected—not to mention prolific—developers. He has architected and built several core components of the Plone 3 release; he has been one of the leaders of the documentation team, as well as an active guide in Plone's help forums. He also manages to fit in a day job at one of the "big four" consulting companies in the world. On top of all this, he was secretly working on a book to coincide with the Plone 3.0 release—which you are now the lucky owner of.

This brings me to why this book is so unique, and why we are lucky to have Martin as part of our community. In the fast-paced world of open-source development—and Plone in particular—we have never had a book that was entirely up-to-date on all subjects. There have been several great books in the past, but Martin has raised the bar further—by using the writing of a book to inform the development of Plone. If something didn't make sense, or was deemed too complex for the problem it was trying to solve, he would update that part of Plone so that it could be explained in simpler terms.
It made the book better, and it has certainly made Plone better. Another thing that sets Martin's book apart is his unparalleled ability to explain advanced and powerful concepts in a very accessible way. He has years of experience developing with Plone and answering questions on the support forums, and is one of the most patient and eloquent writers around. He doesn't give up until you know exactly what's going on.

But maybe more than anything, this book is unique in its scope. Martin takes you through every step, from installing Plone, through professional development practices, unit tests, and how to think about your application, to common but non-trivial tasks like setting up external caching proxies such as Varnish and authentication mechanisms like LDAP. In sum, this book teaches you how to be an independent and skillful Plone developer, capable of running your own company—if that is your goal—or of providing scalable, maintainable services for your existing organization.

Five years ago, I certainly wouldn't have imagined sitting here, jet-lagged and happy, in Barcelona this Sunday morning after wrapping up a workshop to improve the multilingual components in Plone. Nor would I have expected to live halfway across the world in San Francisco and work for Google, and still have time to lead Plone into the future.

Speaking of which, what does the future of Plone look like in 2007? Web development is now in a state we could only have dreamt about five years ago—and the rise of numerous great Python web frameworks, and even non-Python solutions like Ruby on Rails, has made it possible for the Plone community to focus on what it excels at: content and document management, multilingual content, and solving real problems for real companies—and having fun in the process. Before these frameworks existed, people would often try to do things with Plone that it was not built or designed to do—and we are very happy that solutions now exist that cater to these audiences, so that we can focus on our core expertise. Choice is good, and you should use the right tool for the job at hand.

We are lucky to have Martin, and so are you. Enjoy the book, and I look forward to seeing you in our help forums, chat rooms, or at one of the many Plone conferences and workshops around the world.

— Alexander Limi, Barcelona, July 2007
http://limi.net

Alexander Limi co-founded the Plone project with Alan Runyan, and continues to play a key role in the Plone community. He is Plone's main user interface developer, and currently works as a user interaction designer at Google in California.

Interacting with the Students using Moodle 1.9 (part 2)

Packt
22 Oct 2009
10 min read
We'll add a competitive element to the project and—just as we have seen on TV—let the children vote for the winner. The tasks we set will involve the students researching, collaborating, and reflecting. They will be working hard, but we'll have a much easier time now, as all of their responses will be on Moodle for us to view and mark at our convenience—no more carrying heavy books around.

Giving our class a chance to vote

Moodle has an activity, known as Choice, which allows you to present students with a number of options that they can choose from. We're actually going to use it twice in our project, for two different purposes. Let's try to set it up.

Time for action – giving students a chance to choose a winner

The students have posted their suggestions, comments, and views on Moodle. A choice has to be made of the best suggestion. Who better than the students themselves to choose and vote for the best?

1. With editing turned on, click on Add an Activity and then select Choice.
2. In the Name field, enter an appropriate descriptive text—in our case, this is Vote for the best design here.
3. In the Choice Text field, ask the question based on which you want the students to cast a vote.
4. Leave the Limit field as it is if you don't mind any number of students casting a vote for any available option. Change it to Enable if you want only a certain number of people to be able to vote for a particular option. We shall leave the Limit block as it is, but we shall inform the students that they can't vote for themselves.
5. In the Choice block, type in the options (a minimum of two) you want the students to be able to cast their vote for. Clicking on Add more fields will provide you with more choice boxes. We will need one field for each member of the class for this activity.
6. Use the Restrict answering to this time period option to decide when to open and close your Choice—or have it always available.
7. Miscellaneous settings: for our activity, we need to set Display Mode to Vertical and Publish Results to Do Not Publish. The following table explains what the settings mean, so that you can use them on other occasions.

Setting: Display Mode
What it is: Lets you have your buttons go across or down the screen.
Why use it: Use Vertical if you have many options, to avoid stretching your screen.

Setting: Publish Results
What it is: Decide if and when you want students to see what others have put.
Why use it: Choose Do not publish if you want them to tell you their progress privately; if you're doing a class survey, choose, for example, Always show results.

Setting: Privacy of Results
What it is: Lets you choose whether to show names or not.
Why use it: Are the results more important than who voted for what? Some students might be wary of responding if they think their names will be shown.

Setting: Allow choice to be updated
What it is: Lets them change their mind—but they can still vote only once.
Why use it: Useful if you are using this to assess progress over a period of time.

Setting: Show column for unanswered
What it is: Sets up a column showing those who haven't yet responded.
Why use it: A clear visual way of knowing who hasn't done the task.

8. For now, you can ignore the Common Module Settings option, and just click on Save and return to course.

What just happened?

We've set up an area on our course page where the students can choose their favorite designs from a number of options, by clicking on the desired option button. On the screen, you will be able to see an icon (usually a question mark) and some text next to it.
If you click on the text next to the icon, the following information will appear: the students will click on the option button placed next to their choice—in our case, the name of the classmate whose design they prefer.

Finding out the students' choice

Access the Choice option and click on the words View *** responses on the upper right of the screen. The *** will be the number of students who have voted already. You will get a chart displaying the choices of the students. In my Moodle course, as shown in the following screenshot, nobody has voted yet—so they need a gentle nudge!

Remember that we have set up this activity so that our students cannot see the results, in order to avoid peer pressure or bullying. However, we can see the results. Thus, if Mickey votes for himself (even after having been told not to), we will spot it and can reprimand him.

Have a go hero – getting the class to give us feedback

After we've gone through all of the effort to set up our project on Moodle, it would be nice to know how well it was received. Why not go off now and set up another Choice option, where the question asks how much did you enjoy planning and designing the campsite? You could give them three simple responses (displayed horizontally), such as:

- A lot
- It was OK
- Not very much

Or you could be more specific, focusing on the individual activities and asking how much they feel they have benefited from, say, the wiki or the forum. Make sure it is set up so that the students don't see the results—that way they're more likely to be truthful.

Why use Choice?

Here are a few other thoughts on Choice, based on my own experiences:

- It is a fast and simple method of gathering data for a class research project. I used this with a class of 13 year olds who had just returned from the summer break. I asked them to choose where they had been on vacation, giving them choices of our own country, several nearby countries in Europe, the United States of America, and a few more. I set up the Choice so that they could all see the answers when the time was up. I also set it up in such a way that the results were anonymous, to avoid any kind of uneasiness felt by those students who had stayed at home. The class then compared and contrasted the class results with Tourist Office statistics on the most popular tourist destinations.
- It offers a private way for students to evaluate and inform the teacher about their progress. Students might be too shy to tell you in person if they are struggling; they might be wary of being honest in the open voting methods that some teachers use (such as red, amber, or green traffic lights). However, if the students are aware of the fact that their classmates will not see their responses, they are more likely to be honest with you.
- It acts as a way to involve the class in deciding the path that their learning will take. I first introduced my class of 11 year olds to rivers in Europe, South America, Africa, and Asia. Then I offered the class the chance to vote for the river that they wanted to study in greater depth as part of their project. The majority opted for the Amazon—so the Amazon it was!

Announcing the winner

Well, you could give out the results in the classroom, of course! Alternatively, you can encourage the students to use Moodle by using the Compose a Webpage resource that we met in the previous article on Adding Worksheets and Resources with Moodle, and adding the information there.
Writing creatively in Moodle

Once a winner has been found, the next task for everyone is to create a cleverly-worded advertisement for this campsite, for which you could use one of the names suggested in the glossary. This too can be done on Moodle.

Why use Moodle and not their exercise books? The first reason is that it will save paper; the second is that the students enjoy working on the computer; and the third and final reason is that we can work at our leisure in school, at home, or in any room where there is an Internet connection. We're not tied to carrying around a pile of heavy books. We don't even need to manually hand-write the grades into our grade book. Moodle will put the grades that we give our students into its grade book, automatically and alphabetically. Moodle can also send our pupils an email telling them that we've graded their task, so that they can check their grades. This might be a different way of working from the one that you are used to, but do give it a try. It will take the pressure off your back and shoulders, if nothing else.

Time for action – setting up an online creative writing exercise

For our advert, we'll use an Online text assignment.

1. With editing turned on, select the Online text option within Assignments.
2. In the Assignment name field, enter something descriptive—our students will click here to get to the task.
3. In the Description field, enter the instructions. Our screen will then appear as shown in the following screenshot. If you need more space to type in, click on the icon on the far right of the bottom line of the HTML editor. This will enlarge the text box for you. Click it again when you're done, to return to the editing area.
4. In the Grade field, enter the total marks out of which you will score the students (for now, we're sticking to a maximum of 100, but you can change this).
5. Set a start and end date between which the students can send the work assigned to them, if you want.
6. Leave the Prevent Late Submissions option as it is, unless you need to set a deadline by which the students must submit the assigned work.
7. Set the Allow Resubmitting option to YES if you want to let students redraft their work.
8. Set the Email Alerts to teachers option to NO (unless you want 30 emails in your inbox!).
9. Change the Comment inline option to YES, so that we can post a comment on the students' work.
10. Click on Save and return to course.

What just happened?

We've just explained to our class what we want them to do, and have also provided them with space in Moodle to do it. We used an Online Text assignment. If we go up to the top of our course, where the editing button is, you'll be able to see a very useful feature called Switch role to…. If we choose the Student option, it will allow us to see the tasks as the pupils will see them.

In this case, there's a rather unfriendly command at the bottom of our assignment. Do you think that your students will know that they need to click here to get to their text box? Why not ask your Moodle administrator to look at the Language editing settings and change these words to something more child-friendly—such as Click here to type your answer?

CodeIgniter and Objects

Packt
22 Oct 2009
12 min read
To save the world from a lot of boring t-shirts, this article covers the way in which CI uses objects, and the different ways you can write and use your own objects. Incidentally, I've used 'variables/properties' and 'methods/functions' interchangeably, as CI and PHP often do. You write 'functions' in your controllers, for instance, when the OO purist would call them 'methods'. You define class 'variables' when the purist would call them 'properties'.

Object-Oriented Programming

I'm assuming you—like me—have a basic knowledge of OOP, but may have learned it as an afterthought to 'normal' PHP 4. PHP 4 is not an OO language, though some OO functionality has been tacked on to it. PHP 5 is much better, with an underlying engine that was written from the ground up with OO in mind. But you can do most of the basics in PHP 4, and CI manages to do everything it needs internally, in either language.

The key thing to remember is that, when an OO program is running, there is always one current object (but only one). Objects may call each other and hand over control to each other, in which case the current object changes; but only one of them can be current at any one time. The current object defines the 'scope'—in other words, which variables (properties) and methods (functions) are available to the program at that moment. So it's important to know, and control, which object is current. Like police officers and London buses, variables and methods belonging to objects that aren't current just aren't there for you when you most need them.

PHP, being a mixture of functional and OO programming, also offers you the possibility that no object is current! You can start off as a functional program, call an object, let it take charge for a while, and then let it return control to the program. Luckily, CI takes care of this for you.

Working of the CI 'Super-Object'

CI works by building one 'super-object': it runs your whole program as one big object, in order to eliminate scoping issues. When you start CI, a complex chain of events occurs. If you set your CI installation to create a log, you'll see something like this:

    1 DEBUG - 2006-10-03 08:56:39 --> Config Class Initialized
    2 DEBUG - 2006-10-03 08:56:39 --> No URI present. Default controller set.
    3 DEBUG - 2006-10-03 08:56:39 --> Router Class Initialized
    4 DEBUG - 2006-10-03 08:56:39 --> Output Class Initialized
    5 DEBUG - 2006-10-03 08:56:39 --> Input Class Initialized
    6 DEBUG - 2006-10-03 08:56:39 --> Global POST and COOKIE data sanitized
    7 DEBUG - 2006-10-03 08:56:39 --> URI Class Initialized
    8 DEBUG - 2006-10-03 08:56:39 --> Language Class Initialized
    9 DEBUG - 2006-10-03 08:56:39 --> Loader Class Initialized
    10 DEBUG - 2006-10-03 08:56:39 --> Controller Class Initialized
    11 DEBUG - 2006-10-03 08:56:39 --> Helpers loaded: security
    12 DEBUG - 2006-10-03 08:56:40 --> Scripts loaded: errors
    13 DEBUG - 2006-10-03 08:56:40 --> Scripts loaded: boilerplate
    14 DEBUG - 2006-10-03 08:56:40 --> Helpers loaded: url
    15 DEBUG - 2006-10-03 08:56:40 --> Database Driver Class Initialized
    16 DEBUG - 2006-10-03 08:56:40 --> Model Class Initialized

On startup—that is, each time a page request is received over the Internet—CI goes through the same procedure. You can trace the log through the CI files:

1. The index.php file receives a page request. The URL may indicate which controller is required; if not, CI has a default controller (line 2).
2. index.php makes some basic checks and calls the codeigniter.php file (codeigniter/codeigniter.php).
3. The codeigniter.php file instantiates the Config, Router, Input, URI, (etc.) classes (lines 1, and 3 to 9). These are called the 'base' classes: you rarely interact directly with them, but they underlie almost everything CI does.
4. codeigniter.php tests to see which version of PHP it is running on, and calls Base4 or Base5 (/codeigniter/Base4(or 5).php). These create a 'singleton' object: one which ensures that a class has only one instance. Each has a public &get_instance() function. Note the &: this is assignment by reference. So if you assign to the &get_instance() method, it assigns to the single running instance of the class. In other words, it points you to the same pigeonhole. So, instead of setting up lots of new objects, you are starting to build up one 'super-object', which contains everything related to the framework.
5. After a security check, codeigniter.php instantiates the controller that was requested, or a default controller (line 10). The new class is called $CI. The function specified in the URL (or a default) is then called, and life as we know it starts to wake up and happen.

Depending on what you wrote in your controller, CI will then initialize any other classes you need, and 'include' functional scripts you asked for. So, in the log above, the model class is initialized (line 16). The 'boilerplate' script, on the other hand, which is also shown in the log (line 13), is one I wrote to contain standard chunks of text. It's a .php file, saved in the scripts folder, but it's not a class: just a set of functions. If you were writing 'pure' PHP, you might use 'include' or 'require' to bring it into the namespace; CI needs to use its own 'load' function to bring it into the super-object.

The concept of 'namespace' or scope is crucial here. When you declare a variable, array, object, etc., PHP holds the variable name in its memory and assigns a further block of memory to hold its contents. However, problems might arise if you define two variables with the same name. (In a complex site, this is easily done.) For this reason, PHP has several sets of rules. For example:

- Each function has its own namespace or scope, and variables defined within a function are usually 'local' to it. Outside the function, these are meaningless.
- You can declare 'global' variables, which are held in a special global namespace and are available throughout the program.
- Objects have their own namespaces: variables exist inside the object for as long as the object exists, but can only be referenced through the object.

So $variable, global $variable, and $this->variable are three different things. Particularly before OO, this could lead to all sorts of confusion: you may have too many variables in your namespace (so that conflicting names overwrite each other), or you may find that some variables are just not accessible from whatever scope you happen to be in. CI offers a clever way of sorting this out for you.

So, now you've started CI, using the URL www.mysite.com/index.php/welcome/index, which specifies that you want the index function of the welcome controller.
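Before we go on, here is a minimal, self-contained sketch of those three kinds of variable. This is plain PHP rather than CI code, and all of the names (increment, Tally, and so on) are made up purely for illustration:

    <?php
    $counter = 1;                // a plain variable in the global namespace

    function increment()
    {
        global $counter;         // without this line, $counter inside the
        $counter++;              // function would be a new local variable
    }

    class Tally
    {
        var $counter = 10;       // an object property ('var' keeps it PHP 4 friendly)

        function bump()
        {
            $this->counter++;    // must be reached through $this; a plain
        }                        // $counter here would be local to the method
    }

    increment();
    $t = new Tally();
    $t->bump();

    echo $counter;               // prints 2  -- the global variable
    echo $t->counter;            // prints 11 -- the object's own property

Mix those three up and you get exactly the kind of scoping confusion that CI's super-object is designed to prevent.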
If you want to see what classes and methods are now in the current namespace and available to you, try inserting this 'inspection' code in the welcome controller:

    $fred = get_declared_classes();
    foreach($fred as $value)
    {
        $extensions = get_class_methods($value);
        print "class is $value, methods are: ";
        print_r($extensions);
    }

When I ran this just now, it listed 270 declared classes. Most are other libraries declared in my installation of PHP. The last 11 came from CI: ten were the CI base classes (config, router, etc.) and last of all came the controller class I had called. Here are the last 11, with the methods omitted from all but the last two:

    258: class is CI_Benchmark
    259: class is CI_Hooks
    260: class is CI_Config
    261: class is CI_Router
    262: class is CI_Output
    263: class is CI_Input
    264: class is CI_URI
    265: class is CI_Language
    266: class is CI_Loader
    267: class is CI_Base
    268: class is Instance
    269: class is Controller, methods are: Array ( [0] => Controller [1] => _ci_initialize [2] => _ci_load_model [3] => _ci_assign_to_models [4] => _ci_autoload [5] => _ci_assign_core [6] => _ci_init_scaffolding [7] => _ci_init_database [8] => _ci_is_loaded [9] => _ci_scaffolding [10] => CI_Base )
    270: class is Welcome, methods are: Array ( [0] => Welcome [1] => index [2] => Controller [3] => _ci_initialize [4] => _ci_load_model [5] => _ci_assign_to_models [6] => _ci_autoload [7] => _ci_assign_core [8] => _ci_init_scaffolding [9] => _ci_init_database [10] => _ci_is_loaded [11] => _ci_scaffolding [12] => CI_Base )

Notice—in parentheses as it were—that the Welcome class (number 270: the controller I'm using) has all the methods of the Controller class (number 269). This is why you always start off a controller class definition by extending the Controller class—you need your controller to inherit these functions. (And similarly, models should always extend the Model class.) Welcome has two extra methods: Welcome and index. So far, out of 270 classes, these are the only two functions I wrote!

Notice also that there's an Instance class. If you inspect the class variables of the Instance class, you will find there are a lot of them! Just one class variable of the Instance class, taken almost at random, is the array input:

    ["input"]=> &object(CI_Input)#6 (4) {
        ["use_xss_clean"]=> bool(false)
        ["ip_address"]=> bool(false)
        ["user_agent"]=> bool(false)
        ["allow_get_array"]=> bool(false)
    }

Remember when we loaded the input file and created the original input class? Its class variables were:

    use_xss_clean is bool(false)
    ip_address is bool(false)
    user_agent is bool(false)
    allow_get_array is bool(false)

As you can see, they have now all been included within the Instance class. All the other CI 'base' classes (router, output, etc.) are included in the same way. You are unlikely to need to write code referencing these base classes directly, but CI itself needs them to make your code work.

Copying by Reference

You may have noticed that the CI_Input class is assigned by reference (["input"]=> &object(CI_Input)). This is to ensure that, as its variables change, so will the variables of the original class. As assignment by reference can be confusing, here's a short explanation. We're all familiar with simple copying in PHP:

    $one = 1;
    $two = $one;
    echo $two;

This produces 1, because $two is a copy of $one.
However, if you re-assign $one:

    $one = 1;
    $two = $one;
    $one = 5;
    echo $two;

this code still produces 1, because changes made to $one after $two has been assigned aren't reflected in $two. This was a one-off assignment of the value that happened to be in variable $one at the time, to a new variable $two; but once it was done, the two variables led separate lives. (In just the same way, if I alter $two, $one doesn't change.) In effect, PHP creates two pigeonholes: one called $one, one called $two. A separate value lives in each. You may, on any one occasion, make the values equal, but after that they each do their own thing.

PHP also allows copying 'by reference'. If you add just a simple & to line 2 of the code:

    $one = 1;
    $two =& $one;
    $one = 5;
    echo $two;

the code now echoes 5: the change we made to $one has also happened to $two. Changing the = to =& in the second line means that the assignment is 'by reference'. Now, it's as if there were only one pigeonhole, which has two names ($one and $two). Whatever happens to the contents of the pigeonhole happens to both $one and $two, as if they were just different names for the same thing.

The principle works for objects as well as simple string variables. You can copy an object using the = operator, in which case you make a simple one-off new copy, which then leads an independent life. (Note that this is PHP 4 behavior: in PHP 5, = copies only a handle to the same object, and you need the clone keyword to get a truly independent copy.) Or, you can assign one to the other by reference: now the two names point to the same object, so any changes made through the one will also be seen through the other. Again, think of them as two different names for the same thing.
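To round off the pigeonhole picture, here is a short sketch of the same idea applied to objects. Again, this is my own illustrative example rather than CI code, and it uses PHP 5 semantics, where clone (rather than a plain =) produces the independent copy:

    <?php
    class Pigeonhole
    {
        public $value = 1;
    }

    $one = new Pigeonhole();
    $two = clone $one;     // an independent copy: its own pigeonhole
    $three =& $one;        // a reference: a second name for $one's pigeonhole

    $one->value = 5;

    echo $two->value;      // prints 1 -- the clone kept its own value
    echo $three->value;    // prints 5 -- the reference follows $one

This is the behavior CI relies on when it assigns the base classes to the super-object by reference: whenever the input object changes, every name that points to it sees the change.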

Linux4afrika: An Interview with the Founder

Packt
22 Oct 2009
5 min read
  "Linux4afrika has the objective of bridging the digital divide between developed and disadvantaged countries, especially in Africa, by supporting  access to information technology. This is done through the collection of used computers in Germany, the terminal server project and Ubuntu software which is open source, and by providing support to the involved schools and institutions." In this interview with the founder Hans-Peter Merkel, Packt's Kushal Sharma explores the idea, support, and the future of this movement. Kushal Sharma: Who are the chief promoters of this movement? Hans-Peter Merkel: FreiOSS (established in 2004) with currently about 300 members started the Linux4afrika project in 2006. The input was provided by some African trainees doing their internship at St. Ursula High school in Freiburg where we currently have 2 Terminal servers running. The asked FreiOSS to run similar projects in Africa. KS: What initiated this vision to bridge the IT gap between the developed and the under developed nations? During 2002 to 2005 we conducted IT trainings on Open Source products during 3 InWEnt trainings “Information Technology in African business” (see http://www.it-ab.org) with 60 African trainees (20 each year). This made FreiOSS to move OSS out of the local area and include other countries, especially those countries we had participants from. KS: Can you briefly recount the history of this movement, from the time it started to its popularity today? HPM: As mentioned before, the Linux4afrika project has its roots with FreiOSS and St. Ursula High school. There itself the idea was born. I conduct Open Source trainings and security trainings in several African countries (see http://www.hpmerkel.com/events). During a training in Dar es Salaam I demonstrated the Terminal Server solution to participants in a security training. One of the participants informed a Minister of Tanzanian Parliament who immediately came to get more information on this idea. He asked whether Linux4afrika could collect about 100 Computers and ship them to Tanzania. Tanzania would cover the shipping costs. After retuning to Germany I informed FreiOSS regarding this, and the collection activity started. We found out more information about the container costs and found that a container would fit about 200 computers for the same price. Therefore we decided to change the number from 100 to 200. One Terminalserver (AMD 64 Dual Core with 2 GB Memory) can run about 20 Thin Clients. This would serve about 10 schools in Tanzania. The Ubuntu Community Germany heard about our project and invited us to Linuxtag in Berlin (2007). This was a breakthrough for us; many organizations donated hardware. 3SAT TV also added greatly to our popularity by sending a 5 minute broadcast about our project (see http://www.linux4afrika.de). In June we met Markus Feilner from German Linux Magazin who contacted us and also published serveral online reports. In September Linux4afrika was invited to the German Parliament to join a meeting about education strategies for the under developed countries. In October Linux4afrika will start collection for a second container which will be shipped end of the year. In early 2008 about 5 members of FreiOSS will fly to Dar es Salaam on their own costs to conduct a one week training where teachers will be trained. This will be an addon to the service from Agumba Computers Ltd. (see http://www.agumba.biz). Agumba offers support in Tanzania to keep the network running. 
During the InWEnt trainings from 2002 to 2005, three employees from Agumba took part. Currently, 2 other people from Agumba are here for a three-month internship to become familiar with our solution and make the project sustainable.

KS: Who are the major contributors?

HPM: Currently, FreiOSS in Germany and Agumba Computers in Tanzania are the major contributors.

KS: Do you get any internal support in Tanzania and Mozambique? Do their Governments support open source?

HPM: Yes, we do. In Tanzania it's Agumba Computers, and in Mozambique we have some support from CENFOSS. The Security and Forensics trainings I conducted in Tanzania all had a 70 percent Open Source component. Currently, the Governmental agencies are implementing those technologies, mainly on servers.

KS: Do you have any individuals working full-time for this project? If so, how do the full-time individuals support themselves financially?

HPM: All supporters are helping us without any financial support. They all come after work to our meetings, which take place about once a month. After some starting problems, the group is now able to configure and test about 50 thin clients per evening meeting.

KS: Tell us something more about the training program: what topics do you cover, how many participants do you have so far, etc.?

HPM: Tanzania shows a big interest in security trainings. Agumba Computers offers those trainings for about 4-6 weeks a year. Participants come from the Tanzania Revenue Authority, the police, the President's office, banks, water and electricity companies, and others. Currently, the Tanzania Revenue Authority has sent 5 participants to attend a 3-month Forensic training in Germany. In Tanzania, about 120 participants have joined the trainings so far. Sessions for next year will start in January 2007.

KS: Packt supported the project by sending some copies of our OpenVPN book. How will these be used and what do you hope to gain from them?

HPM: Markus Feilner (the author of the OpenVPN book) is currently in Tanzania. He will conduct a one-and-a-half-day training on OpenVPN in Dar es Salaam. The participants in Germany who received the books will receive practical training on IPCop and OpenVPN for Microsoft and Linux clients. This will help them establish secure wireless networks in their country.

KS: What does the future hold for Linux4afrika?

HPM: Our current plans include the second container, the visit to Dar es Salaam in early 2008, and Linuxtag 2008. Further actions will be discussed thereafter. We already have a few requests to expand the Terminal Server solution to other underdeveloped countries. Also, we currently have a request to support Martinique, after a hurricane destroyed huge parts of the island.

KS: Thanks very much, Hans-Peter, for taking out time for us, and all the very best for your plans.

Seam and AJAX

Packt
22 Oct 2009
11 min read
What is AJAX?

AJAX (Asynchronous JavaScript and XML) is a technique, rather than a new technology, for developing highly interactive web applications. Traditionally, hand-written JavaScript uses the browser's XMLHttpRequest object to make asynchronous calls to a server-side component, for example, a servlet. The server-side component generates a resulting XML package and returns this to the client browser, which can then update the page without having to re-render it entirely. The result of using AJAX technologies (many different technologies can be used to develop AJAX functionality, for example, PHP, Microsoft .NET, Servlets, and Seam) is to give web applications an appearance similar to that of desktop applications.

AJAX and the Seam Framework

The Seam Framework provides built-in support for AJAX via its direct integration with libraries such as RichFaces and AJAX4JSF. Discussing the AJAX support of RichFaces and AJAX4JSF could fill an entire book, if not two, so we'll discuss these technologies only briefly towards the end of this article, where we'll give an overview of how they can be used in a Seam application. However, Seam provides a separate technology called Seam Remoting that we'll discuss in detail in this article.

Seam Remoting allows a method on a Seam component to be executed directly from JavaScript code running within a browser, allowing us to easily build AJAX-style applications. Seam Remoting uses annotations and is conversation-aware, so we still get all of the benefits of writing conversationally-aware components, except that we can now access them via JavaScript as well as through other view technologies, such as Facelets. Seam Remoting provides a ready-to-use framework, making AJAX applications easier to develop. For example, it provides debugging and logging facilities similar to the ones that we use every day when writing Java components.

Configuring Seam applications for Seam Remoting

To use Seam Remoting, we need to configure the Seam web application to support JavaScript code making asynchronous calls to the server back end. In a traditional servlet-based system, this would require writing complex servlets that could read, parse, and return XML as part of an HTTP GET or POST request. With Seam Remoting, we don't need to worry about managing XML data and its transport mechanism. We don't even need to worry about writing servlets that can handle the communication for us—all of this is part of the framework.

To configure a web application to use Seam Remoting, all we need to do is declare the Seam Resource servlet within our application's WEB-INF/web.xml file. We do this as follows:

    <servlet>
        <servlet-name>Seam Resource Servlet</servlet-name>
        <servlet-class>
            org.jboss.seam.servlet.SeamResourceServlet
        </servlet-class>
    </servlet>

    <servlet-mapping>
        <servlet-name>Seam Resource Servlet</servlet-name>
        <url-pattern>/seam/resource/*</url-pattern>
    </servlet-mapping>

That's all we need to do to make a Seam web application work with Seam Remoting. To make things even easier, this configuration is done automatically when applications are created with SeamGen, so you only need to worry about it for non-SeamGen projects.

Configuring Seam Remoting server side

To declare that a Seam component can be used via Seam Remoting, the methods that are to be exposed need to be annotated with the @WebRemote annotation.
For simple POJO components, this annotation is applied directly on the POJO itself, as shown in the following code snippet:

    @Name("helloWorld")
    public class HelloWorld implements HelloWorldAction {
        @WebRemote
        public String sayHello() {
            return "Hello world !!";
        }
    }

For Session Beans, the annotation must be applied on the Session Bean's business interface rather than on the implementation class itself. A Session Bean interface would be declared as follows:

    import javax.ejb.Local;
    import org.jboss.seam.annotations.remoting.WebRemote;

    @Local
    public interface HelloWorldAction {
        @WebRemote
        public String sayHello();

        @WebRemote
        public String sayHelloWithArgs(String name);
    }

The implementation class is defined as follows:

    import javax.ejb.Stateless;
    import org.jboss.seam.annotations.Name;

    @Stateless
    @Name("helloWorld")
    public class HelloWorld implements HelloWorldAction {
        public String sayHello() {
            return "Hello world !!";
        }

        public String sayHelloWithArgs(String name) {
            return "Hello " + name;
        }
    }

Note that, to make a method available to Seam Remoting, all we need to do is annotate the method with @WebRemote and import the relevant class. As we can see in the preceding code, it doesn't matter how many parameters our methods take.

Configuring Seam Remoting client side

In the previous sections, we've seen that minimal configuration is required to enable Seam Remoting and to declare Seam components as Remoting-aware. Similarly, in this section we'll see that minimal work is required within a Facelets file to enable Remoting. The Seam Framework provides built-in JavaScript to enable Seam Remoting. To use this JavaScript, we first need to reference it within a Facelets file in the following way:

    <script type="text/javascript"
            src="/HelloWorld/seam/resource/remoting/resource/remote.js">
    </script>
    <script type="text/javascript"
            src="/HelloWorld/seam/resource/remoting/interface.js?helloWorld">
    </script>

To include the relevant JavaScript in a Facelets page, we need to import the /seam/resource/remoting/resource/remote.js and /seam/resource/remoting/interface.js JavaScript files. These files are served via the Seam resource servlet that we defined earlier in this article. You can see that the interface.js file takes an argument defining the name of the Seam component that we will be accessing (this is the name of the component for which we have defined methods with the @WebRemote annotation). If we wish to use two or more different Seam components from a Remoting interface, we specify their names as parameters to the interface.js file, separated by an "&", for example:

    <script type="text/javascript"
            src="/HelloWorld/seam/resource/remoting/interface.js?helloWorld&mySecondComponent&myThirdComponent">
    </script>

Specifying the Seam components we will use from the web tier is straightforward, but the Seam tag library makes it even easier. Instead of specifying the JavaScript shown in the preceding examples, we can simply insert the <s:remote /> tag into Facelets, passing the name of the Seam component to use within the include parameter:

    <ui:composition template="layout/template.xhtml">
        <ui:define name="body">
            <h1>Hello World</h1>
            <s:remote include="helloWorld"/>

To use the <s:remote /> tag, we need to import the Seam tag library, as shown in this example. When the web page is rendered, Seam will automatically generate the relevant JavaScript.
If we are using the <s:remote /> tag and we want to invoke methods on multiple Seam components, we place the component names as comma-separated values within the include parameter of the tag instead, for example:

    <s:remote include="helloWorld, mySecondComponent, myThirdComponent" />

Invoking Seam components via Remoting

Now that we have configured our web application, defined the services to be exposed from the server, and imported the JavaScript to perform the AJAX calls, we can execute our remote methods. To get an instance of a Seam component within JavaScript, we use the Seam.Component.getInstance() method. This method takes one parameter, which specifies the name of the Seam component that we wish to interact with:

    Seam.Component.getInstance("helloWorld")

This method returns a reference to the Seam Remoting JavaScript that allows our exposed @WebRemote methods to be invoked. When invoking a method via JavaScript, we must specify any arguments to the method (possibly there will be none) and a callback function. The callback function will be invoked asynchronously when the server component's method has finished executing. Within the callback function, we can perform any JavaScript processing (such as DOM processing) to give our required AJAX-style functionality.

For example, to execute a simple Hello World client, passing no parameters to the server, we could define the following code within a Facelets file:

    <ui:define name="body">
        <h1>Hello World</h1>
        <s:remote include="helloWorld"/>
        <p>
            <button onclick="javascript:sayHello()">Say Hello</button>
        </p>
        <p>
            <div id="helloResult"></div>
        </p>
        <script type="text/javascript">
            function sayHello() {
                var callback = function(result) {
                    document.getElementById("helloResult").innerHTML = result;
                };
                Seam.Component.getInstance("helloWorld").sayHello(callback);
            }
        </script>
    </ui:define>

Let's take a look at this code, one piece at a time, to see exactly what is happening:

    <s:remote include="helloWorld"/>
    <p>
        <button onclick="javascript:sayHello()">Say Hello</button>
    </p>

In this part of the code, we have specified that we want to invoke methods on the helloWorld Seam component by using the <s:remote /> tag. We've then declared a button and specified that the sayHello() JavaScript function will be invoked when the button is clicked.

    <div id="helloResult"></div>

Next, we've defined an empty <div /> called helloResult. This <div /> will be populated, via the JavaScript DOM API, with the results of our server-side method invocation.

    <script type="text/javascript">
        function sayHello() {
            var callback = function(result) {
                document.getElementById("helloResult").innerHTML = result;
            };
            Seam.Component.getInstance("helloWorld").sayHello(callback);
        }
    </script>

Next, we've defined our JavaScript function sayHello(), which is invoked when the button is clicked. This function declares a callback function that takes one parameter; the JavaScript DOM API uses this parameter to set the contents of the helloResult <div /> that we defined earlier. So far, everything that we've done here has been simple JavaScript and hasn't used any Seam APIs. Finally, we invoke the Seam component using the Seam.Component.getInstance().sayHello() method, passing the callback function as the final parameter.

When we open the page, the following flow of events occurs:

1. The page is displayed with the appropriate JavaScript created via the <s:remote /> tag.
2. The user clicks on the button.
3. The Seam JavaScript is invoked, which causes the sayHello() method on the helloWorld component to be invoked.
The server side component completes execution, causing the JavaScript callback function to be invoked.
The JavaScript DOM API uses the result from the server method to change the contents of the <div /> in the browser, without causing the entire page to be refreshed.

This process shows how we've developed AJAX functionality by writing a minimal amount of JavaScript and, more importantly, without dealing with XML or the JavaScript XMLHttpRequest class directly.

The preceding code shows how we can easily invoke server side methods without passing any parameters. This code can easily be expanded to pass parameters, as shown in the following code snippet:

<s:remote include="helloWorld"/>
<p>
    <button onclick="javascript:sayHelloWithArgs()">
        Say Hello with Args
    </button>
</p>
<p>
    <div id="helloResult"></div>
</p>
<script type="text/javascript">
    function sayHelloWithArgs() {
        var name = "David";
        var callback = function(result) {
            document.getElementById("helloResult").innerHTML = result;
        };
        Seam.Component.getInstance("helloWorld").sayHelloWithArgs(name, callback);
    }
</script>

The preceding code shows that the process for invoking remote methods with parameters is similar to the process for invoking remote methods with no parameters. The important point to note is that the callback function is specified as the last parameter. When our simple application is run, clicking either of the buttons on the page causes our AJAX code to be executed and the text of the <div /> component to be changed.

If we want to invoke a server side method via Seam Remoting and we want the method to be invoked as part of a Seam conversation, we can use the Seam.Remoting.getContext().setConversationId() method to set the conversation ID. This ID will then be used by the Seam Framework to ensure that the AJAX request is part of the appropriate conversation.

Seam.Remoting.getContext().setConversationId(#{conversation.id});
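To make the conversation propagation concrete, here is a minimal sketch of how the pieces might fit together. The component name bookingAgent, its confirmBooking() method, and the bookingResult <div /> are hypothetical names invented for illustration; only the Seam.Remoting and Seam.Component calls come from the examples above.

<s:remote include="bookingAgent"/>
<script type="text/javascript">
    // "bookingAgent", confirmBooking(), and "bookingResult" are hypothetical
    // names used only for illustration; substitute your own
    // @WebRemote-annotated component and target element.
    function confirmBooking() {
        // Tie this AJAX request to the current long-running conversation.
        // #{conversation.id} is an EL expression evaluated when the page is
        // rendered, so the literal conversation ID ends up embedded in the
        // generated JavaScript.
        Seam.Remoting.getContext().setConversationId(#{conversation.id});
        var callback = function(result) {
            document.getElementById("bookingResult").innerHTML = result;
        };
        Seam.Component.getInstance("bookingAgent").confirmBooking(callback);
    }
</script>

As before, the callback receives the server's return value once the method completes, and the page itself is never reloaded.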
Writing Tips for Bloggers

Packt
22 Oct 2009
9 min read
The first thing to look at regarding content is the quality of your writing itself. Good writing takes practice. The best way to learn is to study the work of other good writers and bloggers, and by doing so, develop an ear for a good sentence. However, there are guidelines to bear in mind that apply specifically to blogs, and we'll look at some of these here.

Killer Headlines

Ask any newspaper sub-editor and he or she will tell you that writing good headlines is an art to be mastered. This is equally true for blogs. Your headlines are the post titles, and it's very important to get them right. Your headlines should be concise and to the point. You should try to convey the essence of the post in its title. Remember that blogs are often consumed quickly, and readers will use your post titles to decide whether they want to carry on reading. People tend to scan through blogs, so the titles play a big part in helping them pick the posts they might be interested in.

Your post titles also have a part to play in search engine optimization. Many search engines will use them to index your posts. As more and more people use RSS feeds to subscribe to blogs, it becomes even more important to make your post titles as descriptive and informative as possible. Many RSS readers and aggregators only display the post title, so it's essential to convey as much information as possible whilst keeping it short and snappy. For example, The World's Best Salsa Recipe is a better post title than A New Recipe.

Length of Posts

Try to keep your posts manageable in terms of their word count. It's difficult to be prescriptive about post lengths; there's no one-size-fits-all rule in blogging. You need to gauge the length of your posts based on your subject matter and target audience. There may be an element of experimentation to see how posts of different lengths are received by your readership. As with headlines, bear in mind that most people tend to read blogs fairly quickly, and they may be put off by an overly long post. WordPress 2.6 includes a useful word count feature.

An important factor in controlling the length of your posts is your writing skill. You will find that as you improve as a writer, you will be able to get your points across using fewer words. Good writing is all about making your point as quickly and concisely as possible. Inexperienced writers often feel the urge to embellish their sentences and use long, complicated phrases. This is usually unnecessary, and when you read back that long sentence, you might see a few words that can be cut.

Editing your posts is an important process. At the very least, you should always proofread them before clicking the Publish button. Better still, try to get into the habit of actively editing everything you write. If you know someone who is willing to act as an editor for you, that's great; it's always useful to get some feedback on your writing. If, after re-reading and editing your post, it still seems very long, it might be a good idea to split the post in two and publish the second installment a few days later.

Post Frequency

Again, there are no rules set in stone about how frequently you should post. You will probably know from your own experience of other blogs that this varies tremendously from blogger to blogger. Some bloggers post several times a day and others just once a week or less. Figuring out the correct frequency of your posts is likely to take some trial and error. It will depend on your subject matter and how much you have to say about it.
The length of your posts may also have a bearing on this. If you like to write short posts that make just one main point, you may find yourself posting quite regularly. Or you may prefer to save up your thoughts and get them down in one longer post. As a general rule of thumb, try to post at least once per week. Any less than this and there is a danger your readers will lose interest in your blog. However, it's extremely important not to post just for the sake of it. This is likely to annoy readers, and they may very well delete your feed from their news reader. As with many issues in blogging, post frequency is a personal thing. You should aim to strike a balance between posting once in a blue moon and subjecting your readers to 'verbal diarrhea'.

Almost as important as getting the post frequency right is fine-tuning the timing of your posts, that is, the time you publish them. Once again, you can achieve this by knowing your target audience. Who are they, and when are they most likely to sit down in front of their computers and read your blog? If most of your readers are office workers, it makes sense to have your new posts ready for them when they switch on their workstations in the morning. Maybe your blog is aimed at stay-at-home moms, in which case a good time to post might be mid-morning, when the kids have been dropped off at school, the supermarket run is over, and the first round of chores is done. If you blog about gigs, bars, and nightclubs in your local area, the readers may well include twenty-something professionals who access your blog on their iPhones whilst riding the subway home, so a good time to post for them might be late afternoon.

Links to Other Blogs

Links to other bloggers and websites are an important part of your content. Not only are they great for your blog's search engine findability, they also help to establish your place in the blogosphere. Blogging is all about linking to others and the resulting 'conversations'. Try to avoid over-using popular links that appear all over the Web, and instead introduce your readers to new websites and blogs that they may not have heard of. Admittedly, this is difficult nowadays with so many bloggers linking to each other's posts, but the more original you can be, the better. This may take quite a bit of research and trawling through the lower-ranked pages on search engines and indices, but it could be time well spent if your readers come to appreciate you as a source of new content beyond your own blog. Try to focus on finding blogs in your niche or key topic areas.

Establishing Your Tone and Voice

Tone and voice are two concepts that professional writers are constantly aware of and attempting to improve. An in-depth discussion isn't necessary here, but it's worth being aware of them. The concept of 'tone' can seem rather esoteric to the non-professional writer, but as you write more and more, it's something you will become increasingly aware of. For our purposes, we could say the 'tone' of a blog post is all about the way it feels or the way the blogger has pitched it. Some posts may seem very informal; others may be straight-laced, or appear overly complex and technical. Some may seem quite simplistic, while others come across as advanced material. These are all matters of tone. It can be quite subtle, but as far as most bloggers are concerned, it's usually a matter of formal or informal. How you pitch your writing boils down to understanding your target audience.
Will they appreciate informal, first-person prose, or should you keep it strictly third person, with no slang or casual language? On blogs, a conversational tone is often the most appropriate.

As for 'voice', this is what makes your writing distinctly yours. Writers who develop a distinct voice become instantly recognizable to readers who know them. It takes a lot of practice to develop and is not something you can consciously aim for; it just happens as you gain more experience. The only thing you can do to help it along is to step back from your writing and ask yourself whether any of your habits stand in the way of clarity. While you read back your blog posts, imagine yourself as one of your target readers and consider whether they would appreciate the language and style you've used. Employing tone and voice well is all about getting inside their heads and producing content they can relate to.

Developing a distinctive voice can also be an important aspect of your company's brand identity. Your marketing department may already have brand guidelines that allude to the tone and voice to be used in written communications. Or you may wish to develop such guidelines yourself as a way of focusing your use of tone and voice.

The Structure of a Post

This may not apply to very short posts that don't go further than a couple of brief paragraphs, but for anything longer, it's worth thinking about structure. The classic form is 'beginning, middle, and end'. Consider what your main point or argument is, and get it down in the first paragraph. In the middle section, expand on it and back it up with secondary arguments. At the end, reinforce it and leave no doubt in the reader's mind what it is you've been trying to say. As we've already mentioned, blogs are often read quickly or even just scanned through. Using this kind of structure, which most people are sub-consciously aware of, can help them extract your main points quickly and easily.

A Note about Keywords

It's worth noting here that your writing has a big impact on search engine findability. This is what adds an extra dimension to writing for the Web. As well as all the usual considerations of style, tone, content, and so on, you also need to optimize your content for the search engines. This largely comes down to identifying your keywords and ensuring they're used with the right frequency. In the meantime, hold this thought.
Resource-Oriented Clients with REST Principles

Packt
22 Oct 2009
8 min read
Designing Clients

While designing the library service, the ultimate outcome was the mapping of business operations to URIs and HTTP verbs. The client design is governed by this mapping. Prior to service design, the problem statement was analyzed. For consuming the service and invoking its business operations from clients, there needs to be some understanding of how the service intends to solve the problem. In other words, the service, by design, has already solved the problem. However, the semantics of the solution provided by the service need to be understood by the developers implementing the clients. The semantics of the service are usually documented in terms of business operations and the relationships between those operations. And sometimes, the semantics are obvious. As an example, in the library system, a member returning a book must have already borrowed that book. The borrow book operation precedes the return book operation. Client design must take these semantics into account.

Resource Design

Following is the URI and HTTP verb mapping for the business operations of the library system:

URI                                      HTTP Method   Collection   Operation   Business Operation
/book                                    GET           books        retrieve    Get books
/book                                    POST          books        create      Add book(s)
/book/{book_id}                          GET           books        retrieve    Get book data
/member                                  GET           members      retrieve    Get members
/member                                  POST          members      create      Add member(s)
/member/{member_id}                      GET           members      retrieve    Get member data
/member/{member_id}/books                GET           members      retrieve    Get member borrowings
/member/{member_id}/books/{book_id}      POST          members      create      Borrow book
/member/{member_id}/books/{book_id}      DELETE        members      delete      Return book

When it comes to client design, the resource design is a given; it is an input to the client design. When it comes to implementing clients, we have to adhere to the design given to us by the service designer. In this example, we designed the API given in the above table, so we are already familiar with the API. Sometimes you may have to use an API designed by someone else, in which case you have to ensure that you have access to information such as:

Resource URI formats
HTTP methods involved with each resource URI
The resource collection that is associated with the URI
The nature of the operation to be executed combining the URI and the HTTP verb
The business operation that maps the resource operation to the real world context

Looking at the above resource design table, we can identify two resources, book and member, and we can understand some of the semantics associated with the business operations on those resources:

Create and retrieve books
Create and retrieve members
Borrow a book, list borrowed books, and return a book
Book ID and member ID can be used to invoke operations specific to a particular book or member instance

System Implementation

In this section, we will use the techniques on client programming to consume the library service. These techniques include:

Building requests using XML
Sending requests with the correct HTTP verbs using an HTTP client library like cURL
Receiving XML responses and processing them to extract the information we require

Retrieving Resource Information

Here is the PHP source code to retrieve book information.
<?php
$url = 'http://localhost/rest/04/library/book.php';
$client = curl_init($url);
curl_setopt($client, CURLOPT_RETURNTRANSFER, 1);
$response = curl_exec($client);
curl_close($client);

$xml = simplexml_load_string($response);
foreach ($xml->book as $book) {
    echo "$book->id, $book->name, $book->author, $book->isbn <br/>\n";
}
?>

As per the service design, all that is required is to send a GET request to the URL of the book resource. And as per the service semantics, we expect the response to be something similar to:

<books>
    <book>
        <id>1</id>
        <name>Book1</name>
        <author>Auth1</author>
        <isbn>ISBN0001</isbn>
    </book>
    <book>
        <id>2</id>
        <name>Book2</name>
        <author>Auth2</author>
        <isbn>ISBN0002</isbn>
    </book>
</books>

So in the client, we convert the response to an XML tree:

$xml = simplexml_load_string($response);

And generate the output that we desire from the client. In this case, we print all the books:

foreach ($xml->book as $book) {
    echo "$book->id, $book->name, $book->author, $book->isbn <br/>\n";
}

The output is:

1, Book1, Auth1, ISBN0001
2, Book2, Auth2, ISBN0002

Similarly, we could retrieve all the members with the following PHP script.

<?php
$url = 'http://localhost/rest/04/library/member.php';
$client = curl_init($url);
curl_setopt($client, CURLOPT_RETURNTRANSFER, 1);
$response = curl_exec($client);
curl_close($client);

$xml = simplexml_load_string($response);
foreach ($xml->member as $member) {
    echo "$member->id, $member->first_name, $member->last_name <br/>\n";
}
?>

Next, retrieving the books borrowed by a member.

<?php
$url = 'http://localhost/rest/04/library/member.php/1/books';
$client = curl_init($url);
curl_setopt($client, CURLOPT_RETURNTRANSFER, 1);
$response = curl_exec($client);
curl_close($client);

$xml = simplexml_load_string($response);
foreach ($xml->book as $book) {
    echo "$book->id, $book->name, $book->author, $book->isbn <br/>\n";
}
?>

Here we are retrieving the books borrowed by the member with ID 1. Only the URL differs; the rest of the logic is the same.

Creating Resources

Books, members, and borrowings can be created using POST operations, as per the service design. The following PHP script creates new books.

<?php
$url = 'http://localhost/rest/04/library/book.php';
$data = <<<XML
<books>
    <book><name>Book3</name><author>Auth3</author><isbn>ISBN0003</isbn></book>
    <book><name>Book4</name><author>Auth4</author><isbn>ISBN0004</isbn></book>
</books>
XML;

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
$response = curl_exec($ch);
curl_close($ch);

echo $response;
?>

When data is sent with the POST verb to the URI of the book resource, the posted data is used to create resource instances. Note that, in order to figure out the format of the XML message to be used, you have to look into the service operation documentation. This is where the knowledge of service semantics comes into play. Next is the PHP script to create members.

<?php
$url = 'http://localhost/rest/04/library/member.php';
$data = <<<XML
<members>
    <member><first_name>Sam</first_name><last_name>Noel</last_name></member>
</members>
XML;

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
$response = curl_exec($ch);
curl_close($ch);

echo $response;
?>

This script is very similar to the script that creates books.
The only differences are the endpoint address and the XML payload used. The endpoint address refers to the location where the service is hosted. In the above script, the endpoint address of the service is:

$url = 'http://localhost/rest/04/library/member.php';

Next, borrowing a book can be done by posting to the member URI with the ID of the member borrowing the book and the ID of the book being borrowed.

<?php
$url = 'http://localhost/rest/04/library/member.php/1/books/2';
$data = <<<XML
XML;

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);
$response = curl_exec($ch);
curl_close($ch);

echo $response;
?>

Note that, in the above sample, we are not posting any data to the URI. Hence the XML payload is empty:

$data = <<<XML
XML;

As per the REST architectural principles, we just send a POST request with all the resource information in the URI itself. In this example, the member with ID 1 is borrowing the book with ID 2.

$url = 'http://localhost/rest/04/library/member.php/1/books/2';

One thing to note in these client scripts is that we have used hard-coded URLs and parameter values. When you use these scripts with an application that has a Web-based user interface, those hard-coded values need to be parameterized. And we send a POST request to this URL:

curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $data);

Note that, even though the XML payload we are sending to the service is empty, we still have to set the CURLOPT_POSTFIELDS option for cURL. This is because we have set CURLOPT_POST to true, and the cURL library mandates setting the POST fields option even when it is empty. This script causes a book borrowing to be created on the server side. When the member.php script receives a request of the form /{member_id}/books/{book_id} with the HTTP verb POST, it maps the request to the borrow book business operation. So the URL

$url = 'http://localhost/rest/04/library/member.php/1/books/2';

means that the member with ID 1 is borrowing the book with ID 2.
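The resource design table also maps the return book operation to a DELETE request on the same URI, although the scripts above stop at borrowing. As a minimal sketch, assuming the same endpoint layout as the borrow script (the response body will depend on how member.php is implemented), the return could be performed with cURL's CURLOPT_CUSTOMREQUEST option:

<?php
// Minimal sketch of the "return book" operation: per the resource design
// table, a DELETE on /member/{member_id}/books/{book_id} returns a book.
// Here the member with ID 1 returns the book with ID 2.
$url = 'http://localhost/rest/04/library/member.php/1/books/2';

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// cURL has no dedicated DELETE flag; CURLOPT_CUSTOMREQUEST sets the verb.
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'DELETE');
$response = curl_exec($ch);
curl_close($ch);

echo $response;
?>

As with the borrow script, no payload is needed, because the URI itself carries all the resource information the service requires.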