
How-To Tutorials - Web Development


Google Earth, Google Maps and Your Photos: a Tutorial

Packt
22 Oct 2009
38 min read
Introduction

A scholar who never travels but stays at home is not worthy to be accounted a scholar. From my youth on I had the ambition to travel, but could not afford to wander over the three hundred counties of Korea, much less the whole world. So, carrying out an ancient practise, I drew a geographical atlas. And while gazing at it for long stretches at a time I feel as though I was carrying out my ambition . . . Morning and evening while bending over my small study table, I meditate on it and play with it and there in one vast panorama are the districts, the prefectures and the four seas, and endless stretches of thousands of miles.

WON HAK-SAENG, Korean student, preface to his untitled manuscript atlas of China during the Ming Dynasty, dated 1721.

To say that maps are important is an understatement almost as big as the world itself. Maps quite literally connect us to the world in which we all live, and by extension, they link us to one another. The oldest preserved maps date back nearly 4500 years. In addition to connecting us to our past, they chart much of human progress and expose the relationships among people through time.

Unfortunately, as a work of humankind, maps share many of the same shortcomings of all human endeavors. They are to some degree inaccurate, and they reflect the bias of the map maker. Advancements in technology help to address the former issue and offer us the opportunity to resist the latter. To the extent that it's possible for all of us to participate in the map making, the bias of a select few becomes less meaningful.

Google Earth and Google Maps are two applications that allow each of us to assert our own place in the world and contribute our own unique perspective. I can think of no better way to accomplish this than by combining maps and photography. Photos reveal much about who we are, the places we have been, the people with whom we have shared those experiences, and the significant events in our lives. Pinning our photos to a map allows us to see them in their proper geographic context, a valuable way to explore and share them with friends and family. Photos can reveal the true character of a place, and afford others the opportunity to experience these destinations, perhaps faraway and unreachable for some of us, from the perspective of someone who has actually been there.

In this tutorial I'll show you how to 'pin' your photos using Google Earth and Google Maps. Both applications are free and available for Windows, Mac OS X, and Linux. Google Earth is a local application, so there are the usual requirements associated with installing and running it. Google Maps has its own requirements, primary among them a compatible web browser (the highly regarded Firefox is recommended).

In Google Earth, your photos show up on the map within the application, complete with titles, descriptions, and other relevant information. You can choose to share your photos with everyone, only people you know, or even reserve them strictly for yourself. Google Maps offers the flexibility to present maps outside of a traditional application. For example, you can embed a map on a webpage pinpointing the location of one particular photo, map a collection of photos to present alongside a photo gallery, or even collect all of your digital photos together on one dynamic map.

Over the course of a couple of short articles we'll cover everything you need to take advantage of both applications. I've put together two scripts to help us accomplish that goal.
The first is a Perl script that works through your photos and generates a file in the proper format with all of the data necessary to include those photos in Google Earth. The second is a short bit of JavaScript that works with the first file and builds a dynamic Google Map of those same photos. Both scripts are available for you to download, after which you are free to use them as is, or modify them to suit your own projects. I've purposefully kept the code as simple as possible to make it accessible to the widest audience, even those of you who may be new to programming or unfamiliar with Perl, JavaScript, or both. I've taken care to comment the code generously so that everything is plainly obvious. I'm hopeful that you will be surprised at just how simple it is.

There are a couple of preliminary topics to examine briefly before we go any further. In the preceding paragraph I mentioned that the result of the first of our two scripts is a 'file in the proper format...'. This file, or more to the point the file format, is a very important part of the project. KML (Keyhole Markup Language) is a fairly simple XML-based format that can be considered the native "language" of Google Earth. That description raises the question, 'What is XML?'. To oversimplify, because even a cursory discussion of XML is outside the scope of this article, XML (Extensible Markup Language) is an open data format (in contrast to proprietary formats) which allows us to present information in such a way that we communicate not only the data itself, but also descriptive information about the data and the relationships among elements of the data. One of the technical terms that applies is 'metalanguage', which approximately translates to mean a language that makes it possible to describe other languages. If you're unfamiliar with the concept, it may be difficult to grasp at first, or it may not seem impressive. However, metalanguages, and specifically XML, are an important innovation (I don't mean to suggest that it's a new concept; in fact XML has roots that are quite old, relative to the brief history of computing). These metalanguages make it possible for us to imbue data with meaning such that software can make use of that data. Let's look at an example taken from the Wikipedia entry for KML.

<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
<Placemark>
  <description>New York City</description>
  <name>New York City</name>
  <Point>
    <coordinates>-74.006393,40.714172,0</coordinates>
  </Point>
</Placemark>
</kml>

Ignore all of the pro forma stuff before and after the <Placemark> tags and you might be able to guess what this bit of data represents. More importantly, a computer can be made to understand what it represents in some sense. Without the tags and structure, "New York City" is just a string, i.e. a sequence of characters. Considering the tags, we can see that we're dealing with a place (a Placemark), and that "New York City" is the name of this place (and in this example also its description). With all of this formal structure, programs can be made to roughly understand the concept of a Placemark, which is a useful thing for a mapping application.

Let's think about this for a moment. There are a very large number of recognizable places on the planet, and a provably infinite number of strings. Given a block of text, how could a computer be made to distinguish the name of a place from, for example, the scientific name of a particular flower, or a person's name? It would be extremely difficult.
We could try to create a database of every recognizable place and have the program check the database every time it encounters a string. That assumes it's possible to agree on a 'complete list of every place', which is almost certainly untrue. Keep in mind that we could be talking about informal places that are significant only to a small number of people or even a single person, e.g. 'the park where, as a child, I first learned to fly a kite'. It would be a very long list if we were going to include these sorts of places, and incredibly time-consuming to search.

Relying on the structure of the fragment above, we can easily write a program that can identify 'New York City' as the name of a place, or for that matter 'The park where I first learned to fly a kite'. In fact, I could write a program that pulls all of the place names from a file like this, along with a description and a pair of coordinate points for each, and includes them on a map. That's exactly what we're going to do. KML makes it possible. If I haven't made it clear, the structural bits of the file must be standardized. KML supports a limited set of elements (e.g. 'Placemark' is a supported element, as are 'Point' and 'coordinates'), and all of the elements used in a file must adhere to the standard for it to be considered valid.

The second point we need to address before we begin is, appropriately enough... where to begin? Lewis Carroll famously tells us to "Begin at the beginning and go on till you come to the end: then stop." Of course, Mr. Carroll was writing a book at the time. If "Alice's Adventures in Wonderland" were an article, he might have had different advice. From the beginning to the end there is a lot of ground to cover. We're going to start somewhere further along, and make up the difference with the following assumptions. For the purposes of this discussion, I am going to assume that you have:

Perl.

Access to Phil Harvey's excellent ExifTool, a Perl library and command-line application for reading, writing, and editing metadata in images (among other file types). We will be using this library in our first script.

A publicly accessible web server. Google requires the use of an API key by which it can monitor the use of its map services. Google must be able to validate your key, and so your site must be publicly available. Note that this is a requirement for Google Maps only.

Photos, preferably in a photo management application. Essentially, all you need is an app capable of generating both thumbnails and reduced size copies of your original photos. An app that can export a nice gallery for use on the web is even better.

Coordinate data as part of the EXIF metadata embedded in your photos. If that sounds unfamiliar to you, then most likely you will have to take some additional steps before you can make use of this tutorial. I'm not aware of any digital cameras that automatically include this information at the time the photo is created. There are devices that can be used in combination with digital cameras, and there are a number of ways that you can 'tag' your photos with geographic data much the same way you would add keywords and other annotations.

Let's begin!

Part 1: Google Earth

Section 1: Introduction to Part 1

Time to begin the project in earnest.
As I've already mentioned, we'll spend the first half of this tutorial looking at Google Earth and putting together a Perl script which, given a collection of geotagged photos, will build a set of KML files so that we can browse our photos in Google Earth. These same files will serve as the data source for our Google Maps application later on.

Section 2: Some Advice Before We Begin

Take your time to make sure you understand each topic before you move on to the next. Think of this as the first step in debugging your completed code. If you go slowly enough to identify aspects of the project that you don't quite understand, then you'll have some idea where to start looking for problems should things not go as expected. Furthermore, going slowly will give you the opportunity to identify those parts that you may want to modify to better fit the script to your own needs.

If this is new to you, follow along as faithfully as possible with what I do here the first time through. Feel free to make notes for yourself as you go, but making changes on the first pass may make it difficult for you to keep hold of the narrative and piece together a functional script. After you have a working solution, it will be a simple matter to implement changes one at a time until you have something that works for you. Following this approach it will be easy to identify the silly mistakes that tend to creep in once you start making changes.

There is also the issue of trust. This is probably the first time we're meeting each other, in which case you should have some doubt that my code works properly to begin with. If you minimize the number of changes you make, you can confirm that this works for you before blaming yourself or your changes for my mistakes. I will tell you up front that I'm building this project myself as we go. You can be certain at least that it functions as described for me as of the date attached to the article. I realize that this is quite different from being certain that the project will work for you, but at least it's something.

The entirety of my project is available for you to download. You are free to use all of it for any legal purpose whatsoever, including my photos in addition to all of the code, icons, etc. This is so you have some samples to use before you involve your own content. I don't promise that they are the most beautiful images you have ever seen, but they are all decent photos, properly annotated with the necessary metadata, including geographic tags.

Section 3: Photos, Metadata and ExifTool

To begin, we must have a clear understanding of what the Perl script will require from us. Essentially, we need to provide it with a selection of annotated image files, and information about how to reference those files. A simple folder of files is sufficient, and will be convenient for us, both as the programmer and the end user. The script will be capable of negotiating nested folders, and if a directory contains both images and other file types, non-image types will be ignored.

Typically, after a day of taking photos I'll have 100 to 200 that I want to keep. I delete the rest immediately after offloading them from the camera. For the files that are left, I preserve the original grouping, keeping all of the files together in a single folder. I place this folder of images in an appropriate location according to a scheme that serves to keep my complete collection of photos neatly organized. These are my digital 'negatives'.
I handle all subsequent organization, editing, enhancements, and annotations within my photo management application. I use Apple Inc.'s Aperture, but there are many others that do the job equally well. Annotating your photos is well worth the investment of time and effort, but it's important that you have some strategy in mind so that you don't create meaningless tags that are difficult to use to your advantage. For the purposes of this project the tags we'll need are quite limited, which means that going forward we will be able to continue adding photos to our maps with a reasonable amount of work. The annotations we need are:

Caption
Latitude
Longitude
Image Date *
Location/Sub-Location
City
Province/State
Country Name
Event
People
ImageWidth *
ImageHeight *

* Values for these Exif tags are generated by your camera.

Note that these are labels used in Aperture, and are not necessarily consistent from one application to the next. Some of them are more likely than others to be used reliably. 'City', for example, should be dependable, while the labels 'People', 'Events', and 'Location', among others, are more likely to differ. One explanation for these variations is that the meaning of these fields is more open to interpretation. Location, for example, is likely to be used to narrow down the area where the photo was taken within a particular city, but it is left to the person who is annotating the photo to decide whether the field should name a specific street address, an informal place (e.g. 'home' or 'school'), or a larger area, for example a district or neighborhood.

Fortunately, things aren't as arbitrary as they seem. Each of these fields corresponds to a specific tag name that adheres to one of the common metadata formats (Exif, IPTC, XMP, and there are others). These tag names are consistent as required by the standards. The trick is in determining the labels used in your application that correspond to the well-defined tag names. Our script relies on these metadata tags, so it is important that you know which fields to use in your application. This gives us an excuse to get acquainted with ExifTool. From the project's website, we have this description of the application:

ExifTool is a platform-independent Perl library plus a command-line application for reading, writing, and editing meta information in image, audio, and video files...

ExifTool can seem a little intimidating at first. Just keep in mind that we will need to understand only a small part of it for this project, and then be happy that such a useful and flexible tool is freely available for you to use. The description above states in part that ExifTool is a Perl library and command-line application that we can use to extract metadata from image files. With a single short command, we can have the app print all of the metadata contained in one of our image files.

First, make sure ExifTool is installed. You can test for this by typing the name of the application at the command line.

$ exiftool

If it is installed, then running it with no options should prompt the tool to print its documentation. If this works, there will be more than one screen of text. You can page through it by pressing the spacebar. Press the 'q' key at any time to stop. If the tool is not installed, you will need to add it before continuing. See the appendix at the end of this tutorial for more information.
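Because ExifTool is a Perl library as well as a command-line tool, our script will eventually be able to ask for these tags by name directly from Perl. As a preview, here is a minimal sketch of the library interface, assuming the Image::ExifTool module is installed; the filename and the short tag list are just placeholders for illustration:

use strict;
use warnings;
use Image::ExifTool;

# Create an ExifTool object and request a few of the tags from our list.
my $exiftool = Image::ExifTool->new();
my $info     = $exiftool->ImageInfo('image.jpg',
    'Caption-Abstract', 'GPSLatitude', 'GPSLongitude', 'City');

# ImageInfo returns a hash reference keyed by tag name; values are
# human-readable strings by default.
for my $tag ('Caption-Abstract', 'GPSLatitude', 'GPSLongitude', 'City') {
    printf "%-18s: %s\n", $tag,
        defined $info->{$tag} ? $info->{$tag} : '(not set)';
}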
Having confirmed that ExifTool is installed, typing the following command will result in a listing of the metadata for the named image:

$ exiftool -f -s -g2 /path/image.jpg

Where 'path' is an absolute path to image.jpg or a relative path from the current directory, and 'image.jpg' is the name of one of your tagged image files. We'll have more to say about ExifTool later, but because I believe that no tutorial should ask the reader to blindly enter commands as if they were magic incantations, I'll briefly describe each of the options used in the command above:

-f forces printing of tags even if their values are not found. This gives us a better idea about all of the available tag names, whether or not there are currently defined values for those tags.

-s prints tag names instead of descriptions. This is important for our purposes. We need to know the tag names so that we can request them in our Perl code. Descriptions, which are expanded, more human-readable forms of the same names, obscure details we need. For example, compare the tag name 'GPSLatitude' to the description 'GPS Latitude'. We can use the tag name, but not the description, to extract the latitude value from our files.

-g2 organizes output by category. All location-specific information is grouped together, as is all information related to the camera, date and time tags, etc. You may feel, as I do, that this grouping makes it easier to examine the output. Also, this organization is more likely to reflect the grouping of field names used by your photo management application.

If you prefer to save the output to a file, you can add ExifTool's -w option with a file extension.

$ exiftool -f -s -g2 -w txt path/image.jpg

This command will produce the same result but write the output to the file 'image.txt' in the current directory; again, where 'image.jpg' is the name of the image file. The -w option appends the named extension to the image file's basename, creates a new file with that name, and sets the new file as the destination for output.

The tag names that correspond to the list of Aperture fields presented above are (metadata tag name: Aperture field label):

Caption-Abstract: Caption
GPSLatitude: Latitude
GPSLongitude: Longitude
DateTimeOriginal: Image Date
Sub-location: Location/Sub-Location
City: City
Province-State: Province/State
Country-PrimaryLocationName: Country Name
FixtureIdentifier: Event
Contact: People
ImageWidth: Pixel Width
ImageHeight: Pixel Height

Section 4: Making Photos Available to Google Earth

We will use some of the metadata tags from our image files to locate our photos on the map (e.g. GPSLatitude, GPSLongitude), and others to describe the photos. For example, we will include the value of the People tag in the information window that accompanies each marker to identify friends and family who appear in the associated photo. Because we want to display and link to photos on the map, not just indicate their position, we need to include references to the location of our image files on a publicly accessible web server. You have some choice about how to do this, but for the implementation described here we will (1) display a thumbnail in the info window of each map marker and (2) include a link to the details page for the image in a gallery created in our photo management app. When a visitor clicks on a map marker they will see a thumbnail photo along with other brief descriptive information.
Clicking a link included as part of the description will open the viewer's web browser to a page displaying a large size image and additional details. Furthermore, because the page is part of a gallery, viewers can jump to an index page and step forward and back through the complete collection. This is a complementary way to browse our photos. Taking this one step further, we could add a comments section to the gallery pages, or replace the gallery altogether, instead choosing to display each photo as a weblog post, for example.

The structure of the gallery created from my photo app is as follows:

/ (the root of the gallery directory)
    index.html
    large-1.html
    large-2.html
    large-3.html
    ...
    large-n.html
    assets/
        css/
        img/
    catalog/
    pictures/
        picture-1.jpg
        picture-2.jpg
        picture-3.jpg
        ...
        picture-n.jpg
    thumbnails/
        thumb-1.jpg
        thumb-2.jpg
        thumb-3.jpg
        ...
        thumb-n.jpg

The application creates a root directory containing the complete gallery. Assuming we do not want to make any manual changes to the finished site, publishing is as easy as copying the entire directory to a location within the web server's document root.

assets/ is a subfolder containing files related to the theme itself. We don't need to concern ourselves with this sub-directory.

catalog/ contains a single catalog.plist file which is specific to Aperture and not relevant to this discussion.

pictures/ contains the large size images included on the detail gallery pages.

thumbnails/ contains the thumbnail images corresponding to the large size images in pictures/.

Finally, there are a number of files at the root of the gallery. These include index pages and files named 'large-n.html', where n is a number starting at 1 and increasing sequentially, e.g. large-1.html, large-2.html, large-3.html, ... The index files are the index pages of our gallery. The number of index pages generated will be dictated by the number of image files in the gallery, as well as the gallery's layout and design. index.html is always the first gallery page. The large-n.html files are the details pages of our gallery. Each page features an individual photo with links to the previous and next photos in sequence and a link back to the index.

You can see the gallery I have created for this tutorial here: http://robreed.net/photos/tutorial_gallery/

If you take the time to look through the gallery, maybe you can appreciate the value of viewing these photos on a map. Web-based photo galleries like this one are nice enough, but the photos are more interesting when viewed in some meaningful context.

There are a couple of things to notice about this gallery. Firstly, picture-1.jpg, thumb-1.jpg, and large-1.html all refer to the same image, so if we pick one of the three files we can easily predict the names of the other two. This relationship will be useful when it comes to writing our script.

There is another important issue I need to call to your attention because it will not be apparent from looking only at the gallery structure. Aperture has renamed all of my photos in the process of exporting them. The name of the original file from which picture-1.jpg was generated (as well as large-1.html and thumb-1.jpg) is 'IMG_0277.JPG', which is the filename produced by my camera. Because I want to link to these gallery files, not my original photos, which will stay safely tucked away on a local drive, I must run the script against the photos in the gallery.
I cannot run it against the original image files because the filenames referenced in the output would be unrelated to the files in the gallery. If my photo management app offered the option of preserving the original filenames for the corresponding photos in the gallery, then I could run the script against either the original image files or the gallery photos, because all of the filenames would be consistent, but this is not the case. I don't have a problem as long as I run the script on the exported photos.

However, if I'm running the script against the photos in the web gallery, either the pictures or thumbnail images must contain the same metadata as the original image files. Aperture preserves the metadata in both. Your application may not. A simple, dependable way to confirm that the metadata is present in the gallery files is to run ExifTool first against the original file and then against the same photo in the pictures/ and thumbnails/ directories in the gallery. If ExifTool reports identical metadata, then you will have no trouble using either pictures/ or thumbnails/ as your source directory.

If the metadata is not present or not complete in the gallery files, you may need to use the script on your original image files. As has already been explained, this isn't a problem unless the gallery produces filenames that are inconsistent with the original filenames, as Aperture does. In that case you have a problem: you won't be able to run the script on the original image files because of the naming issue, or on the gallery photos because they don't contain metadata. Make sure that you understand this point. If you find yourself in this situation, then your best bet is to generate files to use with your maps from your original photos in some other way, bypassing your photo management app's web gallery features altogether in favor of a solution that preserves the filenames, the metadata, or both. There is another option, which involves setting up a relationship between the names of the original files and the gallery filenames. This tutorial does not include details about how to set up this association.

Finally, keep in mind that though we've looked at the structure of a gallery generated by Aperture, virtually all photo management apps produce galleries with a similar structure. Regardless of the application used, you should find:

A group of html files including index and details pages
A folder of large size image files
A folder of thumbnails

Once you have identified the structure used by your application, as we have done here, it will be a simple task to translate these instructions.

Section 5: Referencing Files Over the Internet

Now we can talk about how to reference these files and gallery pages so that we can create a script to generate a KML file that includes these references. When we identify a file over the internet, it is not enough to use the filename, e.g. 'thumb-1.jpg', or even the absolute or relative path to the file on the local computer. In fact these paths are most likely not valid as far as your web server is concerned. Instead we need to know how to reference our files such that they can be accessed over the global internet, and the web more specifically. In other words, we need to be able to generate a URL (Uniform Resource Locator) which unambiguously describes the location of our files.
The formal details of exactly what comprises a URL are more complicated than may be obvious at first, but most of us are familiar with the typical URL, like this one:

http://www.ietf.org/rfc/rfc1630.txt

which describes the location of a document titled "Universal Resource Identifiers in WWW" that just so happens to define the formal details of what comprises a URL.

http://www.ietf.org

This portion of the address is enough to describe the location of a particular web server over the public internet. In fact it does a little more than just specify the location of a machine. The http:// portion is called the scheme, and it identifies a particular protocol (i.e. a set of rules governing communication) and a related application, namely the web. What I just said isn't quite correct; at one time HTTP was used exclusively by the web, but that's no longer true. Many internet-based applications use the protocol because the popularity of the web ensures that data sent via HTTP isn't blocked or otherwise disrupted. You may not be accustomed to thinking of it as such, but the web itself is a highly distributed, network-based application.

/rfc/

This portion of the address specifies a directory on the server. It is equivalent to an absolute path on your local computer. The leading forward slash is the root of the web server's public document directory. Assuming no trickiness on the part of the server, all content lives under the document root. This tells us that rfc/ is a sub-directory contained within the document root. Though this directory happens to be located immediately under the root, this certainly need not be the case. In fact these paths can get quite long. We have now discussed all of the URL except for:

rfc1630.txt

which is the name of a specific file. The filename is no different than the filenames on your local computer.

Let's manually construct a path to one of the large-n.html pages of the gallery we have created. The address of my server is robreed.net, so I know that the beginning of my URL will be:

http://robreed.net

I keep all of my galleries together within a photos/ directory, which is contained in the document root:

http://robreed.net/photos/

Within photos/, each gallery is given its own folder. The name of the folder I have created for this tutorial is 'tutorial_gallery'. Putting this all together, the following URL brings me to the root of my photo gallery:

http://robreed.net/photos/tutorial_gallery/

We've already gone over the directory structure of the gallery, so it should make sense to you that when referring to the 'large-1.html' detail page, the complete URL will be:

http://robreed.net/photos/tutorial_gallery/large-1.html

the URL of the image that corresponds to that detail page is:

http://robreed.net/photos/tutorial_gallery/pictures/picture-1.jpg

and the thumbnail can be found at:

http://robreed.net/photos/tutorial_gallery/thumbnails/thumb-1.jpg

Notice that the address of the gallery is shared among all of these resources. Also, notice that resources of each type (e.g. the large images, thumbnails, and html pages) share a more specific address with files of that same type.
If we use the term 'base address' to refer to the shared portions of these URLs, then we can talk about several significant base addresses:

The gallery base address: http://robreed.net/photos/tutorial_gallery/
The html pages base address: http://robreed.net/photos/tutorial_gallery/
The images base address: http://robreed.net/photos/tutorial_gallery/pictures/
The thumbnails base address: http://robreed.net/photos/tutorial_gallery/thumbnails/

Note that given the structure of this particular gallery, the html pages base address and the gallery base address are identical. This need not be the case, and may not be for the gallery produced by your application. We can hard-code the base addresses into our script. For each photo, we need only append the associated filename to construct valid URLs to any of these resources. As the script runs, it will have access to the name of the file that it is currently evaluating, and so it will be a simple matter to generate the references we need as we go.

At this point we have discussed almost everything we need to put together our script. We have:

Created a gallery at our server, which includes our photos with metadata in tow
Identified all of the metadata tags we need to extract from our photos with the script, and the corresponding field names in our photo management application
Determined all of the base addresses we need to generate references to our image files

Section 6: KML

The last thing we need to understand is the format of the KML files we want to produce. We've already looked at a fragment of KML. The full details can be found on Google's KML documentation pages, which include samples, tutorials, and a complete reference for the format. A quick look at the reference is enough to see that the language includes many elements and attributes, the majority of which we will not be including in our files. That statement correctly implies that it is not necessary for every KML file to include all elements and attributes. The converse, however, is not true, which is to say that every element and attribute contained in any KML file must be a part of the standard. A small subset of KML will get us most, if not all, of what you will typically see in Google Earth from other applications. Many of the features we will not be using deal with aspects of the language that are either:

Not relevant to this project, e.g. ground overlays (GroundOverlay), which "draw an image overlay draped onto the terrain"
Minute details for which the default values are sensible

There is no need to feel shortchanged because we are covering only a subset of the language. With the basic structure in place and a solid understanding of how to script the generation of KML, you will be able to extend the project to include any of the other components of the language as you see fit. The structure of our KML file is as follows:

 1.    <?xml version="1.0" encoding="UTF-8"?>
 2.    <kml xmlns="http://www.opengis.net/kml/2.2">
 3.        <Document>
 4.            <Folder>
 5.                <name>$folder_name</name>
 6.                <description>$folder_description</description>
 7.                <Placemark>
 8.                    <name>$placemark_name</name>
 9.                    <Snippet maxLines="1">
10.                        $placemark_snippet
11.                    </Snippet>
12.                    <description><![CDATA[
13.                        $placemark_description
14.                    ]]></description>
15.                    <Point>
16.                        <coordinates>$longitude,$latitude</coordinates>
17.                    </Point>
18.                </Placemark>
19.            </Folder>
20.        </Document>
21.    </kml>

Line 1: XML header. Every valid KML file must start with this line, and nothing else is allowed to appear before it. As I've already mentioned, KML is an XML-based language, and XML requires this header.

Line 2: Namespace declaration. More specifically, this is the KML namespace declaration, and it is another formality. The value of the xmlns attribute identifies the version of the KML standard that the file conforms to.

Line 3: <Document> is a container element representing the KML file itself. If we do not explicitly name the document by including a name element, then Google Earth will use the name of the KML file as the Document element's <name>. The Document container will appear on the Google Earth 'Sidebar' within the 'Places' panel. Optionally, we can control whether the container is closed or open by default. (This setting can be toggled in Google Earth using a typical disclosure triangle.) There are many other elements and attributes that can be applied to the Document element. Refer to the KML Reference for the full details.

Line 4: <Folder> is another container element. The files we produce will include a single <Folder> containing all of our Placemarks, where each Placemark represents a single image. We could create multiple Folder elements to group our Placemarks according to some significant criteria. Think of the Folder element as being similar to your operating system's concept of a folder. At this point, note the structure of the fragment. The majority of it is contained within the Folder element. Folder, in turn, is an element of Document, which is itself within the <kml> container. It should make sense that everything in the file that is considered part of the language must be contained within the kml element. From the KML reference:

A Folder is used to arrange other Features hierarchically (Folders, Placemarks, NetworkLinks, or Overlays). A Feature is visible only if it and all its ancestors are visible.

Line 5: The name element identifies an object, in this case the Folder object. The text that appears between the name tags can be any plain text that will serve as an appropriate label.

Line 6: <description> is any text that seems to adequately describe the object. The description element supports both plain text and a subset of HTML. We'll consider issues related to using HTML in <description> in the discussion of Placemark, lines 12 - 14.

Lines 7 - 18 define a <Placemark> element. Note that Placemark contains a number of elements that also appear in Folder, including <name> (line 8) and <description> (lines 12 - 14). These elements serve the same purpose for Placemark as they do for the Folder element, but of course they refer to a different object.

I've said that <description> can include a subset of HTML in addition to plain text. Under XML, some characters have special meaning. You may need to use these characters as part of the HTML included in your descriptions. Angle brackets (<, >), for example, surround tag names in HTML, but serve a similar purpose in XML. When they are used strictly as part of the content, we want the XML parser to ignore these characters. We can accomplish this in a few different ways. We can use entity references, either numeric character references or character entity references, to indicate that the symbol appears as part of the data and should not be treated as part of the syntax of the language. The character '<', which is required to include an image as part of the description (something we will be doing, e.g. <img src=...
/>), can safely be included as the character entity reference '&lt;' or the numeric character reference '&#60;'. The character entity references may be easier to remember and recognize on sight, but are limited to the small subset of characters for which they have been defined. The numeric references, on the other hand, can specify any ASCII character. Of the two types, numeric character references should be preferred. There are also Unicode entity references, which can specify any character at all.

In the simple case of embedding short bits of HTML in KML descriptions, we can avoid the complication of these references altogether by enclosing the entire description in a CDATA section, which instructs the XML parser to ignore any special characters that appear until the end of the block set off by the CDATA tags. Notice the string '<![CDATA[' immediately after the opening <description> tag within <Placemark>, and the string ']]>' immediately before the closing </description> tag. If we simply place all of our HTML and plain text between those two strings, it will all be ignored by the parser and we are not required to deal with special characters individually. This is how we'll handle the issue.

Lines 9 - 11: <Snippet>. The KML reference does a good job of clearly describing this element. From the KML reference:

In Google Earth, this description is displayed in the Places panel under the name of the feature. If a Snippet is not supplied, the first two lines of the <description> are used. In Google Earth, if a Placemark contains both a description and a Snippet, the <Snippet> appears beneath the Placemark in the Places panel, and the <description> appears in the Placemark's description balloon. This tag does not support HTML markup. <Snippet> has a maxLines attribute, an integer that specifies the maximum number of lines to display. Default for maxLines is 2.

Notice that in the block above at line 9, I have included a 'maxLines' attribute with a value of 1. Of course, you are free to substitute your own value for maxLines, or you can omit the attribute entirely to use the default value.

The only element we have yet to review is <Point>, and again we need only look to the official reference for a nice description. From the KML reference:

A geographic location defined by longitude, latitude, and (optional) altitude. When a Point is contained by a Placemark, the point itself determines the position of the Placemark's name and icon.

<Point> in turn contains the element <coordinates>, which is required. From the KML reference:

A single tuple consisting of floating point values for longitude, latitude, and altitude (in that order).

The reference also informs us that altitude is optional, and in fact we will not be generating altitude values. Finally, the reference warns:

Do not include spaces between the three values that describe a coordinate.

This seems like an easy mistake to make. We'll need to be careful to avoid it.

There will be a number of <Placemark> elements, one for each of our images. The question is how to handle these elements. The answer is that we'll treat KML as a 'fill in the blanks' style template. All of the structural and syntactic bits will be hard-coded, e.g. the XML header, namespace declaration, all of the element and attribute labels, and even the whitespace, which is not strictly required but will make it much easier for us to inspect the resulting files in a text editor. These components will form our template.
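To make the 'fill in the blanks' idea concrete before we walk through the real code, here is a minimal sketch of how a single Placemark might be produced in Perl, using a heredoc as the template. The variable names match the placeholders in the listing above; the sample values are invented for illustration, and the actual script derives them from each photo's metadata and the hard-coded base addresses:

use strict;
use warnings;

# Sample values, invented for illustration only.
my $placemark_name        = 'New York City';
my $placemark_snippet     = 'A photo from Manhattan';
my $placemark_description = '<img src="http://example.com/thumbnails/thumb-1.jpg" />';
my $longitude             = -74.006393;
my $latitude              = 40.714172;

# Perl interpolates the variables into the heredoc, filling in the blanks.
# Note: no spaces within the coordinates tuple.
my $placemark = <<"END_PLACEMARK";
                <Placemark>
                    <name>$placemark_name</name>
                    <Snippet maxLines="1">
                        $placemark_snippet
                    </Snippet>
                    <description><![CDATA[
                        $placemark_description
                    ]]></description>
                    <Point>
                        <coordinates>$longitude,$latitude</coordinates>
                    </Point>
                </Placemark>
END_PLACEMARK

print $placemark;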
The blanks are all of the text and HTML values of the various elements and attributes. We will use variables as place-holders everywhere we need dynamic data, i.e. values that change from one file to the next or one execution of the script to the next. Take a look at the strings I've used in the block above: $folder_name, $folder_description, $placemark_name, etc. For those of you unfamiliar with Perl, these are all valid variable names, chosen to indicate where the variables slot into the structure. These are the same names used in the source file distributed with the tutorial.

Section 7: Introduction to the Code

At this point, having discussed every aspect of the project, we can succinctly describe how to write the code. We'll do this in three stages of increasing granularity. Firstly, we'll finish this tutorial with a natural language walk-through of the execution of the script. Secondly, if you look at the source file included with the project, you will notice immediately that comments dominate the code. Because instruction is as important an objective as the actual operation of the script, I use comments in the source to provide a running narration. For those of you who find this superabundant level of commenting a distraction, I'm distributing a second copy of the source file with most of the comments removed. Finally, there is the code itself. After all, source code is nothing more than a rigorously formal set of instructions that describe how to complete a task.

Most programs, including this one, are a matter of transforming input in one form to output in another. In this very ge
Read more


Working with the Report Builder in Microsoft SQL Server 2008: Part 2

Packt
22 Oct 2009
3 min read
Enabling and reviewing My Reports

As described in Part 1, the My Reports folder needs to be enabled in order to use the folder or display it in the Open Report dialogue. The RC0 version had a documentation bug which has since been rectified (https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=366413).

Getting ready

In order to enable the My Reports folder you need to carry out a few tasks. This will require authentication and working with SQL Server Management Studio. These tasks are listed here:

Make sure the Report Server has started.
Make sure you have adequate permissions to access the servers.
Open Microsoft SQL Server Management Studio as described previously.
Connect to the Reporting Services after making sure you have started the Reporting Services.
Right-click the Report Server node and select Properties.

The Server Properties window is displayed with a navigation list on the left consisting of the following:

General
Execution
History
Logging
Security
Advanced

In the General page, the name, version, edition, authentication mode, and URL of the Reporting Service are displayed. Download of an ActiveX Client Print control is enabled by default. In order to work with Report Builder effectively and provide a My Reports folder for each user, you need to place a check mark in the check box Enable a My Reports folder for each user. The My Reports feature is then turned on, as shown in the next screenshot.

In the Execution page there is a choice for report execution timeout, with the default set such that report execution expires after 1800 seconds.

In the History page there is a choice between keeping an unlimited number of snapshots in the report history (the default) or limiting the copies, allowing you to specify how many are to be kept.

In the Logging page, report execution logging is enabled and log entries older than 60 days are removed by default. This can be changed if desired.

In the Security page, both Windows integrated security for report data sources and ad hoc report executions are enabled by default.

The Advanced page shows several more items, including the ones described thus far, as shown in the next figure.

So, in the General page, enable the My Reports feature by placing a check mark. Then click on the Advanced list item on the left. The Advanced page is displayed as shown.

Now expand the Security node of Reporting Services and you will see that the My Reports role is present in the list of roles, as shown. This is also added to the ReportServer database. The description of everything that a user with the My Reports role assignment can do is as follows:

May publish reports and linked reports; manage folders, reports, and resources in a user's My Reports folder.

Now bring up Report Builder 2.0 by clicking Start | All Programs | Microsoft SQL Server 2008 Report Builder | Report Builder 2.0. Report Builder 2.0 is displayed. Click on Office Button | Open. The Open Report dialogue appears as shown. When the Report Server is offline, the default location is My Documents, as in Microsoft products like Excel and Access. Choose Recent Sites and Servers. The Report Server that is active should get displayed here as shown.

Highlight the server URL and click Open. All the folders and files on the server become accessible as shown.

Open the Report Manager by providing its URL address. Verify that a My Reports folder has been created for the user (the current user).
There could be slight differences in the look of the interface depending on whether you are using the RC0 or the final (RTM) version of SQL Server 2008 Enterprise edition.
Read more


Languages and Language Settings in Moodle

Packt
22 Oct 2009
7 min read
Language

The default Moodle installation includes many language packs. A language pack is a set of translations for the Moodle interface. Language packs translate the Moodle interface, not the course content. Here's the Front Page of a site when the user selects Spanish from the language menu:

Note that every aspect of the interface is presented in Spanish: menu names, menu items, section names, buttons, and system messages. Now, let's take a look at the same Front Page when the user selects Romanian from the language menu:

Note that much of the interface has not been translated. For example, the Site Administration menu and the section name for Site news are still in English. When a part of Moodle's interface is not translated into the selected language, Moodle uses the English version.

Language Files

When you install an additional language, Moodle places the language pack in its data directory under the subdirectory /lang. It creates a subdirectory for each language's files. The following screenshot shows the results of installing the International Spanish and Romanian languages:

For example, the subdirectory /lang/en_us_utf8 holds files for the U.S. English translation, and /lang/es_es_utf8 holds the files for traditional Spanish (Espanol / Espana). The name of the subdirectory is the 'language code'. Knowing this code will come in handy later. In the previous example, es_utf8 tells us that the language code for International Spanish is es.

Inside a language pack's directory, we see a list of files that contain the translations:

For example, the /lang/es_utf8/forum.php file holds text used on the forum pages. Let us suppose that we are creating a course for students. This file would include the text that is displayed to the course creator while creating the forum, and the text that is displayed to the students when they use the forum. Here are the first few lines from the English version of that file:

$string['addanewdiscussion'] = 'Add a new discussion topic';
$string['addanewtopic'] = 'Add a new topic';
$string['advancedsearch'] = 'Advanced search';

And here are the same first three lines from the Spanish version of that file:

$string['addanewdiscussion'] = 'Colocar un nuevo tema de discusión aquí';
$string['addanewtopic'] = 'Agregar un nuevo tema';
$string['advancedsearch'] = 'Búsqueda avanzada';

The biggest task in localizing Moodle consists of translating these language files into the appropriate languages. Some translations are surprisingly complete. For example, most of the interface has been translated into Irish Gaelic, even though this language is used by only about 350,000 people every day. The Romanian interface remains mostly untranslated, although Romania has a population of over 23 million. This means that if a Moodle user chooses the Romanian language (ro), most of the interface will still default to English.

Language Settings

You access the Language settings page from the Site Administration menu.

Default Language and Display Language Menu

The Default language setting specifies the language that users will see when they first encounter your site. If you also select Display language menu, users can change the language. Selecting this displays a language menu on your Front Page.

Languages on Language Menu and Cache Language Menu

The setting Languages on language menu enables you to specify the languages that users can pick from the language menu. There are directions for you to enter the 'language codes'. These codes are the names of the directories which hold the language packs.
In the subsection on Language Files on the previous page, you saw that the directory es_utf8 holds the language files for International Spanish. If you wanted to enter that language in the list, it would look like this: es_utf8. Leaving this field blank will enable your students to pick from all available languages. Entering the names of languages in this field limits the list to only those entered.

Sitewide Locale

Enter a language code into this field, and the system displays dates in the format appropriate to that language.

Excel Encoding

Most of the reports that Moodle generates can be downloaded as Excel files. User logs and grades are two examples. This setting lets you choose the encoding for those Excel files. Your choices are Unicode and Latin. The default is Unicode, because this character set includes many more characters than Latin. In many cases, Latin encoding doesn't offer enough characters to completely represent a non-English language.

Offering Courses in Multiple Languages

The settings on the Language settings page apply to the translation of the Moodle interface. They do not apply to course content. If you want to offer course content in multiple languages, you have several choices.

First, you could put all the different languages into each course. That is, each document would appear in a course in several languages. For example, if you offered a botany course in English and Spanish, you might have a document defining the different types of plants in both English and Spanish, side by side in the same course: Types of Plants or Tipos de Plantas. While taking the course, students would select the documents in their language. Course names would appear in only one language.

Second, you could create separate courses for each language, and offer them on the same site. Course names would appear in each language. In this case, students would select the course in English or Spanish: Basic Botany or Botánica Básica.

Third, you could create a separate Moodle site for each language, for example, http://moodle.williamrice.com/english and http://moodle.williamrice.com/spanish. At the Home Page of your site, students would select their language and would be directed to the correct Moodle installation. In this case, the entire Moodle site would appear in the student's language: the site name, the menus, the course names, and the course content.

These are things you should consider before installing Moodle.

Installing Additional Languages

To install additional languages, you must be connected to the Internet. Then, from the Site Administration menu, select Language | Language packs. The page displays a list of all available language packs:

This list is taken from Moodle's /install/lang directory. In that directory, you will find a folder for each language pack that can be installed. The folder contains a file called install.php. That file retrieves the most recent version of the language pack from the web and installs it. This is why Moodle must be connected to the web to use this feature. If Moodle is not connected, you will need to download the language pack and copy it into the /lang directory yourself.

If you don't see the language you want on the list of available language packs, either it's not available on the official Moodle site, or your list of available languages is out of date. Click to update this list. If the language still doesn't appear, it's not available from official sources.
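Incidentally, the $string arrays we saw in the language files are not read by course authors directly; Moodle code looks translations up by key at runtime. Here is a minimal, illustrative sketch of the idea, using Moodle's get_string() function with the forum example from earlier:

<?php
// Fetch interface text by key and component. Moodle resolves the key
// against the current user's language pack and falls back to English
// when no translation exists.
$label = get_string('addanewdiscussion', 'forum');

// Prints 'Add a new discussion topic' for English users, or
// 'Colocar un nuevo tema de discusión aquí' for Spanish users.
echo $label;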
Summary

In this article, we have seen how a Moodle website supports different languages and how to configure them. This feature can be particularly useful when designing courses for students who come from different ethnic backgrounds. Language support not only makes the website friendlier, but also makes it easier to browse.
Read more


Creating Accessible Tables in Joomla!

Packt
22 Oct 2009
5 min read
Creating Accessible Tables

Tables have a bad reputation in accessibility circles, because they were long used to create complex visual layouts. This was due to the limitations in support for presentational specifications like CSS, and using tables for layout was a hack (one that worked in the real world) when you wanted to position something in a precise part of the web page. Tables were designed to present data of all shapes and sizes, and that is really what they should be used for.

The Trouble with Tables

So what are tables like for screen reader users? Tables often contain a lot of information, so sighted users need to look at the information at the top of the table (the header info), and sometimes the first column in each row, to make sense of each data cell. Obviously this works for sighted users, but in order to make tables accessible to a screen reader user we need to find a way of associating the data in each cell with its correct header, so that the screen reader can inform the user which header relates to each data cell. Screen reader users can navigate between data cells easily using the cursor keys.

We will see how to make tables accessible in simple steps. There are methods of conveying the meaning and purpose of a table to the screen reader user by using the caption element and the summary attribute of the table element, on which you will find more in the next section. We will learn how to build a simple table using Joomla! and the features contained within the WYSIWYG editors that can make the table more accessible. Before we do that, though, I want you to ask yourself why you want to use tables (though sometimes it is unavoidable) and what form they should take.

Simple guidelines for tables:

Try to make the table as simple as possible. If possible don't span multiple cells, etc. The simpler the table, the easier it is to make accessible.
Try to include the data you want to present in the body text of your site.

Time for Action—Create an Accessible Table (Part 1)

In the following example we will build a simple table that will list the names of some artists, some albums they have recorded, and the year in which they recorded the albums. First of all, click the table icon in the TinyMCE interface and add a table with a suitable number of columns and rows. By clicking on the Advanced tab you will see the Summary field. The summary information is very important; it provides the screen reader user with a summary of the table. For example, I filled in the following text: "A list of some funk artists, my favorite among their records, and the year they recorded it in". My table then looked as follows:

What Just Happened?

There is still some work to be done in order to make the content more accessible. The controls that the WYSIWYG editor offers are also a little limited, so we will have to edit the HTML by hand. Adding the summary information is a very good start. The text that I entered, "A list of some funk artists, my favorite among their records, and the year they recorded it in.", will be read out by the screen reader as soon as the table receives focus from the user.

Time for Action—Create an Accessible Table (Part 2)

Next we are going to add a caption to the table, which will be helpful to both sighted and non-sighted users. This is how it's done. Firstly, select the top row of the table, as these items are the table heading. Then click on the Table Row Properties icon beside the Tables icon and select Table Head under General Properties.
Make sure that the Update current Row is selected in the dialogue box in the bottom left. You will apply these properties to your selected row. If you wish to add a caption to your table you need to add an extra row to the table and then select the contents of that row and add the Caption in the row properties dialogue box. This will tell the browser to display the caption text, in this case Funky Table Caption, else it will remain hidden. What Just Happened? By adding caption to the table, you provide useful information to the screen reader user. This caption should be informative and should describe something useful about the table. As the caption element is wrapped in a heading it is read out by the screen reader when the user starts exploring the table—so it is slightly different to the summary attribute, which is read out automatically. Does it Work? What we just did using the WYSIWYG editor, TinyMCE, is enough to make a good start towards creating a more accessible table, but we will have to work a little more in order to truly make the table accessible. So we will now edit the HTML. The good news is that you have made some good steps in the right direction and the final step is of associating the data cells with their suitable headers, as this is something that we cannot really do with the WYSIWYG editor alone, and is essential to make your tables truly accessible.
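As a minimal sketch of where the hand-editing is heading (the artist data is illustrative, not part of the example above), the finished markup might look like this; the scope attributes are what tie each data cell to its column or row header:

<table summary="A list of some funk artists, my favorite among their records, and the year they recorded it in.">
  <caption>Funky Table Caption</caption>
  <tr>
    <!-- scope="col" tells the screen reader these headers apply to the cells below them -->
    <th scope="col">Artist</th>
    <th scope="col">Album</th>
    <th scope="col">Year</th>
  </tr>
  <tr>
    <!-- scope="row" marks the artist name as the header for the rest of its row -->
    <th scope="row">James Brown</th>
    <td>The Payback</td>
    <td>1973</td>
  </tr>
</table>

With this in place, a screen reader arriving at the "1973" cell can announce the relevant headers ("James Brown, Year") rather than a bare number.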
Categories and Attributes in Magento: Part 1

Packt
22 Oct 2009
10 min read
Categories, Products, and Attributes

Products are the items that you sell. In Magento, Categories organize your Products, and Attributes describe them. Think of a Category as the place where a Product lives, and an Attribute as anything that describes a Product. Each of your Products can belong to one or more Categories, and each Product can be described by any number of Attributes.

Is it a Category or an Attribute?

Some things are clearly Categories. For example, if you have an electronics store, MP3 players would make a good Category. If you're selling jewellery, earrings would make a good Category. Other things are clearly Attributes: color, description, picture, and SKU number are almost always Attributes.

Sometimes, the same thing can be used as a Category or an Attribute. For example, suppose your site sells shoes. If you made size an Attribute, then after your shoppers have located a specific shoe, they can select the size they want. However, if you also made size a Category, the shoppers could begin their shopping by selecting their size, and then browse through the styles available in their size. So should size be an Attribute, a Category, or both? The answer depends upon what kind of shopping experience you want to create for your customers.

Examples

The hierarchy of Categories, Products, and Attributes looks like this:

Category 1
    Product 1
        Attribute 1
        Attribute 2
    Product 2
        Attribute 1
        Attribute 2
Category 2
    Product 3
        Attribute 1
        Attribute 3
    Product 4
        Attribute 1
        Attribute 3

We are building a site that sells gourmet coffee, so we might organize our store like this:

Single Origin
    Hawaiian Kona
        Grind (whole bean, drip, French press)
        Roast (light, medium, dark)
    Blue Mountain
        Grind
        Roast
Blends
    Breakfast Blend
        Grind
        Caffeine (regular, decaffeinated)
    Afternoon Cruise
        Grind
        Caffeine

In Magento, you can give your shoppers the ability to search your store. So if shoppers know that they want Blue Mountain coffee, they can use the Search function to find it in our store. However, customers who don't know exactly what they want will browse the store instead. They will often begin browsing by selecting a category. With the organization that we just saw, when customers browse our store they will start by selecting Single Origin or Blends. Then the shoppers will select the product they want: Hawaiian Kona, Blue Mountain, Breakfast Blend, or Afternoon Cruise.

After our shoppers decide upon a Product, they select Attributes for that product. In our store, shoppers can select Grind for any of the products. For Single Origin coffees, they can also select Roast; for Blends, they can select Caffeine. This gives you a clue about how Magento handles attributes: to each Product, you can apply as many, or as few, Attributes as you want.

Now that we have definitions for Category, Product, and Attribute, let's look at each of them in detail. Then we can start adding products.

Categories

Product Categories are important because they are the primary tool that your shoppers use to navigate your store. Product Categories organize your store for your shoppers. Categories can be organized into Parent Categories and Subcategories: to get to a Subcategory, you drill down through its Parent Category.

Categories and the Navigation Menu

If a Category is an Anchor Category, it appears on the Navigation Menu. The term "Anchor" makes the category sound as if it must be a top-level category. This is not true: you can designate any category as an Anchor Category. Doing so puts that category into the Navigation Menu.

When a shopper selects a normal Category from the Navigation Menu, its landing page and any subcategories are displayed. When a shopper selects an Anchor Category from the menu, Magento does not display the normal list of subcategories. Instead, it displays the Attributes of all the Products in that category and its subcategories. Rather than moving down into subcategories, the shopper uses the Attributes to filter all the Products in that Anchor Category and the Categories below it.

The Navigation Menu will not display if:

You don't create any Categories, or
You create Categories, but you don't make any of them Anchors, or
Your Anchor Categories are not subcategories under the Default Category.

The Navigation Menu will display only if:

You have created at least one Category
You have made at least one Category an Anchor
You have made the Anchor Category a Subcategory under Default.

When you first create your Magento site and add Products, you won't see those Products on your site until you've met all of the previous conditions. For this reason, I recommend that you create at least one Anchor Category before you start adding Products to your store. As you add each Product, add it to an Anchor Category. Then the Product will display in your store, and you can preview it. If the Anchor Category is not the one that you want for that Product, you can change the Product's Category later.

Before we add Products to our coffee store, we will create two Anchor Categories: Single Origin and Blends. As we add Products, we will assign them to a Category so that we can preview them in our storefront.

Making best use of Categories

There are three things that Categories can accomplish. They can:

Help the shoppers who know exactly what they want to find the product that they are looking for.
Help the shoppers who almost know what they want to find a product that matches their desires.
Entice the shoppers who have almost no idea of what they want to explore your store.

We would like to organize our store so that our Categories accomplish all of these goals. However, these goals are often mutually exclusive. For example, suppose you create an electronics store. In addition to many other products, your store sells MP3 players, including Apple iPods. A Category called iPods would help the shoppers who know that they want an iPod, as they can quickly find one. However, the iPods Category doesn't do much to help shoppers who know that they want an MP3 player, but don't know what kind.

On the Web, you usually search when you know what you want. When you're not sure what you want, you usually browse. In an online store, you usually begin browsing by selecting a Category. When you are creating Categories for your online store, try to make them helpful for shoppers who almost know what they want.

However, what if a high percentage of your shoppers are looking for a narrow category of products? Consider creating a top-level Category to make those products easily accessible. Again, suppose you have an electronics store that sells a wide variety of items. If a high percentage of your customers want iPods, it might be worthwhile to create a Category just for those few products. The logs from the Search function on your site are one way to determine whether your shoppers are interested in a narrow Category of Products. Are 30 percent of the searches on your site for left-handed fishing reels? If so, you might want to create a top-level Category just for those Products.

Attributes

An Attribute is a characteristic of a Product. Name, price, SKU, size, color, and manufacturer are all examples of Attributes.

System versus Simple Attributes

Notice that the first few examples (name, price, and SKU) are all required for a Product to function in Magento. Magento adds these Attributes to every product, and requires you to assign a value to each of them. These are called System Attributes. The other three examples (size, color, and manufacturer) are optional Attributes, created by the store owner. They are called Simple Attributes. When we discuss creating and assigning Attributes, we are almost always discussing Simple Attributes.

Attribute Sets

Notice that the Single Origin coffees have two Attributes, Grind and Roast, while the Blends have the Attributes Grind and Caffeine:

Single Origin
    Hawaiian Kona
        Grind (whole bean, drip, French press)
        Roast (light, medium, dark)
    Blue Mountain
        Grind
        Roast
Blends
    Breakfast Blend
        Grind
        Caffeine (regular, decaffeinated)
    Afternoon Cruise
        Grind
        Caffeine

In this example, the store owner created three Attributes: Grind, Roast, and Caffeine. Next, the store owner grouped the Attributes into two Attribute Sets: one set contains Grind and Roast, and the other contains Grind and Caffeine. Then an Attribute Set was applied to each Product.

Attributes are not applied directly to Products. They are first grouped into Attribute Sets, and then a set can be applied to a Product. This means that you will need to create a set for each different combination of Attributes in your store. You can name these sets after the Attributes they contain, such as Grind-Roast. Or you can name them after the type of Product that will use those Attributes, such as Single Origin Attributes.

If each Product in a group will use the same Attributes as every other Product in that group, then you can name the set after that group. For example, at this time, all Single Origin coffees have the same Attributes: Grind and Roast. If they will all have these two Attributes, and you will always add and remove Attributes to them as a group, then you could name the set Single Origin Attributes. If the Products in a group will likely use different Attributes, then name the sets after the Attributes. For example, if you expect that some Single Origin coffees will use the Attributes Grind and Roast, while others will use just Roast, it would not make sense to create a set called Single Origin Attributes. Instead, create a set called Grind-Roast, and another called Roast.

Three types of Products

In Magento, you can create three different types of Products: Simple, Configurable, and Grouped. The following is a very brief definition of each type.

Simple Product

A Simple Product is a single Product, with Attributes that the store owner chooses. As the saying goes, "What you see is what you get." The customer does not get to choose anything about the Product. In our coffee store, a good example of a Simple Product might be a drip coffee maker. It comes in only one color, and while the customer can buy drip coffee makers of various sizes (4 cups, 8 cups, 12 cups, and so on), each of those is a separate Product. A bad example of a Simple Product would be a type of coffee for which we want to allow the customer to choose the roast of our Hawaiian Kona coffee: light, medium, or dark. Because we want the customer to choose a value for an Attribute, that would not be a good Simple Product.

Configurable Product

A Configurable Product is a single Product with at least one Attribute that the customer gets to choose. There is a saying that goes, "Have it your way." The customer gets to choose something about the Product. A good example of a Configurable Product would be a type of coffee that comes in several different roasts: light, medium, and dark. Because we want the customer to choose the roast they want, that would be a good Configurable Product.

Grouped Product

A Grouped Product is several Simple Products that are displayed on the same page. You can force the customer to buy the group, or allow the customer to buy each Product separately.

The previous definitions are adequate for now. However, when you start creating Products, you will need to know more about each type of Product.
Testing a HELP System in a Java Application

Packt
22 Oct 2009
6 min read
Introduction

As more and more features get added to your software, the Help system for the software becomes extensive. It could easily contain hundreds of HTML files plus a similar number of images, and we could face the problems listed below:

There could be broken links in the help index.
Some files may not be listed in the index, and therefore cannot be found by the customers.
Some of the contextual help buttons could show the wrong topic.
Some of the HTML files could contain broken links or incorrect image tags.
Not all of the file titles may match their index entries.

Another problem can occur when the user does a free text search of the help system. The result of such a search is a list of files, each represented by its title. In our system, documents could have the title "untitled". In fact, the JavaHelp 2.0 System User's Guide contains the recommendation: "To avoid confusion, ensure that the <TITLE> tag corresponds to the title used in the table of contents."

Given that customers mostly turn to the Help system when they are already frustrated by our software, we should always see to it that such errors do not exist in our help system. To do this, we will write a tool, HelpGenerator, that generates some of the boilerplate XML in the help system and checks the HTML and index files for the problems listed above. We will also build tools for displaying and testing the contextual help. We've re-engineered and improved these tools and present them in this article.

In this article we are assuming familiarity with the JavaHelp system. Documentation and sample code for JavaHelp can be found at: http://java.sun.com/products/javahelp.

Overview

A JavaHelp package consists of:

A collection of HTML and image files containing the specific Help information to be displayed.

A file defining the index of the Help topics. Each index item in the file consists of the text of the index entry and a string representing the target of the HTML file to be displayed for that index entry, for example:

<index version="1.0">
    <indexitem text="This is an example topic." target="Topic">
        <indexitem text="This is an example sub-topic." target="SubTopic"/>
    </indexitem>
</index>

A file associating each target with its corresponding HTML file (or, more generally, a URL): the map file. Each map entry consists of the target name and the URL it is mapped to, for example:

<map version="1.0">
    <mapID target="Topic" url="Topic.html"/>
    <mapID target="SubTopic" url="SubTopic.html"/>
</map>

A HelpSet file (by default HelpSet.hs), which specifies the names of the index and map files and the folder containing the search database. (A sketch of such a file appears at the end of this article.)

Our software will normally have a main menu item to activate the Help and, in addition, buttons or menu items on specific dialogs to activate a Help page for a particular topic, that is, "context-sensitive" Help.

What Tests Do We Need?

At an overall structural level, we need to check:

For each target referred to in the index file, is there a corresponding entry in the map file? In the previous example, the index file refers to targets called Topic and SubTopic. Are there entries for these targets in the map file?
For each URL referred to in the map file, is that URL reachable? In the example above, do the files Topic.html and SubTopic.html exist?
Are there HTML files in our help package which are never referred to?
If a Help button or menu item on some dialog or window is activated, does the Help facility show the expected topic?
If the Help search facility has been activated, do the expected search results show? That is, has the search database been built from the latest versions of our Help pages?

At a lower level, we need to check the contents of each of the HTML files:

Do the image tags in the files really point to images in our help system?
Are there any broken links?

Finally, we need to check that the contents of the files and the indexes are consistent: does the title of each help page match its index entry?

To simplify these tests, we will follow a simple naming pattern. We adopt the convention that the name of each HTML file should be in CamelCase format (conventional Java class name format) plus the .html extension, and we use this name, without the extension, as the target name. For example, the target named SubTopic will correspond to the file SubTopic.html. Furthermore, we assume that there is a single Java package containing all the required help files, namely the HTML files, the image files, the index file, and the map file. Finally, we assume a fixed location for the Help search database.

With this convention, we can now write a program that:

Generates the list of available targets from the names of the HTML files.
Checks that this list is consistent with the targets referred to in the index file.
Checks that the index file is well-formed in that:
    It is a valid XML document.
    It has no blank index entries.
    It has no duplicate index entries.
    Each index entry refers to a unique target.
Generates the map file, thereby guaranteeing that it will be consistent with the index file and the HTML files.

The class HelpGenerator in the package jet.testtools.help does all this, and, if no inconsistencies are found, it generates the map file. If an inconsistency or other error is found, an assertion will be raised. HelpGenerator also performs the consistency checks at the level of individual HTML files. Let's look at some examples.

An HTML File That is Not Indexed

Here is a simple help system with just three HTML files:

The index file, HelpIndex.xml, only lists two of the HTML files:

<index version="1.0">
    <indexitem text="This is an example topic." target="ExampleTopic">
        <indexitem text="This is an example sub-topic." target="ExampleSubTopic"/>
    </indexitem>
</index>

When we run HelpGenerator over this system (we'll see how to do this later in this article), we get an assertion with the error message: The Help file: TopicWithoutTarget.html was not referenced in the Index file: HelpIndex.xml.
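To round out the picture, here is a sketch of what the HelpSet file mentioned earlier might look like. This follows the standard JavaHelp 2.0 helpset format, but the help title, the map file name (Map.jhm), and the search folder name are assumptions for illustration rather than values taken from the project above:

<?xml version='1.0' encoding='ISO-8859-1'?>
<!DOCTYPE helpset
  PUBLIC "-//Sun Microsystems Inc.//DTD JavaHelp HelpSet Version 2.0//EN"
         "http://java.sun.com/products/javahelp/helpset_2_0.dtd">
<helpset version="2.0">
    <!-- Title shown in the Help viewer; illustrative only -->
    <title>Example Application Help</title>
    <maps>
        <!-- The default topic, and the map file that HelpGenerator produces -->
        <homeID>ExampleTopic</homeID>
        <mapref location="Map.jhm"/>
    </maps>
    <!-- The index view reads the index file that HelpGenerator checks -->
    <view>
        <name>Index</name>
        <label>Index</label>
        <type>javax.help.IndexView</type>
        <data>HelpIndex.xml</data>
    </view>
    <!-- The search view points at the pre-built search database folder -->
    <view>
        <name>Search</name>
        <label>Search</label>
        <type>javax.help.SearchView</type>
        <data engine="com.sun.java.help.search.DefaultSearchEngine">JavaHelpSearch</data>
    </view>
</helpset>

The fixed location assumed for the search database corresponds to the folder named in the search view's data element.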
Adding Newsletters to a Web Site Using Drupal 6

Packt
22 Oct 2009
4 min read
Creating newsletters

A newsletter is a great way of keeping customers up to date without them needing to visit your web site. Customers appreciate well-designed newsletters because they allow the customer to keep tabs on their favorite places without needing to check every web site on a regular basis.

Creating a newsletter

Good Eatin' Goal: Create a new newsletter on the Good Eatin' site, which will contain relevant news about the restaurant and will be delivered quarterly to subscribers.

Additional modules needed: Simplenews (http://drupal.org/project/simplenews).

Basic steps

Newsletters are containers for individual issues. For example, you could have a newsletter called Seasonal Dining Guide, which would have four issues per year (Summer, Fall, Winter, and Spring). A customer subscribes to the newsletter, and each issue is sent to them as it becomes available.

Begin by installing and activating the Simplenews module, as shown below:

At this point, we only need to enable the Simplenews module; the Simplenews action module can be left disabled.

Next, select Content management and then Newsletters from the Administer menu. Drupal will display an administration area divided into the following sections:

a) Sent issues
b) Drafts
c) Newsletters
d) Subscriptions

Click on the Newsletters tab and Drupal will display a page similar to the following:

As you can see, a default newsletter with the name of our site has been automatically created for us. We can either edit this default newsletter or click on the Add newsletter link to create a new newsletter. Let's click the Add newsletter option to create our seasonal newsletter.

Drupal will display a standard form where we can enter the name, description, and relative importance (weight) of the newsletter. Click Save to save the newsletter; it will now appear in the list of available newsletters.

If you want the Sender information for the newsletter to use a name or email address other than your site's defaults, you can either expand the Sender information section when adding the newsletter, or click Edit newsletter and modify the Sender information, as shown in the following screenshot:

Allowing users to sign up for the newsletter

Good Eatin' Goal: Demonstrate how registered and unregistered users can sign up for a newsletter, and configure the registration process.

Additional modules needed: Simplenews (http://drupal.org/project/simplenews).

Basic steps

To allow customers to sign up for the newsletter, we will begin by adding a block to the page. Open the Block Manager by selecting Site building and then Blocks from the Administer menu. Add the block for the newsletter that you want to allow customers to subscribe to, as shown in the following screenshot:

We will now need to give users permission to subscribe to newsletters by selecting User management and then Permissions from the Administer menu. We will give all users permission to subscribe to newsletters and to view newsletter links, as shown below:

If the customer does not have permission to subscribe to newsletters, the block will appear as shown in the following screenshot:

However, if the customer has permission to subscribe to newsletters and is logged in to the site, the block will appear as shown in the following screenshot:

If the customer has permission to subscribe but is not logged in, the block will appear as follows:

To subscribe to the newsletter, the customer simply clicks on the Subscribe button. Once they have subscribed, the Subscribe button will change to Unsubscribe so that the user can easily opt out of the newsletter. If the user does not have an active account with the site, they will need to confirm that they want to subscribe to the site.
Joomla! Installation on a Virtual Server on the Net

Packt
22 Oct 2009
3 min read
In principle, the simplest approach that always works is the following:

Download the Joomla! 1.5 beta ZIP file onto your local PC and unpack it into a temporary directory.
Upload the unpacked files by means of FTP onto your rented server. The files must be installed in the publicly accessible directory; these directories are usually named htdocs, public_html, or simply html.
You can specify a subdirectory within the directory into which you install your Joomla!. Many web hosts allow you to link your rented domain name to a directory. This name is necessary to call your website from a browser.
You have to find out what your database is called. Usually one or several databases are included in your web-hosting package. Sometimes the user name, database name, and password are fixed; sometimes you have to set them up. There is usually a browser-based configuration interface at your disposal; you can see an example of such an interface in the following figure. You will need these access data for Joomla!'s web installer.

You can get going once you have loaded these files onto your server and are in possession of your access data.

Joomla! Installation

To install Joomla!, you need the source code. Download the Joomla_1.5.0-Beta-Full_Package.zip package and save it on your system.

Selecting a Directory for Installation

You have to decide whether Joomla! should be installed directly into a document directory or into a subdirectory. This is important, since many users prefer a short URL to their homepage.

An Example

If Joomla! is unzipped directly in /htdocs, the web page starts when the domain name http://www.myhomepage.com is accessed from its local computer (http://localhost/) and/or from the server on the Internet. If subdirectories are created under /htdocs/, for example /htdocs/Joomla150/, and we unzip the package there, we have to enter http://localhost/Joomla150/ in the browser. This isn't a problem locally, but doesn't look good on a production Internet page.

Some HTML files and subdirectories, however, are already in /htdocs in the local XAMPP Lite environment under Windows, which, for example, displays the start page of XAMPP Lite. In a local Linux environment, a starting page dependent on the distribution and the web server settings is also displayed.

Directory

I recommend that you create a subdirectory under the document directory named Joomla150 in Windows by using Windows Explorer. (With Linux, use the shell, KDE Konqueror, or Midnight Commander.)

[home]/htdocs/Joomla150/

The directory tree in Windows Explorer should look like this:

An empty index appears in the XAMPP Lite version when the URL http://localhost/Joomla150 is entered in the browser:

With Linux, or with another configuration, you may get a message saying that you don't have access to this directory. This depends on the configuration of the web server. For security reasons, the automatic directory display is often deactivated in Apache's configuration: a potential hacker could draw many interesting conclusions about the directory structure and the files on your homepage and target your computer for an attack. For the same reason, you are usually not allowed to access the appropriate configuration file of the Apache web server. Should you be able to, you should leave directory listings deactivated, or only activate them for those directories that contain files for downloading.
Content Modeling

Packt
22 Oct 2009
12 min read
Organizing content in a meaningful way is nothing new. We have been doing it for centuries in our libraries, the Dewey decimal system being a perfect example. So why can't we take known approaches and apply them to the Web? The main reason is that a web page has more than two dimensions. A page in a book might have footnotes or refer to other pages, but the content only appears in one place. On a web page, content can directly link to other content and even show a summary of it. It goes way beyond just the content that appears on the page: links, related content, reviews, ratings, and so on all bring extra dimensions to the core content of the page and how it is displayed. This is why it's so important to ensure your content model is sound. However, there is no such thing as the "right" content model. Each content model can only be judged on how well it achieves the goals of the website now and in the future.

The Purpose of a Content Model

The idea of a content model is new, but it has similarities to both a database design and an object model. The purpose of both of these is to provide a foundation for the logic of the operation. With a database design, we want to structure the data in a meaningful way to make storage and retrieval effective. With an object model, we define the objects and how they relate to each other so that accessing and managing objects is efficient and effective. The same applies to a content model: it's about structuring the content and the relationships between the classes to allow the content to be accessed and displayed easily.

The following diagram is a simple content model that shows the key content classes and how they relate to each other. In this diagram, we see that resources belong to a collection, which in turn belongs to a context. Also, a particular resource can belong to more than one collection.

As stated before, there is no such thing as the "right" model. What we are trying to achieve is the most "effective" model for the project at hand. This means coming up with a model that provides the most effective way of organizing content so that it can be easily displayed in the manner defined in the functional specification. The way a content model is defined will have an impact on how easy it is to code templates, how quickly the code will run, how easy it is for the editors to input content, and also how easy it is to change down the track.

From experience, rarely is a project completed and then never touched again. Usually there are changes, modifications, and updates down the track. If the model is well structured, these changes will be easy; if not, they can require a significant amount of work to implement. In some cases, the project has to be rebuilt entirely and content re-entered to achieve the goals of the client. This is why the model is so important. If done well, it means the client pays less and has a better-running solution. A poor model will take longer to implement, and changes will be more difficult to make.

What Makes a Good Model?

It's not easy to define exactly what makes a good model. Like any form of design, simplicity is the key: the more the elements, the more complex it gets. Ideally, a model should be technology independent, but there are certain ways in which eZ publish operates that can influence how we structure the content model.

Do we always need a content model? No, it depends on the scale of the project. Smaller projects don't really need a formal model. It's only when there are specific relationships between content classes that we need to go to the effort of creating one. For example, a basic website that has a number of sections, e.g., About Us, Services, Articles, Contact, doesn't need a model. There's no need for an underlying structure; it's just content added to sections. The in-built content classes in eZ publish will be enough to cater for that type of site. It's when the content itself has specific relationships, e.g., a book belongs to a category, or a product belongs to a product group, which belongs to a division of the business, that you need to create a model to capture the objects and the relationships between them.

To start with, we need to understand the content we are dealing with. The broad categories are existing/known content and new content. If we know the structure of the content we are dealing with and it already exists, this can help to shape the model. If we are dealing with content that doesn't exist yet (i.e., is to be written or created for this project), it's harder to know if we are on the right track.

For example, when dealing with products, generally the product data will already exist in a database or ERP system. This gives us a basis from which to work. We can establish the structure of the content and the relationships from the existing data. That doesn't mean that we simply copy what's there, but it can guide us in the right direction. Sometimes the structure of the data isn't effective for the way it's to be displayed on the Web, or it's missing elements. (As a typical example, in a recent project the product data was stored in three places: the core details were in the Point of Sale system, product details and categorization were in a spreadsheet, and the images were stored on a file system.)

So the first step is to get an understanding of all the content we are dealing with. If the content doesn't exist as yet, at least get some examples of what it is likely to be. Without knowing what you are dealing with, you can't be sure your model will accommodate everything. This means you'll need to allow for modifications down the track. Of course we want to minimize this, but it's not always possible. Clients change their minds, so the best we can do is hope that our model will accommodate what we think are the likely changes. This really can only be done through experience. There are patterns in content as well as in how it's displayed. Through these patterns, e.g., a related-content box on each page, we can try to foresee the way things might alter and build room for this into the model.

A good example: on a recent project, for each object there was the main content, but there were also a number of related objects (widgets) that were to be displayed in the right-hand column of the page. Initially, the content class defined the specific widgets to be associated with the object. The table below contains the details of a particular resource (as shown in the previous content model). It captures the details of the "research report" resource content class.

Attribute           Type                    Notes
Title               Text line
Short Title         Text line               If present, will be used in menus and URLs
Flash               Flash Navigator object
Hero Image          Image                   Displays if no Flash
Caption             Rich text
Body*               Rich text
Free Form Widgets   Related Objects         Select one or more
Multimedia Widget   Related Object          Select one

This would mean that when the editor added content, they would pick the free-form widgets and then the multimedia widget to be associated with the research report. Displaying the content would be straightforward, as from the parent object we would have the object IDs for each widget. The idea is sound but lacks flexibility. It would mean that the order in which the objects were added would dictate the order in which they were displayed. It also means that if the editor wants to add a different type of widget, they can't unless the model is changed, i.e., another attribute is added to the content class. We updated the content class as follows:

Attribute           Type                    Notes
Title*              Text line
Short Title         Text line               If present, will be used in menus and URLs
Flash               Flash Navigator object
Hero Image          Image                   Displays if no Flash
Caption             Rich text
Body*               Rich text
Widgets             Related Objects         Select one or more

This approach is less strict and provides more flexibility. The editor can choose any widget and also select the order. In terms of programming the template, there's the same amount of work. But if we decide to add another widget type down the track, there's no need to update the content class to accommodate it.

Does this mean that any time we have a related object we should use the latter approach? No. The reason we did it in this situation is that the content was still being written as we were creating the model, and there was a good chance that once the content was entered and the client saw the end result, they were going to ask something like "can we add widget x to the right-hand column of a context object?" In a different project, in which a particular widget should only be related to a particular content class, it's better to enforce the rule by only allowing that widget to be associated with that content class.

Defining a Content Model

The process of creating a content model requires a number of steps. It's not just a matter of analyzing the content; the modeler also needs to take into consideration the domain, users, groups, and the relationships between different classes within the model. To do this, we start with a walkthrough of the domain.

Step 1: Domain Walkthrough

The client and domain experts walk us through the entire project. This is a vital part of the process. We need to get an understanding of the entire system, not just the part that is captured in the final solution. The model that we end up creating may need to interact with other systems, and knowing what they are and how they work will inform the shape of the model. A good example is e-commerce systems: any information captured on a sale will eventually need to be entered into the existing financial system (whether automated or manual). Without an understanding of the bigger picture, we lack the understanding of how the solution we are creating will fit in with what the business does.

That's when there is an existing business process. Sometimes there is no business process and the client is making things up as they go along, e.g., they have decided to do online shopping, but they have never dealt with overseas orders, so they don't know how that will work and have no idea how they would deal with shipping costs.

One of the typical problems that will surface during the domain walkthrough is that the client will try to tell you how they want the solution to work. By doing this, they are actually defining the model and interactions. This is something to be wary of. It is unlikely that they would be aware of how best to structure a solution; what you want to be asking is what they currently do, what their current business process is. You want to deal with facts that are in existence so that you can decide how best to model the solution. To get the client back on track, ask questions like:

How do you currently do "it" (i.e., the business process)?
What information do you currently capture?
How do you capture that information?
What format is that information in?
How often is the information updated?
Who updates it?

This gives you a picture of what is currently happening. Then you can start to shape the model to ensure that you are dealing with the real world, not what the client thinks they want. Sometimes they won't be able to answer the question, and you'll have to get the right person from the business involved to get the answers you want. Sometimes you discover that what the client thought was happening is not really what happens.

Another benefit of this process is gaining a common understanding. If both you and the client are in the room when the process for calculating shipping costs is explained by the Shipping Manager, you'll both appreciate how complex the process is. If the client thinks it's easy, they won't expect it to cost much. If they are in the room when the Shipping Manager explains that there are five different shipping methods, and each has its own way of calculating the costs for a shipment based on its own set of international zones, you both know that modeling that part of the system is not going to be as straightforward as the client initially thought.

What this means is that the domain walkthrough gives you a sense of what's real, not what people think the situation is. It's the most important part of the process. Assuming that "shipping costs" are straightforward, so you don't need to worry about them, can be a disaster later down the track when you find out that's not the case.

Also, don't necessarily rely on requirements documents (unless you have written them yourself). A statement in a requirements document may not reflect what really happens; that's why you want to make sure you go through everything to confirm that you have all the facts. Sometimes a particular requirement is stated in the document, but when you go through it in more detail, ask a few questions, and pose a few scenarios, the client changes their mind about what it is that they really want, as they realize what they thought they wanted is going to be difficult or expensive to implement. Or you put an alternative approach to them, and they are happy to achieve the same result in a different manner that is easier to implement. This is a valuable way to work out what's real and what really matters.
Search Engine Optimization in Joomla!

Packt
22 Oct 2009
8 min read
What is SEO?

Search engine optimization, or SEO, refers to the process of preparing your website to be spidered, indexed, and ranked by the major search engines, so that when Internet users search for your keywords, your website appears on their results page. Proper search engine optimization is a crucial step to ensure success and should be undertaken with care and diligence. It should also be noted that SEO is an interdisciplinary concern, combining web design functions with marketing and promotional concerns. Aimed properly, SEO is a powerful weapon in your arsenal.

Proper SEO is:

Optimizing META data
Optimizing page titles
Optimizing page content
Selecting proper keywords
Testing your optimizations
Promoting link popularity
Using standards-compliant HTML
Optimizing image ALT tags
Using a logical website structure
Validating your content

Proper SEO isn't:

Keyword spamming
Hidden text
Cloaking content
Link-farming
Excessive content or site duplication
Paying for questionable links

Structural Optimization

Optimizing your site's actual structure and presentation is the most immediate approach to SEO. Since these factors are under the immediate control of the webmaster, they represent a foundational approach to the SEO problem. Once you've optimized your site's structural components, you can optimize the promotional aspects of SEO, which we'll discuss momentarily.

Items That Search Engines Look for in Your Site's Content

It's important to remember that today's search engine rankings are determined by highly sophisticated algorithms. Trying to stay one step ahead of the major engines with bad tactics is not only a very bad idea, but also a waste of time. Well-written content will win repeatedly, and giving the search engine robots a well-prepared site page goes a long way toward promoting your site. Three items that many search engine robots look for are:

Page titles relevant to your content
Relevant keywords and descriptions (META tags)
Relevant, keyword-rich content, presented in clean and valid HTML

Take note of the recurring theme: relevancy. If your site is relevant in terms of what the user is looking for, you will achieve respectable search engine rankings without any additional promotion. However, this is not the place to stop, as search engines correlate your site's standing among your peers and competitors by evaluating certain external factors.

External Views of Your Site by Search Engines

Search giant Google likes to describe its proprietary algorithm, known as PageRank™, by discussing how external factors can accurately define your site's relevancy when considered along with your site's actual content. Most search engines today follow this formula in determining link popularity. Some popular items that are used as measures are:

How many websites link to yours
Where they link in your content
What words are used in the actual link text (i.e., the description of the page)
The topical relevancy of the sites that link to your site

The power of web search lies in the search engine's ability to provide accurate and relevant results that someone can quickly use to find the information they seek. More importantly, the other end of the search process guarantees that the visitors we draw from search engines are truly after the information or services we provide. Put another way, without this relevance we deliver the right message, but to the wrong person. Thus we see that our interests, the interests of the search engines, and the interests of web surfers actually coincide! If we tune our content properly, and connect our content with similarly relevant content, we can expect to be rewarded with targeted traffic eager to devour our information and buy our services. If we try to deceive the search engines, or the people using them, we deceive ourselves. It's that simple.

Optimizing META Data

Metadata is data about data: the section where you define what a search engine should expect to find on your page. If you've never taken note of META tags before, take a brief tour of the Web and view the source code of several websites. You'll see how this data is organized, primarily into descriptions and keyword listings. (A combined sketch of title and META markup appears at the end of this article.)

Joomla! provides functionality for modifying and dynamically generating META tags, in the Site | Global Configuration | Metadata dialog, as well as within individual articles via the META tab on the right-hand panel. This is where the dynamic aspect of metadata becomes important: your main page will have certain needs for proper META optimization, and your individual Joomla! content articles will require special tuning to make the best of their potential. This is accomplished through keywords and phrases scattered throughout the text. Keep in mind that each search engine is different; however, keeping a ratio of about 3 to 1 between keywords in the content and META keyword entries, concentrated in the top third of the page, is a decent rule of thumb.

Using the Site | Global Configuration | Metadata dialog is pretty straightforward. You can enter descriptions, keywords, and key phrases that are pertinent to your site on a global level. You should select the META keywords based on the keywords appearing in your content with the greatest frequency. Be honest and use META keywords that actually appear in your content; search engines penalize you for overuse of keywords, known as keyword stuffing.

Title Optimization

What's in the actual title of your page? The keywords you insert into your site's and articles' titles play a huge role in successful search engine optimization. As with META tags, the key is to insert frequently used, but not stuffed, keywords into your title. This correlates the relevancy of the site's title (what we say about our site) with the metadata (how we describe what it's about) and the actual content, which is indisputably "what the website is about".

Content Optimization

Writing clear content that uses language pertinent to our intended message or service is the key to content optimization. In your content, include naturally written, keyword-rich text. This will tie into your META tags, title, description, and other portions of your site to help you achieve content relevance and thus higher search engine rankings. One note of caution: while we use our best keywords frequently within our text, we should not cram these words into our content. So don't be afraid to break out the thesaurus and include some alternative words and descriptions! Good content SEO is about achieving a balance between what the search engines see and what your readers expect on arrival.

Keyword Research and Optimization

Researching our keywords not only gives us an idea of how our competitors are optimizing their websites, but also gives us a treasure trove of alternative keywords that we can use to further optimize our own sites. There are several online tools that can give us an idea of which keywords are most typically searched for, and how end users phrase their searches. This provides a vital two-way pathway into the visitor's mind, showing not only how they reach the products and information they seek, but also how they perceive those items. You can find a listing of free keyword research tools at: http://www.joomlawarrior.com.

For our example, we'll use Google's freely available keyword suggestion tool for its AdWords program, and use Joomla! itself as our intended optimization candidate. See http://www.google.com/adwords for the keyword tool. The following example will demonstrate the AdWords tool and how it helps you determine good keywords for your site. Entering joomla into Google's keyword suggestion tool yields the following display:

The three key pieces of information, as seen in the previous figure, which help us in making a decision about keywords, are as follows:

Keywords: This column indicates the keyword whose Search Volume and Advertiser Competition we want to check.
Advertiser Competition: A graphical indicator of how many ads are in rotation for this keyword.
Search Volume: A graphical indication of how many people in the world are searching this keyword for a product or service.

As we see from the example, when we search for the keyword joomla we see a lower Advertiser Competition than content management system, but a higher Search Volume. If we then examine open source, we see heavy Advertiser Competition, but the same Search Volume as joomla. What this means is that if we advertise in the crowded keyword space "open source", we can expect a lot of competition. Changing our keyword to Joomla! would give us less competition and about the same Search Volume. If we advertise something related to Joomla!, that would be the best choice. However, if we were advertising a tool for open source, we would want to spend our money on the keyword "open source". The last takeaway is that if we are selling a Joomla! template, you can see from the figure that there isn't much competition (at the time this snapshot was taken), but a healthy amount of Search Volume.
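Putting the title and META advice together, here is a minimal sketch of an optimized page head. The site name, description text, and keyword list are invented for illustration; in Joomla!, the description and keywords would normally be generated from the Global Configuration | Metadata settings rather than typed by hand:

<head>
  <!-- The title carries our most frequently used, relevant keywords -->
  <title>Gourmet Coffee Blends and Single Origin Beans | Example Coffee Shop</title>
  <!-- The description should read naturally and echo words that actually appear on the page -->
  <meta name="description" content="Fresh-roasted gourmet coffee: single origin beans and house blends, available in any grind and roast." />
  <!-- A short, honest keyword list; stuffing this tag invites a keyword-stuffing penalty -->
  <meta name="keywords" content="gourmet coffee, single origin coffee, coffee blends, fresh roasted beans" />
</head>

Note how the same keywords appear in the title, the description, and (one assumes) the page copy itself; that three-way agreement is the relevancy the engines reward.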
ASP.NET Social Networks—Making Friends (Part 2)

Packt
22 Oct 2009
18 min read
Implementing the presentation layer

Now that we have the base framework in place, we can start to discuss what it will take to put it all together.

Searching for friends

Let's see what it takes to implement a search for friends.

SiteMaster

Let's begin with searching for friends. We haven't covered much of the actual UI so far, and nothing regarding the master page of this site. Put simply, we have added a text box and a button to the master page to take in a search phrase. When the button is clicked, this method in the MasterPage code-behind is fired:

protected void ibSearch_Click(object sender, EventArgs e)
{
    _redirector.GoToSearch(txtSearch.Text);
}

As you can see, it simply calls the Redirector class and routes the user to the Search.aspx page, passing in the value of txtSearch (as a query string parameter in this case).

public void GoToSearch(string SearchText)
{
    Redirect("~/Search.aspx?s=" + SearchText);
}

Search

The Search.aspx page has no interface of its own. It expects a value to be passed in from the previously discussed text box in the master page. With this text phrase we hit our AccountRepository and perform a search using the Contains() operator. The returned list of Accounts is then displayed on the page. For the most part, this page is all about MVP (Model View Presenter) plumbing. Here is the repeater that displays all our data:

<%@ Register Src="~/UserControls/ProfileDisplay.ascx"
    TagPrefix="Fisharoo" TagName="ProfileDisplay" %>
...
<asp:Repeater ID="repAccounts" runat="server"
    OnItemDataBound="repAccounts_ItemDataBound">
    <ItemTemplate>
        <Fisharoo:ProfileDisplay ShowDeleteButton="false"
            ID="pdProfileDisplay" runat="server">
        </Fisharoo:ProfileDisplay>
    </ItemTemplate>
</asp:Repeater>

The fun part in this case comes in the form of the ProfileDisplay user control, created so that we have an easy way to display profile data in various places with one chunk of reusable code that allows us to make global changes. A user control is like a small self-contained page that you can insert into your page (or master page). It has its own UI and its own code-behind (so make sure it also gets its own MVP plumbing!). Also, like a page, it is at the end of the day a simple object, which means that it can have properties, methods, and everything else that you might think to use.

Once you have defined a user control, you can use it in a few ways. You can programmatically load it using the LoadControl() method and then use it like you would use any other object in a page environment. Or, as we did here, you can add a page declaration that registers the control for use in that page. You will notice that we specified where the source for this control lives, and then gave it a tag prefix and a tag name (similar to using asp:Control). From that point onwards we can refer to our control in the same way that we would declare a TextBox! You should see that we have <Fisharoo:ProfileDisplay ... />. You will also notice that our tag has custom properties that are set in the tag definition. In this case you see ShowDeleteButton="false".
Here is the user control code in order of display, code behind, and the presenter:

//UserControls/ProfileDisplay.ascx
<%@ Import namespace="Fisharoo.FisharooCore.Core.Domain"%>
<%@ Control Language="C#" AutoEventWireup="true"
    CodeBehind="ProfileDisplay.ascx.cs"
    Inherits="Fisharoo.FisharooWeb.UserControls.ProfileDisplay" %>
<div style="float:left;">
    <div style="height:130px;float:left;">
        <a href="/Profiles/Profile.aspx?AccountID=<asp:Literal id='litAccountID' runat='server'></asp:Literal>">
            <asp:Image style="padding:5px;width:100px;height:100px;"
                ImageAlign="Left" Width="100" Height="100" ID="imgAvatar"
                ImageUrl="~/images/ProfileAvatar/ProfileImage.aspx"
                runat="server" /></a>
        <asp:ImageButton ImageAlign="AbsMiddle" ID="ibInviteFriend"
            runat="server" Text="Become Friends"
            OnClick="lbInviteFriend_Click"
            ImageUrl="~/images/icon_friends.gif"></asp:ImageButton>
        <asp:ImageButton ImageAlign="AbsMiddle" ID="ibDelete" runat="server"
            OnClick="ibDelete_Click"
            ImageUrl="~/images/icon_close.gif" /><br />
        <asp:Label ID="lblUsername" runat="server"></asp:Label><br />
        <asp:Label ID="lblFirstName" runat="server"></asp:Label>
        <asp:Label ID="lblLastName" runat="server"></asp:Label><br />
        Since: <asp:Label ID="lblCreateDate" runat="server"></asp:Label><br />
        <asp:Label ID="lblFriendID" runat="server" Visible="false"></asp:Label>
    </div>
</div>

//UserControls/ProfileDisplay.ascx.cs
using System;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;
using Fisharoo.FisharooCore.Core.Domain;
using Fisharoo.FisharooWeb.UserControls.Interfaces;
using Fisharoo.FisharooWeb.UserControls.Presenters;

namespace Fisharoo.FisharooWeb.UserControls
{
    public partial class ProfileDisplay : System.Web.UI.UserControl, IProfileDisplay
    {
        private ProfileDisplayPresenter _presenter;
        protected Account _account;

        protected void Page_Load(object sender, EventArgs e)
        {
            _presenter = new ProfileDisplayPresenter();
            _presenter.Init(this);
            // Ask the user to confirm before a friend is deleted
            ibDelete.Attributes.Add("onclick",
                "javascript:return confirm('Are you sure you want to delete this friend?')");
        }

        public bool ShowDeleteButton
        {
            set { ibDelete.Visible = value; }
        }

        public bool ShowFriendRequestButton
        {
            set { ibInviteFriend.Visible = value; }
        }

        public void LoadDisplay(Account account)
        {
            _account = account;
            ibInviteFriend.Attributes.Add("FriendsID", _account.AccountID.ToString());
            ibDelete.Attributes.Add("FriendsID", _account.AccountID.ToString());
            litAccountID.Text = account.AccountID.ToString();
            lblLastName.Text = account.LastName;
            lblFirstName.Text = account.FirstName;
            lblCreateDate.Text = account.CreateDate.ToString();
            imgAvatar.ImageUrl += "?AccountID=" + account.AccountID.ToString();
            lblUsername.Text = account.Username;
            lblFriendID.Text = account.AccountID.ToString();
        }

        protected void lbInviteFriend_Click(object sender, EventArgs e)
        {
            _presenter = new ProfileDisplayPresenter();
            _presenter.Init(this);
            _presenter.SendFriendRequest(Convert.ToInt32(lblFriendID.Text));
        }

        protected void ibDelete_Click(object sender, EventArgs e)
        {
            _presenter = new ProfileDisplayPresenter();
            _presenter.Init(this);
            _presenter.DeleteFriend(Convert.ToInt32(lblFriendID.Text));
        }
    }
}

//UserControls/Presenter/ProfileDisplayPresenter.cs
using System;
using System.Web;
using Fisharoo.FisharooCore.Core;
using Fisharoo.FisharooCore.Core.DataAccess;
using Fisharoo.FisharooWeb.UserControls.Interfaces;
using StructureMap;

namespace Fisharoo.FisharooWeb.UserControls.Presenters
{
    public class ProfileDisplayPresenter
    {
        private IProfileDisplay _view;
        private IRedirector _redirector;
        private IFriendRepository _friendRepository;
        private IUserSession _userSession;

        public ProfileDisplayPresenter()
        {
            _redirector = ObjectFactory.GetInstance<IRedirector>();
            _friendRepository = ObjectFactory.GetInstance<IFriendRepository>();
            _userSession = ObjectFactory.GetInstance<IUserSession>();
        }

        public void Init(IProfileDisplay view)
        {
            _view = view;
        }

        public void SendFriendRequest(Int32 AccountIdToInvite)
        {
            _redirector.GoToFriendsInviteFriends(AccountIdToInvite);
        }

        public void DeleteFriend(Int32 FriendID)
        {
            if (_userSession.CurrentUser != null)
            {
                _friendRepository.DeleteFriendByID(
                    _userSession.CurrentUser.AccountID, FriendID);
                HttpContext.Current.Response.Redirect(
                    HttpContext.Current.Request.RawUrl);
            }
        }
    }
}

All this logic and display is very standard; the MVP plumbing makes up most of it. Outside of that, you will notice that the ProfileDisplay control has a LoadDisplay() method responsible for loading the UI for that control. In the Search page this is done in the repAccounts_ItemDataBound() method:

protected void repAccounts_ItemDataBound(object sender, RepeaterItemEventArgs e)
{
    if (e.Item.ItemType == ListItemType.Item ||
        e.Item.ItemType == ListItemType.AlternatingItem)
    {
        ProfileDisplay pd = e.Item.FindControl("pdProfileDisplay") as ProfileDisplay;
        pd.LoadDisplay((Account)e.Item.DataItem);
        // Anonymous visitors cannot send friend requests
        if (_webContext.CurrentUser == null)
            pd.ShowFriendRequestButton = false;
    }
}

The ProfileDisplay control also has a couple of properties: one to show/hide the Delete friend button and the other to show/hide the Invite friend button. These buttons are not appropriate for every page that the control is used in. In the search results page we want to hide the Delete button, as the results are not necessarily friends; we would want to be able to invite them in that view. However, in a list of our friends the Invite button (to invite a friend) would no longer be appropriate, as each of these users would already be a friend. The Delete button in this case would now be more appropriate. Clicking on the Invite button makes a call to the Redirector class and routes the user to the InviteFriends page:

//UserControls/ProfileDisplay.ascx.cs
public void SendFriendRequest(Int32 AccountIdToInvite)
{
    _redirector.GoToFriendsInviteFriends(AccountIdToInvite);
}

//Core/Impl/Redirector.cs
public void GoToFriendsInviteFriends(Int32 AccoundIdToInvite)
{
    Redirect("~/Friends/InviteFriends.aspx?AccountIdToInvite=" +
        AccoundIdToInvite.ToString());
}
Inviting your friends

This page allows us to manually enter email addresses of friends whom we want to invite. It is a standard From/To/Message format: the system specifies the sender (you), and you specify the recipients and the message that you want to send.

//Friends/InviteFriends.aspx
<%@ Page Language="C#" MasterPageFile="~/SiteMaster.Master" AutoEventWireup="true" CodeBehind="InviteFriends.aspx.cs" Inherits="Fisharoo.FisharooWeb.Friends.InviteFriends" %>
<asp:Content ContentPlaceHolderID="Content" runat="server">
    <div class="divContainer">
        <div class="divContainerBox">
            <div class="divContainerTitle">Invite Your Friends</div>
            <asp:Panel ID="pnlInvite" runat="server">
                <div class="divContainerRow">
                    <div class="divContainerCellHeader">From:</div>
                    <div class="divContainerCell"><asp:Label ID="lblFrom" runat="server"></asp:Label></div>
                </div>
                <div class="divContainerRow">
                    <div class="divContainerCellHeader">To:<br /><div class="divContainerHelpText">(use commas to<BR />separate emails)</div></div>
                    <div class="divContainerCell"><asp:TextBox ID="txtTo" runat="server" TextMode="MultiLine" Columns="40" Rows="5"></asp:TextBox></div>
                </div>
                <div class="divContainerRow">
                    <div class="divContainerCellHeader">Message:</div>
                    <div class="divContainerCell"><asp:TextBox ID="txtMessage" runat="server" TextMode="MultiLine" Columns="40" Rows="10"></asp:TextBox></div>
                </div>
                <div class="divContainerFooter">
                    <asp:Button ID="btnInvite" runat="server" Text="Invite" OnClick="btnInvite_Click" />
                </div>
            </asp:Panel>
            <div class="divContainerRow">
                <div class="divContainerCell"><br /><asp:Label ID="lblMessage" runat="server"></asp:Label><br /><br /></div>
            </div>
        </div>
    </div>
</asp:Content>

Running the code displays the From/To/Message invitation form. This is a simple page, so the majority of the code for it is MVP plumbing. The most important part to notice here is that when the Invite button is clicked, the presenter is notified to send the invitation.

//Friends/InviteFriends.aspx.cs
using System;
using System.Collections;
using System.Configuration;
using System.Data;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using Fisharoo.FisharooWeb.Friends.Interface;
using Fisharoo.FisharooWeb.Friends.Presenter;

namespace Fisharoo.FisharooWeb.Friends
{
    public partial class InviteFriends : System.Web.UI.Page, IInviteFriends
    {
        private InviteFriendsPresenter _presenter;

        protected void Page_Load(object sender, EventArgs e)
        {
            _presenter = new InviteFriendsPresenter();
            _presenter.Init(this);
        }

        protected void btnInvite_Click(object sender, EventArgs e)
        {
            _presenter.SendInvitation(txtTo.Text, txtMessage.Text);
        }

        public void DisplayToData(string To)
        {
            lblFrom.Text = To;
        }

        public void TogglePnlInvite(bool IsVisible)
        {
            pnlInvite.Visible = IsVisible;
        }

        public void ShowMessage(string Message)
        {
            lblMessage.Text = Message;
        }

        public void ResetUI()
        {
            txtMessage.Text = "";
            txtTo.Text = "";
        }
    }
}

Once this call is made we leap across to the presenter (more plumbing!).
//Friends/Presenter/InviteFriendsPresenter.cs
using System;
using System.Data;
using System.Configuration;
using System.Linq;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.HtmlControls;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Xml.Linq;
using Fisharoo.FisharooCore.Core;
using Fisharoo.FisharooCore.Core.DataAccess;
using Fisharoo.FisharooCore.Core.Domain;
using Fisharoo.FisharooWeb.Friends.Interface;
using StructureMap;

namespace Fisharoo.FisharooWeb.Friends.Presenter
{
    public class InviteFriendsPresenter
    {
        private IInviteFriends _view;
        private IUserSession _userSession;
        private IEmail _email;
        private IFriendInvitationRepository _friendInvitationRepository;
        private IAccountRepository _accountRepository;
        private IWebContext _webContext;
        private Account _account;
        private Account _accountToInvite;

        public void Init(IInviteFriends view)
        {
            _view = view;
            _userSession = ObjectFactory.GetInstance<IUserSession>();
            _email = ObjectFactory.GetInstance<IEmail>();
            _friendInvitationRepository = ObjectFactory.GetInstance<IFriendInvitationRepository>();
            _accountRepository = ObjectFactory.GetInstance<IAccountRepository>();
            _webContext = ObjectFactory.GetInstance<IWebContext>();
            _account = _userSession.CurrentUser;
            if (_account != null)
            {
                _view.DisplayToData(_account.FirstName + " " + _account.LastName + " &lt;" + _account.Email + "&gt;");
                if (_webContext.AccoundIdToInvite > 0)
                {
                    _accountToInvite = _accountRepository.GetAccountByID(_webContext.AccoundIdToInvite);
                    if (_accountToInvite != null)
                    {
                        SendInvitation(_accountToInvite.Email, _account.FirstName + " " + _account.LastName + " would like to be your friend!");
                        _view.ShowMessage(_accountToInvite.Username + " has been sent a friend request!");
                        _view.TogglePnlInvite(false);
                    }
                }
            }
        }

        public void SendInvitation(string ToEmailArray, string Message)
        {
            string resultMessage = "Invitations sent to the following recipients:<BR>";
            resultMessage += _email.SendInvitations(_userSession.CurrentUser, ToEmailArray, Message);
            _view.ShowMessage(resultMessage);
            _view.ResetUI();
        }
    }
}

The interesting thing here is the SendInvitation() method, which takes in a comma-delimited array of emails and the message to be sent in the invitation. It then makes a call to the Email.SendInvitations() method.

//Core/Impl/Email.cs
public string SendInvitations(Account sender, string ToEmailArray, string Message)
{
    string resultMessage = Message;
    foreach (string s in ToEmailArray.Split(','))
    {
        FriendInvitation friendInvitation = new FriendInvitation();
        friendInvitation.AccountID = sender.AccountID;
        friendInvitation.Email = s;
        friendInvitation.GUID = Guid.NewGuid();
        friendInvitation.BecameAccountID = 0;
        _friendInvitationRepository.SaveFriendInvitation(friendInvitation);

        //add alert to existing users alerts
        Account account = _accountRepository.GetAccountByEmail(s);
        if(account != null)
        {
            _alertService.AddFriendRequestAlert(_userSession.CurrentUser, account, friendInvitation.GUID, Message);
        }

        //TODO: MESSAGING - if this email is already in our system add a message through messaging system
        //if(email in system)
        //{
        //    add message to messaging system
        //}
        //else
        //{
        // send email
        SendFriendInvitation(s, sender.FirstName, sender.LastName, friendInvitation.GUID.ToString(), Message);
        //}

        resultMessage += "• " + s + "<BR>";
    }
    return resultMessage;
}

This method is responsible for parsing out all the emails, creating a new FriendInvitation for each one, and sending the request via email to the person who was invited.
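The FriendInvitation domain class is not shown in this excerpt. Judging by the properties assigned in SendInvitations(), its shape is roughly the following (an inferred sketch; the real class presumably also carries an identity column and any audit fields):

//Core/Domain/FriendInvitation.cs (inferred sketch)
using System;

namespace Fisharoo.FisharooCore.Core.Domain
{
    public class FriendInvitation
    {
        public Int32 AccountID { get; set; }        // who sent the invitation
        public string Email { get; set; }           // the address it was sent to
        public Guid GUID { get; set; }              // token embedded in the invitation link
        public Int32 BecameAccountID { get; set; }  // set once the invitee registers
    }
}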
SendInvitations() then adds an alert for the invited user if they already have an Account and, finally, once the messaging system is built, it will also drop a notification into that system (note the TODO in the code above).

Outlook CSV importer

The Import Contacts page is responsible for allowing our users to upload an exported contacts file from MS Outlook into our system. Once they have imported their contacts, the user is allowed to select which email addresses are actually invited into our system.

Importing contacts

As this page is made up of a couple of views, let's begin with the initial view.

//Friends/OutlookCsvImporter.aspx
<asp:Panel ID="pnlUpload" runat="server">
    <div class="divContainerTitle">Import Contacts</div>
    <div class="divContainerRow">
        <div class="divContainerCellHeader">Contacts File:</div>
        <div class="divContainerCell"><asp:FileUpload ID="fuContacts" runat="server" /></div>
    </div>
    <div class="divContainerRow">
        <div class="divContainerFooter"><asp:Button ID="btnUpload" Text="Upload & Preview Contacts" runat="server" OnClick="btnUpload_Click" /></div>
    </div>
    <br /><br />
    <div class="divContainerRow">
        <div class="divContainerTitle">How do I export my contacts from Outlook?</div>
        <div class="divContainerCell">
            <ol>
                <li>Open Outlook</li>
                <li>In the File menu choose Import and Export</li>
                <li>Choose export to a file and click next</li>
                <li>Choose comma separated values and click next</li>
                <li>Select your contacts and click next</li>
                <li>Browse to the location you want to save your contacts file</li>
                <li>Click finish</li>
            </ol>
        </div>
    </div>
</asp:Panel>

As you can see from the code, we are working in panels here. This panel is responsible for allowing a user to upload their contacts CSV file, and it also gives the user some directions on how to export contacts from Outlook. The view has a file upload box that allows the user to browse for their CSV file, and a button to tell us when they are ready for the upload. There is a method in our presenter that handles the button click from the view.

//Friends/Presenter/OutlookCsvImporterPresenter.cs
public void ParseEmails(HttpPostedFile file)
{
    using (Stream s = file.InputStream)
    {
        StreamReader sr = new StreamReader(s);
        string contacts = sr.ReadToEnd();
        _view.ShowParsedEmail(_email.ParseEmailsFromText(contacts));
    }
}

This method is responsible for handling the upload process of the HttpPostedFile. It puts the file reference into a StreamReader and then reads the stream into a string variable named contacts. Once we have the entire list of contacts, we can call into our Email class and parse all the emails out.

//Core/Impl/Email.cs
public List<string> ParseEmailsFromText(string text)
{
    List<string> emails = new List<string>();
    string strRegex = @"\w+([-+.]\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*";
    Regex re = new Regex(strRegex, RegexOptions.Multiline);
    foreach (Match m in re.Matches(text))
    {
        string email = m.ToString();
        if(!emails.Contains(email))
            emails.Add(email);
    }
    return emails;
}

This method expects a string that contains some email addresses that we want to parse. It then extracts the emails using a regular expression (which we won't go into the details of!). We iterate through all the matches in the Regex and add the found addresses to our list, provided they aren't already present. Once we have found all the email addresses, we return the list of unique addresses. The presenter then passes that list of parsed emails to the view.
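To see the de-duplication in action, here is a small stand-alone illustration of the same regular expression (the sample input is invented):

using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;

class ParseDemo
{
    static void Main()
    {
        // The duplicate bob@example.com should be reported only once
        string text = "Bob <bob@example.com>, sue@example.org;bob@example.com";
        string strRegex = @"\w+([-+.]\w+)*@\w+([-.]\w+)*\.\w+([-.]\w+)*";
        List<string> emails = new List<string>();
        foreach (Match m in new Regex(strRegex, RegexOptions.Multiline).Matches(text))
        {
            if (!emails.Contains(m.ToString()))
                emails.Add(m.ToString());
        }
        // Prints two addresses: bob@example.com and sue@example.org
        foreach (string email in emails)
            Console.WriteLine(email);
    }
}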
Selecting contacts

Once we have handled the upload process and parsed out the emails, we need to display all the emails to the user so that they can select which ones they want to invite. Now, you could do several sneaky things here. Technically, the user has uploaded all of their email addresses to you. You have them. You could store them. You could invite every single address regardless of what the user wants. And while this might benefit your community over the short run, your users would eventually find out about your sneaky practice and your community would start to dwindle. Don't take advantage of your users' trust!

//Friends/OutlookCsvImporter.aspx
<asp:Panel Visible="false" ID="pnlEmails" runat="server">
    <div class="divContainerTitle">Select Contacts</div>
    <div class="divContainerFooter"><asp:Button ID="btnInviteContacts1" runat="server" OnClick="btnInviteContacts_Click" Text="Invite Selected Contacts" /></div>
    <div class="divContainerCell" style="text-align:left;">
        <asp:CheckBoxList ID="cblEmails" RepeatColumns="2" runat="server"></asp:CheckBoxList>
    </div>
    <div class="divContainerFooter"><asp:Button ID="btnInviteContacts2" runat="server" OnClick="btnInviteContacts_Click" Text="Invite Selected Contacts" /></div>
</asp:Panel>

Notice that we have a checkbox list in our panel. This checkbox list is bound to the returned list of email addresses.

public void ShowParsedEmail(List<string> Emails)
{
    pnlUpload.Visible = false;
    pnlResult.Visible = false;
    pnlEmails.Visible = true;
    cblEmails.DataSource = Emails;
    cblEmails.DataBind();
}

At this point the user sees the uploaded addresses laid out in a two-column checkbox list. They can go through, selecting the ones they want to invite. Once they are through selecting, they can click on the Invite button. We then iterate through all the items in the checkbox list to locate the selected items.

protected void btnInviteContacts_Click(object sender, EventArgs e)
{
    string emails = "";
    foreach (ListItem li in cblEmails.Items)
    {
        if(li != null && li.Selected)
            emails += li.Text + ",";
    }
    emails = emails.Substring(0, emails.Length - 1);
    _presenter.InviteContacts(emails);
}

Once we have gathered all the selected emails, we pass them to the presenter to run the invitation process.

public void InviteContacts(string ToEmailArray)
{
    string result = _email.SendInvitations(_userSession.CurrentUser, ToEmailArray, "");
    _view.ShowInvitationResult(result);
}

The presenter promptly passes the selected items to the Email class to handle the invitations. This is the same method that we used in the last section to invite users.

//Core/Impl/Email.cs
public string SendInvitations(Account sender, string ToEmailArray, string Message)
{
    ...
}

We then output the result — the emails that were invited — in the third display.

<asp:Panel ID="pnlResult" runat="server" Visible="false">
    <div class="divContainerTitle">Invitations Sent!</div>
    <div class="divContainerCell">
        Invitations were sent to the following emails:<br />
        <asp:Label ID="lblMessage" runat="server"></asp:Label>
    </div>
</asp:Panel>
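One small robustness note on the btnInviteContacts_Click handler above: if the user clicks Invite without selecting anything, the Substring() call will throw. A defensive variant — my suggestion, not the book's code, meant to drop into the same code-behind — might look like this:

protected void btnInviteContacts_Click(object sender, EventArgs e)
{
    // Collect only the checked addresses
    List<string> selected = new List<string>();
    foreach (ListItem li in cblEmails.Items)
    {
        if (li != null && li.Selected)
            selected.Add(li.Text);
    }
    if (selected.Count == 0)
        return; // nothing to invite; we could show a validation message instead
    _presenter.InviteContacts(String.Join(",", selected.ToArray()));
}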
PHP Data Objects: Error Handling

In this article, we will extend our application so that we can edit existing records as well as add new records. As we will deal with user input supplied via web forms, we have to take care of its validation. We will also add error handling so that we can react to non-standard situations and present the user with a friendly message.

Before we proceed, let's briefly examine the sources of errors mentioned above and see what error handling strategy should be applied in each case. Our error handling strategy will use exceptions, so you should be familiar with them. If you are not, you can refer to Appendix A, which will introduce you to the new object-oriented features of PHP5. We have consciously chosen to use exceptions, even though PDO can be instructed not to use them, because there is one situation where they cannot be avoided: the PDO constructors always throw an exception when the database object cannot be created, so we may as well use exceptions as our main error-trapping method throughout the code.

Sources of Errors

To create an error handling strategy, we should first analyze where errors can happen. Errors can happen on every call to the database, and although this is rather unlikely, we will look at this scenario. But before doing so, let's check each of the possible error sources and define a strategy for dealing with them.

Inability to Connect to the Database Server

This can happen on a really busy server, which cannot handle any more incoming connections. For example, there may be a lengthy update running in the background. The outcome is that we are unable to get any data from the database, so we should do the following: if the PDO constructor fails, we present a page displaying a message which says that the user's request could not be fulfilled at this time and that they should try again later. Of course, we should also log this error, because it may require immediate attention. (A good idea would be emailing the database administrator about the error.) The problem with this error is that, while it usually manifests itself before a connection is established with the database (in a call to the PDO constructor), there is a small risk that it can happen after the connection has been established (on a call to a method of the PDO or PDOStatement object while the database server is being shut down). In this case, our reaction will be the same — present the user with an error message asking them to try again later.

Improper Configuration of the Application

This error can only occur when we move the application across servers where database access details differ; for example, when we are uploading from a development server to a production server where the database setups differ. This is not an error that can happen during normal execution of the application, but care should be taken while uploading, as this may interrupt the site's operation. If this error occurs, we can display another error message like: "This site is under maintenance". In this scenario, the site maintainer should react immediately, as without correcting the connection string, the application cannot operate normally.

Improper Validation of User Input

This is an error which is closely related to the SQL injection vulnerability. Every developer of database-driven applications must undertake proper measures to validate and filter all user inputs. This error may lead to two major consequences: either the query will fail due to malformed SQL (so that nothing particularly bad happens), or an SQL injection may occur and application security may be compromised.
While their consequences differ, both of these problems can be prevented in the same way. Let's consider the following scenario. We accept some numeric value from a form and insert it into the database. To keep our example simple, assume that we want to update a book's year of publication. To achieve this, we can create a form that has two fields: a hidden field containing the book's ID, and a text field to enter the year. We will skip the implementation details here, and see how using a poorly designed script to process this form could lead to errors and put the whole system at risk.

The form processing script will examine two request variables: $_REQUEST['book'], which holds the book's ID, and $_REQUEST['year'], which holds the year of publication. If there is no validation of these values, the final code will look similar to this:

$book = $_REQUEST['book'];
$year = $_REQUEST['year'];
$sql = "UPDATE books SET year=$year WHERE id=$book";
$conn->query($sql);

Let's see what happens if the user leaves the year field empty. The final SQL would then look like:

UPDATE books SET year= WHERE id=1;

This SQL is malformed and will lead to a syntax error. Therefore, we should ensure that both variables are holding numeric values. If they don't, we should redisplay the form with an error message.

Now, let's see how an attacker might exploit this to delete the contents of the entire table. To achieve this, they could just enter the following into the year field:

2007; DELETE FROM books;

This turns a single query into three queries:

UPDATE books SET year=2007; DELETE FROM books; WHERE id=1;

Of course, the third query is malformed, but the first and second will execute, and the database server will report an error. To counter this problem, we could use simple validation to ensure that the year field contains four digits. However, if we have text fields, which can contain arbitrary characters, the field's values must be escaped prior to creating the SQL.
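As an aside, PDO's prepared statements take care of this quoting for you. A hedged sketch of the same update rewritten with bound parameters (it assumes the $conn PDO object used throughout this article):

<?php
// Validate first: both values must be numeric, otherwise redisplay the form
$book = $_REQUEST['book'];
$year = $_REQUEST['year'];
if (!ctype_digit($book) || !ctype_digit($year)) {
    // redisplay the form with an error message
}
else {
    // Placeholders keep user input out of the SQL text entirely
    $stmt = $conn->prepare('UPDATE books SET year=? WHERE id=?');
    $stmt->execute(array($year, $book));
}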
Inserting a Record with a Duplicate Primary Key or Unique Index Value

This problem may happen when the application is inserting a record with duplicate values for the primary key or a unique index. For example, in our database of authors and books, we might want to prevent the user from entering the same book twice by mistake. To do this, we can create a unique index on the ISBN column of the books table. As every book has a unique ISBN, any attempt to insert the same ISBN will generate an error. We can trap this error and react accordingly, by displaying an error message asking the user to correct the ISBN or cancel its addition.

Syntax Errors in SQL Statements

This error may occur if we haven't properly tested the application. A good application must not contain these errors, and it is the responsibility of the development team to test every possible situation and check that every SQL statement performs without syntax errors. If this type of error occurs, we trap it with exceptions and display a fatal error message; the developers must correct the situation at once.

Now that we have learned a bit about possible sources of errors, let's examine how PDO handles errors.

Types of Error Handling in PDO

By default, PDO uses the silent error handling mode. This means that any error that arises when calling methods of the PDO or PDOStatement classes goes unreported. With this mode, one would have to call PDO::errorInfo(), PDO::errorCode(), PDOStatement::errorInfo(), or PDOStatement::errorCode() every time an error occurred, to see if it really did occur. Note that this mode is similar to traditional database access — usually, the code calls mysql_errno() and mysql_error() (or equivalent functions for other database systems) after calling functions that could cause an error, after connecting to a database, and after issuing a query.

Another mode is the warning mode. Here, PDO will act identically to traditional database access: any error that happens during communication with the database will raise an E_WARNING error. Depending on the configuration, an error message could be displayed or logged into a file.

Finally, PDO introduces a modern way of handling database connection errors — by using exceptions. Every failed call to any of the PDO or PDOStatement methods will throw an exception.

As we have previously noted, PDO uses the silent mode by default. To switch to a desired error handling mode, we have to specify it by calling the PDO::setAttribute() method. Each of the error handling modes is specified by the following constants, which are defined in the PDO class:

PDO::ERRMODE_SILENT – the silent strategy.
PDO::ERRMODE_WARNING – the warning strategy.
PDO::ERRMODE_EXCEPTION – use exceptions.

To set the desired error handling mode, we have to set the PDO::ATTR_ERRMODE attribute in the following way:

$conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

To see how PDO throws an exception, edit the common.inc.php file by adding the above statement after the line #46. If you want to test what will happen when PDO throws an exception, change the connection string to specify a nonexistent database, then point your browser to the books listing page. The result is PHP's default reaction to uncaught exceptions — they are regarded as fatal errors and program execution stops. The error message reveals the class of the exception, PDOException, the error description, and some debug information, including the name and line number of the statement that threw the exception.

Note that if you want to test SQLite, specifying a non-existent database may not work, as the database will get created if it does not exist already. To see that it does work for SQLite, change the $connStr variable on line 10 so that there is an illegal character in the database name:

$connStr = 'sqlite:/path/to/pdo*.db';

Refresh your browser, and a message similar to the previous example will be displayed, specifying the cause and the location of the error in the source code.

Defining an Error Handling Function

If we know that a certain statement or block of code can throw an exception, we should wrap that code within a try…catch block to prevent the default error message being displayed and present a user-friendly error page instead. But before we proceed, let's create a function that will render an error message and exit the application. As we will be calling it from different script files, the best place for this function is, of course, the common.inc.php file. Our function, called showError(), will do the following:

Render a heading saying "Error".
Render the error message. We will escape the text with the htmlspecialchars() function and process it with the nl2br() function so that we can display multi-line messages. (This function will convert all line break characters to <br /> tags.)
Call the showFooter() function to close the opening <body> and <html> tags.

The function will assume that the application has already called the showHeader() function. (Otherwise, we will end up with broken HTML.)
We will also have to modify the block that creates the connection object in common.inc.php to catch the possible exception. With all these changes, the new version of common.inc.php will look like this:

<?php
/**
* This is a common include file
* PDO Library Management example application
* @author Dennis Popel
*/

// DB connection string and username/password
$connStr = 'mysql:host=localhost;dbname=pdo';
$user = 'root';
$pass = 'root';

/**
* This function will render the header on every page,
* including the opening html tag,
* the head section and the opening body tag.
* It should be called before any output of the page
*/
// ... (the showHeader() function body is unchanged and omitted from this listing)

/**
* This function will 'close' the body and html
* tags opened by the showHeader() function
*/
function showFooter()
{
?>
</body>
</html>
<?php
}

/**
* This function will display an error message, call the
* showFooter() function and terminate the application
* @param string $message the error message
*/
function showError($message)
{
    echo "<h2>Error</h2>";
    echo nl2br(htmlspecialchars($message));
    showFooter();
    exit();
}

// Create the connection object
try
{
    $conn = new PDO($connStr, $user, $pass);
    $conn->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
}
catch(PDOException $e)
{
    showHeader('Error');
    showError("Sorry, an error has occurred. Please try your request later\n" . $e->getMessage());
}

As you can see, the newly created function is pretty straightforward. The more interesting part is the try…catch block that we use to trap the exception. With these modifications in place, we can test how a real exception will get processed. To do that, make sure your connection string is wrong (so that it specifies a wrong database name for MySQL or contains an invalid file name for SQLite). Point your browser to books.php, and instead of the raw exception dump you should now see the friendly error page rendered by showError().
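Earlier, trapping a duplicate ISBN was singled out as an error worth recovering from. With exceptions enabled, a sketch of that pattern might look like the following — the books table columns here are assumptions for illustration, while '23000' is the standard SQLSTATE code for integrity constraint violations:

<?php
// Assumes $conn from common.inc.php, with ERRMODE_EXCEPTION already set
try
{
    $stmt = $conn->prepare('INSERT INTO books(title, isbn) VALUES(?, ?)');
    $stmt->execute(array($title, $isbn));
}
catch(PDOException $e)
{
    if($e->getCode() == '23000')
    {
        // The unique index on ISBN was violated: let the user correct the value
        showError('A book with this ISBN already exists. Please check the ISBN.');
    }
    else
    {
        throw $e; // anything else is unexpected: let it propagate
    }
}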
Backing Up and Restoring TYPO3 Websites

What Needs Backing Up in TYPO3?

We need to back up:

- The TYPO3 files
- A copy of the database

These two things make up our TYPO3 installation. We need the database as it contains the website's content and records of the website's users. We need the TYPO3 files as they contain the website's settings in the configuration files, copies of the website's design, and copies of data that has been cached by TYPO3.

Backing Up the TYPO3 Files

Depending on the operating system we are using, there are a number of different ways in which we can back up the files from TYPO3. In this article, we will look into backing up the files on Windows and on Linux. This is because Windows is the most used operating system, and Linux is the most popular hosting environment for websites.

Backing Up Our Files on Windows

In Windows, we can easily create a compressed file containing all the TYPO3 files (known as a ZIP file), using the Windows Compressed Folder tool or a program such as WinZip. Provided we've used the default installation path, TYPO3 will be located in the folder C:\Program Files\Typo3_4.0.2\Apache, and the folder that we want to compress is typo3_src.

We could just back up the fileadmin, typo3conf, and uploads folders. This way, should we lose our entire website, we can simply restore the whole thing instead of having to restore TYPO3 and then our extra TYPO3 files.

Now that we have a backup, we should copy it to a separate location (preferably on an external disk, or on another computer) for safe keeping.

Backing Up Our Files on Linux or Linux Hosting

We can create a complete backup of our home directory on a Linux hosting environment. This home directory contains all of our files on the hosting account. Alternatively, we can run a simple command to compress a particular folder.

If we have a web hosting account that provides us with access to the cPanel hosting control panel, we can use that to generate a backup of our entire website (except for the database, which is backed up separately via cPanel). To access the backup utilities, we need to log in to cPanel, which is located at www.ourdomain.com/cpanel, and then enter our hosting account's username and password. In cPanel, we have the backup option on the main screen, as shown in the following screenshot.

The Backups section has a number of options, but the one that we want is Download a home directory Backup. This will generate a backup of all the files of our website and allow us to download it. In the previous screenshot, there is a warning message; this is because my web server does not have the option to back up the entire server, just an individual user's webspace. The backup tool then takes a moment or two of processing, and then prompts us to download the backup file.

Command-Line Backup

To create a backup via the command line, we need to have SSH access to the server that is hosting our website. SSH is a protocol that allows us to remotely administer another machine using the command line. We can use a program such as Putty to connect to the server. We can download Putty from http://www.chiark.greenend.org.uk/~sgtatham/putty/. Putty only needs to be downloaded, after which it can be run straight away; it does not require installation.

When we open the program, we are presented with a screen similar to the one shown in the following screenshot. We enter the server's address (i.e. the web address) into the Host Name box and then click on Open.
Putty will then try to connect to the server, and will prompt us to enter our username and password. Once we are connected, we can type two commands to back up our site. The first is to navigate to the folder that contains our TYPO3 installation. This depends entirely on the server's setup and your username, but it is generally /home/the first 8 characters of your web address/public_html (you should contact your web host for more information or if you need help). Once we are in the correct folder, we can use the tar command to compress our typo3 folder into a single archive named file.tar.gz:

cd /home/michaelp/public_html/
tar cvzf file.tar.gz typo3

Now that we have our backup created, we can download it from www.ourwebaddress.com/file.tar.gz (where we will be prompted to save the file). We should then delete the archive from our server once we have downloaded it.
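The tar command above covers only the files; the database half of the backup still needs to be captured. On hosts with shell access and a MySQL-backed TYPO3 install, a dump along these lines would do it (the username and database name here are placeholders — substitute your own):

# Dump the TYPO3 database to a SQL file (enter your password when prompted)
mysqldump -u dbuser -p typo3db > typo3db.sql

# Optionally compress the dump before downloading it
gzip typo3db.sql

As with file.tar.gz, download the dump and then remove it from the server so it is not left publicly accessible.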

Working with Rails – ActiveRecord, Migrations, Models, Scaffolding, and Database Completion

ActiveRecord, Migrations, and Models

ActiveRecord is the ORM layer (see the section Connecting Rails to a Database in the previous article) used in Rails. It is used by controllers as a proxy to the database tables. What's really great about this is that it protects you against having to code SQL. Writing SQL is one of the least desirable aspects of developing with other web-centric languages (like PHP): having to manually build SQL statements, remembering to correctly escape quotes, and creating labyrinthine join statements to pull data from multiple tables. ActiveRecord does away with all of that (most of the time), instead presenting database tables through classes (a class which wraps around a database table is called a model) and instances of those classes (model instances). The best way to illustrate the beauty of ActiveRecord is to start using it.

Model == Table

The base concept in ActiveRecord is the model. Each model class is stored in the app/models directory inside your application, in its own file. So, if you have a model called Person, the file holding that model is app/models/person.rb, and the class for that model, defined in that file, is called Person.

Each model will usually correspond to a table in the database. The name of the database table is, by convention, the pluralized (in the English language), lower-case form of the model's class name. In the case of our Intranet application, the models are organized as follows:

Table       Model class   File containing class definition (in app/models)
people      Person        person.rb
companies   Company       company.rb
addresses   Address       address.rb

We haven't built any of these yet, but we will shortly.

Which Comes First: The Model or The Table?

To get going with our application, we need to generate the tables to store data into, as shown in the previous section. It used to be at this point that we would reach for a MySQL client and create the database tables using a SQL script. (This is typically how you would code a database for a PHP application.) However, things have moved on in the Rails world. The Rails developers came up with a pretty good (not perfect, but pretty good) mechanism for generating databases without the need for SQL: it's called migrations, and it is a part of ActiveRecord.

Migrations enable a developer to generate a database structure using a series of Ruby script files (each of which is an individual migration) to define database operations. The "operations" part of that last sentence is important: migrations are not just for creating tables, but also for dropping tables, altering them, and even adding data to them. It is this multi-faceted aspect of migrations which makes them useful, as they can effectively be used to version a database (in much the same way as Subversion can be used to version code). A team of developers can use migrations to keep their databases in sync: when a change to the database is made by one of the team and coded into a migration, the other developers can apply the same migration to their database, so they are all working with a consistent structure.

When you run a migration, the Ruby script is converted into the SQL code appropriate to your database server and executed over the database connection. However, migrations don't work with every database adapter in Rails: check the Database Support section of the ActiveRecord::Migration documentation to find out whether your adapter is supported.
At the time of writing, MySQL, PostgreSQL, SQLite, SQL Server, Sybase, and Oracle were all supported by migrations. Another way to check whether your database supports migrations is to run the following command in the console (the output shown below is the result of running this using the MySQL adapter):

>> ActiveRecord::Base.connection.supports_migrations?
=> true

We're going to use migrations to develop our database, so we'll be building the model first. The actual database table will be generated from a migration attached to the model.

Building a Model with Migrations

In this section, we're going to develop a series of migrations to recreate the database structure outlined in Chapter 2 of the book Ruby on Rails Enterprise Application Development: Plan, Program, Extend. First, we'll work on a model and migration for the people table. Rails has a generate script for generating a model and its migration. (This script is in the script directory, along with the other Rails built-in scripts.) The script builds the model, a base migration for the table, plus scripts for testing the model. Run it like this:

$ ruby script/generate model Person
      exists  app/models/
      exists  test/unit/
      exists  test/fixtures/
      create  app/models/person.rb
      create  test/unit/person_test.rb
      create  test/fixtures/people.yml
      exists  db/migrate
      create  db/migrate/001_create_people.rb

Note that we passed the singular, uppercase version of the table name ("people" becomes "Person") to the generate script. This generates a Person model in the file app/models/person.rb, and a corresponding migration for a people table (db/migrate/001_create_people.rb). As you can see, the script enforces the naming conventions, which connects the table to the model. The migration name is important, as it contains sequencing information: the "001" part of the name indicates that running this migration will bring the database schema up to version 1; subsequent migrations will be numbered "002...", "003...", etc., each specifying the actions required to bring the database schema up to that version from the previous one.

The next step is to edit the migration so that it will create the people table structure. At this point, we can return to Eclipse to do our editing. (Remember that you need to refresh the file list in Eclipse to see the files you just generated.) Once you have started Eclipse, open the file db/migrate/001_create_people.rb. It should look like this:

class CreatePeople < ActiveRecord::Migration
  def self.up
    create_table :people do |t|
      # t.column :name, :string
    end
  end

  def self.down
    drop_table :people
  end
end

This is a migration class with two class methods, self.up and self.down. The self.up method is applied when migrating up one database version number: in this case, from version 0 to version 1. The self.down method is applied when moving down a version number (from version 1 to 0). You can leave self.down as it is, as it simply drops the database table. This migration's self.up method is going to add our new table using the create_table method, so this is the method we're going to edit in the next section.

Ruby syntax
Explaining the full Ruby syntax is outside the scope of this book. For our purposes, it suffices to understand the most unusual parts. For example, in the create_table method call shown above:

create_table :people do |t|
  t.column :title, :string
  ...
end

The first unusual part of this is the block construct, a powerful technique for creating nameless functions. In the example code above, the block is initialized by the do keyword; this is followed by a list of parameters to the block (in this case, just t); and closed by the end keyword. The statements in between the do and end keywords are run within the context of the block. Blocks are similar to lambda functions in Lisp or Python, providing a mechanism for passing a function as an argument to another function. In the case of the example, the method call create_table :people is passed to a block, which accepts a single argument, t; t has methods called on it within the body of the block. When create_table is called, the resulting table object is "yielded" to the block; effectively, the object is passed into the block as the argument t, and has its column method called multiple times.

One other oddity is the symbol: that's what the words prefixed with a colon are. A symbol is the name of a variable. However, in much of Rails, it is used in contexts where it is functionally equivalent to a string, to make the code look more elegant. In fact, in migrations, strings can be used interchangeably with symbols.
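The article defers filling in self.up to its next section; purely as a preview of where this is heading, the edited migration might end up looking something like the sketch below. The column list here is illustrative, not the book's final schema:

# db/migrate/001_create_people.rb (sketch with hypothetical columns)
class CreatePeople < ActiveRecord::Migration
  def self.up
    create_table :people do |t|
      t.column :title,      :string
      t.column :first_name, :string
      t.column :last_name,  :string
      t.column :email,      :string
    end
  end

  def self.down
    drop_table :people
  end
end

Running rake db:migrate would then apply it, bringing the schema up to version 1; migrating back down simply drops the table again.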
Basic Dijit Knowledge in Dojo

All Dijits can be subclassed to change parts of their behavior and then used as the original Dijits, or you can create your own Dijits from scratch and include existing Dijits (forms, buttons, calendars, and so on) in a hierarchical manner. All Dijits can be created in either of the following two ways:

- Using the dojoType markup property inside selected tags in the HTML page.
- Programmatic creation inside any JavaScript.

For instance, if you want to have a ColorPalette in your page, you can write the following:

<div dojoType="dijit.ColorPalette"></div>

But you also need to load the required Dojo packages, which consist of the ColorPalette and any other things it needs. This is generally done in a script statement in the <head> part of the HTML page, along with any CSS resources and the djConfig declaration. So a complete example would look like this:

<html>
  <head>
    <title>ColorPalette</title>
    <style>
      @import "dojo-1.1b1/dojo/resources/dojo.css";
      @import "dojo-1.1b1/dijit/themes/tundra/tundra.css";
    </style>
    <script type="text/javascript">
      djConfig = { parseOnLoad: true }
    </script>
    <script type="text/javascript" src="dojo-1.1b1/dojo/dojo.js"></script>
    <script type="text/javascript">
      dojo.require("dojo.parser");
      dojo.require("dijit.ColorPalette");
    </script>
  </head>
  <body class="tundra">
    <div dojoType="dijit.ColorPalette"></div>
  </body>
</html>

Obviously, this shows a simple color palette, which can be told to call a function when a choice has been made. But if we start from the top, I've chosen to include two CSS files in the <style> tag. The first one, dojo.css, is a reset.css, which gives lists, table elements, and various other things their defaults. The file itself is quite small and well commented. The second file is called tundra.css and is a wrapper around lots of other stylesheets; some are generic for the theme it represents, but most are specific for widgets or widget families.

The two ways to create Dijits

So putting a Dojo widget in your page is very simple. If you would rather create the ColorPalette dynamically in a script instead, remove the highlighted line just before the closing body tag and write the following:

<script>
    new dijit.ColorPalette({}, dojo.byId('myPalette'));
</script>

This seems fairly easy, but what's up with the empty object literal ( {} ) as the first argument? Well, as some Dijits take few arguments and others more, all arguments to a Dijit get stuffed into the first argument, while the last argument is (if needed) the DOM node which the Dijit shall replace with its own content somewhere in the page. The default, for all Dijits, is that if we give only one argument to the constructor, it will be taken as the DOM node where the Dijit is to be created.

Let's see how to create a more complex Dijit in our page: a NumberSpinner that is set at the value '200', has '500' as a maximum, and shows no decimals. (A sketch of its markup and its programmatic creation follows below.)

One rather peculiar feature of markup-instantiation of Dijits is that you can use almost any kind of tag for the Dijit — for example, a plain input element:

<input type="text" name="date1" value="2008-12-30" dojoType="dijit.form.DateTextBox"/>

The Dijit will replace the element with its own template when it is initialized. Certain Dijits work in a more complicated fashion and do not replace the child nodes of the element where they're defined, but wrap them instead.
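The markup for the NumberSpinner mentioned above did not survive into this excerpt, so here is a hedged reconstruction based on its description (value 200, maximum 500, no decimal places); treat the exact attribute set as an assumption, and note that dijit.form.NumberSpinner must be dojo.require'd just as the ColorPalette was:

<input id="mySpinner" dojoType="dijit.form.NumberSpinner"
       value="200" constraints="{max:500,places:0}" />

And the programmatic equivalent, following the same two-argument pattern as the ColorPalette example:

<script>
    // Properties first, then the target DOM node to replace
    new dijit.form.NumberSpinner(
        {value: 200, constraints: {max: 500, places: 0}},
        dojo.byId('mySpinner')
    );
</script>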
Each Dijit also has support for template HTML, which will be inserted, with variable substitutions, whenever that Dijit is put in the page. This is a very powerful feature, since when you start creating your own widgets, you will have an excellent system in place already which constrains where things will be put and how they are called. This means that when you finish your super-complicated graph-drawing widget and your client or boss wants three more just like it on the same page, you just slap up three more tags which have the dojoType defining your widget.

How do I find my widget?

You already know that you can use dojo.byId('foo') as a shorter version of document.getElementById('foo'). If you still think that dojo.byId is too long, you can create a shorthand function like this:

var $ = dojo.byId;

And then use $('foo') instead of dojo.byId for simple DOM node lookup.

But Dijits also seem to have an id. Are those the same as the ids of the DOM nodes they reside in, or what? Well, the answer is both yes and no. All created Dijit widgets have a unique id. That id can be the same string as the id that defines the DOM node where they're created, but it doesn't have to be. Suppose that you create a Dijit like this:

<div id='foo' dojoType='dijit._Calendar'></div>

The created Dijit will have the same Dijit id as the id of the DOM node it was created in, because no other was given. But can you define another id for the widget than for its DOM node? Sure thing. There's a magic attribute called widgetId. So we could do the following:

<div id='foo' dojoType='dijit._Calendar' widgetId='bar'></div>

This would give the widget the id of 'bar'. But, really, what is the point? Why would we care that the widget has some kind of obscure id? All we really need is the DOM node, right? Not at all. Sure, you might want to reach out and do bad things to the DOM node of a widget, but that object is not the widget and has none of its functions. If you want to grab hold of a widget instance after it is created, you need to know its widget id, so you can call the functions defined in the widget — that is almost the widget id's entire reason to exist!

So how do we get hold of a widget now that we have its id? By using dijit.byId(). These two functions look pretty similar, so here is a clear and easy-to-find (when browsing the book) explanation:

- dojo.byId(): Returns the DOM node for the given id.
- dijit.byId(): Returns the widget object for the given widget id.

Just one more thing. What happens if we create a widget and don't give either a DOM or widget id? Does the created widget still get an id? How do we get at it? Yes, the widget will get a generated id if we write the following:

<div dojoType='dijit._Calendar'></div>

The widget will get a widget id like this: dijit__Calendar_0. The id will be the string of the file or namespace path down to the .js file which declares the widget, with / exchanged for _, and with a static widget counter attached to the end.
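To make the distinction concrete, here is a quick sketch assuming the 'foo'/'bar' calendar declared above:

<script>
    var node = dojo.byId('foo');     // the DOM node the widget lives in
    var widget = dijit.byId('bar');  // the _Calendar widget object itself
    // Every widget keeps a reference to its own DOM node, so the two meet here:
    console.log(widget.domNode === node); // true
</script>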