
How-To Tutorials - CMS and E-Commerce

830 Articles

Integrating Twitter and YouTube with MediaWiki

Packt
23 Oct 2009
5 min read
Twitter in MediaWiki

Twitter (http://www.twitter.com) is a micro-blogging service that allows users to tell the world (or at least the small portion of it on Twitter) what they are doing, in messages of 140 characters or less. It is possible to embed these messages in external websites, which is what we will be doing for JazzMeet. We can use the updates to inform our wiki's visitors of the latest JazzMeet being held across the world, and they can send a response to the JazzMeet Twitter account.

Shorter Links

Because Twitter only allows posts of up to 140 characters, many Twitter users make use of URL-shortening services such as Tiny URL (http://tinyurl.com) and notlong (http://notlong.com) to turn long web addresses into short, more manageable URLs. Tiny URL assigns a random URL such as http://tinyurl.com/3ut9p4, while notlong allows you to pick a free sub-domain to redirect to your chosen address, such as http://asgkasdgadg.notlong.com. Twitter automatically shortens web addresses in your posts.

Creating a Twitter Account

Creating a Twitter account is quite easy. Just fill in the username, password, and email address fields, and submit the registration form once you have read and accepted the terms and conditions. If your chosen username is free, your account is created instantly.

Once your account has been created, you can change settings such as your display name and your profile's background image to help blur the distinction between your website and your Twitter profile. Colors can be specified as "hex" values under the Design tab of your Twitter account's settings section. The following color codes change the link colors to JazzMeet's palette of browns and reds:

As you can see in the screenshot, JazzMeet's Twitter profile now looks a little more like the JazzMeet wiki. By doing this, visitors catching up with JazzMeet's events on Twitter will not be confused by a sudden change in color scheme.

Embedding Twitter Feeds in MediaWiki

Twitter provides a few ways to embed your latest posts into your own website(s); simply log in and go to http://www.twitter.com/badges.

- Flash: With this option you can show just your posts, or your posts and your friends' most recent posts on Twitter.
- HTML and JavaScript: You can configure the code to show between 1 and 20 of your most recent Twitter posts.

As JazzMeet isn't really the sort of wiki on which visitors would expect to find Flash, we will be using the HTML and JavaScript version. You are provided with the necessary code to embed in your website or wiki. We will add it to the JazzMeet skin template, as we want it to be displayed on every page of our wiki, just beneath our sponsor links. Refer to the following code:

    <div id="twitter_div">
      <h2 class="twitter-title">Twitter Updates</h2>
      <ul id="twitter_update_list"></ul>
    </div>
    <script type="text/javascript" src="http://twitter.com/javascripts/blogger.js"></script>
    <script type="text/javascript" src="http://twitter.com/statuses/user_timeline/jazzmeet.json?callback=twitterCallback2&count=5"></script>

The JavaScript given at the bottom of the code can be moved just above the </body> tag of your wiki's skin template. This will help your wiki load its other important elements before the Twitter status. You will need to replace "jazzmeet" in the code with your own Twitter username, otherwise you will receive JazzMeet's Twitter updates, and not your own.
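As a rough illustration of that placement, here is a minimal sketch of a skin template with the markup in the sidebar and the scripts moved to the bottom of the page. The surrounding structure is hypothetical; your skin's actual HTML will differ:

    <div id="sidebar">
      <!-- sponsor links go here -->
      <div id="twitter_div">
        <h2 class="twitter-title">Twitter Updates</h2>
        <ul id="twitter_update_list"></ul>
      </div>
    </div>
    <!-- ... rest of the page ... -->
    <!-- loading the scripts last lets the rest of the page render before the feed is fetched -->
    <script type="text/javascript" src="http://twitter.com/javascripts/blogger.js"></script>
    <script type="text/javascript" src="http://twitter.com/statuses/user_timeline/jazzmeet.json?callback=twitterCallback2&count=5"></script>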
It is important to leave the unordered list with the ID twitter_update_list as it is, as this is the element the JavaScript code looks for when inserting a list item for each of your Twitter messages into the page.

Styling Twitter's HTML

We need to style the Twitter HTML by adding some CSS to change the colors and style of the Twitter status code:

    #twitter_div {
      background: #FFF;
      border: 3px #BEB798 solid;
      color: #BEB798;
      margin: 0;
      padding: 5px;
      width: 165px;
    }
    #twitter_div a {
      color: #8D1425 !important;
    }
    ul#twitter_update_list {
      list-style-type: none;
      margin: 0;
      padding: 0;
    }
    #twitter_update_list li {
      color: #38230C;
      display: block;
    }
    h2.twitter-title {
      color: #BEB798;
      font-size: 100%;
    }

There are only a few CSS IDs and classes that need to be taken care of. They are as follows:

- #twitter_div is the element that contains the Twitter feed.
- #twitter_update_list is the ID applied to the unordered list. Styling this affects how your Twitter feeds are displayed.
- .twitter-title is the class applied to the Twitter feed's heading (which you can remove, if necessary).

Our wiki's skin for JazzMeet now has JazzMeet's Twitter feed embedded in the right-hand column, allowing visitors to keep up to date with the latest JazzMeet news.

Inserting Twitter as Page Content

MediaWiki does not allow JavaScript to be embedded in a page via the "edit" function, so you won't be able to insert a Twitter status feed directly in a page unless it is in the template itself. Even if you inserted the relevant JavaScript links into your MediaWiki skin template, they are relevant only for one Twitter profile ("jazzmeet", in our case).
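For reference, the callback named in the embed code fills the empty list with one item per status, and those generated elements are what the CSS rules above target. The markup looks roughly like this (a sketch of the general pattern with hypothetical statuses, not Twitter's exact output):

    <ul id="twitter_update_list">
      <li>Announcing the next JazzMeet venue...
        <a href="http://twitter.com/jazzmeet/statuses/123456789">about 2 hours ago</a></li>
      <li>Thanks to everyone who came along!
        <a href="http://twitter.com/jazzmeet/statuses/123456788">1 day ago</a></li>
    </ul>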


Customer Management in Joomla! and VirtueMart

Packt
22 Oct 2009
7 min read
Note that all VirtueMart customers must be registered with Joomla!. However, not all Joomla! users need to be VirtueMart customers. Within the first few sections of this article, you will get a clear picture of user management in Joomla! and VirtueMart.

Customer management

Customer management in VirtueMart includes registering customers to the VirtueMart shop, assigning them to user groups for appropriate permission levels, managing fields in the registration form, viewing and editing customer information, and managing the user groups. Let's dive into these activities in the following sections.

Registration/Authentication of customers

Joomla! has a very strong user registration and authentication system. One core component in Joomla! is com_users, which manages user registration and authentication. However, VirtueMart needs some extra information for customers. VirtueMart collects this information through its own customer registration process, and stores it in separate tables in the database. The extra information required by VirtueMart is stored in a table named jos_vm_user_info, which is related to the jos_users table by the user ID field. Usually, when a user registers to the Joomla! site, they also register with VirtueMart. This depends on some global settings. In the following sections, we are going to learn how to enable user registration and authentication for VirtueMart.

Revisiting registration settings

We configure the registration settings from VirtueMart's administration panel, on the Admin | Configuration | Global screen. There is a section titled User Registration Settings, which defines how user registration will be handled. Ensure that your VirtueMart shop has been configured as shown in the screenshot above. The first field to configure is the User Registration Type. Selecting Normal Account Creation in this field creates both a Joomla! and a VirtueMart account during user registration. For our example shop, we will be using this setting. Joomla!'s new user activation should be disabled when we are using VirtueMart. That means the Joomla! New account activation necessary? field should read No.

Enabling the VirtueMart login module

There is a default module in Joomla! which is used for user registration and login. When we are using this default Login Form (the mod_login module), it does not collect the information required by VirtueMart, and does not create customers in VirtueMart. By default, when published, the mod_login module looks like the following screenshot:

As you see, registered users can log in to Joomla! through this form, recover their forgotten password by clicking on the Forgot your password? link, and create a new user account by clicking on the Create an account link. When a user clicks on the Create an account link, they get the form shown in the following screenshot:

We see that normal registration in Joomla! only requires four pieces of information: Name, Username, Email, and Password. It does not collect the information needed in VirtueMart, such as a billing and shipping address, to be a customer. Therefore, we need to disable the mod_login module and enable the mod_virtuemart_login module. We have already learned how to enable and disable a module in Joomla!. We have also learned how to install modules.

By default, the mod_virtuemart_login module's title is VirtueMart Login. You may prefer to show this title as Login only. In that case, click on the VirtueMart Login link in the Module Name column. This brings up the Module:[Edit] screen. In the Title field, type Login (or any other text you want to show as the title of this module). Make sure the module is enabled and its position is set to left or right. Click on the Save icon to save your settings. Now, browse to your site's front page (for example, http://localhost/bdosn/), and you will see the login form as shown in the following screenshot:

As you can see, this module has the same functionality as we saw in the mod_login module of Joomla!. Let us test account creation in this module. Click on the Register link. It brings up the following screen:

The registration form has three main sections: Customer Information, Bill To Information, and Send Registration. At the end, there is the Send Registration button for submitting the form data. In the Customer Information section, type your email address, the desired username, and password. Confirm the password by typing it again in the Confirm password field. In the Bill To Information section, type the address details where bills are to be sent. In the entire form, required fields are marked with an asterisk (*). You must provide information for these required fields. In the Send Registration section, you need to agree to the Terms of Service. Click on the Terms of Service link to read it. Then, check the I agree to the Terms of Service checkbox and click on the Send Registration button to submit the form data.

If you have provided all of the required information and submitted a unique email address, the registration will be successful. On successful completion of registration, you get the following screen notification, and will be logged in to the shop automatically:

If you scroll down to the Login module, you will see that you are logged in and greeted by the store. You also see the User Menu in this screen. Both the User Menu and the Login modules contain a Logout button. Click on either of these buttons to log out from the Joomla! site.

In fact, the links in the User Menu module are for Joomla! only. Let us try the Your Details link. Click on the Your Details link, and you will see the information shown in the following screenshot:

As you see in the screenshot above, you can change your full name, email, password, frontend language, and time zone. You cannot view any information regarding the billing address, or other customer information. In fact, this information is for regular Joomla! users. We can only get full customer information by clicking on the Account Maintenance link in the Login module. Let us try it. Click on the Account Maintenance link, and it shows the following screen:

The Account Maintenance screen has three sections: Account Information, Shipping Information, and Order Information. Click on the Account Information link to see what happens. It shows the following screen:

This shows the Customer Information and Bill To Information that were entered during user registration. The last section on this screen is the Bank Information, where the customer can add bank account information. This section looks like the following screenshot:

As you can see, from the Bank Account Info section, customers can enter their bank account information, including the account holder's name, account number, bank's sorting code number, bank's name, account type, and IBAN (International Bank Account Number). Entering this information is important when you are using a Bank Account Debit payment method.

Now, let us go back to the Account Maintenance screen and see the other sections. Click on the Shipping Information link, and you get the following screen: There is one default shipping address, which is the same as the billing address. Customers can create additional shipping addresses. To create a new shipping address, click on the Add Address link. It shows the following screen:


Customizing the Default Expression Engine (v1.6.7) website template

Packt
22 Oct 2009
5 min read
Introduction

The blogging revolution has changed the nature of the internet, and as more and more individuals and organizations choose the blog format as their preferred web solution (i.e. content-driven websites which are regularly updated by one or more users), the question of which blogging application to use becomes an increasingly important one. I have tested all the popular blogging solutions, and the one I have found to be the most impressive for designers with a little CSS and XHTML know-how is Expression Engine.

After spending a little time on the Expression Engine support forums, I have noticed that a high number of the support questions relate to solving CSS problems within EE. This suggests that many designers who choose EE are used to working from within Adobe Dreamweaver (or other WYSIWYG applications) and, although there are no compatibility issues between the two systems, it is clear that they are making the transition from graphics-based web design to lovely CSS-based web design.

When you are installing Expression Engine 1.6.7 for the first time, you are asked to choose a theme from a drop-down menu, which you might expect to offer a satisfactory selection of pre-installed themes. Sadly this is not the case in Expression Engine (EE) version 1.6.7, and if you have not downloaded and manually saved a theme inside the right folder within the EE system file structure, you will be given only one theme to choose from: the dreaded "default weblog theme". Once you complete the installation, no other themes can be imported or installed, so you can either restart the installation, opt for the default weblog theme, or start from a blank canvas (sorry about that). The good news is that the next release of EE (v2.0) ships with a very nice default template; the bad news is that you will have to wait a few months longer to get a copy, and you will need to renew your license to upgrade for a fee. Even then you will probably want to modify the new and improved default template in EE 2.0, or you may, depending on your needs, choose another EE template altogether so that your website does not look like a thousand other websites based on the same template (not good).

This article will demonstrate how to use Cascading Style Sheets (CSS) to improve the layout of the default EE website/blog template, and how to keep the structure (XHTML) and the presentation (CSS) entirely separate, which is best practice in web development. It is intended as a practical guide for intermediate-level web designers wanting to work with CSS more effectively within Expression Engine and to create better websites which adhere to current W3C web standards. It will not attempt to teach you the fundamentals of programming with CSS and XHTML, or how to install, use, or develop a website with EE. This article will demonstrate how to use CSS to effectively take control of the appearance of any EE template. If you are new to EE, it is recommended that you consult the book "Building a Website with Expression Engine 1.6.7", published by Packt Publishing, and visit www.expressionengine.com to consult the EE user guide. If you get stuck at any time when using Expression Engine, you can visit the EE support forums via the main EE site to get help with EE, XHTML, and CSS issues; for regular updates, the EllisLab EE feed within EE is an excellent source of news from the EE community.

The trouble with templates

Let's open up EE's default "weblog" template in Firefox. I have the very useful "developer's toolbar" add-on installed. You can see the many options which are available lined up across the bottom of Firefox's toolbar. When you select Outline > Outline Current Element, Firefox renders an outline around the block of the selected element, and is set by default to display the name of the selected element. This add-on features many other timesaving and task-facilitating functions, which range from DOM tools for JavaScript development to nifty layout tools like displaying a ruler inside the Firefox window.

The default template is a useful guide to some basic EE tags embedded into the XHTML, but the CSS should render a cleaner and simpler-to-customize design, so let's make some much needed changes. I will not be looking into the EE tags in this article, because EE tags are very powerful and are beyond the scope of this article.

Inside the template module, create a new template group and call it "site_extended". Template groups in EE organize your templates into virtual folders. We will make a copy of the existing template group and templates, so all the changes we make are non-destructive. Choose not to duplicate any existing template group, but select "Make the index template in this group your site's home page?" and press submit. Easy. Next, create a new template, call it "site_extended_css", and let's duplicate the site/site_css template. This powerful feature instructs EE to clone an existing template with a new name and location. Now let's create a copy of the default site weblog and call it "index_extended". Select "duplicate an existing template" and choose "site/index" from the options drop-down list. The first part of the location, "site", is the template group, and "index" is the actual template. Now your template management tab should look like the following screenshot. Notice that the index template has a red star next to it.
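To make the cloned index template pull in the cloned stylesheet rather than the original, the stylesheet link in its head needs to point at the new template. A minimal sketch, assuming EE's standard {stylesheet=} variable and the group/template names we have just created:

    <link rel="stylesheet" type="text/css" media="all" href="{stylesheet=site_extended/site_extended_css}" />

With this in place, any CSS changes you make in site_extended_css affect only the duplicated templates, leaving the originals untouched.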


Categories and Attributes in Magento: Part 2

Packt
22 Oct 2009
7 min read
Time for action: Creating Attributes

In this section we will create an Attribute Set for our store. First, we will create Attributes. Then, we will create the set.

Before you begin

Because Attributes are the main tool for describing your Products, it is important to make the best use of them. Plan which Attributes you want to use. What aspects or characteristics of your Products will a customer want to search for? Make those Attributes. What aspects of your Products will a customer want to choose? Make these Attributes, too.

Attributes are organized into Attribute Sets. Each set is a collection of Attributes. You should create different sets to describe the different types of Products that you want to sell. In our coffee store, we will create two Attribute Sets: one for Single Origin coffees and one for Blends. They will differ in only one way. For Single Origin coffees, we will have an Attribute showing the country or region where the coffee is grown. We will not have this Attribute for blends, because the coffees used in a blend can come from all over the world. Our sets will look like the following:

    Single Origin Attribute set    Blended Attribute set
    ---------------------------    ---------------------
    Name                           Name
    Description                    Description
    Image                          Image
    Grind                          Grind
    Roast                          Roast
    Origin
    SKU                            SKU
    Price                          Price
    Size                           Size

Now, let's create the Attributes and put them into sets. The result of the following directions will be several new Attributes and two new Attribute Sets:

1. If you haven't already, log in to your site's backend, which we call the Administrative Panel.
2. Select Catalog | Attributes | Manage Attributes. A list of all the Attributes is displayed. These attributes have been created for you. Some of these Attributes (such as color, cost, and description) are visible to your customers. Other Attributes affect the display of a Product, but your customers will never see them. For example, custom_design can be used to specify the name of a custom layout, which will be applied to a Product's page. Your customers will never see the name of the custom layout. We will add our own attributes to this list.
3. Click the Add New Attribute button. The New Product Attribute page displays.

There are two tabs on this page: Properties and Manage Label / Options. You are in the Properties tab. The Attribute Properties section contains settings that only the Magento administrator (you) will see. These settings are values that you will use when working with the Attribute. The Frontend Properties section contains settings that affect how this Attribute will be presented to your shoppers. We will cover each setting on this page.

Attribute Code is the name of the Attribute. Your customers will never see this value. You will use it when managing the Attribute. Refer back to the list of Attributes that appeared in Step 2. The Attribute identifier appears in the first column, labelled Attribute Code. The Attribute Code must contain only lowercase letters, numbers, and the underscore character. And, it must begin with a letter.

The Scope of this Attribute can be set to Store View, Website, or Global. For now, you can leave it set to the default, Store View. The other values become useful when you use one Magento installation to create multiple stores or multiple websites. That is beyond the scope of this quick-start guide.

After you assign an Attribute Set to a Product, you will fill in values for the Attributes. For example, suppose you assign a set that contains the attributes color, description, price, and image. You will then need to enter the color, description, price, and image for that Product. Notice that each of the Attributes in that set holds a different kind of data. For color, you would probably want to use a drop-down list to make selecting the right color quick and easy. This would also avoid using different terms for the same color, such as "Red" and "Magenta". For description, you would probably want to use a freeform text field. For price, you would probably want a field that accepts only numbers, and that requires you to use two decimal places. And for image, you would want a field that enables you to upload a picture. The field Catalog Input Type for Store Owner enables you to select the kind of data that this Attribute will hold.

In our example we are creating an Attribute called roast. When we assign this value to a Product, we want to select a single value for this field from a list of choices. So, we will select Dropdown. If you select Dropdown or Multiple Select for this field, then under the Manage Label/Options tab you will need to enter the list of choices (the list of values) for this field.

If you select Yes for Unique Value, then no two products can have the same value for this Attribute. For example, if I made roast a unique Attribute, only one kind of coffee in my store could be a Light roast, only one kind of coffee could be a French roast, only one could be Espresso, and so on. For an Attribute such as roast, this wouldn't make much sense. However, if this Attribute were the SKU of the Product, then I might want to make it unique. That would prevent me from entering the same SKU number for two different Products.

If you select Yes for Values Required, then you must select or enter a value for this Attribute. You will not be able to save a Product with this Attribute if you leave it blank. In the case of roast, it makes sense to require a value. Our customers would not buy a coffee without knowing what kind of roast it has.

Input Validation for Store Owner causes Magento to check the value entered for an Attribute, and confirm that it is the right kind of data. When entering a value for this Attribute, if you do not enter the kind of data selected, Magento gives you a warning message.

The Apply To field determines which Product Types can have this Attribute applied to them. Remember that the three Product Types in Magento are Simple, Grouped, and Configurable. Recall that in our coffee store, if a type of coffee comes in only one roast, it would be a Simple Product. And, if the customer gets to choose the roast, it would be a Configurable Product. So we want to select at least Simple Product and Configurable Product for the Apply To field.

But what about Grouped Product? We might sell several different types of coffee in one package, which would make it a Grouped Product. For example, we might sell a Grouped Product that consists of a pound of Hawaiian Kona and a pound of Jamaican Blue Mountain. We could call this group something like "Island Coffees". If we applied the Attribute roast to this Grouped Product, then both types of coffee would be required to have the same roast. However, if Kona is better with a lighter roast and Blue Mountain is better with a darker roast, then we don't want them to have the same roast. So in our coffee store, we will not apply the Attribute roast to Grouped Products. When we sell coffees in special groupings, we will select the roast for each coffee.

You will need to decide which Product Types each Attribute can be applied to. If you are the store owner and the only one using your site, you will know which Attributes should be applied to which Products. So, you can safely choose All Product Types for this setting.
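As a quick aside, the Attribute Code rule described above (only lowercase letters, numbers, and underscores, beginning with a letter) is easy to express as a pattern. A minimal JavaScript sketch, purely for illustration and not part of Magento itself:

    // Returns true if a proposed code satisfies the stated rule:
    // lowercase letters, digits, and underscores only, starting with a letter.
    function isValidAttributeCode(code) {
      return /^[a-z][a-z0-9_]*$/.test(code);
    }

    isValidAttributeCode('roast');     // true
    isValidAttributeCode('Roast');     // false (contains an uppercase letter)
    isValidAttributeCode('2nd_roast'); // false (does not begin with a letter)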


Blogger: Improving Your Blog with Google Analytics and Search Engine Optimization

Packt
22 Oct 2009
7 min read
If you've ever wondered how people find your website, or how to generate more traffic, then this article tells you more about your visitors. Knowing where they come from, what posts they like, how long they stay, and other site metrics are all valuable pieces of information to have as a blogger. You would expect to pay for such a deep look into the underbelly of your blog, but Google wants to give it to you for free. Why for free? The better your site does, the more likely you are to pay for AdWords or use other Google tools. The Google Analytics online statistics application is a delicious carrot to encourage content-rich sites and better ad revenue for everyone involved.

You also want people to find your blog when they perform a search about your topic. The painful truth is that search engines have to find your blog before it will show up in their results. There are thousands of new blogs being created every day. If you want people to be able to find your blog in the increasingly crowded blogosphere, optimizing your blog for search engines will improve the odds.

Improving Your Blog with Google Analytics

Analytics gives you an overwhelming amount of data to use for measuring the success of your site and ads. Once you've had time to analyze that data, you will want to take action to improve the performance of your blog and ads. We'll now look at how Analytics can help you make decisions about the design and content of your site.

Analyzing Navigation

The Navigation section of the Content Overview report reveals how your visitors actually navigate your blog. Visitors move around a site in ways we can't predict. Seeing how they actually navigate a site, and where they entered it, are powerful tools we can use to diagnose where we need to improve our blog.

Exploring the Navigation Summary

The Navigation Summary shows you the path people take through your site, including how they get there and where they go. We can see from the following graphical representation that our visitors entered the site through the main page of the blog most of the time. After reaching that page, over half the time, they went on to other pages within the site.

Entrance Paths

We can see the path visitors take to enter our blog using the Entrance Paths report. It will show us where they entered our site, which pages they looked at, and the last page they viewed before exiting. Visitors don't always enter by the main page of a site, especially if they find the site using search engines or trackbacks. The following screenshot displays a typical entrance path. The visitor comes to the site home page, and then goes to the full page of one of the posts. It looks like our visitors are highly attracted to the recipe posts. Georgia may want to feature more posts about recipes that tie in with her available inventory.

Optimizing your Landing Page

The Landing Page reports tell you where your visitors are coming from, and whether they have used keywords to find you. You have a choice between viewing the sources visitors used to get to your blog, or the keywords. Knowing the sources will give you guidance on the areas you should focus your marketing or advertising efforts on.

Examining Entrance Sources

You can quickly see how visitors are finding your site, whether through a direct link, a search engine, locally from Blogger, or from social networking applications such as Twitter.com. In the Entrance Sources graph shown in the following screenshot, we can see that the largest number of people are coming to the blog using a direct link. Blogger is also responsible for a large share of our visitors, over 37%. There is even a visitor drawn to the blog from Twitter.com, where Georgia has an account.

Discovering Entrance Keywords

When visitors arrive at your site using keywords, the words they use will show up on the report. If they are using words in a pattern that does not match your site content, you may see a high bounce rate. You can use this report to redesign your landing page to better represent the purpose of your site through the words and phrases that you use.

Interpreting Click Patterns

When visitors visit your site, they show their attraction to links and interactive content by clicking on them. Click Patterns are the representation of all those mouse clicks over a set time period. Using the Site Overlay reporting feature, you can see the mouse clicks represented in a graphical pattern. Much like colored pins stuck on a wall chart, they will quickly reveal which areas of your site visitors clicked on the most, and which links they avoided.

Understanding Site Overlay

Site Overlay shows the number of clicks for your site by laying them transparently in a graphical format on top of your site. Details with the number of clicks and goal tracking information pop up in a little box when you hover over a click graphic with your mouse. At the top of the screen are options that control the display of the Site Overlay. Clicking the Hide Overlay link will hide the overlay from view. The Displaying drop-down list lets you choose how to view mouse Clicks on the page, or goals. The date range is the last item displayed.

The graphical bars shown on top of the page content indicate where visitors clicked, and how many of them did so. You can quickly see what areas of the page interest your visitors the most. Based on the page clicks you see, you will have an idea of the content and advertising that is most interesting to your visitors. Yes, Site Overlay will show both the content areas of the page the visitors clicked on and the advertisement areas. It will also help you see which links are tied to goals, and whether they are enticing your visitors to click.

Optimizing Your Blog for Search Engines

We are going to take our earlier checklists and use them as guides on where to make changes to our blog. When the changes are complete, the blog will be more attractive to search engines and visitors. We will start with changes we can make "on-site", and then progress to ways we can improve search engine results with "off-site" improvements.

Optimizing On-site

The most crucial improvements we identified earlier were around the blog settings, template, and content. We will start with the easiest fixes, then dive into the template to correct validation issues. Let's begin with the settings in our Blogger blog.

Seeding the Blog Title and Description with Keywords

When you created your blog, did you take a moment to think about what words potential visitors were likely to type in when searching for your blog? Using keywords in the title and description of your blog gives potential visitors a preview and explanation of the topics they can expect to encounter in your blog. This information is also what will display in search results when potential visitors perform a search.

Updating the Blog Title and Description

It's never too late to seed your blog title and description with keywords. We will edit the blog title and description to optimize them for search engines.

1. Log in to your blog and navigate to Settings | Basic.
2. We are going to replace the current title text with a phrase that more closely fits the blog. Type Organic Fruit for All into the Title field.
3. Now, we are going to change the description of the blog. Type Organic Fruit Recipes, seasonal tips, and guides to healthy living into the description field. You can enter up to 500 characters of descriptive text.
4. Scroll down to the bottom of the screen and click the Save Settings button.

What Just Happened?

When we changed the title and description of our blog in the Basic Settings section, Blogger saved the changes and updated the template information as well. Now, when search engines crawl our blog, they will see richer descriptions of our blog in the blog title and blog description. The next optimization task is to verify that search engines can index our blog.
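To see why this matters to a crawler, here is a rough sketch of how the keyword-seeded title and description from the steps above might surface in a rendered page's head. The exact markup depends on your Blogger template, so treat this as a generic illustration rather than Blogger's actual output:

    <head>
      <!-- the seeded title is what search results display as the page heading -->
      <title>Organic Fruit for All</title>
      <!-- the description can feed the snippet shown under the result -->
      <meta name="description"
            content="Organic Fruit Recipes, seasonal tips, and guides to healthy living" />
    </head>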


Obtaining Alfresco Web Content Management (WCM)

Packt
22 Oct 2009
11 min read
You must obtain and install an additional download to enable Alfresco WCM functionality. The download includes a new Spring bean configuration file, plus a standalone Tomcat instance pre-configured with JARs and server settings that allow a separate Tomcat instance (called the virtualization server) to run web applications stored in Alfresco WCM web folders. This capability is used when content managers "preview" an asset or a website. Just as with the core Alfresco server, you can either build the WCM distribution from source or obtain a binary distribution.

Step-by-Step: Installing Alfresco WCM

If you are building from source, the source code for Alfresco WCM is included with the source code for the rest of the product. Once the source code is checked out, all you have to do is run the distribute Ant task as follows:

    ant -f continuous.xml distribute

After several minutes, the WCM distribution will be placed in the build/dist directory of your source code's root directory. Alternatively, if you are using binaries, download the binary distribution of the Alfresco WCM extension. Where you get it depends on whether you are running Labs or Enterprise. The Labs version is available for download from http://www.alfresco.com. The Enterprise version can be downloaded from the customer or partner site using the credentials provided by your Alfresco representative.

Regardless of whether you chose source or binary, you should now have an Alfresco WCM archive. For example, the Labs edition for Linux is named alfresco-labs-wcm-3b.tar.gz. To complete the installation, follow these steps:

1. Expand the archive into any directory that makes sense to you. For example, on my machine I use /usr/local/bin/alfresco-labs-3.0-wcm.
2. Copy the wcm-bootstrap-context.xml file to the Alfresco server's extension directory ($TOMCAT_HOME/shared/classes/alfresco/extension).
3. Edit the startup script (virtual_alf.sh) to ensure that the APPSERVER variable is pointing to the virtual-tomcat directory in the location to which you expanded the archive. Using the example from the previous step, the APPSERVER variable would be:

    APPSERVER=/usr/local/bin/alfresco-labs-3.0-wcm/virtual-tomcat

4. Start the virtual server by running:

    ./virtual_alf.sh start

5. Start the Alfresco server (or restart it if it was already running).

You now have Alfresco with Alfresco WCM up and running. You'll test it out in the next section, but you can do a smoke test by logging in to the web client and confirming that you see the Web Projects folder under Company Home.

Creating Web Projects

A web project is a collection of assets, settings, and deployment targets that make up a website or a part of a website. Web projects are stored in web project folders, which are regular folders with a bunch of web project metadata. The number of web project folders you use to represent a site, or whether multiple sites are contained within a single web project folder, is completely up to you. There is no "right way" that works for everybody.

Permissions are one factor. The ability to set permissions stops at the website. Therefore, if you have multiple groups maintaining a site who are concerned about the ability of one to change the other's files, your only remedy is to split the site across web project folders. Web form and workflow sharing is another thing to think about. As you'll soon learn, workflows and web forms are defined globally, and then selectively chosen and configured by each site. Once made available to a web project, they are available to the entire web project. For example, you can't restrict the use of a web form to only a subset of the users of a particular site. SomeCo has chosen the approach of using one web project folder to manage the entire SomeCo.com website.

Step-by-Step: Creating the SomeCo Web Project

The first thing you need to do is create a new web project folder for the SomeCo website. Initially, you don't need to worry about web forms, deployment targets, or workflows. The goal is simply to create the web project and import the contents of the website. To create the initial SomeCo web project, follow these steps:

1. Log in as admin.
2. Go to Web Projects under Company Home.
3. Click Create, and then Create Web Project.
4. Specify the name of the web project as SomeCo Corporate Site.
5. Specify the DNS name as someco-site.
6. Click Next for the remaining steps, taking all defaults. You'll come back later and configure some of these settings.
7. On the summary page, click Finish. You now have a web project folder for the SomeCo corporate site.
8. Click SomeCo Corporate Site. You should see one Staging Sandbox and one User Sandbox.
9. Click the Browse Website button for the User Sandbox. Now you can import SomeCo's existing website into the web project folder.
10. Click Create, and then Bulk Import.
11. Navigate to the "web-site" project in your Eclipse workspace. Assuming you've already run Ant for this project, there should be a ZIP file in the build folder called someco-web-site.zip. Select the file. Alfresco will import the ZIP into your User Sandbox.

What Just Happened

You just created a new web project folder for SomeCo's corporate website. But upon creation of a web project folder, there is no website to manage. This is a big disappointment for some people. The most crestfallen are those who didn't realize that Alfresco is a "decoupled" content management system: it has no frontend framework and no "default" website like "coupled" content management systems such as Drupal. This will change in the 3.0 releases as Alfresco introduces its new set of clients. But for now, it's up to you to give Alfresco a website to manage.

You just happened to have a start on the SomeCo website sitting in your Eclipse workspace. Alfresco knows how to import WAR and ZIP files, which is a convenient way to migrate a website into Alfresco for the first time. Because web project sandboxes are mountable via CIFS, simply copying the website into the sandbox via CIFS is another way to go. The difference between the two approaches is that the WAR/ZIP import can only happen once. The import action complains if an archive contains nodes that already exist in the repository.

If you haven't already done so, take a look at the contents of your sandbox. You should see index.html in the root of your User Sandbox, and a someco folder that contains additional folders for CSS, images, JavaScript, and so on. The HTML file in the root is the same index.html file you deployed to the Alfresco web application in order to implement the AJAX ratings widget. Click the preview icon. (Am I the only one who thinks it looks eerily similar to the Turkish nazar talisman used to ward off the "evil eye"?) You should see the index page in a new tab or window. The list of Whitepapers won't be displayed. That's because the page is running in the context of the virtualization server, which is a different domain than your Alfresco server. Therefore, it is subject to the cross-domain restriction, which will be addressed later.
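A minimal sketch of why that restriction bites here; the hostnames and service URL are hypothetical, and the point is only that the widget's request now targets a different origin than the page it runs on:

    // Page served by the virtualization server, e.g. http://preview.alfresco.demo:8180/...
    var xhr = new XMLHttpRequest();
    // The widget asks the Alfresco server, a different host/port (i.e. a different origin):
    xhr.open("GET", "http://alfresco.demo:8080/alfresco/service/someco/whitepapers", true);
    xhr.send(null);
    // Under the browser's same-origin policy, this cross-domain response is blocked,
    // which is why the Whitepapers list stays empty in preview.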
Playing Nicely in the Sandbox

Go back to the root of your web project folder. The link in the breadcrumb trail is likely to be the fastest way to navigate back. Click the Browse Website link in the Staging Sandbox. It's empty. If you were to invite another user to this website, his/her sandbox would be empty as well.

Sandboxes are used to isolate the changes each content owner makes, while still providing him/her the full context of the website. The Staging Sandbox represents your live website. Or, in source code control terms, it is the HEAD of your site. It is assumed that whatever is in the Staging Sandbox can be safely deployed to the live website at any time. It is currently empty because you have not yet submitted any content to staging. Let's go ahead and do that now.

If you click the Modified Items link in the User Sandbox, you'll see the index.html file and the someco folder. You could submit these individually. But you want everything to go to staging, so click Submit All. Provide a label and a description, such as "initial population", and click OK. It is safe to ignore the warning that a suitable workflow was not found. That's expected, because you haven't configured a workflow for this web project yet. Now the files have been submitted to staging. Here are some things to notice:

- If you click the Preview Website link in the Staging Sandbox, you'll see the website just as you did in the User Sandbox earlier.
- If you browse the website in the Staging Sandbox, you'll see the same files currently shown when you browse the website in your User Sandbox.
- A snapshot of the site was automatically taken when the files were committed, and is listed under Recent Snapshots.

Inviting Users

To get a feel for how sandboxes work, invite one or more users to the web project (Actions, Invite Web Project Users). The following table describes the out-of-the-box web project roles:

    WCM User Role          Can do these things
    --------------------   ----------------------------------------------------------
    Content Contributor    Create and submit new content, but cannot edit or delete
                           existing content
    Content Reviewer       Create, edit, and submit new content, but cannot delete
                           existing content
    Content Collaborator   See all sandboxes, but only have full control over their
                           own; create, edit, and submit new content, but cannot
                           delete existing content; edit web project settings
    Content Manager        See and modify content in all sandboxes; exert full
                           control over all content; see and deploy snapshots and
                           manage deployment reports; edit web project settings;
                           invite new users to the web project; delete the web
                           project and individual sandboxes

You'll notice that each new user gets his/her own sandbox, and that the sandbox automatically contains everything that is currently in staging. If a user makes a change in his/her sandbox, it is only visible within that sandbox until they commit the change to staging. When that happens, everyone else sees the change immediately. Unlike some content management and source code control systems, there is no need for other users to do an "update" or a "get latest" to copy the latest changes from staging into their sandbox.

It is important to note that Alfresco will not merge conflicts. When a user makes a change to a file in his/her sandbox, it is locked in all other sandboxes to prevent conflicts. If you were to customize Alfresco to disable locking, the last change would win; Alfresco would not warn you of the conflict. The Alfresco admin user, and any user with Content Manager access, can see (and work within) all User Sandboxes. Everyone else sees only their own sandbox.

Mounting Sandboxes via CIFS

All sandboxes are individually mountable via CIFS. In fact, in staging, each snapshot is individually mountable. This gives content owners the flexibility to continue managing content in their sandbox using the tools they are familiar with. The procedure for mounting a sandbox is identical to that for mounting the regular repository via CIFS, except that you use "AVM" as the mount point instead of "Alfresco". One difference between mounting the AVM repository through CIFS and mounting the DM repository is that the AVM repository directory structure is more complicated. For example, the path to the root of admin's sandbox in the SomeCo site is:

    \someco-site--admin\HEAD\DATA\www\avm_webapps\ROOT

The first part of the path, someco-site, is the DNS name you assigned when you set up the web project. The --admin string indicates which User Sandbox we are looking at. If you wanted to mount the Staging Sandbox, the first part of the path would be someco-site without --admin. The next part of the path, HEAD, specifies the latest-and-greatest version of the website. Alternatively, you could mount a specific snapshot like this:

    \someco-site--admin\VERSION\v2\DATA\www\avm_webapps\ROOT

As you might expect, the normal permissions apply. Users who aren't able to see another user's sandbox in the web client won't be able to do so through CIFS.

Linux4afrika: An Interview with the Founder

Packt
22 Oct 2009
5 min read
  "Linux4afrika has the objective of bridging the digital divide between developed and disadvantaged countries, especially in Africa, by supporting  access to information technology. This is done through the collection of used computers in Germany, the terminal server project and Ubuntu software which is open source, and by providing support to the involved schools and institutions." In this interview with the founder Hans-Peter Merkel, Packt's Kushal Sharma explores the idea, support, and the future of this movement. Kushal Sharma: Who are the chief promoters of this movement? Hans-Peter Merkel: FreiOSS (established in 2004) with currently about 300 members started the Linux4afrika project in 2006. The input was provided by some African trainees doing their internship at St. Ursula High school in Freiburg where we currently have 2 Terminal servers running. The asked FreiOSS to run similar projects in Africa. KS: What initiated this vision to bridge the IT gap between the developed and the under developed nations? During 2002 to 2005 we conducted IT trainings on Open Source products during 3 InWEnt trainings “Information Technology in African business” (see http://www.it-ab.org) with 60 African trainees (20 each year). This made FreiOSS to move OSS out of the local area and include other countries, especially those countries we had participants from. KS: Can you briefly recount the history of this movement, from the time it started to its popularity today? HPM: As mentioned before, the Linux4afrika project has its roots with FreiOSS and St. Ursula High school. There itself the idea was born. I conduct Open Source trainings and security trainings in several African countries (see http://www.hpmerkel.com/events). During a training in Dar es Salaam I demonstrated the Terminal Server solution to participants in a security training. One of the participants informed a Minister of Tanzanian Parliament who immediately came to get more information on this idea. He asked whether Linux4afrika could collect about 100 Computers and ship them to Tanzania. Tanzania would cover the shipping costs. After retuning to Germany I informed FreiOSS regarding this, and the collection activity started. We found out more information about the container costs and found that a container would fit about 200 computers for the same price. Therefore we decided to change the number from 100 to 200. One Terminalserver (AMD 64 Dual Core with 2 GB Memory) can run about 20 Thin Clients. This would serve about 10 schools in Tanzania. The Ubuntu Community Germany heard about our project and invited us to Linuxtag in Berlin (2007). This was a breakthrough for us; many organizations donated hardware. 3SAT TV also added greatly to our popularity by sending a 5 minute broadcast about our project (see http://www.linux4afrika.de). In June we met Markus Feilner from German Linux Magazin who contacted us and also published serveral online reports. In September Linux4afrika was invited to the German Parliament to join a meeting about education strategies for the under developed countries. In October Linux4afrika will start collection for a second container which will be shipped end of the year. In early 2008 about 5 members of FreiOSS will fly to Dar es Salaam on their own costs to conduct a one week training where teachers will be trained. This will be an addon to the service from Agumba Computers Ltd. (see http://www.agumba.biz). Agumba offers support in Tanzania to keep the network running. 
During the InWEnt trainings from 2002-2005, three employees from Agumba were in that training. Currently, 2 other people from Agumba are here for a three month internship to become familiar with our solution and make the project sustainable. KS: Who are the major contributors? HPM: Currently FreiOSS in Germany and Agumba Computers in Tanzania are the major contributors. KS: Do you get any internal support in Tanzania and Mozambique? Do their Governments support open source? HPM: Yes, we do. In Tanzania, it's Augumba Computers and in Mozambique we have some support from CENFOSS. All trainings conducted by me on Security and Forensics had a 70 percent part on Open Source in Tanzania. Currently, the Governmental agencies are implementing those technologies mainly on servers. KS: Do you have any individuals working full-time for this project? If so, how do the full-time individuals support themselves financially? HPM: All supporters are helping us without any financial support. They all come after work to our meetings which take place about once a month. After some starting problems the group is now able to configure and test about 50 Thin clients per evening meetings. KS: Tell us something more about the training program: what topics do you cover, how many participants do you have so far, etc.? HPM: Tanzania shows a big interest in Security trainings. Agumba computers offers those trainings for about 4-6 weeks a year. Participants come from Tanzania Revenue Authority, Police, Presidents office, banks, water/electricity companies and others. Currently Tanzania Revenue Authority has sent 5 participants to conduct a 3 month Forensic training in Germany. In Tanzania about 120 participants joined the trainings so far. Sessions for next year will start in January 2007. KS: Packt supported the project by sending some copies of our OpenVPN book. How will these be used and what do you hope to gain from them? HPM: Markus Feilner (the author of the OpenVPN book) is currently in Tanzania. He will conduct a one and a half day training on OpenVPN in Dar es Salaam. The participants in Germany who received the books will receive practical training on IPCop and OpenVPN for Microsoft and Linux clients. This will help them establish secure Wireless in their country. KS: What does the future hold for Linux4afrika? Our current plans include the second container, the visit to Dar early 2008, and Linuxtag 2008. Further actions will be discussed therafter. We already have a few requests to expand the Terminalserver Solution to other under developed countries. Also, currently we have a request to support Martinique after the hurricane has destroyed huge parts of the island. KS:Thanks very much Hans-Peter for taking out time for us, and all the very best for your plans.


Writing Tips for Bloggers

Packt
22 Oct 2009
9 min read
The first thing to look at regarding content is the quality of your writing itself. Good writing takes practice. The best way to learn is to study the work of other good writers and bloggers, and by doing so, develop an ear for a good sentence. However, there are guidelines to bear in mind that apply specifically to blogs, and we'll look at some of these here. Killer Headlines Ask any newspaper sub-editor and he or she will tell you that writing good headlines is an art to be mastered. This is equally true for blogs. Your headlines are the post titles and it's very important to get them right. Your headlines should be concise and to the point. You should try to convey the essence of the post in its title. Remember that blogs are often consumed quickly, and readers will use your post titles to decide if they want to carry on reading. People tend to scan through blogs, so the titles play a big part in helping them pick which posts they might be interested in. Your post titles also have a part to play in search engine optimization. Many search engines will use them to index your posts. As more and more people are using RSS feeds to subscribe to blogs it becomes even more important to make your post titles as descriptive and informative as possible. Many RSS readers and aggregators only display the post title, so it's essential that you convey as much information as possible whilst keeping it short and snappy. For example, The World's Best Salsa Recipe is a better post title than, A new recipe. Length of Posts Try to keep your posts manageable in terms of their word count. It's difficult to be prescriptive about post lengths. There's no one size fits all rule in blogging. You need to gauge the length of your posts based on your subject matter and target audience. There may be an element of experimentation to see how posts of different lengths are received by your readership. As with headlines, bear in mind that most people tend to read blogs fairly quickly and they may be put off by an overly long post. WordPress 2.6 includes a useful word count feature: An important factor in controlling the length of your posts is your writing skills. You will find that as you improve as a writer, you will be able to get your points across using fewer words. Good writing is all about making your point as quickly and concisely as possible. Inexperienced writers often feel the urge to embellish their sentences and use long, complicated phrases. This is usually unnecessary and when you read back that long sentence, you might see a few words that can be cut. Editing your posts is an important process. At the very least you should always proofread them before clicking the Publish button. Better still; try to get into the habit of actively editing everything you write. If you know someone who is willing to act as an editor for you, that's great. It's always useful to get some feedback on your writing. If, after re-reading and editing your post, it still seems very long, it might be a good idea to split the post in two and publish the second installment a few days later. Post Frequency Again, there are no rules set in stone about how frequently you should post. You will probably know from your own experience of other blogs that this varies tremendously from blogger to blogger. Some bloggers post several times a day and others just once a week or less. Figuring out the correct frequency of your posts is likely to take some trial and error. It will depend on your subject matter and how much you have to say about it. 
The length of your posts may also have a bearing on this. If you like to write short posts that make just one main point, you may find yourself posting quite regularly. Or you may prefer to save up your thoughts and get them down in one longer post. As a general rule of thumb, try to post at least once per week. Any less than this and there is a danger your readers will lose interest in your blog. However, it's extremely important not to post just for the sake of it. This is likely to annoy readers, and they may very well delete your feed from their news reader. As with many issues in blogging, post frequency is a personal thing. You should aim to strike a balance between posting once in a blue moon and subjecting your readers to 'verbal diarrhea'.

Almost as important as getting the post frequency right is fine-tuning the timing of your posts, that is, the time you publish them. Once again, you can achieve this by knowing your target audience. Who are they, and when are they most likely to sit down in front of their computers and read your blog? If most of your readers are office workers, then it makes sense to have your new posts ready for them when they switch on their workstations in the morning. Maybe your blog is aimed at stay-at-home moms, in which case a good time to post might be mid-morning when the kids have been dropped off at school, the supermarket run is over, and the first round of chores is done. If you blog about gigs, bars, and nightclubs in your local area, the readers may well include twenty-something professionals who access your blog on their iPhones whilst riding the subway home, so a good time to post for them might be late afternoon.

Links to Other Blogs

Links to other bloggers and websites are an important part of your content. Not only are they great for your blog's search engine findability, but they also help to establish your place in the blogosphere. Blogging is all about linking to others and the resulting 'conversations'. Try to avoid over-using popular links that appear all over the Web, and instead introduce your readers to new websites and blogs that they may not have heard of. Admittedly, this is difficult nowadays with so many bloggers linking to each other's posts, but the more original you can be, the better. This may take quite a bit of research and trawling through the lower-ranked pages on search engines and indices, but it could be time well spent if your readers come to appreciate you as a source of new content beyond your own blog. Try to focus on finding blogs in your niche or key topic areas.

Establishing Your Tone and Voice

Tone and voice are two concepts that professional writers are constantly aware of and attempting to improve. An in-depth discussion isn't necessary here, but it's worth being aware of them. The concept of 'tone' can seem rather esoteric to the non-professional writer, but as you write more and more, it's something you will become increasingly aware of. For our purposes, we could say the 'tone' of a blog post is all about the way it feels or the way the blogger has pitched it. Some posts may seem very informal; others may be strait-laced, or appear overly complex and technical. Some may seem quite simplistic, while others come across as advanced material. These are all matters of tone. It can be quite subtle, but as far as most bloggers are concerned, it's usually a matter of formal versus informal. How you pitch your writing boils down to understanding your target audience.
Will they appreciate informal, first-person prose, or should you keep it strictly third person, with no slang or casual language? On blogs, a conversational tone is often the most appropriate. As for 'voice', this is what makes your writing distinctly yours. Writers who develop a distinct voice become instantly recognizable to readers who know them. It takes a lot of practice to develop and is not something you can consciously aim for; it just happens as you gain more experience. The only thing you can do to help it along is step back from your writing and ask yourself if any of your habits stand in the way of clarity. While you read back your blog posts, imagine yourself as one of your target readers and consider whether they would appreciate the language and style you've used. Employing tone and voice well is all about getting inside their heads and producing content they can relate to. Developing a distinctive voice can also be an important aspect of your company's brand identity. Your marketing department may already have brand guidelines that allude to the tone and voice to be used in written communications. Or you may wish to develop such guidelines yourself as a way of focusing your use of tone and voice.

The Structure of a Post

This may not apply to very short posts that don't go further than a couple of brief paragraphs, but for anything longer, it's worth thinking about a structure. The classic form is 'beginning, middle, and end'. Consider what your main point or argument is, and get it down in the first paragraph. In the middle section expand on it and back it up with secondary arguments. At the end reinforce it, and leave no doubt in the reader's mind what it is you've been trying to say. As we've already mentioned, blogs are often read quickly or even just scanned through. Using this kind of structure, which most people are subconsciously aware of, can help them extract your main points quickly and easily.

A Note about Keywords

It's worth noting here that your writing has a big impact on search engine findability. This is what adds an extra dimension to writing for the Web. As well as all the usual considerations of style, tone, content, and so on, you also need to optimize your content for the search engines. This largely comes down to identifying your keywords and ensuring they're used with the right frequency. In the meantime, hold this thought.

Google Earth, Google Maps and Your Photos: a Tutorial

Introduction

A scholar who never travels but stays at home is not worthy to be accounted a scholar. From my youth on I had the ambition to travel, but could not afford to wander over the three hundred counties of Korea, much less the whole world. So, carrying out an ancient practise, I drew a geographical atlas. And while gazing at it for long stretches at a time I feel as though I was carrying out my ambition . . . Morning and evening while bending over my small study table, I meditate on it and play with it and there in one vast panorama are the districts, the prefectures and the four seas, and endless stretches of thousands of miles.

WON HAK-SAENG, Korean Student
Preface to his untitled manuscript atlas of China during the Ming Dynasty, dated 1721.

To say that maps are important is an understatement almost as big as the world itself. Maps quite literally connect us to the world in which we all live, and by extension, they link us to one another. The oldest preserved maps date back nearly 4500 years. In addition to connecting us to our past, they chart much of human progress and expose the relationships among people through time. Unfortunately, as a work of humankind, maps share many of the same shortcomings of all human endeavors. They are to some degree inaccurate, and they reflect the bias of the map maker. Advancements in technology help to address the former issue, and offer us the opportunity to resist the latter. To the extent that it's possible for all of us to participate in the map making, the bias of a select few becomes less meaningful.

Google Earth and Google Maps are two applications that allow each of us to assert our own place in the world and contribute our own unique perspective. I can think of no better way to accomplish this than by combining maps and photography. Photos reveal much about who we are, the places we have been, the people with whom we have shared those experiences, and the significant events in our lives. Pinning our photos to a map allows us to see them in their proper geographic context, a valuable way to explore and share them with friends and family. Photos can reveal the true character of a place, and afford others the opportunity to experience these destinations, perhaps faraway and unreachable for some of us, from the perspective of someone who has actually been there.

In this tutorial I'll show you how to 'pin' your photos using Google Earth and Google Maps. Both applications are free, and available for Windows, Mac OS X, and Linux. In the case of Google Earth there are requirements associated with installing and running what is a local application. Google Maps has its own requirements, primary among them a compatible web browser (the highly regarded Firefox is recommended). In Google Earth, your photos show up on the map within the application, complete with titles, descriptions, and other relevant information. You can choose to share your photos with everyone, only people you know, or even reserve them strictly for yourself. Google Maps offers the flexibility to present maps outside of a traditional application. For example, you can embed a map on a webpage pinpointing the location of one particular photo, map a collection of photos to present along with a photo gallery, or even collect all of your digital photos together on one dynamic map. Over the course of a short couple of articles we'll cover everything you need to take advantage of both applications. I've put together two scripts to help us accomplish that goal.
The first is a Perl script that works through your photos and generates a file in the proper format with all of the data necessary to include those photos in Google Earth. The second is a short bit of Javascript that works with the first file and builds a dynamic Google Map of those same photos. Both scripts are available for you to download, after which you are free to use them as is, or modify them to suit your own projects. I've purposefully kept the code as simple as possible to make it accessible to the widest audience, even those of you reading this who may be new to programming or unfamiliar with Perl, Javascript, or both. I've taken care to comment the code generously so that everything is plainly obvious. I'm hopeful that you will be surprised at just how simple it is.

There are a couple of preliminary topics to examine briefly before we go any further. In the preceding paragraph I mentioned that the result of the first of our two scripts is a 'file in the proper format...'. This file, or more to the point the file format, is a very important part of the project. KML (Keyhole Markup Language) is a fairly simple XML-based format that can be considered the native "language" of Google Earth. That description begs the question, 'What is XML?'. To oversimplify, because even a cursory discussion of XML is outside of the scope of this article, XML (Extensible Markup Language) is an open data format (in contrast to proprietary formats) which allows us to present information in such a way that we communicate not only the data itself, but also descriptive information about the data and the relationships among elements of the data. One of the technical terms that applies is 'metalanguage', which approximately translates to mean a language that makes it possible to describe other languages. If you're unfamiliar with the concept, it may be difficult to grasp at first or it may not seem impressive. However, metalanguages, and specifically XML, are an important innovation (I don't mean to suggest that it's a new concept; in fact XML has roots that are quite old, relative to the brief history of computing). These metalanguages make it possible for us to imbue data with meaning such that software can make use of that data. Let's look at an example taken from the Wikipedia entry for KML:

    <?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2">
      <Placemark>
        <description>New York City</description>
        <name>New York City</name>
        <Point>
          <coordinates>-74.006393,40.714172,0</coordinates>
        </Point>
      </Placemark>
    </kml>

Ignore all of the pro forma stuff before and after the <Placemark> tags and you might be able to guess what this bit of data represents. More importantly, a computer can be made to understand what it represents, in some sense. Without the tags and structure, "New York City" is just a string, i.e. a sequence of characters. Considering the tags, we can see that we're dealing with a place (a Placemark), and that "New York City" is the name of this place (and in this example also its description). With all of this formal structure, programs can be made to roughly understand the concept of a Placemark, which is a useful thing for a mapping application. Let's think about this for a moment. There are a very large number of recognizable places on the planet, and a provably infinite number of strings. Given a block of text, how could a computer be made to distinguish a place name from, for example, the scientific name of a particular flower, or a person's name? It would be extremely difficult. We could try to create a database of every recognizable place and have the program check the database every time it encounters a string. That assumes it's possible to agree on a 'complete list of every place', which is almost certainly untrue. Keep in mind that we could be talking about informal places that are significant only to a small number of people or even a single person, e.g. 'the park where, as a child, I first learned to fly a kite'. It would be a very long list if we were going to include these sorts of places, and incredibly time-consuming to search. Relying on the structure of the fragment above, however, we can easily write a program that can identify 'New York City' as the name of a place, or for that matter 'The park where I first learned to fly a kite'. In fact, I could write a program that pulls all of the place names from a file like this, along with a description and a pair of coordinate points for each, and includes them on a map. That's exactly what we're going to do. KML makes it possible.
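A minimal sketch of that idea, separate from the tutorial's own scripts, is shown below. It assumes the XML::LibXML module is available (the tutorial itself only requires ExifTool); the local-name() tests in the XPath expressions sidestep namespace handling:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use XML::LibXML;

    # Parse the KML file named on the command line.
    my $doc = XML::LibXML->new->parse_file($ARGV[0]);

    # Visit every Placemark, whatever namespace the file declares.
    for my $pm ($doc->findnodes('//*[local-name()="Placemark"]')) {
        my ($name)   = $pm->findnodes('./*[local-name()="name"]');
        my ($coords) = $pm->findnodes('.//*[local-name()="coordinates"]');
        printf "%s => %s\n",
            $name   ? $name->textContent   : '(unnamed)',
            $coords ? $coords->textContent : '(no coordinates)';
    }

Run against the fragment above, this would print 'New York City => -74.006393,40.714172,0'.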
If I haven't made it clear, the structural bits of the file must be standardized. KML supports a limited set of elements (e.g. 'Placemark' is a supported element, as are 'Point' and 'coordinates'), and all of the elements used in a file must adhere to the standard for it to be considered valid.

The second point we need to address before we begin is, appropriately enough... where to begin? Lewis Carroll famously tells us to "Begin at the beginning and go on till you come to the end: then stop." Of course, Mr. Carroll was writing a book at the time. If "Alice's Adventures in Wonderland" were an article, he might have had different advice. From the beginning to the end there is a lot of ground to cover. We're going to start somewhere further along, and make up the difference with the following assumptions. For the purposes of this discussion, I am going to assume that you have:

- Perl
- Access to Phil Harvey's excellent ExifTool, a Perl library and command-line application for reading, writing, and editing metadata in images (among other file types). We will be using this library in our first script.
- A publicly accessible web server. Google requires the use of an API key by which it can monitor the use of its map services. Google must be able to validate your key, and so your site must be publicly available. Note that this is a requirement for Google Maps only.
- Photos, preferably in a photo management application. Essentially, all you need is an app capable of generating both thumbnails and reduced-size copies of your original photos. An app that can export a nice gallery for use on the web is even better.
- Coordinate data as part of the EXIF metadata embedded in your photos. If that sounds unfamiliar to you, then most likely you will have to take some additional steps before you can make use of this tutorial. I'm not aware of any digital cameras that automatically include this information at the time the photo is created. There are devices that can be used in combination with digital cameras, and there are a number of ways that you can 'tag' your photos with geographic data much the same way you would add keywords and other annotations.

Let's begin!

Part 1: Google Earth

Section 1: Introduction to Part 1

Time to begin the project in earnest.
As I've already mentioned, we'll spend the first half of this tutorial looking at Google Earth and putting together a Perl script that, given a collection of geotagged photos, will build a set of KML files so that we can browse our photos in Google Earth. These same files will serve as the data source for our Google Maps application later on.

Section 2: Some Advice Before We Begin

Take your time to make sure you understand each topic before you move on to the next. Think of this as the first step in debugging your completed code. If you go slowly enough to identify the aspects of the project that you don't quite understand, you'll have some idea where to start looking for problems should things not go as expected. Furthermore, going slowly will give you the opportunity to identify those parts that you may want to modify to better fit the script to your own needs. If this is new to you, follow along as faithfully as possible with what I do here the first time through. Feel free to make notes for yourself as you go, but making changes on the first pass may make it difficult for you to catch back on to the narrative and piece together a functional script. After you have a working solution, it will be a simple matter to implement changes one at a time until you have something that works for you. Following this approach, it will be easy to identify the silly mistakes that tend to creep in once you start making changes.

There is also the issue of trust. This is probably the first time we're meeting each other, in which case you should have some doubt that my code works properly to begin with. If you minimize the number of changes you make, you can confirm that this works for you before blaming yourself or your changes for my mistakes. I will tell you up front that I'm building this project myself as we go. You can be certain at least that it functions as described for me as of the date attached to the article. I realize that this is quite different from being certain that the project will work for you, but at least it's something. The entirety of my project is available for you to download. You are free to use all of it for any legal purpose whatsoever, including my photos in addition to all of the code, icons, etc. This is so you have some samples to use before you involve your own content. I don't promise that they are the most beautiful images you have ever seen, but they are all decent photos, properly annotated with the necessary metadata, including geographic tags.

Section 3: Photos, Metadata and ExifTool

To begin, we must have a clear understanding of what the Perl script will require from us. Essentially, we need to provide it with a selection of annotated image files, and information about how to reference those files. A simple folder of files is sufficient, and will be convenient for us, both as programmer and end user. The script will be capable of negotiating nested folders, and if a directory contains both images and other file types, the non-image types will be ignored. Typically, after a day of taking photos I'll have 100 to 200 that I want to keep. I delete the rest immediately after offloading them from the camera. For the files that are left, I preserve the original grouping, keeping all of the files together in a single folder. I place this folder of images in an appropriate location according to a scheme that serves to keep my complete collection of photos neatly organized. These are my digital 'negatives'.
I handle all subsequent organization, editing, enhancements, and annotations within my photo management application. I use Apple Inc.'s Aperture, but there are many others that do the job equally well. Annotating your photos is well worth the investment of time and effort, but it's important that you have some strategy in mind so that you don't create meaningless tags that are difficult to use to your advantage. For the purposes of this project the tags we'll need are quite limited, which means that going forward we will be able to continue adding photos to our maps with a reasonable amount of work. The annotations we need are:

- Caption
- Latitude
- Longitude
- Image Date *
- Location/Sub-Location
- City
- Province/State
- Country Name
- Event
- People
- ImageWidth *
- ImageHeight *

* Values for these Exif tags are generated by your camera.

Note that these are labels used in Aperture, and are not necessarily consistent from one application to the next. Some of them are more likely than others to be used reliably. 'City', for example, should be dependable, while the labels 'People', 'Events', and 'Location', among others, are more likely to differ. One explanation for these variations is that the meaning of these fields is more open to interpretation. Location, for example, is likely to be used to narrow down the area where the photo was taken within a particular city, but it is left to the person who is annotating the photo to decide whether the field should name a specific street address, an informal place (e.g. 'home' or 'school'), or a larger area, for example a district or neighborhood. Fortunately, things aren't as arbitrary as they seem. Each of these fields corresponds to a specific tag name that adheres to one of the common metadata formats (Exif, IPTC, XMP, and there are others). These tag names are consistent, as required by the standards. The trick is in determining the labels used in your application that correspond to the well-defined tag names. Our script relies on these metadata tags, so it is important that you know which fields to use in your application. This gives us an excuse to get acquainted with ExifTool. From the project's website, we have this description of the application:

ExifTool is a platform-independent Perl library plus a command-line application for reading, writing, and editing meta information in image, audio, and video files...

ExifTool can seem a little intimidating at first. Just keep in mind that we will need to understand just a small part of it for this project, and then be happy that such a useful and flexible tool is freely available for you to use. The brief description above states in part that ExifTool is a Perl library and command-line application that we can use to extract metadata from image files. With a single short command, we can have the app print all of the metadata contained in one of our image files. First, make sure ExifTool is installed. You can test for this by typing the name of the application at the command line:

$ exiftool

If it is installed, then running it with no options should prompt the tool to print its documentation. If this works, there will be more than one screen of text. You can page through it by pressing the spacebar. Press the 'q' key at any time to stop. If the tool is not installed, you will need to add it before continuing. See the appendix at the end of this tutorial for more information.
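If you'd rather not page through the documentation just to confirm the installation, ExifTool's -ver option prints only the version number and exits:

$ exiftool -ver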
Having confirmed that ExifTool is installed, typing the following command will result in a listing of the metadata for the named image:

$ exiftool -f -s -g2 /path/image.jpg

where 'path' is an absolute path to image.jpg or a relative path from the current directory, and 'image.jpg' is the name of one of your tagged image files. We'll have more to say about ExifTool later, but because I believe that no tutorial should ask the reader to blindly enter commands as if they were magic incantations, I'll briefly describe each of the options used in the command above:

-f forces printing of tags even if their values are not found. This gives us a better idea about all of the available tag names, whether or not there are currently defined values for those tags.

-s prints tag names instead of descriptions. This is important for our purposes. We need to know the tag names so that we can request them in our Perl code. Descriptions, which are expanded, more human-readable forms of the same names, obscure details we need. For example, compare the tag name 'GPSLatitude' to the description 'GPS Latitude'. We can use the tag name, but not the description, to extract the latitude value from our files.

-g2 organizes output by category. All location-specific information is grouped together, as is all information related to the camera, date and time tags, etc. You may feel, as I do, that this grouping makes it easier to examine the output. Also, this organization is more likely to reflect the grouping of field names used by your photo management application.

If you prefer to save the output to a file, you can add ExifTool's -w option with a file extension:

$ exiftool -f -s -g2 -w txt path/image.jpg

This command will produce the same result but write the output to the file 'image.txt' in the current directory; again, where 'image.jpg' is the name of the image file. The -w option appends the named extension to the image file's basename, creates a new file with that name, and sets the new file as the destination for output. The tag names that correspond to the list of Aperture fields presented above are:

    Metadata tag name               Aperture field label
    Caption-Abstract                Caption
    GPSLatitude                     Latitude
    GPSLongitude                    Longitude
    DateTimeOriginal                Image Date
    Sub-location                    Location/Sub-Location
    City                            City
    Province-State                  Province/State
    Country-PrimaryLocationName     Country Name
    FixtureIdentifier               Event
    Contact                         People
    ImageWidth                      Pixel Width
    ImageHeight                     Pixel Height
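Since our script will use ExifTool as a Perl library rather than as a command-line application, it's worth seeing how little code that takes. A minimal sketch (the filename is a stand-in) of requesting exactly these tags from a single image with Image::ExifTool:

    use strict;
    use warnings;
    use Image::ExifTool;

    my $exifTool = Image::ExifTool->new;

    # Request only the tags our project cares about, by tag name.
    my $info = $exifTool->ImageInfo('thumb-1.jpg',
        qw(Caption-Abstract GPSLatitude GPSLongitude DateTimeOriginal
           Sub-location City Province-State Country-PrimaryLocationName
           FixtureIdentifier Contact ImageWidth ImageHeight));

    # ImageInfo returns a hash reference keyed by tag name.
    for my $tag (sort keys %$info) {
        print "$tag: $info->{$tag}\n";
    }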
Section 4: Making Photos Available to Google Earth

We will use some of the metadata tags from our image files to locate our photos on the map (e.g. GPSLatitude, GPSLongitude), and others to describe the photos. For example, we will include the value of the People tag in the information window that accompanies each marker, to identify friends and family who appear in the associated photo. Because we want to display and link to photos on the map, not just indicate their position, we need to include references to the location of our image files on a publicly accessible web server. You have some choice about how to do this, but for the implementation described here we will (1) display a thumbnail in the info window of each map marker, and (2) include a link to the details page for the image in a gallery created in our photo management app. When a visitor clicks on a map marker they will see a thumbnail photo along with other brief descriptive information. Clicking a link included as part of the description will open the viewer's web browser to a page displaying a large size image and additional details. Furthermore, because the page is part of a gallery, viewers can jump to an index page and step forward and back through the complete collection. This is a complementary way to browse our photos. Taking this one step further, we could add a comments section to the gallery pages, or replace the gallery altogether, instead choosing to display each photo as a weblog post, for example. The structure of the gallery created by my photo app is as follows:

    / (the root of the gallery directory)
        index.html
        large-1.html
        large-2.html
        large-3.html
        ...
        large-n.html
        assets/
            css/
            img/
        catalog/
        pictures/
            picture-1.jpg
            picture-2.jpg
            picture-3.jpg
            ...
            picture-n.jpg
        thumbnails/
            thumb-1.jpg
            thumb-2.jpg
            thumb-3.jpg
            ...
            thumb-n.jpg

The application creates a root directory containing the complete gallery. Assuming we do not want to make any manual changes to the finished site, publishing is as easy as copying the entire directory to a location within the web server's document root.

assets/ is a subfolder containing files related to the theme itself. We don't need to concern ourselves with this sub-directory.

catalog/ contains a single catalog.plist file, which is specific to Aperture and not relevant to this discussion.

pictures/ contains the large size images included on the detail gallery pages.

thumbnails/ contains the thumbnail images corresponding to the large size images in pictures/.

Finally, there are a number of files at the root of the gallery. These include index pages and files named 'large-n.html', where n is a number starting at 1 and increasing sequentially, e.g. large-1.html, large-2.html, large-3.html, ... The index files are the index pages of our gallery. The number of index pages generated will be dictated by the number of image files in the gallery, as well as the gallery's layout and design. index.html is always the first gallery page. The large-n.html files are the details pages of our gallery. Each page features an individual photo with links to the previous and next photos in sequence, and a link back to the index. You can see the gallery I have created for this tutorial here: http://robreed.net/photos/tutorial_gallery/

If you take the time to look through the gallery, maybe you can appreciate the value of viewing these photos on a map. Web-based photo galleries like this one are nice enough, but the photos are more interesting when viewed in some meaningful context. There are a couple of things to notice about this gallery. Firstly, picture-1.jpg, thumb-1.jpg, and large-1.html all refer to the same image, so if we pick one of the three files we can easily predict the names of the other two. This relationship will be useful when it comes to writing our script. There is another important issue I need to call to your attention, because it will not be apparent from looking only at the gallery structure. Aperture has renamed all of my photos in the process of exporting them. The name of the original file from which picture-1.jpg was generated (as well as large-1.html and thumb-1.jpg) is 'IMG_0277.JPG', which is the filename produced by my camera. Because I want to link to these gallery files, not my original photos, which will stay safely tucked away on a local drive, I must run the script against the photos in the gallery.
I cannot run it against the original image files, because the filenames referenced in the output would be unrelated to the files in the gallery. If my photo management app offered the option of preserving the original filenames for the corresponding photos in the gallery, then I could run the script against either the original image files or the gallery photos, because all of the filenames would be consistent, but this is not the case. I don't have a problem as long as I run the script on the exported photos. However, if I'm running the script against the photos in the web gallery, either the pictures or the thumbnail images must contain the same metadata as the original image files. Aperture preserves the metadata in both. Your application may not. A simple, dependable way to confirm that the metadata is present in the gallery files is to run ExifTool first against an original file and then against the same photo in the pictures/ and thumbnails/ directories in the gallery. If ExifTool reports identical metadata, then you will have no trouble using either pictures/ or thumbnails/ as your source directory. If the metadata is not present or not complete in the gallery files, you may need to use the script on your original image files. As has already been explained, this isn't a problem unless the gallery produces filenames that are inconsistent with the original filenames, as Aperture does. In that case you have a problem: you won't be able to run the script on the original image files because of the naming issue, or on the gallery photos because they don't contain metadata. Make sure that you understand this point. If you find yourself in this situation, then your best bet is to generate the files to use with your maps from your original photos in some other way, bypassing your photo management app's web gallery features altogether in favor of a solution that preserves the filenames, the metadata, or both. There is another option, which involves setting up a relationship between the names of the original files and the gallery filenames, but this tutorial does not include details about how to set up this association. Finally, keep in mind that though we've looked at the structure of a gallery generated by Aperture, virtually all photo management apps produce galleries with a similar structure. Regardless of the application used, you should find:

- A group of html files including index and details pages
- A folder of large size image files
- A folder of thumbnails

Once you have identified the structure used by your application, as we have done here, it will be a simple task to translate these instructions.

Section 5: Referencing Files Over the Internet

Now we can talk about how to reference these files and gallery pages, so that we can create a script to generate a KML file that includes these references. When we identify a file over the internet, it is not enough to use the filename, e.g. 'thumb-1.jpg', or even the absolute or relative path to the file on the local computer. In fact these paths are most likely not valid as far as your web server is concerned. Instead we need to know how to reference our files such that they can be accessed over the global internet, and the web more specifically. In other words, we need to be able to generate a URL (Uniform Resource Locator) which unambiguously describes the location of our files.
The formal details of exactly what comprises a URL are more complicated than may be obvious at first, but most of us are familiar with the typical URL, like this one:

http://www.ietf.org/rfc/rfc1630.txt

which describes the location of a document titled "Universal Resource Identifiers in WWW" that, as it happens, defines the formal details of what comprises a URL.

http://www.ietf.org

This portion of the address is enough to describe the location of a particular web server over the public internet. In fact it does a little more than just specify the location of a machine. The http:// portion is called the scheme, and it identifies a particular protocol (i.e. a set of rules governing communication) and a related application, namely the web. What I just said isn't quite correct: at one time HTTP was used exclusively by the web, but that's no longer true. Many internet-based applications use the protocol, because the popularity of the web ensures that data sent via HTTP isn't blocked or otherwise disrupted. You may not be accustomed to thinking of it as such, but the web itself is a highly distributed, network-based application.

/rfc/

This portion of the address specifies a directory on the server. It is equivalent to an absolute path on your local computer. The leading forward slash is the root of the web server's public document directory. Assuming no trickiness on the part of the server, all content lives under the document root. This tells us that rfc/ is a sub-directory contained within the document root. Though this directory happens to be located immediately under the root, this certainly need not be the case. In fact these paths can get quite long. We have now discussed all of the URL except for:

rfc1630.txt

which is the name of a specific file. The filename is no different than the filenames on your local computer. Let's manually construct a path to one of the large-n.html pages of the gallery we have created. The address of my server is robreed.net, so I know that the beginning of my URL will be:

http://robreed.net

I keep all of my galleries together within a photos/ directory, which is contained in the document root.

http://robreed.net/photos/

Within photos/, each gallery is given its own folder. The name of the folder I have created for this tutorial is 'tutorial_gallery'. Putting this all together, the following URL brings me to the root of my photo gallery:

http://robreed.net/photos/tutorial_gallery/

We've already gone over the directory structure of the gallery, so it should make sense to you that when referring to the 'large-1.html' detail page, the complete URL will be:

http://robreed.net/photos/tutorial_gallery/large-1.html

the URL of the image that corresponds to that detail page is:

http://robreed.net/photos/tutorial_gallery/pictures/picture-1.jpg

and the thumbnail can be found at:

http://robreed.net/photos/tutorial_gallery/thumbnails/thumb-1.jpg

Notice that the address of the gallery is shared among all of these resources. Also, notice that resources of each type (e.g. the large images, thumbnails, and html pages) share a more specific address with files of that same type. If we use the term 'base address' to refer to the shared portions of these URLs, then we can talk about several significant base addresses:

- The gallery base address: http://robreed.net/photos/tutorial_gallery/
- The html pages base address: http://robreed.net/photos/tutorial_gallery/
- The images base address: http://robreed.net/photos/tutorial_gallery/pictures/
- The thumbnails base address: http://robreed.net/photos/tutorial_gallery/thumbnails/

Note that given the structure of this particular gallery, the html pages base address and the gallery base address are identical. This need not be the case, and may not be for the gallery produced by your application. We can hard-code the base addresses into our script. For each photo, we need only append the associated filename to construct valid URLs to any of these resources. As the script runs, it will have access to the name of the file that it is currently evaluating, and so it will be a simple matter to generate the references we need as we go.
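In the script this amounts to little more than string concatenation. A sketch of the idea, using my base addresses and an invented sequence number:

    # Base address, hard-coded for my gallery; substitute your own.
    my $gallery_base = 'http://robreed.net/photos/tutorial_gallery/';

    # Suppose the script is currently evaluating 'picture-1.jpg'.
    my $filename = 'picture-1.jpg';
    my ($n) = $filename =~ /(\d+)/;    # extract the sequence number, '1'

    my $page_url      = $gallery_base . "large-$n.html";
    my $picture_url   = $gallery_base . "pictures/picture-$n.jpg";
    my $thumbnail_url = $gallery_base . "thumbnails/thumb-$n.jpg";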
At this point we have discussed almost everything we need to put together our script. We have:

- Created a gallery at our server, which includes our photos with metadata in tow
- Identified all of the metadata tags we need to extract from our photos with the script, and the corresponding field names in our photo management application
- Determined all of the base addresses we need to generate references to our image files

Section 6: KML

The last thing we need to understand is the format of the KML files we want to produce. We've already looked at a fragment of KML. The full details can be found on Google's KML documentation pages, which include samples, tutorials, and a complete reference for the format. A quick look at the reference is enough to see that the language includes many elements and attributes, the majority of which we will not be including in our files. That statement correctly implies that it is not necessary for every KML file to include all elements and attributes. The converse, however, is not true, which is to say that every element and attribute contained in any KML file must be a part of the standard. A small subset of KML will get us most, if not all, of what you will typically see in Google Earth from other applications. Many of the features we will not be using deal with aspects of the language that are either:

- Not relevant to this project, e.g. ground overlays (GroundOverlay), which "draw an image overlay draped onto the terrain"
- Minute details for which the default values are sensible

There is no need to feel shortchanged because we are covering only a subset of the language. With the basic structure in place and a solid understanding of how to script the generation of KML, you will be able to extend the project to include any of the other components of the language as you see fit. The structure of our KML file is as follows:

     1.    <?xml version="1.0" encoding="UTF-8"?>
     2.    <kml xmlns="http://www.opengis.net/kml/2.2">
     3.        <Document>
     4.            <Folder>
     5.                <name>$folder_name</name>
     6.                <description>$folder_description</description>
     7.                <Placemark>
     8.                    <name>$placemark_name</name>
     9.                    <Snippet maxLines="1">
    10.                        $placemark_snippet
    11.                    </Snippet>
    12.                    <description><![CDATA[
    13.                        $placemark_description
    14.                    ]]></description>
    15.                    <Point>
    16.                        <coordinates>$longitude,$latitude</coordinates>
    17.                    </Point>
    18.                </Placemark>
    19.            </Folder>
    20.        </Document>
    21.    </kml>
Line 1: XML header. Every valid KML file must start with this line, and nothing else is allowed to appear before it. As I've already mentioned, KML is an XML-based language, and XML requires this header.

Line 2: Namespace declaration. More specifically, this is the KML namespace declaration, and it is another formality. The value of the xmlns attribute identifies the KML standard to which the file conforms.

Line 3: <Document> is a container element representing the KML file itself. If we do not explicitly name the document by including a name element, then Google Earth will use the name of the KML file as the Document element's <name>. The Document container will appear on the Google Earth 'Sidebar' within the 'Places' panel. Optionally we can control whether the container is closed or open by default. (This setting can be toggled in Google Earth using a typical disclosure triangle.) There are many other elements and attributes that can be applied to the Document element. Refer to the KML Reference for the full details.

Line 4: <Folder> is another container element. The files we produce will include a single <Folder> containing all of our Placemarks, where each Placemark represents a single image. We could create multiple Folder elements to group our Placemarks according to some significant criteria. Think of the Folder element as being similar to your operating system's concept of a folder. At this point, note the structure of the fragment. The majority of it is contained within the Folder element. Folder, in turn, is an element of Document, which is itself within the <kml> container. It should make sense that everything in the file that is considered part of the language must be contained within the kml element. From the KML reference:

A Folder is used to arrange other Features hierarchically (Folders, Placemarks, NetworkLinks, or Overlays). A Feature is visible only if it and all its ancestors are visible.

Line 5: The name element identifies an object, in this case the Folder object. The text that appears between the name tags can be any plain text that will serve as an appropriate label.

Line 6: <description> is any text that seems to adequately describe the object. The description element supports both plain text and a subset of HTML. We'll consider issues related to using HTML in <description> in the discussion of Placemark, lines 12-14.

Lines 7-18 define a <Placemark> element. Note that Placemark contains a number of elements that also appear in Folder, including <name> (line 8) and <description> (lines 12-14). These elements serve the same purpose for Placemark as they do for the Folder element, but of course they refer to a different object. I've said that <description> can include a subset of HTML in addition to plain text. Under XML, some characters have special meaning. You may need to use these characters as part of the HTML included in your descriptions. Angle brackets (<, >), for example, surround tag names in HTML, but serve a similar purpose in XML. When they are used strictly as part of the content, we want the XML parser to ignore these characters. We can accomplish this in a few different ways. We can use entity references, either numeric character references or character entity references, to indicate that the symbol appears as part of the data and should not be treated as part of the syntax of the language. The character '<', which is required to include an image as part of the description (something we will be doing, e.g. <img src=... />),
can safely be included as the character entity reference '&lt;' or the numeric character reference '&#60;'. The character entity references may be easier to remember and recognize on sight, but are limited to the small subset of characters for which they have been defined. The numeric references, on the other hand, can specify any ASCII character. Of the two types, numeric character references should be preferred. There are also Unicode entity references, which can specify any character at all. In the simple case of embedding short bits of HTML in KML descriptions, we can avoid the complication of these references altogether by enclosing the entire description in a CDATA element, which instructs the XML parser to ignore any special characters that appear until the end of the block set off by the CDATA tags. Notice the string '<![CDATA[' immediately after the opening <description> tag within <Placemark>, and the string ']]>' immediately before the closing </description> tag. If we simply place all of our HTML and plain text between those two strings, it will all be ignored by the parser, and we are not required to deal with special characters individually. This is how we'll handle the issue.

Lines 9-11: <Snippet>. The KML reference does a good job of clearly describing this element. From the KML reference:

In Google Earth, this description is displayed in the Places panel under the name of the feature. If a Snippet is not supplied, the first two lines of the <description> are used. In Google Earth, if a Placemark contains both a description and a Snippet, the <Snippet> appears beneath the Placemark in the Places panel, and the <description> appears in the Placemark's description balloon. This tag does not support HTML markup. <Snippet> has a maxLines attribute, an integer that specifies the maximum number of lines to display. Default for maxLines is 2.

Notice that in the block above at line 9, I have included a 'maxLines' attribute with a value of 1. Of course, you are free to substitute your own value for maxLines, or you can omit the attribute entirely to use the default value. The only element we have yet to review is <Point>, and again we need only look to the official reference for a nice description. From the KML reference:

A geographic location defined by longitude, latitude, and (optional) altitude. When a Point is contained by a Placemark, the point itself determines the position of the Placemark's name and icon.

<Point> in turn contains the element <coordinates>, which is required. From the KML reference:

A single tuple consisting of floating point values for longitude, latitude, and altitude (in that order).

The reference also informs us that altitude is optional, and in fact we will not be generating altitude values. Finally the reference warns:

Do not include spaces between the three values that describe a coordinate.

This seems like an easy mistake to make. We'll need to be careful to avoid it. There will be a number of <Placemark> elements, one for each of our images. The question is how to handle these elements. The answer is that we'll treat KML as a 'fill in the blanks' style template. All of the structural and syntactic bits will be hard-coded, e.g. the XML header, namespace declaration, all of the element and attribute labels, and even the whitespace, which is not strictly required but will make it much easier for us to inspect the resulting files in a text editor. These components will form our template. The blanks are all of the text and html values of the various elements and attributes. We will use variables as place-holders everywhere we need dynamic data, i.e. values that change from one file to the next or one execution of the script to the next. Take a look at the strings I've used in the block above: $folder_name, $folder_description, $placemark_name, etc. For those of you unfamiliar with Perl, these are all valid variable names, chosen to indicate where the variables slot into the structure. These are the same names used in the source file distributed with the tutorial.
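As a concrete illustration (with hypothetical gallery URLs), the value that eventually replaces $placemark_description might be built in Perl like this; thanks to the CDATA wrapper in the template, none of the HTML needs escaping:

    # Raw HTML assigned to the description placeholder.
    # The CDATA block in the template means '<' and '>' are safe as-is.
    my $placemark_description = qq{
        <img src="http://robreed.net/photos/tutorial_gallery/thumbnails/thumb-1.jpg" />
        <p>New York City.
           <a href="http://robreed.net/photos/tutorial_gallery/large-1.html">
              View this photo in the gallery</a></p>
    };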
Section 7: Introduction to the Code

At this point, having discussed every aspect of the project, we can succinctly describe how to write the code. We'll do this in 3 stages of increasing granularity. Firstly, we'll finish this tutorial with a natural language walk-through of the execution of the script. Secondly, if you look at the source file included with the project, you will notice immediately that comments dominate the code. Because instruction is as important an objective as the actual operation of the script, I use comments in the source to provide a running narration. For those of you who find this superabundant level of commenting a distraction, I'm distributing a second copy of the source file with most of the comments removed. Finally, there is the code itself. After all, source code is nothing more than a rigorously formal set of instructions that describe how to complete a task. Most programs, including this one, are a matter of transforming input in one form to output in another. In this very ge…

Working with the Report Builder in Microsoft SQL Server 2008: Part 2

Enabling and reviewing My Reports

As described in Part 1, the My Reports folder needs to be enabled in order to use the folder or display it in the Open Report dialogue. The RC0 version had a documentation bug, which has been rectified (https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=366413).

Getting ready

In order to enable the My Reports folder you need to carry out a few tasks. This will require authentication and working with SQL Server Management Studio. These tasks are listed here:

- Make sure the Report Server has started.
- Make sure you have adequate permissions to access the servers.
- Open Microsoft SQL Server Management Studio as described previously.
- Connect to the Reporting Services after making sure you have started the Reporting Services.
- Right-click the Report Server node.

The Server Properties window is displayed with a navigation list on the left consisting of the following pages:

- General
- Execution
- History
- Logging
- Security
- Advanced

In the General page the name, version, edition, authentication mode, and URL of the Reporting Service are displayed. Download of an ActiveX Client Print control is enabled by default. In order to work with Report Builder effectively and provide a My Reports folder for each user, you need to place a check mark in the check box Enable a My Reports folder for each user. The My Reports feature has been turned on as shown in the next screenshot.

In the Execution page there is a choice for report execution timeout, with the default set such that report execution expires after 1800 seconds.

In the History page there is a choice between keeping an unlimited number of snapshots in the report history (the default) or limiting the copies, allowing you to specify how many are kept.

In the Logging page, report execution logging is enabled, and log entries older than 60 days are removed by default. This can be changed if desired.

In the Security page, both Windows integrated security for report data sources and ad hoc report executions are enabled by default.

The Advanced page shows several more items, including the ones described thus far, as shown in the next figure. In the General page enable the My Reports feature by placing a check mark, then click on the Advanced list item on the left. The Advanced page is displayed as shown.

Now expand the Security node of Reporting Services and you will see that the My Reports role is present in the list of roles as shown. This is also added to the ReportServer database. The description of everything that a user with the My Reports role assignment can do is as follows: "May publish reports and linked reports; manage folders, reports, and resources in a user's My Reports folder."

Now bring up Report Builder 2.0 by clicking Start | All Programs | Microsoft SQL Server 2008 Report Builder | Report Builder 2.0. Report Builder 2.0 is displayed. Click on Office Button | Open. The Open Report dialogue appears as shown. When the Report Server is offline, the default location is My Documents, as in the Microsoft products Excel and MS Access. Choose Recent Sites and Servers; the Report Server that is active should get displayed here as shown. Highlight the Server URL and click Open. All the folders and files on the server become accessible as shown. Open the Report Manager by providing its URL address. Verify that a My Reports folder is created for the current user.
There could be slight differences in the look of the interface depending on whether you are using the RC0 or the final (RTM) version of SQL Server 2008 Enterprise edition.

Creating Accessible Tables in Joomla!

Creating Accessible Tables

Tables have a bad reputation in accessibility circles, because they used to be used to create complex visual layouts. This was due to the limitations in support for presentational specifications like CSS: using tables for layout was a hack, one that worked in the real world, when you wanted to position something in a precise part of the web page. Tables were designed to present data of all shapes and sizes, and that is really what they should be used for.

The Trouble with Tables

So what are tables like for screen reader users? Tables often contain a lot of information, so sighted users look at the information at the top of the table (the header info), and sometimes the first column in each row, to make sense of each data cell. Obviously this works for sighted users, but in order to make tables accessible to a screen reader user we need to find a way of associating the data in each cell with its correct header, so the screen reader can inform the user which header relates to each data cell. Screen reader users can then navigate between data cells easily using the cursor keys. We will see how to make tables accessible in simple steps. There are methods of conveying the meaning and purpose of a table to the screen reader user by using the caption element and the summary attribute of the table element; you will find more on these in the next section. We will learn how to build a simple table using Joomla! and the features contained within the WYSIWYG editors that can make the table more accessible. Before we do that, though, ask yourself why you want to use tables (though sometimes it is unavoidable) and what form they should take.

Simple guidelines for tables:

- Try to make the table as simple as possible. If possible, don't span multiple cells, etc. The simpler the table, the easier it is to make accessible.
- Try to include the data you want to present in the body text of your site.

Time for Action—Create an Accessible Table (Part 1)

In the following example we will build a simple table that will list the names of some artists, some albums they have recorded, and the year in which they recorded the albums. First of all, click the table icon in the TinyMCE interface and add a table with a suitable number of columns and rows. By clicking on the Advanced tab you will see the Summary field. The summary information is very important: it provides the screen reader user with a summary of the table. For example, I filled in the following text: "A list of some funk artists, my favorite among their records, and the year they recorded it in". My table then looked as shown in the screenshot.

What Just Happened?

There is still some work to be done in order to make the content more accessible. The controls that the WYSIWYG editor offers are also a little limited, so we will have to edit the HTML by hand. Adding the summary information is a very good start. The text that I entered, "A list of some funk artists, my favorite among their records, and the year they recorded it in.", will be read out by the screen reader as soon as the table receives focus from the user.

Time for Action—Create an Accessible Table (Part 2)

Next we are going to add a Caption to the table, which will be helpful to both sighted and non-sighted users. This is how it's done. Firstly, select the top row of the table, as these items are the table heading. Then click on the Table Row properties icon beside the Tables icon and select Table Head under General Properties.
Make sure that Update current Row is selected in the dialogue box at the bottom left, so that these properties are applied to your selected row. If you wish to add a caption to your table, you need to add an extra row to the table, then select the contents of that row and add the Caption in the row properties dialogue box. This will tell the browser to display the caption text, in this case Funky Table Caption; otherwise it will remain hidden.

What Just Happened?

By adding a caption to the table, you provide useful information to the screen reader user. This caption should be informative and should describe something useful about the table. As the caption element is wrapped in a heading, it is read out by the screen reader when the user starts exploring the table, so it is slightly different from the summary attribute, which is read out automatically.

Does it Work?

What we just did using the WYSIWYG editor, TinyMCE, is enough to make a good start towards creating a more accessible table, but we will have to work a little more in order to make the table truly accessible. So we will now edit the HTML. The good news is that you have made some good steps in the right direction, and the final step, associating the data cells with their appropriate headers, is something that we cannot really do with the WYSIWYG editor alone, and is essential to make your tables truly accessible.
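When we do edit the HTML, the finished markup will look something like the sketch below (the artist data is just an illustration; the exact technique is covered in the next part). The summary attribute, the caption element, and the th cells with scope attributes are what do the accessibility work:

    <table summary="A list of some funk artists, my favorite among
                    their records, and the year they recorded it in">
      <caption>Funky Table Caption</caption>
      <tr>
        <!-- scope="col" associates each header with its column -->
        <th scope="col">Artist</th>
        <th scope="col">Album</th>
        <th scope="col">Year</th>
      </tr>
      <tr>
        <td>James Brown</td>
        <td>The Payback</td>
        <td>1973</td>
      </tr>
    </table>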

Categories and Attributes in Magento: Part 1

Packt
22 Oct 2009
10 min read
Categories, Products, and Attributes

Products are the items that are sold. In Magento, Categories organize your Products, and Attributes describe them. Think of a Category as the place where a Product lives, and an Attribute as anything that describes a Product. Each one of your Products can belong to one or more Categories, and each Product can be described by any number of Attributes.

Is it a Category or an Attribute?

Some things are clearly Categories. For example, if you have an electronics store, MP3 players would make a good Category. If you're selling jewellery, earrings would make a good Category. Other things are clearly Attributes: color, description, picture, and SKU number are almost always Attributes.

Sometimes, the same thing can be used as a Category or an Attribute. For example, suppose your site sells shoes. If you made size an Attribute, then after your shoppers have located a specific shoe, they can select the size they want. However, if you also made size a Category, the shoppers could begin their shopping by selecting their size, and then browse through the styles available in that size. So should size be an Attribute, a Category, or both? The answer depends upon the kind of shopping experience you want to create for your customers.

Examples

The hierarchy of Categories, Products, and Attributes looks like this:

Category 1
    Product 1
        Attribute 1
        Attribute 2
    Product 2
        Attribute 1
        Attribute 2
Category 2
    Product 3
        Attribute 1
        Attribute 3
    Product 4
        Attribute 1
        Attribute 3

We are building a site that sells gourmet coffee, so we might organize our store like this:

Single Origin
    Hawaiian Kona
        Grind (whole bean, drip, French press)
        Roast (light, medium, dark)
    Blue Mountain
        Grind
        Roast
Blends
    Breakfast Blend
        Grind
        Caffeine (regular, decaffeinated)
    Afternoon Cruise
        Grind
        Caffeine

In Magento, you can give your shoppers the ability to search your store. So if shoppers know that they want Blue Mountain coffee, they can use the Search function to find it in our store. Customers who don't know exactly what they want, however, will browse the store, and they will often begin browsing by selecting a category. With the organization we just saw, customers browsing our store will start by selecting Single Origin or Blends. Then they will select the product they want: Hawaiian Kona, Blue Mountain, Breakfast Blend, or Afternoon Cruise.

After our shoppers decide upon a Product, they select Attributes for that Product. In our store, shoppers can select Grind for any of the products. For Single Origin coffees, they can also select Roast; for Blends, they can select Caffeine. This gives you a clue about how Magento handles Attributes: to each Product, you can apply as many, or as few, Attributes as you want.

Now that we have definitions for Category, Product, and Attribute, let's look at each of them in detail. Then, we can start adding Products.

Categories

Product Categories are important because they are the primary tool that your shoppers use to navigate your store: they organize your store for your shoppers. Categories can be arranged into Parent Categories and Subcategories. To get to a Subcategory, you drill down through its Parent Category.

Categories and the Navigation Menu

If a Category is an Anchor Category, then it appears on the Navigation Menu. The term "Anchor" makes the category sound as if it must be a top-level category. This is not true: you can designate any category as an Anchor Category.
Doing so puts that category into the Navigation Menu. When a shopper selects a normal Category from the Navigation Menu, its landing page and any subcategories are displayed. When a shopper selects an Anchor Category from the menu, Magento does not display the normal list of subcategories. Instead, it displays the Attributes of all the Products in that category and its subcategories. Rather than moving down into subcategories, the shopper uses the Attributes to filter all the Products in that Anchor Category and the Categories below it.

The Navigation Menu will not display if:

- You don't create any Categories, or
- You create Categories, but you don't make any of them Anchors, or
- Your Anchor Categories are not subcategories under the Default Category.

The Navigation Menu will display only if:

- You have created at least one Category,
- You have made at least one Category an Anchor, and
- You have made the Anchor Category a Subcategory under Default.

When you first create your Magento site and add Products, you won't see those Products on your site until you have met all of these conditions. For this reason, I recommend that you create at least one Anchor Category before you start adding Products to your store. As you add each Product, add it to an Anchor Category. The Product will then display in your store, and you can preview it. If that Anchor Category is not the one you want for the Product, you can change the Product's Category later.

Before we add Products to our coffee store, we will create two Anchor Categories: Single Origin and Blends. As we add Products, we will assign them to a Category so that we can preview them in our storefront.

Making best use of Categories

There are three things that Categories can accomplish. They can:

- Help the shoppers who know exactly what they want to find the product that they are looking for.
- Help the shoppers who almost know what they want to find a product that matches their desires.
- Entice the shoppers who have almost no idea of what they want to explore your store.

We would like to organize our store so that our Categories accomplish all of these goals. However, these goals are often mutually exclusive. For example, suppose you create an electronics store. In addition to many other products, your store sells MP3 players, including Apple iPods. A Category called iPods would help the shoppers who know that they want an iPod, as they can quickly find one. However, the iPods Category doesn't do much for shoppers who know they want an MP3 player but don't know what kind.

On the Web, you usually search when you know what you want; when you're not sure what you want, you browse. In an online store, browsing usually begins with selecting a Category. When you create Categories for your online store, try to make them helpful for shoppers who almost know what they want. However, if a high percentage of your shoppers are looking for a narrow category of products, consider creating a top-level Category to make those products easily accessible. Again, suppose you have an electronics store that sells a wide variety of items. If a high percentage of your customers want iPods, it might be worthwhile to create a Category just for those few products.

The logs from the Search function on your site are one way to determine whether your shoppers are interested in a narrow Category of Products. Are 30 percent of the searches on your site for left-handed fishing reels?
If so, you might want to create a top-level Category just for those Products.

Attributes

An Attribute is a characteristic of a Product. Name, price, SKU, size, color, and manufacturer are all examples of Attributes.

System versus Simple Attributes

Notice that the first three examples (name, price, and SKU) are all required for a Product to function in Magento. Magento adds these Attributes to every Product and requires you to assign a value to each of them. These are called System Attributes. The other three examples (size, color, and manufacturer) are optional Attributes, created by the store owner. They are called Simple Attributes. When we discuss creating and assigning Attributes, we are almost always discussing Simple Attributes.

Attribute Sets

Notice that the Single Origin coffees have two Attributes, Grind and Roast, while the Blends have the Attributes Grind and Caffeine:

Single Origin
    Hawaiian Kona
        Grind (whole bean, drip, French press)
        Roast (light, medium, dark)
    Blue Mountain
        Grind
        Roast
Blends
    Breakfast Blend
        Grind
        Caffeine (regular, decaffeinated)
    Afternoon Cruise
        Grind
        Caffeine

In this example, the store owner created three Attributes: Grind, Roast, and Caffeine. Next, the store owner grouped the Attributes into two Attribute Sets: one set contains Grind and Roast, and the other contains Grind and Caffeine. Then, an Attribute Set was applied to each Product.

Attributes are not applied directly to Products. They are first grouped into Attribute Sets, and then a set can be applied to a Product. This means that you will need to create a set for each different combination of Attributes in your store. You can name these sets after the Attributes they contain, such as Grind-Roast, or after the type of Product that will use those Attributes, such as Single Origin Attributes.

If each Product in a group will use the same Attributes as every other Product in that group, then you can name the set after that group. For example, at this time, all Single Origin coffees have the same Attributes: Grind and Roast. If they will all have these two Attributes, and you will always add and remove Attributes to them as a group, then you could name the set Single Origin Attributes. If the Products in a group are likely to use different Attributes, then name the set after the Attributes. For example, if you expect that some Single Origin coffees will use the Attributes Grind and Roast, while others will use just Roast, it would not make sense to create a set called Single Origin Attributes. Instead, create one set called Grind-Roast and another called Roast.

Three types of Products

In Magento, you can create three different types of Products: Simple, Configurable, and Grouped. The following is a very brief definition of each type.

Simple Product

A Simple Product is a single Product, with Attributes whose values the store owner chooses. As the saying goes, "What you see is what you get": the customer does not get to choose anything about the Product. In our coffee store, a good example of a Simple Product might be a drip coffee maker. It comes in only one color, and while the customer can buy drip coffee makers of various sizes (4 cups, 8 cups, 12 cups, and so on), each of those is a separate Product. A bad example of a Simple Product would be a type of coffee. For example, we might want to allow the customer to choose the type of roast for our Hawaiian Kona coffee: light, medium, or dark.
Because we want the customer to choose a value for an Attribute, that would not be a good Simple Product.

Configurable Product

A Configurable Product is a single Product with at least one Attribute that the customer gets to choose. There is a saying that goes, "Have it your way": the customer gets to choose something about the Product. A good example of a Configurable Product would be a type of coffee that comes in several different roasts: light, medium, and dark. Because we want the customer to choose the roast they want, that would be a good Configurable Product.

Grouped Product

A Grouped Product is several Simple Products that are displayed on the same page. You can force the customer to buy the group, or allow the customer to buy each Product separately.

The previous definitions are adequate for now. However, when you start creating Products, you will need to know more about each type of Product.
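To make the difference between Simple and Configurable concrete, here is a minimal, hypothetical sketch of what the shopper effectively sees on each kind of product page. This is illustrative HTML only, not the markup Magento actually generates; the form action, field names, and values are assumptions based on our coffee store example:

<!-- Simple Product: nothing to choose, the shopper just adds it to the cart -->
<form action="/cart/add" method="post">
  <p>Drip Coffee Maker, 8 cups</p>
  <button type="submit">Add to Cart</button>
</form>

<!-- Configurable Product: the customer-chosen Attribute (Roast)
     is rendered as a drop-down the shopper must pick from -->
<form action="/cart/add" method="post">
  <p>Hawaiian Kona</p>
  <select name="roast">
    <option value="light">Light</option>
    <option value="medium">Medium</option>
    <option value="dark">Dark</option>
  </select>
  <button type="submit">Add to Cart</button>
</form>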

Adding Newsletters to a Web Site Using Drupal 6

Packt
22 Oct 2009
4 min read
Creating newsletters

A newsletter is a great way of keeping customers up to date without them needing to visit your web site. Customers appreciate well-designed newsletters because they allow the customer to keep tabs on their favorite places without needing to check every web site on a regular basis.

Creating a newsletter

Good Eatin' Goal: Create a new newsletter on the Good Eatin' site, which will contain relevant news about the restaurant and will be delivered quarterly to subscribers.

Additional modules needed: Simplenews (http://drupal.org/project/simplenews).

Basic steps

Newsletters are containers for individual issues. For example, you could have a newsletter called Seasonal Dining Guide, which would have four issues per year (Summer, Fall, Winter, and Spring). A customer subscribes to the newsletter, and each issue is sent to them as it becomes available.

1. Begin by installing and activating the Simplenews module, as shown below. At this point, we only need to enable the Simplenews module; the Simplenews action module can be left disabled.
2. Next, select Content management and then Newsletters, from the Administer menu. Drupal will display an administration area divided into the following sections:
   a) Sent issues
   b) Drafts
   c) Newsletters
   d) Subscriptions
3. Click on the Newsletters tab and Drupal will display a page similar to the following. As you can see, a default newsletter with the name of our site has been automatically created for us. We can either edit this default newsletter or click on the Add newsletter link to create a new one.
4. Let's click the Add newsletter option to create our seasonal newsletter. Drupal will display a standard form where we can enter the name, description, and relative importance (weight) of the newsletter.
5. Click Save to save the newsletter. It will now appear in the list of available newsletters.

If you want the newsletter's Sender information to use a name or email address other than your site's defaults, you can either expand the Sender information section when adding the newsletter, or click Edit newsletter and modify the Sender information, as shown in the following screenshot:

Allowing users to sign up for the newsletter

Good Eatin' Goal: Demonstrate how registered and unregistered users can sign up for a newsletter, and configure the registration process.

Additional modules needed: Simplenews (http://drupal.org/project/simplenews).

Basic steps

1. To allow customers to sign up for the newsletter, we will begin by adding a block to the page. Open the Block Manager by selecting Site building and then Blocks, from the Administer menu.
2. Add the block for the newsletter that you want to allow customers to subscribe to, as shown in the following screenshot:
3. We now need to give users permission to subscribe to newsletters by selecting User management and then Permissions, from the Administer menu. We will give all users permission to subscribe to newsletters and to view newsletter links, as shown below:

If the customer does not have permission to subscribe to newsletters, the block will appear as shown in the following screenshot:

However, if the customer has permission to subscribe to newsletters and is logged in to the site, the block will appear as shown in the following screenshot:

If the customer has permission to subscribe but is not logged in, the block will appear as follows:

To subscribe to the newsletter, the customer simply clicks on the Subscribe button.
Once they have subscribed, the Subscribe button will change to Unsubscribe so that the user can easily opt out of the newsletter. If the user does not have an active account with the site, they will need to confirm that they want to subscribe.
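If you want to surface the subscription page somewhere other than the block, a small snippet in a custom module or template can do it. The following is a minimal sketch only: user_access(), l(), and t() are core Drupal 6 functions, but the permission string and the path are assumptions you should verify against your own Permissions page and Simplenews version:

<?php
// Show a link to the Simplenews subscription management page, but only
// to visitors who were granted the subscribe permission above.
// 'subscribe to newsletters' and 'newsletter/subscriptions' are assumed
// names; confirm both on your site before relying on them.
if (user_access('subscribe to newsletters')) {
  print l(t('Manage your newsletter subscriptions'), 'newsletter/subscriptions');
}
?>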

Joomla! Installation on a Virtual Server on the Net

Packt
22 Oct 2009
3 min read
In principle, the simplest approach, and one that almost always works, is the following:

1. Download the Joomla! 1.5 beta .zip file onto your local PC and unpack it into a temporary directory.
2. Upload the unpacked files by means of FTP onto your rented server. The files must be installed in the publicly accessible directory. These directories are usually named htdocs, public_html, or simply html. You can specify a subdirectory within the directory into which you install your Joomla!. Many web hosts allow you to link your rented domain name to a directory; this name is necessary to call your website from a browser.
3. Find out what your database is called. Usually one or several databases are included in your web-hosting package. Sometimes the user name, database name, and password are fixed; sometimes you have to set them up. There is usually a browser-based configuration interface at your disposal; you can see an example of such an interface in the following figure. You will need these access data for Joomla!'s web installer.

You can get going after you have loaded the files onto your server and are in possession of your access data.

Joomla! Installation

To install Joomla!, you need the source code. Download the Joomla_1.5.0-Beta-Full_Package.zip package and save it on your system.

Selecting a Directory for Installation

You have to decide whether Joomla! should be installed directly into the document directory or into a subdirectory. This matters because many users prefer a short URL to their homepage.

An Example

If Joomla! is unzipped directly in /htdocs, the web page starts when the domain name http://www.myhomepage.com is accessed on the Internet, or http://localhost/ on the local computer. If, instead, a subdirectory such as /htdocs/Joomla150/ is created under /htdocs/ and the package is unzipped there, we have to enter http://localhost/Joomla150/ in the browser. This isn't a problem locally, but it doesn't look good on a production Internet page.

Some HTML files and subdirectories, however, already exist in /htdocs in the local XAMPP Lite environment under Windows; they display, for example, the start page of XAMPP Lite. In a local Linux environment, a start page that depends on the distribution and the web server settings is also displayed.

Directory

I recommend that you create a subdirectory named Joomla150 under the document directory, in Windows by using Windows Explorer (with Linux, use the shell, KDE Konqueror, or Midnight Commander):

[home]/htdocs/Joomla150/

The directory tree in Windows Explorer should look like this:

An empty index appears in the XAMPP Lite version when the URL http://localhost/Joomla150 is entered in the browser. With Linux, or with another configuration, you may instead get a message saying that you don't have access to this directory. This depends on the configuration of the web server. For security reasons, the automatic directory display is often deactivated in Apache's configuration: a potential hacker could draw many interesting conclusions about the directory structure and the files on your homepage and target your computer for an attack. For the same reason, you are usually not allowed to access the appropriate configuration file of the Apache web server. Should you be able to, you should leave directory listings deactivated, or activate them only for directories that contain files for downloading.
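If you do have access to the Apache configuration, the relevant setting is the Options directive. The following is a minimal sketch; the directory paths are examples only and must be adjusted to your own server layout:

# httpd.conf: turn off automatic directory listings for the document root...
<Directory "/htdocs">
    Options -Indexes
</Directory>

# ...and allow them only for a directory that holds files for downloading
<Directory "/htdocs/downloads">
    Options +Indexes
</Directory>

On shared hosts that permit overrides, a single line reading Options -Indexes in an .htaccess file has the same effect for the directory it sits in.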

Content Modeling

Packt
22 Oct 2009
12 min read
Organizing content in a meaningful way is nothing new. We have been doing it for centuries in our libraries, the Dewey decimal system being a perfect example. So why can't we take known approaches and apply them to the Web? The main reason is that a web page has more than two dimensions. A page in a book might have footnotes or refer to other pages, but the content only appears in one place. On a web page, content can link directly to other content and even show a summary of it. It goes way beyond just the content that appears on the page: links, related content, reviews, ratings, and so on all bring extra dimensions to the core content of the page and how it is displayed. This is why it's so important to ensure your content model is sound. However, there is no such thing as the "right" content model. Each content model can only be judged on how well it achieves the goals of the website, now and in the future.

The Purpose of a Content Model

The idea of a content model is new, but it has similarities to both a database design and an object model. The purpose of both of these is to provide a foundation for the logic of the operation. With a database design, we want to structure the data in a meaningful way to make storage and retrieval effective. With an object model, we define the objects and how they relate to each other so that accessing and managing objects is efficient and effective. The same applies to a content model: it's about structuring the content and the relationships between the classes to allow the content to be accessed and displayed easily.

The following diagram is a simple content model that shows the key content classes and how they relate to each other. In this diagram, we see that resources belong to a collection, which in turn belongs to a context; a particular resource can also belong to more than one collection.
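To make those relationships concrete, here is a minimal sketch of the diagram's structure expressed as code. The class and property names are illustrative only: eZ publish defines content classes through its administration interface, not through PHP classes like these, so this is purely a way of visualizing the model:

<?php
// Resources belong to collections; collections belong to a context;
// the same Resource object may be referenced by several Collections.
class Context {
    public $name;
    public $collections = array();  // Collection objects
}

class Collection {
    public $name;
    // Resource objects; a Resource may appear in more than one Collection
    public $resources = array();
}

class Resource {
    public $title;
}

// Wiring up one context, one collection, and one shared resource:
$context = new Context();
$guides  = new Collection();
$report  = new Resource();
$report->title = 'Research report';
$guides->resources[]    = $report;
$context->collections[] = $guides;
?>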
As stated before, there is no such thing as the "right" model. What we are trying to achieve is the most "effective" model for the project at hand. This means coming up with a model that provides the most effective way of organizing content so that it can be easily displayed in the manner defined in the functional specification. The way a content model is defined will have an impact on how easy it is to code templates, how quickly the code will run, how easy it is for editors to input content, and how easy the model is to change down the track.

From experience, rarely is a project completed and then never touched again. Usually there are changes, modifications, and updates later on. If the model is well structured, these changes will be easy; if not, they can require a significant amount of work to implement. In some cases, the project has to be rebuilt entirely and content re-entered to achieve the goals of the client. This is why the model is so important. Done well, it means the client pays less and has a better-running solution. A poor model will take longer to implement, and changes will be more difficult to make.

What Makes a Good Model?

It's not easy to define exactly what makes a good model. As with any form of design, simplicity is the key: the more the elements, the more complex the model gets. Ideally, a model should be technology independent, but there are certain ways in which eZ publish operates that can influence how we structure the content model.

Do we always need a content model? No, it depends on the scale of the project. Smaller projects don't really need a formal model. It's only when there are specific relationships between content classes that we need to go to the effort of creating one. For example, a basic website that has a number of sections (About Us, Services, Articles, Contact, and so on) doesn't need a model. There's no need for an underlying structure; it's just content added to sections, and the in-built content classes in eZ publish will be enough to cater for that type of site. It's when the content itself has specific relationships, e.g. a book belongs to a category, or a product belongs to a product group which belongs to a division of the business, that you need to create a model to capture the objects and the relationships between them.

To start with, we need to understand the content we are dealing with. The broad categories are existing/known content and new content. If we know the structure of the content we are dealing with and it already exists, this can help to shape the model. If we are dealing with content that doesn't exist yet (i.e. content to be written or created for this project), it's harder to know if we are on the right track.

For example, when dealing with products, the product data will generally already exist in a database or ERP system. This gives us a basis from which to work. We can establish the structure of the content and the relationships from the existing data. That doesn't mean that we simply copy what's there, but it can guide us in the right direction. Sometimes the structure of the data isn't effective for the way it's to be displayed on the Web, or it's missing elements. (As a typical example, in a recent project the product data was stored in three places: the core details were in the Point of Sale system, product details and categorization were in a spreadsheet, and the images were stored on a file system.)

So, the first step is to get an understanding of all the content we are dealing with. If the content doesn't exist yet, at least get some examples of what it is likely to be. Without knowing what you are dealing with, you can't be sure your model will accommodate everything. This means you'll need to allow for modifications down the track. Of course we want to minimize this, but it's not always possible. Clients change their minds, so the best we can do is hope that our model will accommodate what we think are the likely changes. This can really only be done through experience. There are patterns in content, as well as in how it's displayed. Through these patterns, e.g. a related-content box on each page, we can try to foresee the way things might alter and build room for this into the model.

A good example comes from a recent project: for each object, there was the main content, but there were also a number of related objects (widgets) to be displayed in the right-hand column of the page. Initially, the content class defined the specific widgets to be associated with the object. The table below contains the details of a particular resource (as shown in the previous content model). It captures the details of the "research report" resource content class.
Attribute             Type                     Notes
Title                 Text line
Short Title           Text line                If present, will be used in menus and URLs
Flash                 Flash Navigator object
Hero Image            Image                    Displays if no Flash
Caption               Rich text
Body*                 Rich text
Free Form Widgets     Related Objects          Select one or more
Multimedia Widget     Related Object           Select one

This would mean that when editors added content, they would pick the free-form widgets and then the multimedia widget to be associated with the research report. Displaying the content would be straightforward, as from the parent object we would have the object IDs for each widget. The idea is sound but lacks flexibility. It would mean that the order in which the objects were added would dictate the order in which they were displayed. It also means that if the editor wanted to add a different type of widget, they couldn't, unless the model was changed, i.e. another attribute was added to the content class. We updated the content class as follows:

Attribute             Type                     Notes
Title*                Text line
Short Title           Text line                If present, will be used in menus and URLs
Flash                 Flash Navigator object
Hero Image            Image                    Displays if no Flash
Caption               Rich text
Body*                 Rich text
Widgets               Related Objects          Select one or more

This approach is less strict and provides more flexibility. The editor can choose any widget and also select the order. In terms of programming the template, there's the same amount of work, but if we decide to add another widget type down the track, there's no need to update the content class to accommodate it.

Does this mean that any time we have a related object we should use the latter approach? No. The reason we did it in this situation is that the content was still being written as we were creating the model, and there was a good chance that once the content was entered and the client saw the end result, they would ask something like "can we add widget x to the right-hand column of a context object?" In a different project, in which a particular widget should only be related to a particular content class, it's better to enforce the rule by allowing only that widget to be associated with that content class.

Defining a Content Model

The process of creating a content model requires a number of steps. It's not just a matter of analyzing the content; the modeler also needs to take into consideration the domain, the users and groups, and the relationships between different classes within the model. To do this, we start with a walkthrough of the domain.

Step 1: Domain Walkthrough

The client and domain experts walk us through the entire project. This is a vital part of the process. We need to get an understanding of the entire system, not just the part that is captured in the final solution. The model that we end up creating may need to interact with other systems, and knowing what they are and how they work will inform the shape of the model. A good example is e-commerce systems: any information captured on a sale will eventually need to be entered into the existing financial system (whether automated or manual). Without an understanding of the bigger picture, we lack an understanding of how the solution we are creating fits in with what the business does.

That is assuming there is an existing business process. Sometimes there is no business process and the client is making things up as they go along, e.g. they have decided to sell online, but they have never dealt with overseas orders, so they don't know how that will work and have no idea how they would deal with shipping costs.
One of the typical problems that will surface during the domain walkthrough is that the client will try to tell you how they want the solution to work. By doing this, they are actually defining the model and interactions, and this is something to be wary of. It is unlikely that they are aware of how best to structure a solution. What you want to be asking is what they currently do; what their current business process is. You want to deal with facts that are in existence, so that you can decide how best to model the solution. To get the client back on track, ask questions like:

- How do you currently do "it" (i.e. the business process)?
- What information do you currently capture?
- How do you capture that information?
- What format is that information in?
- How often is the information updated?
- Who updates it?

This gives you a picture of what is currently happening. Then you can start to shape the model, ensuring that you are dealing with the real world, not with what the client thinks they want. Sometimes they won't be able to answer a question, and you'll have to get the right person from the business involved to get the answers you need. Sometimes you discover that what the client thought was happening is not really what happens.

Another benefit of this process is gaining a common understanding. If both you and the client are in the room when the process for calculating shipping costs is explained by the Shipping Manager, you'll both appreciate how complex the process is. If the client thinks it's easy, they won't expect it to cost much. If they are in the room when the Shipping Manager explains that there are five different shipping methods, each with its own way of calculating the costs for a shipment based on its own set of international zones, you both know that modelling that part of the system is not going to be as straightforward as the client initially thought.

What this means is that the domain walkthrough gives you a sense of what's real, not what people think the situation is. It is the most important part of the process. An assumption that "shipping costs" are straightforward, so you don't need to worry about them, can be a disaster later down the track when you find out that's not the case.

Also, don't rely solely on requirements documents (unless you have written them yourself). A statement in a requirements document may not reflect what really happens; that's why you want to go through everything to confirm that you have all the facts. Sometimes a particular requirement is stated in the document, but when you go through it in more detail, ask a few questions, and pose a few scenarios, the client changes their mind about what they really want, as they realize that what they thought they wanted is going to be difficult or expensive to implement. Or you put an alternative approach to them, and they are happy to achieve the same result in a different manner that is easier to implement. This is a valuable way to work out what's real and what really matters.