How-To Tutorials - Web Development


Using Web Pages in UPK 3.5

Packt
16 Nov 2009
12 min read
Using Web Pages in the Concept pane

The most common use of Web Pages is to provide context information for Topics. Look at the following image of the Outline for our example course: you will see that the upper-right section of the Outline window contains a pane labeled Concept. If you want any information to be displayed in this pane, then you need to create a Web Page and attach it to the content object in the Outline.

Version Difference: Although the Concept pane always appears in the Outline view, if it is empty it does not appear in the Player. This is, thankfully, a new feature in UPK 3.5. In previous versions, the Concept pane always appeared in the Player, often as a blank frame where developers couldn't be bothered to provide any concepts.

To create a new Web Page and attach it to a content object, carry out the following steps:

1. Open the Outline containing the content object to which you want to attach the Web Page, in the Outline Editor.
2. Click on the content object to select it. Although in this example we are attaching a Web Page to the Concept pane for a Topic, Modules and Sections also have Concept panes, so you can attach Web Pages to these as well.
3. Click on the Create new web page button in the Concept pane. The Save As dialog box is displayed.
4. Navigate to the folder in which you want to save the Web Page (we will use an Assets sub-folder for all of our Web Pages), enter a name for the Web Page (it makes sense to use a reference to the content object to which the Web Page relates), and then click on the Save button. The Web Page Editor is opened on the Developer screen, as shown in the next screenshot.
5. Enter the information that you want to appear in the Concept pane in the editor (as has already been done in the previous example). You can set the font face and size; set text to bold, italics, and underlined; change the text color; and change the background color (again, as has been done in the earlier example). You can also change the paragraph alignment, and format numbered and bulleted lists.
6. Once you have entered the required text, click on the Save button to save your changes, and then close the Web Page Editor tabbed page. You are returned to the Outline Editor, and the Concept pane now shows the contents of the Web Page, as shown in the next screenshot.

Version Difference: In UPK 3.5 (and OnDemand 9.1) you can only attach a single Web Page to the Concept pane. This is a change from OnDemand 8.7, where you could attach multiple Infoblocks and they would be displayed sequentially. (Note, however, that if you convert content from OnDemand 8.7 where there are multiple Infoblocks in a single Concept pane, all of the attached Infoblocks will be converted to a single Web Page in UPK 3.5.)

The above steps explain how to attach a Web Page to the Concept pane for an exercise from the Outline. Although this is done from the Outline, the Web Page is attached to the Topic content object and not to the outline element. If you subsequently insert the same Topic in another Outline, the same Web Page will be used in the Concept pane of the new Outline.

You can also attach a Web Page to the Concept pane for an exercise from within the Topic Editor. This is a useful option if you want to create a concept Web Page but have not yet built an Outline to house the Topic. To do this, follow these steps:

1. Open the Topic in the Topic Editor.
2. Select menu option View | Concept Properties. The Concept Properties dialog box is displayed. This is very similar to the Concept pane seen in the Overview, and it contains the same buttons.
3. Create and save the Web Page as described above.
4. When you have finished, and the Web Page is displayed in the Concept Properties dialog box, click on the OK button.

Using images in Web Pages

As stated above, a Web Page can contain an image. This can be instead of, or in addition to, any text (although if you only want to include a single image in the Web Page, you could always use a Package, as explained later in this article). Images are a nice way of adding interest to a Web Page (and therefore to your training), or of providing additional information that is better explained graphically (after all, a picture is worth a thousand words). However, if you are using images in the Concept pane, you should consider the overall size of the image and the likely width of the Concept pane, bearing in mind that the trainee may run the Player in a smaller window than the one you design for.

For our sample exercise, given that we are providing simulations for the SAP system, we will include a small SAP logo in the Web Page that appears in the Concept pane for our course Module. For the sake of variety, we will do this from the Library, and not from the Outline Editor. To add an image to a Web Page, carry out the steps described below:

1. In the Library, locate the folder containing the Web Page to which you want to add the image.
2. Double-click on the Web Page to open it in the Web Page Editor. As before, this is opened in a separate tab in the main UPK Developer window, as can be seen in the next screenshot.
3. Within the Web Page, position the cursor at the place in the text where you want the image to appear.
4. Select menu option Insert | Image. The Insert Image dialog box is displayed, as shown in the next screenshot.
5. In the Link to bar on the leftmost side of the dialog box, select the location of the image file that you want to insert into the Web Page. You can insert an image that you have already imported into your Library (for example, in a Package), an image that is located on your computer or any attached drive (option Image on My Computer), or an image from the Internet (option URL). For our sample exercise, we will insert an image from our computer.
6. In the rightmost side of the dialog box, navigate to and select the image file that you want to insert into the Web Page.
7. Click on OK. The image is inserted, as shown in the following screenshot.
8. Save the Web Page, and close the Web Page Editor if you have finished editing this Web Page. (We have not; we want to adjust the image, as explained below.)

Of course, this doesn't look too pretty. Thankfully, we can do something about it, because UPK provides some rudimentary features for adjusting images in Web Pages. To adjust the size or position of an image in a Web Page, carry out the following steps:

1. With the Web Page open in the Web Page Editor, right-click on the image that you want to adjust, and then select Image Properties from the context menu. The Image Properties dialog box is displayed, as shown in the next screenshot.
2. In the Alternative Text field, enter a short text description of the image. This will be used as the ToolTip in some web browsers.
3. Under Size, select whether you want to use the original image size or not, and specify a new height and width if you choose not to.
4. Under Appearance, select a border width and indicate where the image should be aligned on the Web Page. Your choices are: Top, Middle, Bottom, Left, and Right. The first three of these (the vertical options) control the position of the image relative to the line of text in which it is located. The last two (the horizontal options) determine whether the image is left-aligned or right-aligned within the overall Web Page. Although these are two independent things (vertical and horizontal), you can only select one, so if you want the image to be right-aligned and halfway down the page, you can't do it.
5. Click on OK to confirm your changes.
6. Save and close the Web Page.

For our sample exercise, we resized the image, set it to be right-aligned, and added a 1pt border around it (because, without a border, it looked odd with a white background against the blue of the Web Page). A better option would be to use an image with a transparent background; in this example we have used an image with a solid background purely for the purposes of illustration. These settings are as shown in the previous screenshot. This gives us a final Web Page as shown in the next screenshot. Note that this screenshot is taken from the Player, so that you can see how images are handled there: the text flows around the image. You will also see that there is a little more space to the right of the image than there is above it; this is to leave room for a scrollbar.

Creating independent Web Pages

In the previous section, we looked at how to use a Web Page to add information to the Concept pane of a content object. In this section, we will look at how to use Web Pages to provide information in other areas.

Observant readers will have noticed that a Web Page is in fact an independent content object itself. When you created a Web Page to attach to a Concept pane, you edited this Web Page in its own tabbed editor and saved it to its own folder (our Assets folder). Hopefully, you also noticed that in addition to the Create new web page button, the Concept pane also has a Create link button that can be used to attach an existing Web Page to the Concept pane. It should, therefore, come as no surprise to learn that Web Pages can be created independently of the Concept pane. In fact, the Concept pane is only one of several uses of Web Pages.

To create a Web Page that is independent of a Concept pane (or anything else), carry out these steps:

1. In the Library, navigate to the folder in which you want to store the Web Page. In our example, we are saving all of the Web Pages for a module in a sub-folder called Assets within the course folder.
2. Select menu option File | New | Web Page. A Web Page Editor tabbed page is opened on the Developer screen. The content and use of this is exactly as described above in the explanation of how to create a Web Page from within an Outline.
3. Enter the required information into the Web Page, and format it as required. We have already covered most of the available options above.
4. Once you have made your changes, click on the Save button. You will be prompted to select the destination folder (which will default to the folder selected in Step 1, although you can change this) and a file name. Refer to the description of the Save As dialog box above for additional help if necessary.
5. Close the Web Page Editor.

You have now created a new Web Page. Now let's look at how to use it.

Using Web Pages in Topics

If you recall our long-running exercise on maintaining your SAP user profile, you will remember that we ask the user to enter their last name and their first name. These terms may be confusing in some countries—especially in countries where the "family name" actually comes before the "given name"—so we want to provide some extra explanation of some common name formats in different countries, and how these translate into first name and last name. We'll provide this information in a Web Page, and then link to this Web Page at the relevant place(s) within our Topic.

First, we need to create the Web Page. How to do this is explained in the section Creating independent Web Pages, above. For our exercise, our created Web Page is as follows:

There are two ways in which you can link to a Web Page from a Topic. These are explained separately, below.

Linking via a hyperlink

With a hyperlink, the Web Page is linked from a word or phrase within the Bubble Text of a Frame. (Note that it is only possible to do this for Custom Text, because you can't select the Template Text to hyperlink from.) To create a hyperlink to a Web Page from within a Frame in a Topic, carry out the steps described below:

1. Open the Topic in the Topic Editor.
2. Navigate to the Frame from which you want to provide the hyperlink. In our exercise, we will link from the Explanation Frame describing the Last name field (this is Frame 5B).
3. In the Bubble Properties pane, select the text that you want to form the hyperlink (that is, the text that the user will click on to display the Web Page).
4. Click on the Bubble text link button in the Bubble Properties pane. The Bubble Text Link Properties dialog box is displayed.
5. Click on the Create link button to create a link to an existing Web Page (you could also click on the Create new web page button to create a new Web Page if you have not yet created it). The Insert Hyperlink dialog box is displayed, as shown in the next screenshot.
6. Make sure that the Document in Library option is selected in the Link to: bar.
7. In the Look in: field, navigate to the folder containing the Web Page (the Assets folder, in our example).
8. In the file list, click on the Web Page to which you want to create a hyperlink.
9. Click on the Open button.
10. Back in the Bubble Text Link Properties dialog box, click on OK.

This hyperlink will now appear as follows, in the Player:

Note that there is no ToolTip for this hyperlink. There was no opportunity to enter one in the steps above, so UPK doesn't know what to use.

Version Difference: In OnDemand 8.7 the Infoblock name was used as the ToolTip, but this is not the case from OnDemand 9.x onwards.


Developing an Application in Symfony 1.3 (Part 1)

Packt
10 Nov 2009
5 min read
The milkshake shop

Our application will be designed for a fictitious milkshake shop. The functional requirements for our shop are:

- Display of the menu, job vacancies, and locations; these details will be updated from the back office
- Sign up to a newsletter, where users can submit their details
- Search facility for the menu
- Secure backend for staff to update the site
- The site must be responsive
- Option for the users to view our vacancies in three languages

Creating the skeleton folder structure

Symfony has many ways in which it can help the developer create applications with less effort—one way is by using the Symfony tasks available on the Command Line Interface (CLI). These Symfony tasks do the following:

- Generate the folder structure for your project, modules, applications, and tasks
- Clear the generated cache and rotate log files
- Create controllers to enable and disable an application and set permissions
- Interact with the ORM layer to build models and forms
- Interact with SQL to insert data into the database
- Generate initial tests and execute them
- Search your templates for text that needs i18N dictionaries and create them

If you'd like to see all the tasks that are available with Symfony, just type the following on your command line:

    symfony

Let's begin by using one of the Symfony tasks to generate the file structure for our project. Our project will reside in the milkshake folder. In your terminal window, change your folder path to the path that will hold your project and create this milkshake folder. My folder will reside in the workspace folder, so I would type this:

    $ mkdir ~/workspace/milkshake && cd ~/workspace/milkshake

Now that I have the project folder and have changed the path to within the milkshake folder, let's use a Symfony task to generate the project file structure. In your terminal window, type the following:

    $ /home/timmy/workspace/milkshake> symfony generate:project --orm=Propel milkshake

When we generate our project, we can also specify which ORM we would like to use. Our application is going to use the Propel ORM, but you can also opt for the Doctrine ORM; by default, Doctrine is enabled. After pressing the Enter key, the task goes into action, and you will see output like the one in the following screenshot in your terminal window. The screenshot shows the folder structure being created.

Let's have a look at the top-level folders that have been created in our project folder:

- apps: contains our frontend application and any other applications that we will create
- batch: if there are any batch scripts, they are placed here
- cache: the location for cached files and/or scaffolded modules
- config: holds all the main configuration files and the database schema definitions
- data: holds data files such as test or fixture data
- doc: all our documentation should be stored in this folder, including briefs, PhpDoc for the API, Entity Relationship Diagrams, and so on
- lib: our models and classes are located in this folder so that all applications can access them
- log: holds the development controllers' log files and our Apache log
- plugins: all installed plugins are located in this folder
- test: all unit and functional tests should be placed in this folder; Symfony will create and place stubs here when we create our modules
- web: the web root folder that contains all web resources such as images, CSS, JavaScript, and so on

There are three folders that must be writable by the web server: cache, log, and web/uploads/. If these are not writable by your web server, the server will either produce warnings at startup or fail to start. The file permissions are usually set during the generation process, but sometimes this can fail. You can use the following task to set the permissions:

    $ /home/timmy/workspace/milkshake> symfony project:permissions

With the initial project skeleton built, next we must create an application. Symfony defines an application as operations that are grouped logically and that can run independently of one another. For example, many sites have a frontend and a back office/admin area. The admin area is where a user logs in and updates areas of the frontend. Of course, our application will also provide this functionality; we will call these areas frontend and backend, and our first application is going to be the frontend. Again, let's use a Symfony task to generate the frontend application folder structure. Enter the following task in your terminal window:

    $ /home/timmy/workspace/milkshake> symfony generate:app frontend

Again, after executing this task, you will see the following in your terminal window:

Let's have a look at the top-level folders that have just been created in the apps folder:

- config: will hold all configuration files for our application
- i18N: will contain all Internationalization files
- lib: will hold all application-specific classes
- modules: all our modules will be built in this folder
- templates: the default layout template is created in this folder, and any other global templates that we create will also be placed here

These steps generate our project and frontend application folder structure, and we can now view our project in a web browser. Open your web browser and go to http://milkshake/frontend_dev.php, and you will see the default Symfony page along with instructions on what to do next. The frontend_dev.php file is our index file while developing the frontend application, and it adheres to the naming convention of <application>_<environment>.php. This controller is the development controller and is rather helpful while developing. Before we continue, though, I urge you to also have a look at the web debug toolbar.
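For reference, the development front controller created by the generate:app task is only a few lines of PHP. The following is a sketch of what web/frontend_dev.php typically looks like in a symfony 1.x project, reproduced from memory rather than copied from the generated file, so treat the exact wording as an approximation:

    <?php
    // web/frontend_dev.php - development front controller (approximate sketch)

    // guard so the debug controller cannot be reached from machines other than localhost
    if (!in_array(@$_SERVER['REMOTE_ADDR'], array('127.0.0.1', '::1')))
    {
      die('You are not allowed to access this file.');
    }

    require_once(dirname(__FILE__).'/../config/ProjectConfiguration.class.php');

    // 'frontend' is the application, 'dev' is the environment, true turns debugging on
    $configuration = ProjectConfiguration::getApplicationConfiguration('frontend', 'dev', true);
    sfContext::createInstance($configuration)->dispatch();

Comparing this with the production controller (web/index.php) shows that the main differences are the environment name, the debug flag, and the localhost guard, which is why the <application>_<environment>.php naming convention matters.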


Developing an Application in Symfony 1.3 (Part 2)

Packt
10 Nov 2009
7 min read
Building the database

The last step is to create the database and then create all of the tables. I have created my database, called milkshake, on the CLI using the following command:

    $ /home/timmy/workspace/milkshake> mysqladmin create milkshake -u root -p

Now that we have created the database, we need to generate the SQL that will create our tables. Again, we are going to use a Symfony task for this. Just like creating the ORM layer, the task will build the SQL based on the schema.xml file. From the CLI, execute the following task:

    $ /home/timmy/workspace/milkshake> symfony propel:build-sql

This generates a SQL file that contains all of the SQL statements needed to build the tables in our database. The file is located in the data/sql folder within the project folder; looking at the generated lib.model.schema.sql file in this folder, you can view the SQL. Next, we need to insert the SQL into the database. Again using a Symfony task, execute the following on the CLI:

    $ /home/timmy/workspace/milkshake> symfony propel:insert-sql

During the execution of the task, you will be prompted to enter y or N as to whether you want to delete the existing data. As this command will delete your existing tables and then create new tables, enter y. During development, the confirmation can become tiring. To get around this, you can append the --no-confirmation switch to the end, as shown here:

    > symfony propel:insert-sql --no-confirmation

Afterwards, check your database and you should see all of the tables created, as shown in the following screenshot.

I have shown you how to execute each of the tasks in order to build everything. But there is a simpler way to do this, and that is with yet another Symfony task which executes all of the above tasks:

    $ /home/timmy/workspace/milkshake> symfony propel:build-all

or

    $ /home/timmy/workspace/milkshake> symfony propel:build-all --no-confirmation

Our application is now all set up with a database and the ORM layer configured. Next, we can start on the application logic and produce a wireframe.

Creating the application modules

In Symfony, all requests are initially handled by a front controller before being passed to an action. The actions then implement the application logic before returning the presentation template that will be rendered. Our application will initially contain four areas: home, location, menu, and vacancies. These areas will essentially form modules within our frontend application. A module is similar to an application, in that it is a self-contained place to group application logic. Let's now create the modules on the CLI by executing the following tasks:

    $ /home/timmy/workspace/milkshake> symfony generate:module frontend home
    $ /home/timmy/workspace/milkshake> symfony generate:module frontend location
    $ /home/timmy/workspace/milkshake> symfony generate:module frontend menu
    $ /home/timmy/workspace/milkshake> symfony generate:module frontend vacancies

Executing these tasks will create all of the modules' folder structures, along with default actions, templates, and tests, in our frontend application. You will see the following screenshot when running the first task:

Let's examine the folder structure for a module:

- actions: contains the actions class and components class for the module
- templates: all of the module's templates are stored in this folder

Now browse to http://milkshake/frontend_dev.php/menu and you will see Symfony's default page for our menu module. Notice that this page also provides useful information on what to do next. That information, of course, is to render our own template rather than have Symfony forward the request.

Handling the routing

We have just tested our menu module, and Symfony was able to handle the request without us having to set anything up. This is because the URL was interpreted as http://milkshake/module/action/:params. If the action is missing, Symfony will automatically append index and execute the index action if one exists in the module. Looking at the URL for our menu module, we can use either http://milkshake/frontend_dev.php/menu or http://milkshake/frontend_dev.php/menu/index for the moment. Also, if you want to pass variables from the URL, you can just add them to the end of the URL. For example, if we wanted to also pass page=1 to the menu module, the URL would be http://milkshake/frontend_dev.php/menu/index/page/1. The problem here is that we must also specify the name of the action, which doesn't leave much room for customizing a URL.

Mapping the URL to the application logic is called routing. In the earlier example, we browsed to http://milkshake/frontend_dev.php/menu and Symfony was able to route that to our menu module without us having to configure anything. First, let's take a look at the routing file located at apps/frontend/config/routing.yml:

    # default rules
    homepage:
      url:   /
      param: { module: default, action: index }

    default_index:
      url:   /:module
      param: { action: index }

    default:
      url:   /:module/:action/*

This is the default routing file that was generated for us. Using the homepage routing rule as an example, the route is broken down into three parts:

- A unique label: homepage
- A URL pattern: url: /
- An array of request parameters: param: { module: default, action: index }

We refer to each one of the rules within the routing file using a unique label. The URL pattern is what Symfony uses to map the URL to a rule, and the array of parameters is what maps the request to the module and the action. By using a routing file, Symfony caters for complicated URLs, which can restrict parameter types and request methods, and can associate parameters with our Propel ORM layer. In fact, Symfony includes an entire framework that handles the routing for us.

The application logic

As we have seen, Symfony routes all requests to an action within a module. So let's open the actions class for our menu module, which is located at apps/frontend/modules/menu/actions/actions.class.php:

    class menuActions extends sfActions
    {
      /**
       * Executes index action
       *
       * @param sfRequest $request A request object
       */
      public function executeIndex(sfWebRequest $request)
      {
        $this->forward('default', 'module');
      }
    }

This menuActions class contains all of the menu actions and, as you can see, it extends the sfActions base class. The class was generated for us along with a default index action (method). The default index action simply forwards the request to Symfony's default module, which in turn generates the default page that we were presented with. All of the actions follow the same naming convention: the action name must begin with the word execute, followed by the action name starting with a capital letter. Also, the request object is passed to the action, and it contains all of the parameters that are in the request.

Let's begin by modifying the default behavior of the menu module to display our own template. Here we need the application logic to return the name of the template that should be rendered. To do this, we simply replace the call to the forward function with a return statement that has the template name:

    public function executeIndex(sfWebRequest $request)
    {
      return sfView::SUCCESS;
    }

A default index template was also generated for us in the templates folder, at apps/frontend/modules/menu/templates/indexSuccess.php. Returning the sfView::SUCCESS constant will render this template for us. The template rendered depends on the string returned from the action, and all templates must follow the naming convention actionNameReturnString.php. Therefore, our action called index returns the sfView constant SUCCESS, meaning that the indexSuccess.php template needs to be present within the templates folder for our menu module. We can return other strings, such as these:

- return sfView::ERROR: looks for the indexError.php template
- return 'myTemplate': looks for the indexmyTemplate.php template
- return sfView::NONE: does not return a template and, therefore, bypasses the view layer; this could be used, for example, for an AJAX request

However, simply removing the $this->forward('default', 'module') call will also return indexSuccess.php by default. It is still worth adding the return value for ease of reading. Now that we have rendered the menu template, go ahead and do the same for the home, locations, and vacancies modules.
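To make the action-to-template handoff slightly more concrete, here is a minimal sketch (not part of the generated module) of how data set on the action becomes available in the template in symfony 1.x; the menuTitle property and its value are hypothetical examples:

    // apps/frontend/modules/menu/actions/actions.class.php (sketch)
    class menuActions extends sfActions
    {
      public function executeIndex(sfWebRequest $request)
      {
        // any property set on the action object is passed to the view,
        // so $this->menuTitle becomes $menuTitle inside indexSuccess.php
        $this->menuTitle = 'Our milkshakes';   // hypothetical example data
        return sfView::SUCCESS;                // renders indexSuccess.php
      }
    }

The corresponding template can then simply echo the variable:

    <!-- apps/frontend/modules/menu/templates/indexSuccess.php (sketch) -->
    <h1><?php echo $menuTitle ?></h1>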


Keyword Research for Search Engine Optimization in Drupal 6

Packt
10 Nov 2009
11 min read
SEO is necessary—you've got to do it if you want to rank well for keywords. Simple in concept, keywords are actually very complicated things. They bring order to chaos, define markets, and reveal intent. Keyword data simultaneously tells you how many people are looking for your product or service and what those people will do once they find you. The results of a keyword search can tell you who the top people are in an industry and inform you of upcoming trends in the market. Keywords are the most visible focal point of free market competition between business interests. Search engine optimization is a popularity contest for keywords, and this is a popularity contest you want to win.

The most critical part of an SEO project is finding the right keywords. You will spend months working on your web site, getting links, and telling the world that your site is the authority on that keyword. It's critical that when you finally arrive, your customers are there to embrace you. If you pick the wrong keywords, you'll spend months working only to find that there is nobody who wants to buy your product. Ouch! An extra few hours researching your keywords in the beginning will help you avoid this fate.

What a keyword is

Keywords are many things to many people. For the purpose of this SEO campaign, there are really only two things about keywords that we need to understand to get the job done: keywords aggregate searchers into organized groups, and a keyword defines a market. Keywords are single words that a search engine user types into the search box to try to find what they're looking for. Key phrases are the same as keywords except that they consist of two or more words. For the sake of simplicity, throughout this article let's use keywords to mean both keywords and key phrases.

Keywords aggregate searchers into organized groups

Millions of random people visit Google every day. When they arrive, they are amorphous—a huddled mass yearning for enlightenment with nothing more than a blank Google search form to guide them. As each person types keywords into Google and clicks the Search button, this random mass of people becomes extraordinarily organized. Each keyword identifies exactly what that person is looking for and allows Google to show them results that would satisfy their query. Much like a labor union, the more searchers there are looking for a particular phrase, the more clout they have with the businesses who want to sell to them. However, instead of more pay and better health benefits, you get better search results. If there are a thousand people per month looking for keyword A and a hundred people per month looking for keyword B, then chances are good that there are more competitors focused on keyword A. More competition means better optimization is required to show up at the top. Better optimization requires more content, closer attention to meeting the needs of the group, and more interesting web sites.

A keyword defines a market

This organization of searchers is what gives Google such power. In a very real way, Google creates billions of tiny markets every day. There is a buyer (the searcher) looking for a product, the seller (the web site owners) selling what they've got, and the middleman (Google) organizing everything and facilitating the transaction. The transaction takes place when the searcher clicks on the result of a keyword search and is whisked off to the seller's web site. However, it doesn't always go smoothly. In fact, a very high percentage of the time the searcher doesn't find what they're looking for, so they hit the back button and try again. They may try a different result on the same page, or type in a different keyword and do the entire search again. Each time, you have an opportunity to convince them that yours is the right site with the best information and the most promising solution to their questions. It is in your best interest to provide a web site that quickly engages searchers, pulls them in, and keeps the dialogue going.

Why keyword research is important

As a Drupal site owner, you have the opportunity to position yourself as the best site available for the keywords people are searching for.

Know thy customer: There are hundreds of good marketing books out there to help you better understand your audience. All that good information applies to SEO as well. The better you know your audience, the better you can guess what keywords they are typing into Google to find companies like yours.

You're an expert in your field, so of course you know many of the keywords that people use to find your products and services. But are you sure you know them all? A few years ago Tom, a friend of mine, hired me to do SEO for his high-end landscaping firm. His company designs and installs yards, trees, retaining walls, and so on, outside million dollar homes in the hill country near Austin, Texas. We sat down in an early morning meeting and he said, "Ben, the right keyword is landscaping. I know it, so there's no reason to do all this research. Don't waste your time and my money. Just do landscaping and that's that."

Being the thorough person that I am, I did the keyword research anyway. Guess what I found? The number one phrase in his business was landscaping. However, a very close second was landscaper. And, while landscaping had dozens of competitors—some of them very well entrenched—there were only a handful of competitors showing up for landscaper. The next day, I called Tom and told him what I found. "You know what?" he said, "Now that you mention it, many of our customers do refer to us as landscapers—'I need a landscaper. Call a landscaper.'"

So, we started his campaign targeting the keyword landscaper. Because there was so little competition, he ranked in the top five in a matter of weeks and was number one in Google within two months. He was dominating half the search traffic within two months! The leads were rolling in, so we switched to the keyword landscaping. It took longer—about three months—for him to break into the top ten. By that time, he had so many inquiries he hardly even noticed. The lesson here is three-fold:

- You may know some of the keywords, but that doesn't mean you know them all.
- Just because you think of yourself in one particular way doesn't mean your customers do.
- By taking the time to do keyword research, you will reveal opportunities in your market that you didn't know existed.

What your keyword goal is

Before you start looking at keywords, you need to fix your goal firmly in your mind. There are basically two major reasons to do SEO.

Goal 1: Brand awareness

This may come as a surprise, but there are people out there who don't know that you exist. SEO is a powerful and inexpensive way to get your name out there and build some credibility with your target customers. There are three major types of brand awareness:

- Company brand awareness: works on getting the name of your company into the market. If you want to build credibility for Big Computers Unlimited as a whole, then you probably want a campaign focused on getting your company listed where other top producers of PCs are listed. PC, computer, or fast computer all might be good terms.
- Product brand awareness: focuses on building general market knowledge of one product or line of products that your company produces. If you work for Big Computers Unlimited and you want to sell more Intergalactic Gamer brand computers at retail stores throughout the country, then you probably want to build a campaign around keywords like Gaming PC or even high-end PC.
- Credibility: A 2004 survey by iProspect found that two out of three search engine users believed that the highest search results in Google were the top brands for their industry; there is little reason to believe this perception has changed. That means that just being at the top of Google will gain you a certain level of trust among search engine users. If Big Computers Unlimited can rank in the top three for Gaming PCs, they'll develop a lot of cred among gamers.

Goal 2: Conversions

Conversions are a fancy way of saying that the visitor did what you wanted them to do. There are three typical types of conversions:

- Transactional: A transaction is just what it sounds like: someone puts in a credit card and buys your product. This is typical of most product-focused web sites but isn't limited to this narrow category. Your web site may sell registrations to online training, subscriptions to magazines, or even credit monitoring. The bottom line is that it can be purchased on the site. You need to focus your keyword research on terms that will bring buyers who are ready to purchase right now. Words like buy, price, merchant, store, and shop indicate a desire for immediate purchase. Give them the transactional information they need, such as price, color choices, size, quantity discounts, return policy, and delivery options. With this information and a great checkout experience, you'll have them buying from you in no time. (Ubercart is simply the best shopping cart solution for Drupal. If you're a transactional web site and you need an e-commerce solution, start here: http://www.ubercart.org/.)
- Lead generation: If you're in an industry with a long sales cycle like real estate, legal services, or medical, then you're probably interested in generating leads rather than online transactions. If you sell a service or product that requires direct contact with your customer, like consulting or personal training, then you probably want leads too. Lead generation means that instead of buying from your web site, you're interested in someone expressing an interest in doing business with you so that you can follow up with them later. You let them express interest by filling out a contact form, emailing you, or even picking up the phone and calling you. You need to focus your keyword research on terms that will bring people who are perhaps a little earlier in the buying process. Words like review, compare, best, information, and generic names of your product indicate a user who is researching but not quite ready to buy. You'll need to provide a lot of information on your web site to inform them and shape their thinking about your product.
- Page impression (or ad impression): Some web sites make money when visitors view an ad. To these sites, a conversion may simply be someone clicking on one or more pages so that they'll see one more ad. You need to focus your keyword research on terms that will bring people seeking information or news to your web site.

Keyword research tools

There are many tools to help you find the right keywords. It's not important that you use them all, but you should try a few of them just so you can see what's out there. Here are my favorites, in order of preference.

Your own web site: The most important keyword research tool at your disposal is your own web site, http://www.yourDrupalsite.com/. If you already have some traffic, chances are that it is coming from somewhere. With analytics installed, you should be able to find out what visitors are searching on with just a few clicks. If you have Google Analytics installed, you can easily see this data by logging in to its admin section and then going to Traffic Sources | Search Engines | Google. This information can be invaluable if you cross-reference it with your current positions in the search engines. Say, for example, that you're getting 100 searchers a month on a term for which you're on page 2 of Google. That's a good indicator that people are searching hard to find a company like yours, and that may be a very good term to focus on in your search engine campaign. If you're getting that much traffic on page 2, imagine if you were in the top three on page 1. Drupal also has a built-in search engine—another great tool to see what people are searching for after they've already visited your site. There's an insanely useful module for that, called Top Searches (http://drupal.org/project/top_searches), that does a better job than Drupal's built-in list. This module was developed by the founder of Linnovate, Zohar Stolar. Thanks, Zohar!


Search Engine Optimization using Sitemaps in Drupal 6

Packt
10 Nov 2009
8 min read
Let's get started. As smart as the Google spider is, it's possible for it to miss pages on your site. Maybe you've got an orphaned page that isn't in your navigation anymore. Or perhaps you have moved a link to a piece of content so that it's not easily accessible. It's also possible that your site is so big that Google just can't crawl it all without completely pulling all your server's resources—not pretty! The solution is a sitemap.

In the early 2000s, Google started supporting XML sitemaps. Soon after, Yahoo! came out with its own standard, and other search engines started to follow suit. Fortunately, in 2006, Google, Yahoo!, Microsoft, and a handful of smaller players all got together and decided to support the same sitemap specification. That made it much easier for site owners to make sure every page of their web site is crawled and added to the search engine index. They published their specification at http://sitemaps.org.

Shortly thereafter, the Drupal community stepped up and created a module called (surprise!) the XML sitemap module. This module automatically generates an XML sitemap containing every node and taxonomy on your Drupal site. It was originally written by Matthew Loar as part of the Google Summer of Code, and the Drupal 6 version of the module was developed by Kiam LaLuno. Finally, in mid-2009, Dave Reid began working on version 2.0 of the module to address performance, scalability, and reliability issues. Thanks, guys!

According to www.sitemaps.org: "Sitemaps are an easy way for Webmasters to inform search engines about pages on their sites that are available for crawling. In its simplest form, a Sitemap is an XML file that lists URLs for a site along with additional metadata about each URL (when it was last updated, how often it usually changes, and how important it is, relative to other URLs in the site) so that search engines can more intelligently crawl the site. Web crawlers usually discover pages from links within the site and from other sites. Sitemaps supplement this data to allow crawlers that support Sitemaps to pick up all URLs in the Sitemap and learn about those URLs using the associated metadata."

Using a sitemap does not guarantee that every page will be included in the search engines; rather, it helps the search engine crawlers find more of your pages. In my experience, submitting an XML Sitemap to Google will greatly increase the number of pages reported when you do a site: search. A site: search shows you how many pages of your site are included in the search engine index, as shown in the following screenshot.

Setting up the XML Sitemap module

The XML Sitemap module creates a sitemap that conforms to the sitemaps.org specification.

Which XML Sitemap module should you use?

There are two versions of the XML Sitemap module for Drupal 6. The 1.x version is, as of this writing, considered the stable release and should be used for production sites. However, if you have a site with more than about 2000 nodes, you should probably consider using the 2.x version. From www.drupal.org: "The 6.x-2.x branch is a complete refactoring with considerations for performance, scalability, and reliability. Once the 6.x-2.x branch is tested and upgradeable, the 6.x-1.x branch will no longer be supported." What this means is that in the next few months (quite possibly by the time you're reading this) everyone should be using the 2.x version of this module. That's the beauty of open source software—there are always improvements coming that make your Drupal site better optimized for search engines.

Carry out the following steps to set up the XML Sitemap module:

1. Download the XML Sitemap module and install it just like a normal Drupal module.
2. When you go to turn on the module, you'll be presented with a list that looks similar to the following screenshot. Before you turn on any included modules, consider what pieces of content on your site you want to show up in the search engines, and only turn on the modules you need:
   - XML sitemap is required. Turn it on.
   - XML sitemap custom allows you to add your own customized links to the sitemap. Turn it on.
   - XML sitemap engines will automatically submit your sitemap to the search engines each time it changes. This is not necessary, and there are better ways to submit your sitemap; however, it does a nice job of helping you verify your site with each search engine. Turn it on.
   - XML sitemap menu adds your menu items to the sitemap. This is probably a good idea. Turn it on.
   - XML sitemap node adds all your nodes. That's usually the bulk of your content, so this is a must-have. Turn it on.
   - XML sitemap taxonomy adds all your taxonomy term pages to the sitemap. Generally a good idea, although some might not want this listed. Term pages are good category pages, so I recommend it. Turn it on.
   Don't forget to click Save configuration.
3. Now that you have the XML sitemap module properly installed and configured, you can start defining the priority of the content on your site. By default, the priority is 0.5; however, there are times when you may want Google to visit some content more often, and other times when you may not want your content in the sitemap at all (such as the comment or contact us submission forms). Each node now has an XML sitemap section that looks like the following screenshot.
4. Go to http://www.yourDrupalsite.com/admin/settings/xmlsitemap, or go to your admin screen and click on the Administer | Site Configuration | XML sitemap link. You'll be able to see the XML sitemap, as shown in the following screenshot.
5. Click on Settings and you'll see a few options:
   - Minimum sitemap lifetime determines the minimum amount of time that the module will wait before regenerating the sitemap. Use this feature if you have an enormous sitemap that is taking too many server resources. Most sites should leave this set to No minimum.
   - Include a stylesheet in the sitemap generates a simple CSS file to include with the generated sitemap. It's not necessary for the search engines, but it is very helpful for troubleshooting or if any humans are going to view the sitemap. Leave it checked.
   - Generate sitemaps for the following languages will, in the future, allow you to specify sitemaps for different languages. This is very important for international sites that want to show up in localized search engines. For now, English is the only option and should remain checked.
6. Click the Advanced settings drop-down and you'll see several additional options:
   - Number of links in each sitemap page allows you to specify how many links to pages on your web site will be in each sitemap. Leave it on Automatic unless you are having trouble with the search engines accepting the sitemap.
   - Maximum number of sitemap links to process at once sets the number of additional links that the module will add to your sitemap each time cron runs. This highlights one of the biggest differences between the new XML sitemap module and the old one: the new module only processes new nodes and updates the existing sitemap, instead of reprocessing everything every time the sitemap is accessed. Leave this setting alone unless you notice that cron is timing out.
   - Sitemap cache directory allows you to set where the sitemap data will be stored. This data is not shown to the search engines or users; it's only used by the module.
   - Base URL is the base URL of your site and generally should be left as it is.
7. Click on the Front page drop-down and set these options:
   - Front page priority: 1.0 is the highest setting you can give a page in the XML sitemap. On most web sites, the front page is the single most important part of the site, so this setting should probably be left at 1.0.
   - Front page change frequency tells the search engines how often they should revisit your front page. Adjust this setting to reflect how often the front page of your site changes.

What is priority and how does it work? Priority is an often-misunderstood part of a sitemap. The priority is only used to compare pages of your own site; you cannot increase your ranking in the Search Engine Results Pages (SERPs) by increasing the priority of your pages. However, it does help let the search engines know which pages of your site you feel are more important. They could use this information to choose between two different pages on your site when deciding which page to show to a search engine user.
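To make the priority and change frequency settings more tangible, this is roughly what a single entry in a sitemaps.org-style XML sitemap looks like; the URL and values below are hypothetical illustrations rather than output captured from the module:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <!-- hypothetical front page entry -->
        <loc>http://www.yourDrupalsite.com/</loc>
        <lastmod>2009-11-10</lastmod>
        <changefreq>daily</changefreq>
        <priority>1.0</priority>
      </url>
    </urlset>

The priority value only ranks this URL against the other URLs in the same file, which is why raising it tells the crawler what you consider important but does not improve your position in the SERPs.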


Extending Search Engine Optimization using Sitemaps in Drupal 6

Packt
09 Nov 2009
5 min read
We discussed some of these techniques earlier, in the Search Engine Optimization using Sitemaps in Drupal 6 article, which mainly covered XML sitemaps. The Google spider is quite smart, but the possibility of it missing pages on your site is still quite high. You may have pages that are not in your navigation anymore, or you may have moved a link to a piece of content so that it's not easily accessible. It's also possible that your site is too big for Google to crawl it all without completely pulling all your server's resources. The solution to this is sitemaps.

Google News XML Sitemap

Google has created one of the most popular news sources on the Internet just by collecting and organizing news articles from other sites. It's called Google News, and if you are running a news web site then you know how powerful it can be for picking up your content. One front page article can generate 50,000 or more visitors in an hour or two. Being listed in Google News takes more than luck: you need to write great content and proactively seek to create timely and newsworthy content. If you've done that and you're still not showing up in Google News, then it's time to create a Google News XML Sitemap. The Google News sitemap generator module was originally created by Adam Boyse at Webopius and is being maintained by Dave Reid. Thanks to both of you!

Setting up the Google News sitemap generator module

Carry out the following steps to set up the Google News sitemap generator module:

1. Download the Google News sitemap module from http://drupal.org/project/googlenews and install it just like a normal Drupal module.
2. Go to http://www.yourDrupalsite.com/admin/settings/googlenews, or go to your admin screen and click the Administer | Site Configuration | Google News sitemap feed link. You'll see a screen similar to the following screenshot.
3. Select the content types that you wish to show up in the news feed. If all your story content types are newsworthy, pick Story. Your blog or page content types are probably not a good fit, and selecting them may hurt the chances of your content being approved by Google.
4. Click on Save configuration.
5. Point your browser to http://www.yourDrupalsite.com/googlenews.xml and double-check that you can see the sitemap, as shown in the following screenshot.

Submitting your Google News sitemap to Google News

Once you've assembled your news articles for a single publication label, submit them to Google News sitemaps by carrying out the following steps:

1. Check Google News to see if your site is already included. If not, you can request inclusion by visiting http://www.google.com/support/news_pub/bin/request.py?ctx=answer. The inclusion process may take up to a few weeks, and you'll only be able to submit a News sitemap once this process is complete. If your site is already showing up in Google News, then proceed; if not, you should wait a couple of weeks and try again.
2. Log in to Google Webmaster Tools by pointing your browser to http://www.google.com/webmasters/tools/.
3. On the Webmaster Tools Dashboard, click on Add next to the site you want to submit.
4. From the Choose type drop-down menu, select News sitemap, and then type the sitemap URL, in this case http://www.yourDrupalsite.com/googlenews.xml.
5. In the list, select the publication label for the articles. You can select only one label for each sitemap.
6. Click on OK.

URL list

The XML Sitemap is the ideal choice because it allows you to specify a lot of information about the content of your site. But say, for some reason, that you can't install an XML Sitemap. Maybe there's a conflict with another module that you just have to have. Perhaps your server doesn't have the power to handle the large overhead that an XML sitemap needs for large sites. Or possibly you want to submit a sitemap to a search engine that doesn't support XML yet. Well, there is an alternative. It's not as robust, but it is a functional, albeit rudimentary, solution: just make a list of every URL on your site and put the list in one big text document, with one URL on each line. Too much work, you say? Good thing there is a Drupal module that does all the work for you. It's called the URL list module, and it's maintained by David K. Norman. Thank you, David!

Setting up a URL list sitemap

1. Download the URL list module from http://drupal.org/project/urllist and install it just like a normal Drupal module.
2. Go to http://www.yourDrupalsite.com/admin/settings/urllist, or go to your admin screen and click the Administer | Site Configuration | URL list link. You'll see the URL list screen, as shown in the following screenshot. You can adjust the settings to keep track of who accessed the URL list, to submit your site to Yahoo!, and to help you authenticate with Yahoo!; however, you can leave all these settings untouched for now.
3. Point your browser to http://www.yourDrupalsite.com/urllist.txt (or http://www.yourDrupalsite.com/?q=urllist.txt if you don't have clean URLs enabled) and you'll see your URL list sitemap, as shown in the following screenshot.

You can submit this sitemap to Google, Yahoo!, and many other search engines in lieu of an XML sitemap. Just follow the same steps as defined in the Submit your XML Sitemap to Google section above, but use http://www.yourDrupalsite.com/urllist.txt as the URL. Remember to use http://www.yourDrupalsite.com/?q=urllist.txt if Google has problems with your URL.
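As a concrete illustration, the urllist.txt file you submit is nothing more than one fully qualified URL per line; the paths below are hypothetical:

    http://www.yourDrupalsite.com/
    http://www.yourDrupalsite.com/about
    http://www.yourDrupalsite.com/blog/keyword-research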

Simple Item Selector Using jQuery

Packt
09 Nov 2009
4 min read
Adding jQuery to your page

You can download the latest version of jQuery from the jQuery site (http://jquery.com/) and add it as a reference to your web pages. You reference jQuery using a <script> tag in the page: either reference your local copy, or reference a remote copy directly from jQuery.com or the Google AJAX Libraries API (http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js).

Prerequisite Knowledge

In order to understand the code, you should have basic knowledge of HTML, CSS, JavaScript, and jQuery.

Ingredients Used

- HTML
- CSS
- jQuery
- Photoshop (used for designing the image buttons and backgrounds)

Preview / Download

If you would like to see the working example, click here (http://www.developersnippets.com/snippets/jquery/item_selector/item_selector.html). And if you would like to download the snippet, click here (http://www.developersnippets.com/snippets/jquery/item_selector/item_selector.zip).

Figure 1: Snapshot of "Simple Item Selector using jQuery"
Figure 2: Overview of div containers and image buttons used

Successfully tested

The above application has been successfully tested on various browsers: IE 6.0, IE 7, IE 8, Mozilla Firefox (latest version), Google Chrome, and Safari (4.0.2).

HTML Code

Below is the HTML code, with comments to help you understand it better:

    <!-- Container -->
    <div id="container">
      <!-- From Container -->
      <div class="from_container">
        <select id="fromSelectBox" multiple="multiple">
          <option value="1">Adobe</option>
          <option value="2">Oracle</option>
          <option value="3">Google</option>
          <option value="4">Microsoft</option>
          <option value="5">Google Talk</option>
          <option value="6">Google Wave</option>
          <option value="7">Microsoft Silver Light</option>
          <option value="8">Adobe Flex Professional</option>
          <option value="9">Oracle DataBase</option>
          <option value="10">Microsoft Bing</option>
        </select><br />
        <input type="image" src="images/selectall.jpg" class="selectall" onclick="selectAll('fromSelectBox')" /><input type="image" src="images/deselectall.jpg" class="deselectall" onclick="clearAll('fromSelectBox')" />
      </div>
      <!-- From Container [Close] -->
      <!-- Buttons Container -->
      <div class="buttons_container">
        <input type="image" src="images/topmost.jpg" id="topmost" /><br />
        <input type="image" src="images/moveup.jpg" id="moveup" /><br />
        <input type="image" src="images/moveright.jpg" id="moveright" /><br />
        <input type="image" src="images/moveleft.jpg" id="moveleft" /><br />
        <input type="image" src="images/movedown.jpg" id="movedown" /><br />
        <input type="image" src="images/bottommost.jpg" id="bottommost" /><br />
      </div>
      <!-- Buttons Container [Close] -->
      <!-- To Container -->
      <div class="to_container">
        <select id="toSelectBox" multiple="multiple"></select><br />
        <input type="image" src="images/selectall.jpg" class="selectall" onclick="selectAll('toSelectBox')" /><input type="image" src="images/deselectall.jpg" class="deselectall" onclick="clearAll('toSelectBox')" />
      </div>
      <!-- To Container [Close] -->
      <!-- Asc/Desc Container -->
      <div class="ascdes_container">
        <input type="image" src="images/ascending.jpg" id="ascendingorder" style="margin:1px 0px 2px 0px;" onclick="ascOrderFunction()" /><br />
        <input type="image" src="images/descending.jpg" id="descendingorder" onclick="desOrderFunction()" />
      </div>
      <!-- Asc/Desc Container [Close] -->
      <div style="clear:both"></div>
    </div>
    <!-- Container [Close] -->

Using JavaScript Effects with Joomla!

Packt
06 Nov 2009
7 min read
Customizing Google Maps

Google Maps has a comprehensive API for interacting with maps on your website. MooTools can be used to load the Google Maps engine at the correct time. It can also act as a bridge between the map and other HTML elements on your site.

To get started, you will first need to get an API key to use Google Maps on your domain. You can sign up for a free key at http://code.google.com/apis/maps/signup.html. Even if you are working on your local computer, you still need the key. For instance, if the base URL of your Joomla installation is http://localhost/joomla, you will enter localhost as the domain for your API key.

Once you have an API key ready, create the file basicmap.js in the /components/com_js folder, and fill it with the following code:

window.addEvent('domready', function() {
    if (GBrowserIsCompatible()) {
        var map = new GMap2($('map_canvas'));
        map.setCenter(new GLatLng(38.89, -77.04), 12);
        window.onunload = function() {
            GUnload();
        };
    }
});

The entire script is wrapped within a call to the MooTools-specific addEvent() member function of window. Because we want this code to execute once the DOM is ready, the first parameter is the event name 'domready'. The second parameter is an anonymous function containing our code.

What does the call to function() do?
Using function() in JavaScript is a way of creating an anonymous function. This way, you can create functions that are used in only one place (such as event handlers) without cluttering the namespace with a needless function name. Also, the code within the anonymous function operates within its own scope; this is referred to as a closure. Closures are very frequently used in modern JavaScript frameworks, for event handling and other distinct tasks.

Once inside the function, GBrowserIsCompatible() is used to determine whether the browser is capable of running Google Maps. If it is, a new instance of GMap2() is declared, bound to the HTML element that has an id of 'map_canvas', and stored in map. The call to $('map_canvas') is a MooTools shortcut for document.getElementById(). Next, the setCenter() member function of map is called to tell Google Maps where to center the map and how far to zoom in. The first parameter is a GLatLng() object, which is used to set the specific latitude and longitude of the map's center. The other parameter determines the zoom level, which is set to 12 in this case. Finally, the window.onunload event is set to a function that calls GUnload(). When the user navigates away from the page, this function removes Google Maps from memory, to prevent memory leaks.

With our JavaScript in place, it is now time to add a function to the controller in /components/com_js/js.php that will load it along with some HTML. Add the following basicMap() function to this file:

function basicMap()
{
    $key = 'DoNotUseThisKeyGetOneFromCodeDotGoogleDotCom';
    JHTML::_('behavior.mootools');
    $document =& JFactory::getDocument();
    $document->addScript('http://maps.google.com/maps?file=api&v=2&key=' . $key);
    $document->addScript(JURI::base() . 'components/com_js/basicmap.js');
    ?>
    <div id="map_canvas" style="width: 500px; height: 300px"></div>
    <?php
}

The basicMap() function starts off by setting $key to the API key received from Google. You should replace this value with the one you receive at http://code.google.com/apis/maps/signup.html. Next, JHTML::_('behavior.mootools'); is called to load MooTools into the <head> tag of the HTML document.

This is followed by getting a reference to the current document object through the getDocument() member function of JFactory. The addScript() member function is called twice: once to load the Google Maps API (using our key), then again to load our basicmap.js script. (The Google Maps API supplies all of the functions and class definitions beginning with a capital 'G'.) Finally, a <div> with an id of 'map_canvas' is sent to the browser.

Once this function is in place and js.php has been saved, load index.php?option=com_js&task=basicMap in the browser. Your map should look like this:

We can make this map slightly more interesting by adding a marker at a specific address. To do so, add the highlighted code below to the basicmap.js file:

window.addEvent('domready', function() {
    if (GBrowserIsCompatible()) {
        var map = new GMap2($('map_canvas'));
        map.setCenter(new GLatLng(38.89, -77.04), 12);
        var whitehouse = new GClientGeocoder();
        whitehouse.getLatLng('1600 Pennsylvania Ave NW', function(latlng) {
            marker = new GMarker(latlng);
            marker.bindInfoWindowHtml('<strong>The White House</strong>');
            map.addOverlay(marker);
        });
        window.onunload = function() {
            GUnload();
        };
    }
});

This code sets whitehouse as an instance of the GClientGeocoder class. Next, the getLatLng() member function of GClientGeocoder is called. The first parameter is the street address to be looked up. The second parameter is an anonymous function to which the GLatLng object is passed once the address lookup is complete. Within this function, marker is set as a new GMarker object, which takes the passed-in latlng object as a parameter. The bindInfoWindowHtml() member function of GMarker is called to add an HTML message that appears in a balloon above the marker. Finally, the marker is passed into the addOverlay() member function of GMap2, to place it on the map.

Save basicmap.js and then reload index.php?option=com_js&task=basicMap. You should now see the same map, only with a red pin. When you click on the red pin, your map should look like this:

Interactive Maps

These two maps show the basic functionality of getting Google Maps onto your own website. They are very basic; you could easily create them at maps.google.com and then embed them in a standard Joomla! article with the HTML code they provide you. However, you would not have the opportunity to add functions that interact with the other elements on your page. To do that, we will create some more HTML code and then write some MooTools-powered JavaScript to bridge our content with Google Maps.

Open the /components/com_js/js.php file and add the following selectMap() function to the controller:

function selectMap()
{
    $key = 'DoNotUseThisKeyGetOneFromCodeDotGoogleDotCom';
    JHTML::_('behavior.mootools');
    $document =& JFactory::getDocument();
    $document->addScript('http://maps.google.com/maps?file=api&v=2&key=' . $key);
    $document->addScript(JURI::base() . 'components/com_js/selectmap.js');
    ?>
    <div id="map_canvas" style="width: 500px; height: 300px"></div>
    <select id="map_selections">
        <option value="">(select...)</option>
        <option value="1200 K Street NW">Salad Surprises</option>
        <option value="1221 Connecticut Avenue NW">The Daily Dish</option>
        <option value="701 H Street NW">Sushi and Sashimi</option>
    </select>
    <?php
}

This function is almost identical to basicMap() except for two things: selectmap.js is being added instead of basicmap.js, and a <select> element has been added beneath the <div>. The <select> element has an id that will be used in the JavaScript.

The options of the <select> element are restaurants, with different addresses as values. The JavaScript code will bind a function to the onChange event so that the marker will move as different restaurants are selected.
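The excerpt ends before the article shows selectmap.js itself, so the following is only a hypothetical sketch of what such a script could look like, reusing the MooTools and Google Maps v2 calls introduced above together with the map_canvas and map_selections ids from selectMap(); it is an illustration, not the article's original code.

// Hypothetical selectmap.js sketch (not from the original article):
// recenter the map and drop a marker whenever a restaurant is selected.
window.addEvent('domready', function() {
    if (GBrowserIsCompatible()) {
        var map = new GMap2($('map_canvas'));
        map.setCenter(new GLatLng(38.89, -77.04), 12);
        var geocoder = new GClientGeocoder();

        // MooTools extends the <select> element, so addEvent() is available on it.
        $('map_selections').addEvent('change', function() {
            var address = this.value;
            if (address === '') {
                return; // the "(select...)" placeholder carries no address
            }
            geocoder.getLatLng(address, function(latlng) {
                if (!latlng) {
                    return; // the address could not be geocoded
                }
                map.clearOverlays();           // remove the previous marker, if any
                map.addOverlay(new GMarker(latlng));
                map.setCenter(latlng, 15);     // zoom in on the chosen restaurant
            });
        });

        window.onunload = function() {
            GUnload();
        };
    }
});

Loading index.php?option=com_js&task=selectMap would then show the map with the drop-down beneath it, and choosing a restaurant would move the pin to that address.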

Adding Calendar to a Web Site using Drupal 6

Packt
28 Oct 2009
11 min read
Adding new events to the calendar

Good Eatin' Goal: Create an event that will be displayed on the calendar.
Additional modules needed: Event (http://drupal.org/project/event).

Basic steps

In order to add an event, you must first install and activate the Event module in the Module manager, as shown in the following screenshot:

Activating the Event module will create a new Event content type. There are also several settings that control how events are displayed and how time zones are handled. To modify the time zone settings, select Site configuration, then Events, and finally Timezone handling, from the Administer menu. Drupal will display a page similar to the following:

You will want to customize these settings according to where the majority of your users live and the types of events that you are holding. For example, if most of your users are in the U.S., 12-hour notation is probably appropriate, but if most of your users are in Europe, 24-hour notation is better. If the events are held online with a mix of users in different time zones, it makes sense to display the events in each user's time zone. However, if an event is being held at a single site, it makes sense to use the local time of the event. For the Good Eatin' site, we will use the site's time zone for events, display the events in the event time zone, and use the 12-hour time notation.

Before we can create an event, we must set the default time zone for the site. This is done by selecting Site configuration and then Date and time, from the Administer menu. The Good Eatin' restaurant is located in Colorado, so we will set the time zone to US/Mountain. Click Save configuration to save your changes.

To add an event, select Create content and then Event, from the main Navigation menu. Enter a title and a description for the event, as shown in the following screenshot, and then set the start time and, optionally, the end time for the event. Click Save when you are happy with the event settings.

Displaying events

Good Eatin' Goal: Display events on the site in various formats, including a block of upcoming events, a table of events, and a calendar of events.
Additional modules needed: Event (http://drupal.org/project/event).

Basic steps

The Event module provides several methods for allowing customers to view events. We will explore each of these in turn.

The easiest way to allow visitors to browse events is by using the event page, which is accessed at http://yoursite.com/event. The page appears as follows:

If you want users to be able to access this page without knowing the URL in advance, you can create a menu item for the page. Open the Menu Manager by selecting Site building and then Menus, from the Administer menu. Select the menu to which you want to add the menu item and then click the Add item tab. Enter the information about the new menu item, as shown in the following screenshot, and then click Save when you are satisfied.

The second method of presenting events to users is by using the upcoming events block. To add this, open the Blocks Manager by selecting Site building and then Blocks, from the Administer menu. Set the region for the List of upcoming events block to Right sidebar. The new block will appear as follows:

The final method of displaying calendar entries is a block showing upcoming events in a calendar view. To add this block, open the Block Manager by selecting Site building and then Blocks, from the Administer menu. Set the region for the Calendar to browse events block to Right sidebar.

The display for the calendar will appear as follows:

You can decide which of these methods to use for your own site, based on how users will work with your site.

Adding other content types to the event calendar

Good Eatin' Goal: Discuss how to add custom content types to the event calendar.
Additional modules needed: Event (http://drupal.org/project/event).

An easy way of adding additional content types to your existing event calendar is by modifying the content type and then setting the Event calendar options. Open the Content Manager by selecting Content management and then Content types, from the Administer menu. Edit the type that you want to add to the event calendar. Open the Event calendar section and modify the options, as shown here:

If you prefer to have a calendar just for this type, you can use the Only in views for this type option. Save the changes to your content type, and the event calendar will be automatically updated.

Creating events using CCK

Good Eatin' Goal: Build events using the CCK module and the Date module, rather than the Event module, thereby gaining additional control over the events.
Additional modules needed: CCK (http://drupal.org/project/cck), Date (http://drupal.org/project/date).

Basic steps

Depending on your site, it may be more convenient to use CCK and the Date API to build dates. This strategy also gives you additional control over what information is included in the event and in the display. In addition, all required modules should be updated more quickly after each new Drupal release. However, you will need to carry out more initial setup for events and displays if you use this strategy.

Install and activate the CCK and Date modules if you have not done so already. Open the Content Type Manager by selecting Content management and then Content types, from the Administer menu. Click Add content type to begin creating your new event type. We will call this type Event CCK to avoid conflicts with the Event module, as shown below:

After you are satisfied with the information for the new content type, click Save Content Type to create the new event type.

We now need to add fields to store the date and the time of the event. Click on the Add field link to begin the process. We will call the field event_time_cck and make the type a Datetime field so that we can enter both the day on which the event occurs and the time of day at which it starts, as shown in the following screenshot:

Click Continue to save the new field. You will now need to select the display widget for the field. Text field with jQuery pop-up calendar is appropriate. Click Continue to complete the field definition.

You can optionally modify various settings related to how the field is displayed. You should make the time Required. If you want to define end dates or times for the event, you should set the To Date to Optional or Required.

You can now create CCK-based events using the same techniques that we used to create other content: just select Create content and then Event CCK, from the main Navigation menu. Enter the information for the event, as shown in the following screenshot:

When you are satisfied with the event, click Save to add the new event to the site's calendar.

Good Eatin' Goal: Display a calendar that gives more details than a block view on a page.
Additional modules needed: Calendar (http://drupal.org/project/calendar), Views (http://drupal.org/project/views), Date API (http://drupal.org/project/date).

Basic steps

Now that we can create events using CCK, we need to display them on the site. We will begin by creating a page where visitors can browse all of the upcoming events using a convenient calendar.

Begin by installing and activating the Views and Calendar modules if you have not done so already. Note that some versions of Calendar released prior to June 28, 2008 require you to activate both Calendar and iCal at the same time. If you experience an error when installing the Calendar module, either upgrade to the latest development version or install both modules at the same time.

The easiest way to build new views using the calendar is to clone the default calendar view and customize it to meet your needs. Go to the Views Manager by selecting Site building and then Views, from the Administer menu. Drupal will display a list of all of the views that have currently been established on the site. If you scroll the list, you will see the Default Node view: calendar, as shown in the following screenshot:

Temporarily enable the default view by clicking on the Enable link. After the view has been activated, a new set of links will appear, labeled Edit, Export, Clone, and Disable. Click on the Clone link to make a copy of the calendar. Drupal will allow you to change the name and description of the view. Change the name to event_calendar and then click Next to edit the view. The default settings for the view are shown in the following screenshot. We will edit several settings for our purposes.

The first change we need to make is to create a new filter by clicking on the + symbol next to the Filters label. Select the Node: Type filter, as shown in the following screenshot:

In most cases, you should also select the Node: Published or admin filter to prevent unauthorized access to private information. Click the Add button and set the allowable types to Event CCK.

The next change we need to make is to modify the fields. Select Node: Updated date and click Remove to remove this field from the view. Click the + next to the Fields label to add a new field. Select Content: Event Time for the new field to be added, as shown in the following screenshot:

Click Add to save the changes. You will now need to configure the display of the field. In most cases, including this one, the defaults are acceptable, so we will just click Update to continue. You will also need to update the settings for the end time (value 2), as described above.

The final change we need to make to the view is in the Arguments. Select the Date: Date link in the Arguments section. Drupal will display a list of parameters that you can use to customize the arguments. We will need to change this to use our Content: Event Time fields, and then click Update to save the changes.

Now that all of the required changes have been made, click Save to finish building the view. We can now return to the list of all views by clicking on the List link, and disable the default calendar view by selecting its Disable link.

Now that our view has been completely set up, we can use it to browse our events. The calendar view, which we used as a starting point, provides several methods of displaying the content, as shown below:

You may use any of these views, or you can add more views according to your site's needs. If you do not want to use a display type, you can delete it.

If you click on the Calendar Page display type and review the Page settings, you will see that a page is provided, which can be accessed using the path http://yoursite.com/calendar. No menu is provided. You can either add a menu link here, or use the Menu Manager if desired. If you open the calendar page, the display appears as follows:

The calendar view also provides several block displays that can be activated and added to your site via the Block Manager. These blocks include a Calendar block that is similar to the display provided by the event block, and a Legend block that can be used to allow visitors to understand the information in the calendar more easily.

Summary

Congratulations! You have now added a calendar and events to your site. These will provide valuable ways of communicating with your customers to ensure that they keep coming back to your web site and, more importantly, to your business.

Troubleshooting FreeNAS server

Packt
28 Oct 2009
9 min read
Where to Look for Log Information

The first place to head whenever you have a configuration problem with FreeNAS is the related configuration section: check that it is configured as expected. If, having double-checked the settings, the problem persists, the next port of call is the log and information files in the Diagnostics: section of the web interface.

Keep the Diagnostics Section Expanded
By default, the menu tree in the Diagnostics section of the web interface is collapsed, meaning the menu items aren't visible. To see the menu items, you need to click the word Diagnostics and the tree will expand. During initial setup, and if you are doing lots of troubleshooting, you can save yourself a click by having the Diagnostics section permanently expanded. To set this option, go to System: Advanced and click on the Navigation - Keep diagnostics in navigation expanded tick box.

The Diagnostics section has five sections; the first two are logs and information pages about the status of the FreeNAS server, and the other three are networking diagnostic tools and information.

Diagnostics: Logs

This section collates all the different log files that are generated by the FreeNAS server into one convenient place. There are several tabs, one for each service or log file type. Some of the information can be very technical, especially in the System tab. However, with some key information, the logs become much more readable. The tabs are as follows:

System: When FreeBSD (the underlying OS of FreeNAS) boots, various log entries are recorded here about the hardware of the server, along with various messages about the boot process.
FTP: This shows the activity on the FTP server, including successful and failed logins.
RSYNC: The log information for the RSYNC server is divided into three sections: Server, Client, and Local. Depending on which type of RSYNC operation you are interested in, click the appropriate tab.
SSHD: Here you will find log entries from the SSH server, including some limited startup information and records of logins and failed login attempts.
SMARTD: This tab logs the output of the S.M.A.R.T. daemon.
Daemon: Any other minor system service, like the built-in HTTP server, the Apple Filing Protocol server, and the Windows networking server (Samba), will log information to this page.
UPnP: The log information from the FreeNAS UPnP server, called "MediaTomb", is displayed here. The logging can be quite verbose, so careful attention is needed when reading it. Don't be distracted by entries such as "INFO: Config: option not found:", as this is just the server logging that it will be using a default value for that particular attribute.
Settings: The Settings tab allows you to change how the log information is displayed, including the sort order and the number of entries shown.

What is a Daemon?
In UNIX speak, a daemon is a system service. It is a program that runs in the background performing certain tasks. The daemons in FreeNAS don't work with users in an interactive mode (via the monitor, mouse, and keyboard), and as such need a place to log the results (or problems) of their activities. The FreeNAS daemons are launched automatically by FreeBSD when it boots, and some depend on being enabled in the web interface.

Understanding Diagnostics Logs: System

The most complicated of all the log pages is the System log page. Here, FreeBSD logs information about the system, its hardware, and the startup process. At first, this page can seem intimidating, but with a little help it can be very useful, particularly in tracking down hardware or driver related problems.

50 Log Entries Might Not be Enough
The default number of log entries shown on the Diagnostics: Logs page is 50. For most situations this will be sufficient, but there can be times when it is not enough. For example, in the Diagnostics: Logs: System tab, the total number of log entries made during the boot process is more than 50. If you want to see how much system memory has been recognized by FreeBSD, you won't find it within the standard 50 entries. The solution is to increase the Number of log entries to show parameter on the Diagnostics: Logs: Settings tab.

The best way to learn to read the Diagnostics: Logs: System page is by example. Below are several different log entry examples, covering the CPU, memory, disks, and disk controllers:

kernel: FreeBSD 6.2-RELEASE-p11 #0: Wed Mar 12 18:17:49 CET 2008

This first entry shows the heritage of the FreeNAS server. It is based on FreeBSD, and in this particular case we see that this version of FreeNAS is using FreeBSD 6.2. There are plans (which may have already become reality) to use FreeBSD 7.0 as the base for FreeNAS.

kernel: CPU: Intel(R) Xeon(TM) CPU 1.70GHz (1680.52-MHz 686-class CPU)

Here, the type of CPU detected by FreeBSD is displayed. In this case, it is an Intel Xeon CPU running at 1.7GHz.

kernel: FreeBSD/SMP: Multiprocessor System Detected: 2 CPUs

If your system has more than one CPU, or is a dual core machine, then you will see an entry in the log file (like the one above) recognizing the second CPU. If your machine has Hyper-Threading technology, then the second logical processor will be reported like this: Logical CPUs per core: 2

Apr 1 11:06:00 kernel: real memory = 268435456 (256 MB)
Apr 1 11:06:00 kernel: avail memory = 252907520 (241 MB)

These log entries show how much memory the system has detected. The difference in size between real memory and available memory is the difference between the amount of RAM physically installed in the computer and the amount of memory left over after the FreeBSD kernel is loaded.

kernel: atapci0: <Intel PIIX4 UDMA33 controller> port 0x1f0-0x1f7,0x3f6,0x170-0x177,0x376,0x1050-0x105f at device 7.1 on pci0
kernel: ata0: <ATA channel 0> on atapci0
kernel: ata1: <ATA channel 1> on atapci0

For disks to work on your FreeNAS server, a disk controller is needed, and it will be either a standard ATA/IDE controller, a SATA controller, or a SCSI controller. Above are the log entries for a standard ATA controller built into the motherboard. You can see that it is an Intel controller and that two channels have been seen (the primary and the secondary).

kernel: atapci1: <SiS 181 SATA150 controller> irq 17 at device 5.0 on pci0
kernel: ata2: <ATA channel 0> on atapci1
kernel: ata3: <ATA channel 1> on atapci1

Like the ATA controller listed a moment ago, SATA controllers are all recognized at boot up. Here is a SiS 181 SATA 150 controller with two channels. They are listed as devices ata2 and ata3, as ata0 and ata1 are used by the standard ATA/IDE controller.

kernel: mpt0: <LSILogic 1030 Ultra4 Adapter> irq 17 at device 16.0 on pci0

Like IDE and SATA controllers, all recognized SCSI controllers are listed in the boot up system log. Here, the controller is an LSILogic 1030 Ultra4.

kernel: ad0: 476940MB <WDC WD5000AAJB-00YRA0 12.01C02> at ata0-master UDMA100
kernel: ad4: 476940MB <Seagate ST3500320AS SD04> at ata2-master SATA150

Once the disk controllers are recognized by the system, FreeBSD can search to see which disks are attached. Above is an example of a Western Digital 500GB hard drive using the standard ATA100 interface at 100MB/s. There is also a 500GB Seagate drive connected using the SATA interface.

acd0: CDROM <TOSHIBA CD-ROM XM-7002B/1005> at ata1 as master UDMA33

When the CDROM (which is normally attached to an ATA/IDE controller) is recognized, it will look like the above.

kernel: da0 at ahd0 bus 0 target 0 lun 0
kernel: da0: <MAXTOR ATLAS10K4_73WLS DFL0> Fixed Direct Access SCSI-3 device
kernel: da0: 320.000MB/s transfers (160.000MHz, offset 127, 16bit), Tagged Queueing Enabled
kernel: da0: 70149MB (143666192 512 byte sectors: 255H 63S/T 8942C)

SCSI addressing is a little more complicated than that of ATA/IDE. In SCSI land, you have a controller, a channel (bus), a disk (target), and the Logical Unit Number (LUN). The example above shows that a disk (which has been assigned the device name da0) was found on the controller ahd0, on bus 0, as target 0, with LUN 0. SCSI controllers can have multiple buses and multiple targets. Further down, you can see that the disk is a MAXTOR 73GB SCSI-3 disk.

kernel: da0 at umass-sim0 bus 0 target 0 lun 0
kernel: da0: <Verbatim Store 'n' Go 1.30> Removable Direct Access SCSI-2 device
kernel: da0: 40.000MB/s transfers
kernel: da0: 963MB (1974271 512 byte sectors: 64H 32S/T 963C)

If you are using a USB flash disk for storing the configuration information, it will most likely appear in the log file as a type of SCSI disk. The above example shows a 1GB Verbatim Store 'n' Go disk.

kernel: lnc0: <PCNet/PCI Ethernet adapter> irq 18 at device 17.0 on pci0
kernel: lnc0: Ethernet address: 00:0c:29:a5:9a:28

Another important device that needs to work correctly on your system is the network interface card. Like disk controllers and disks, it will be logged when FreeBSD recognizes it. Above is an example of an AMD Lance/PCNet-based Ethernet adapter. Each Ethernet card has a unique address known as the Ethernet address or the MAC address. It is made up of six numbers specified using colon notation. Once found, FreeBSD queries the card to find its MAC address and logs the result. In the above example, it is "00:0c:29:a5:9a:28".

Converting between Device Names and the Real World

In the SCSI example above, the SCSI controller listed is ahd0. The trick to understanding these log entries better is to know how to interpret the device name ahd0. First of all, ahd0 means it is a device using the ahd driver, and it is the first one in the system (with numbering starting from 0). So what is ahd? The first place to look is further up in the log file. There should be an entry like:

kernel: ahd0: <Adaptec 39320 Ultra320 SCSI adapter> irq 11 at device 1.0 on pci2

This shows that this particular device is an Adaptec 39320 SCSI 3 controller. You can also find out more about the ahd driver (and all FreeBSD drivers) at http://www.freebsd.org/releases/6.2R/hardware-i386.html. Search for ahd and you will find which controllers this driver supports (in this case, they are all controllers from Adaptec). If you click on the link provided, you will be taken to a specific help page about this driver. When FreeNAS moves to FreeBSD 7, the relevant web page will be http://www.freebsd.org/releases/7.0R/hardware.html.

ASP.NET Social Networks—Making Friends (Part 1)

Packt
28 Oct 2009
19 min read
Problem

There are many aspects to building relationships in any community, real or virtual. First and foremost is initiating contact with the people whom you will eventually call your friends. We will do this in a few ways. First, we will provide a way for our users to search the site for friends who are also members. Second, we will create a form that allows you to enter your friends' email IDs and invite them directly. Third, we will create a form that allows you to import all of your contacts from Outlook.

All of these methods of inviting a friend into the system would, of course, generate an email invite. The user would then have the ability to follow the link into the system and either sign up or log in to accept the request. The preceding screenshot shows a sample email that the user would receive in their inbox, and following it is the message that would be seen. Once the user has clicked on the link in their email, he/she will be taken to a page displaying the request.

Once we have a way for our users to attach friends to their profile, we need to start integrating the concept of friends into the fabric of our site. We will need a way for our users to view all of their friends. We will also need a way for our users to remove relationships (for those users who are no longer friends!). Then we will need to add friends to our users' public profiles.

While this is a good first pass at integrating the concept of friends into our site, there are a couple more steps for true integration. We need to add friend request and friend confirm alerts. We also need to modify the alert system so that when users modify their profile, change their avatar, or trigger any other alert in our system, all of their friends are notified on The Filter.

Once this is done, we have one final topic to cover, which sort of fits in the realm of friends: the concept of Status Updates. This is a form of micro blog. It allows users to post something about what they are currently doing, where they are, or what they are thinking about. This is then added to their profile and sent out to their friends' Filters. The box in the preceding screenshot is where the user can enter his/her Status Updates. Each of these updates will also be shown on the updates view and in their Filter views. This really helps to keep The Filter busy and helps people feel involved with their friends.

Design

Now let's talk about the design of these features.

Friends

This article attempts to throw light on the infrastructure needs, while focusing more heavily on the UI side of creating and managing relationships. That being said, there is always some form of groundwork that has to be in place prior to adding new features. In this case, we need to add the concept of a friend prior to having the ability to create friendships. This concept is a relatively simple one, as it really only defines a relationship between two accounts. We have the account that requested the relationship and the account that accepted the relationship. This allows an account to be linked to as many other accounts as it wishes.

Finding friends

As in life, it is very difficult to make friends without first locating and meeting people. For that reason, the various ways to locate and invite someone to be your friend are our first topic.

Searching for a friend

The easiest way to locate friends who might be interested in the same site that you are is to search through the existing user base. For that reason, we will need to create a simple keyword search box that is accessible from any page on the site. This search feature should look at several fields of data pertaining to an account and return all possible users. From the search results page, we should be able to initiate a friend request.

Inviting a friend

The next best thing to locating friends who are already members of the site is to invite people you know who are not yet members. The quickest way to implement this is to allow a user to manually enter one or more email addresses, type a message, and then submit. This would be implemented with a simple form that generates a quick email to the recipient list. In the body of the email will be a link that allows the recipients to come into our site.

Importing friends from external sources

An obvious extension of the last topic is to somehow automate the process of importing contacts from an email management tool. We will create a toolset that allows the user to export their contacts from Outlook and import them via a web form. The user should then be able to select the contacts that they want to invite.

Sending an invitation

With all three of the above methods, we will end up sending out an invitation email. We could simply send out an email with a link to the site. However, we need to keep track of who has been invited, who initiated the invitation, and when this occurred. Then, in the email, rather than just inviting people in, we want to assign the user a key so that we can easily identify them on their way in. We will use a system-generated GUID to do this. In the case of inviting an existing user, we will allow him/her to log in to acknowledge the new friendship. In the case of a non-member user who was invited, we will allow him/her to create a new account. In both cases, we will populate the invitation with the invited user's Account ID so that we have some history about the relationship.

Adding friend alerts to the filter

Once we have the framework in place for inviting and accepting friendship requests, we need to extend our existing system with alerts. These alerts should show up on existing users' Filters to show that they sent an invitation. We should also have alerts showing that a user has been invited, and another alert once a user has accepted a friendship.

Interacting with your friends

Now let's discuss some of the features that we need to interact with our friends.

Viewing your friends

Friends are only good if a user can interact with them. The first stop along this train of thought is to provide a page that allows users to see all the friends they have. This is a jumping-off point for a user to view the profiles of friends. Also, as the concept of a user's profile grows, more data can be shown about each friend in an at-a-glance format. In addition to an all-friends page, we can add friends' views to a user's public profile so that other users can see the relationships.

Managing your friends

Now that we can see all the relationships, we can finally provide users with the ability to remove a relationship. In our initial pass, this will be a permanent deletion of the relationship.

Following your friends

Now we can extend the alert system so that when alerts are generated for a user, such as updating their profile information, uploading a new photo, or any other user-specific task, all of the user's friends are automatically notified via their Filter.

Providing status updates to your friends

Somewhat related to friend-oriented relationships and The Filter is the concept of micro blogs. We need to add a way for a user to send a quick blurb about what they are doing, what they are thinking, and so on. This would also show up on the Filters of all the user's friends. This feature creates a lot of dynamic content on an end user's homepage, which keeps things interesting.

Solution

Now let's take a look at our solution.

Implementing the database

Let's look at the tables that are needed to support these new features.

The Friends Table

As the concept of friends is the base discussion for this article, we will immediately dive in and start creating the tables around this subject. As you have seen previously, this is a very straightforward table structure that simply links one account to another.

Friend Invitations

This table is responsible for keeping track of who has been invited to the site, by whom, and when. It also holds the key (GUID) that is sent to the friends so that they can get back into the system under the appropriate invitation. Once a friend has accepted the relationship, their AccountID is stored here too, so that we can see how relationships were created in the past.

Status Updates

Status Updates allow a user to tell their friends what they are doing at that moment. This is a micro blog, so to speak. A micro blog allows a user to write small blurbs about anything; examples of this are Twitter and Yammer. For more information, take a look at http://en.wikipedia.org/wiki/Micro-blogging. The table needed for this is also simple. It tracks who posted the update, what was said, and when.

Creating the Relationships

Here are the relationships that we need for the tables we just discussed: Friends and Accounts via the owning account, Friends and Accounts via the friend's account, FriendInvitations and Accounts, and StatusUpdates and Accounts.

Setting up the data access layer

Let's extend the data access layer now to handle these new tables. Open your Fisharoo.dbml file and drag in the three new tables. We are not allowing LINQ to manage these relationships for us, so go ahead and remove the relationships from the surrounding tables. Once you hit Save, we should have three new classes to work with!

Building repositories

As always, these new tables bring new repositories with them. The following repositories will be created: FriendRepository, FriendInvitationRepository, and StatusUpdateRepository. In addition to creating these repositories, we will also need to modify the AccountRepository.

FriendRepository

Most of our repositories follow the same design. They provide a way to get one record, get many records by a parent ID, save a record, and delete a record. This repository differs slightly from the norm when it is time to retrieve a list of friends, in that it has two sides of the relationship to look at: one side where this account is the owning Account of the Friend relationship, and the other side where the relationship is owned by another account.
Here is that method:

public List<Friend> GetFriendsByAccountID(Int32 AccountID)
{
    List<Friend> result = new List<Friend>();
    using(FisharooDataContext dc = conn.GetContext())
    {
        //Get my friends direct relationship
        IEnumerable<Friend> friends = (from f in dc.Friends
                                       where f.AccountID == AccountID && f.MyFriendsAccountID != AccountID
                                       select f).Distinct();
        result = friends.ToList();
        //Get my friends indirect relationship
        var friends2 = (from f in dc.Friends
                        where f.MyFriendsAccountID == AccountID && f.AccountID != AccountID
                        select new
                        {
                            FriendID = f.FriendID,
                            AccountID = f.MyFriendsAccountID,
                            MyFriendsAccountID = f.AccountID,
                            CreateDate = f.CreateDate,
                            Timestamp = f.Timestamp
                        }).Distinct();
        foreach (object o in friends2)
        {
            Friend friend = o as Friend;
            if(friend != null)
                result.Add(friend);
        }
    }
    return result;
}

This method queries for all friends that are owned by this account. It then queries for the reverse relationship, where this account is owned by another account. Then it adds the results of the second query to the first and returns the result. Here is the method that gets the Accounts of our Friends:

public List<Account> GetFriendsAccountsByAccountID(Int32 AccountID)
{
    List<Friend> friends = GetFriendsByAccountID(AccountID);
    List<int> accountIDs = new List<int>();
    foreach (Friend friend in friends)
    {
        accountIDs.Add(friend.MyFriendsAccountID);
    }
    List<Account> result = new List<Account>();
    using(FisharooDataContext dc = conn.GetContext())
    {
        IEnumerable<Account> accounts = from a in dc.Accounts
                                        where accountIDs.Contains(a.AccountID)
                                        select a;
        result = accounts.ToList();
    }
    return result;
}

This method first gathers all the friends (via the first method we discussed) and then queries for all the related accounts. It then returns the result.

FriendInvitationRepository

Like the other repositories, this one has the standard methods. In addition to those, we also need to be able to retrieve an invitation by its GUID, the invitation key that was sent to the friend.

public FriendInvitation GetFriendInvitationByGUID(Guid guid)
{
    FriendInvitation friendInvitation;
    using(FisharooDataContext dc = conn.GetContext())
    {
        friendInvitation = dc.FriendInvitations.Where(fi => fi.GUID == guid).FirstOrDefault();
    }
    return friendInvitation;
}

This is a very straightforward query matching on the GUID value. In addition to the above method, we will also need a way for invitations to be cleaned up. For this reason we will also have a method named CleanUpFriendInvitationsForThisEmail().

public void CleanUpFriendInvitationsForThisEmail(FriendInvitation friendInvitation)
{
    using (FisharooDataContext dc = conn.GetContext())
    {
        IEnumerable<FriendInvitation> friendInvitations = from fi in dc.FriendInvitations
                                                          where fi.Email == friendInvitation.Email
                                                              && fi.BecameAccountID == 0
                                                              && fi.AccountID == friendInvitation.AccountID
                                                          select fi;
        foreach (FriendInvitation invitation in friendInvitations)
        {
            dc.FriendInvitations.DeleteOnSubmit(invitation);
        }
        dc.SubmitChanges();
    }
}

This method is responsible for clearing out any invitations in the system that were sent from account A to account B and have not been activated (account B never did anything with the invite). Rather than checking whether an invitation already exists when it is created, we allow duplicates to be created time and again (checking each invite during the import of 500 contacts could really slow things down!). When account B finally accepts one of the invitations, all of the others will be cleared. Also, in case account B never does anything with the invites, we will need a database process that periodically cleans out old invitations.

StatusUpdateRepository

Other than the norm, this repository has a method that gets the top N StatusUpdates for use on the profile page.

public List<StatusUpdate> GetTopNStatusUpdatesByAccountID(Int32 AccountID, Int32 Number)
{
    List<StatusUpdate> result = new List<StatusUpdate>();
    using (FisharooDataContext dc = conn.GetContext())
    {
        IEnumerable<StatusUpdate> statusUpdates = (from su in dc.StatusUpdates
                                                   where su.AccountID == AccountID
                                                   orderby su.CreateDate descending
                                                   select su).Take(Number);
        result = statusUpdates.ToList();
    }
    return result;
}

This is done with a standard query plus the Take() method, which translates into a TOP statement in the resulting SQL.

AccountRepository

With the addition of our search capabilities, we will require a new method in our AccountRepository. This method will be the key for searching accounts.

public List<Account> SearchAccounts(string SearchText)
{
    List<Account> result = new List<Account>();
    using (FisharooDataContext dc = conn.GetContext())
    {
        IEnumerable<Account> accounts = from a in dc.Accounts
                                        where (a.FirstName + " " + a.LastName).Contains(SearchText)
                                            || a.Email.Contains(SearchText)
                                            || a.Username.Contains(SearchText)
                                        select a;
        result = accounts.ToList();
    }
    return result;
}

This method currently searches through a user's first name, last name, email address, and username. This could, of course, be extended to their profile data and many other data points (all in good time!).

Implementing the services/application layer

Now that we have the repositories in place, we can begin to create the services that sit on top of those repositories. We will be creating the FriendService. In addition to that, we will also be extending the AlertService and the PrivacyService.

FriendService

The FriendService currently has a couple of duties. We need it to tell us whether or not a user is a friend, so that we can extend the PrivacyService to consider friends (recall that we currently only understand public and private settings!). In addition to that, we need our FriendService to be able to handle creating Friends from a FriendInvitation.

public bool IsFriend(Account account, Account accountBeingViewed)
{
    if(account == null)
        return false;
    if(accountBeingViewed == null)
        return false;
    if(account.AccountID == accountBeingViewed.AccountID)
        return true;
    else
    {
        Friend friend = _friendRepository.GetFriendsByAccountID(accountBeingViewed.AccountID)
            .Where(f => f.MyFriendsAccountID == account.AccountID).FirstOrDefault();
        if(friend != null)
            return true;
    }
    return false;
}

This method needs to know who is making the request as well as who the request is about. It then verifies that both accounts are not null, so that we can use them down the road, and returns false if either of them is null. We then check to see if the user doing the viewing is the same user as the one being viewed. If so, we can safely return true. Then comes the fun part: currently we are using the GetFriendsByAccountID method found in the FriendRepository. We iterate through that list to see whether our friend is in it or not. If we locate it, we return true. Otherwise, the whole method has failed to locate a result and returns false.

Keep in mind that this way of doing things could quickly become a major performance issue. If you are checking security around several data points frequently on the same page, this is a large query and moves a lot of data around. If someone had 500 friends, this would not be acceptable. As our goal is for people to have lots of friends, we generally would not want to do it this way. Your best bet is to create a LINQ query in the FriendRepository that handles this logic directly, returning only true or false.

Now comes our CreateFriendFromFriendInvitation method, which, as the name suggests, creates a friend from a friend invitation!

public void CreateFriendFromFriendInvitation(Guid InvitationKey, Account InvitationTo)
{
    //update friend invitation request
    FriendInvitation friendInvitation = _friendInvitationRepository.GetFriendInvitationByGUID(InvitationKey);
    friendInvitation.BecameAccountID = InvitationTo.AccountID;
    _friendInvitationRepository.SaveFriendInvitation(friendInvitation);
    _friendInvitationRepository.CleanUpFriendInvitationsForThisEmail(friendInvitation);
    //create friendship
    Friend friend = new Friend();
    friend.AccountID = friendInvitation.AccountID;
    friend.MyFriendsAccountID = InvitationTo.AccountID;
    _friendRepository.SaveFriend(friend);
    Account InvitationFrom = _accountRepository.GetAccountByID(friendInvitation.AccountID);
    _alertService.AddFriendAddedAlert(InvitationFrom, InvitationTo);
    //TODO: MESSAGING - Add message to inbox regarding new friendship!
}

This method expects the InvitationKey (in the form of a system-generated GUID) and the Account that wishes to create the relationship. It then gets the FriendInvitation and updates its BecameAccountID property. We then make a call to flush any other friend invites between these two users. Once everything is cleaned up, we create the friendship and add a new alert to the system letting the account that initiated the invitation know that it was accepted.

AlertService

The alert service is essentially a wrapper that posts an alert to the user's profile on The Filter. Go through the following methods; they do not need much explanation.

public void AddStatusUpdateAlert(StatusUpdate statusUpdate)
{
    alert = new Alert();
    alert.CreateDate = DateTime.Now;
    alert.AccountID = _userSession.CurrentUser.AccountID;
    alert.AlertTypeID = (int)AlertType.AlertTypes.StatusUpdate;
    alertMessage = "<div class=\"AlertHeader\">" + GetProfileImage(_userSession.CurrentUser.AccountID) +
        GetProfileUrl(_userSession.CurrentUser.Username) + " " + statusUpdate.Status + "</div>";
    alert.Message = alertMessage;
    SaveAlert(alert);
    SendAlertToFriends(alert);
}

public void AddFriendRequestAlert(Account FriendRequestFrom, Account FriendRequestTo, Guid requestGuid, string Message)
{
    alert = new Alert();
    alert.CreateDate = DateTime.Now;
    alert.AccountID = FriendRequestTo.AccountID;
    alertMessage = "<div class=\"AlertHeader\">" + GetProfileImage(FriendRequestFrom.AccountID) +
        GetProfileUrl(FriendRequestFrom.Username) + " would like to be friends!</div>";
    alertMessage += "<div class=\"AlertRow\">";
    alertMessage += FriendRequestFrom.FirstName + " " + FriendRequestFrom.LastName +
        " would like to be friends with you! Click this link to add this user as a friend: ";
    alertMessage += "<a href=\"" + _configuration.RootURL + "Friends/ConfirmFriendshipRequest.aspx?InvitationKey=" +
        requestGuid.ToString() + "\">" + _configuration.RootURL + "Friends/ConfirmFriendshipRequest.aspx?InvitationKey=" +
        requestGuid.ToString() + "</a><HR>" + Message + "</div>";
    alert.Message = alertMessage;
    alert.AlertTypeID = (int) AlertType.AlertTypes.FriendRequest;
    SaveAlert(alert);
}

public void AddFriendAddedAlert(Account FriendRequestFrom, Account FriendRequestTo)
{
    alert = new Alert();
    alert.CreateDate = DateTime.Now;
    alert.AccountID = FriendRequestFrom.AccountID;
    alertMessage = "<div class=\"AlertHeader\">" + GetProfileImage(FriendRequestTo.AccountID) +
        GetProfileUrl(FriendRequestTo.Username) + " is now your friend!</div>";
    alertMessage += "<div class=\"AlertRow\">" + GetSendMessageUrl(FriendRequestTo.AccountID) + "</div>";
    alert.Message = alertMessage;
    alert.AlertTypeID = (int)AlertType.AlertTypes.FriendAdded;
    SaveAlert(alert);

    alert = new Alert();
    alert.CreateDate = DateTime.Now;
    alert.AccountID = FriendRequestTo.AccountID;
    alertMessage = "<div class=\"AlertHeader\">" + GetProfileImage(FriendRequestFrom.AccountID) +
        GetProfileUrl(FriendRequestFrom.Username) + " is now your friend!</div>";
    alertMessage += "<div class=\"AlertRow\">" + GetSendMessageUrl(FriendRequestFrom.AccountID) + "</div>";
    alert.Message = alertMessage;
    alert.AlertTypeID = (int)AlertType.AlertTypes.FriendAdded;
    SaveAlert(alert);

    alert = new Alert();
    alert.CreateDate = DateTime.Now;
    alert.AlertTypeID = (int) AlertType.AlertTypes.FriendAdded;
    alertMessage = "<div class=\"AlertHeader\">" + GetProfileUrl(FriendRequestFrom.Username) + " and " +
        GetProfileUrl(FriendRequestTo.Username) + " are now friends!</div>";
    alert.Message = alertMessage;
    alert.AccountID = FriendRequestFrom.AccountID;
    SendAlertToFriends(alert);
    alert.AccountID = FriendRequestTo.AccountID;
    SendAlertToFriends(alert);
}

PrivacyService

Now that we have a method to check whether two people are friends, we can finally extend our PrivacyService to account for friends. Up to this point, we were only interrogating whether something was marked as private or public. Friends is marked false by default!

public bool ShouldShow(Int32 PrivacyFlagTypeID, Account AccountBeingViewed, Account Account, List<PrivacyFlag> Flags)
{
    bool result;
    bool isFriend = _friendService.IsFriend(Account, AccountBeingViewed);
    //flag marked as private test
    if(Flags.Where(f => f.PrivacyFlagTypeID == PrivacyFlagTypeID &&
        f.VisibilityLevelID == (int)VisibilityLevel.VisibilityLevels.Private).FirstOrDefault() != null)
        result = false;
    //flag marked as friends only test
    else if (Flags.Where(f => f.PrivacyFlagTypeID == PrivacyFlagTypeID &&
        f.VisibilityLevelID == (int)VisibilityLevel.VisibilityLevels.Friends).FirstOrDefault() != null && isFriend)
        result = true;
    else if (Flags.Where(f => f.PrivacyFlagTypeID == PrivacyFlagTypeID &&
        f.VisibilityLevelID == (int)VisibilityLevel.VisibilityLevels.Public).FirstOrDefault() != null)
        result = true;
    else
        result = false;
    return result;
}

Summary

The article started with the thought process of how we can apply the concept of Friends to our community site. We figured out what we needed to implement the concept, then finalized our requirements, and finally began implementing the features. In the next part of this article, we will continue with the implementation process.

Your First Application in Aptana RadRails

Packt
28 Oct 2009
7 min read
Here we are! programming in a powerful language specially designed for the Web and using an IDE that promises to help us with many of the mechanical tasks involved in the coding. If you have been already programming with Rails, you probably know that if we take advantage of scaffolding we can have a simple web application for table maintenance in a matter of minutes (yes/no typo here, it really takes just a few minutes). And we are even talking about the database table creation process. If we wanted to add validations, a nice design, and some more complexity we would be talking about a few hours. Still pretty impressive, depending from which programming language (or framework) you are coming. The truth is, creating the wireframe of your application in Rails is quick and easy enough even from the command line, but we'll be learning in this article how to do it a bit more comfortably by using RadRails for creating your models, controllers, database migrations, and for starting your server and test your application. Basic Views Most of the time when working with our IDE we will be using the Editor Area. Apart from that, two of the views we will be working with more frequently are the Ruby Explorer—the enhanced equivalent of the Rails Navigator, if you were using RadRails Classic—and the Console. Both of these views are fairly easy to use, but since they will be present at almost every point of the development process, it's interesting to get familiar with them from the beginning. The Ruby Explorer View If you have already opened the Rails perspective, then you should be seeing the Ruby Explorer at the left of your workbench. This view looks like a file-tree pane. At the root level, you will find a folder for each of the projects in your workspace. By clicking on the icon to the left of the project name, you will unfold its files and folders. The Ruby files can be expanded too, displaying the modules, classes, variables, and methods defined in the selected file. By clicking on any of these elements you will be taken directly to the line in which it is defined. Before navigating through the contents of a project, we have to open it. Just right-click on its name and choose Open Project. When opening a project, Eclipse will ask you if you want to open the referenced projects. By default, your projects don't have any references and that's the most common scenario when working with a Rails application. If you want, you can include references to other projects on the workspace so you can open and close them together. To view and change the project references, you can right-click on the project name, then select Properties. Notice you can also get here from the Project menu by selecting Properties. In the properties dialog, you have to select Project References. Here you will see a list of all the available projects in the workspace. Just check or uncheck all the projects you want to reference. Once your project is open, the mechanism for navigating the contents is pretty straightforward. You can open or close any sub-folders and you can right-click on any item to get a context menu. From this menu you can perform common file operations like creating, renaming, or deleting a file. We will see more details about creating new files when talking about the Editor Area. There is also a Properties option from where you can change the encoding for a particular file, or the file attributes (read only, for example). The Properties option is also available at the project level. 
Basic Views

Most of the time when working with our IDE we will be using the Editor Area. Apart from that, two of the views we will be working with most frequently are the Ruby Explorer—the enhanced equivalent of the Rails Navigator, if you were using RadRails Classic—and the Console. Both of these views are fairly easy to use, but since they will be present at almost every point of the development process, it's worth getting familiar with them from the beginning.

The Ruby Explorer View

If you have already opened the Rails perspective, then you should be seeing the Ruby Explorer at the left of your workbench. This view looks like a file-tree pane. At the root level, you will find a folder for each of the projects in your workspace. By clicking on the icon to the left of the project name, you will unfold its files and folders. The Ruby files can be expanded too, displaying the modules, classes, variables, and methods defined in the selected file. By clicking on any of these elements you will be taken directly to the line in which it is defined.

Before navigating through the contents of a project, we have to open it. Just right-click on its name and choose Open Project. When opening a project, Eclipse will ask you if you want to open the referenced projects. By default, your projects don't have any references, and that's the most common scenario when working with a Rails application. If you want, you can include references to other projects in the workspace so you can open and close them together. To view and change the project references, right-click on the project name, then select Properties. Notice you can also get here from the Project menu by selecting Properties. In the properties dialog, you have to select Project References. Here you will see a list of all the available projects in the workspace. Just check or uncheck the projects you want to reference.

Once your project is open, the mechanism for navigating the contents is pretty straightforward. You can open or close any sub-folders and you can right-click on any item to get a context menu. From this menu you can perform common file operations like creating, renaming, or deleting a file. We will see more details about creating new files when talking about the Editor Area. There is also a Properties option from where you can change the encoding for a particular file, or the file attributes (read only, for example). The Properties option is also available at the project level.

Also in the context menu, you can see there is a Tail option. This will work like the tail command in UNIX, displaying the contents of a file as it changes. This option is especially useful for quick monitoring of the log files of your application.

You can also find in the context menu two options named Compare With and Replace With. If you select either of them, you will see a new menu in which there is an option named Local History. This functionality is really interesting: you can compare your current version against an older version of the same file, or you can replace the contents with a previous one. This can be a life-saver because, when used on a folder, the local history will contain copies even of deleted files. Comparing a file against another copy is a powerful tool, which can also be used when working with repositories or to compare two different files with each other.

Let's try it and see how it works. Open any of the files in your project tree by double-clicking on the file name. Now go to the Editor Area and add some lines with mumbo-jumbo text. After you are done, click on the save icon of the toolbar or select Save in the File menu. Now let's go back to the Ruby Explorer, right-click on the file name and select Compare With | Local History. You will see there are some entries here, one for each time we saved the file. If this is the first time you have worked with the file, then there will be only two versions: the original and the one you just saved. Double-click on the oldest local version you have.

Now a new editor will be opened. The editor is divided into three panes: the top one displaying structural differences, the bottom-left one with the code of the current version, and the bottom-right one with the old version of the code. In the top pane, you will see the structural differences between the versions being compared. For every added or deleted method or variable—at instance or class level—you will see the name of the element with an icon displaying a plus or a minus sign. If a method exists in both versions, but its content was changed, the name will be displayed without any additional icons.

When reviewing the differences, you will see the editors at both sides are linked with a line representing the parts that are not equal between the files. When you are on a given change, you can select the icon for 'copying current change from right to left' (or the other way round, depending on which of the files contains the change), which will override the contents of the left editor with those of the right. You can also just manually edit or copy/paste in your editor as usual. There is an interesting icon labeled 'Copy all non-conflicting changes from right to left' that will do exactly as it promises: any changes that can be automatically merged will be incorporated into your editor. Depending on the differences between the files, the icon could be the contrary, 'Copy all non-conflicting changes from left to right'. When you finish comparing or modifying your current editor, remember to save its contents in order to keep your changes. If you just want to review the changes without any modifications, you can directly scroll down the editors, use the 'Previous' or 'Next' icons, or use the quick marks by the right margin.

You can also compare two files instead of comparing a file against an older version. Go to the Ruby Explorer and select one of the files, then hold down the control key and select another one.
With both files selected, you right-click and select Compare With and then Each Other. Once opened, the compare editor works exactly the same as when comparing with an old version of the same file.
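As an aside, the Tail option mentioned earlier mirrors the UNIX tail command, so if you happen to be working from a terminal instead of the IDE, following a Rails application's development log would typically be something like:

    tail -f log/development.log

log/development.log is the standard location Rails writes to in development mode; the Tail option in the Ruby Explorer gives you the same rolling view without leaving RadRails.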

Installing Alfresco Software Development Kit (SDK)

Packt
28 Oct 2009
6 min read
Obtaining the SDK

If you are running the Enterprise network, it is likely that the SDK has been provided to you as a binary. Alternatively, you can check out the Enterprise source code and build it yourself. In the Enterprise SVN repository, specific releases are tagged. So if you wanted 2.2.0, for example, you'd check out V2.2.0-ENTERPRISE-FINAL. The Enterprise SVN repository for the Enterprise network is password-protected. Consult your Alfresco representative for the URL, port, and credentials that are needed to obtain the Enterprise source code. Labs network users can either download the SDK as a binary from SourceForge (https://sourceforge.net/project/showfiles.php?group_id=143373&package_id=189441) or check out the Labs source code and build it. The SVN URL for the Labs source code is svn://svn.alfresco.com. In the Labs repository, nothing is tagged. You must check out HEAD.

Step-by-Step: Building Alfresco from Source

Regardless of whether you are using Enterprise or Labs, if you've decided to build from the source it is very easy to do. At a high level, you simply check out the source and then run Ant. If you've opted to use the pre-compiled binaries, skip to the next section. Otherwise, let's use Ant to create the same ZIP/TAR file that is available on the download page. To do that, follow these steps (a consolidated command-line sketch of the whole sequence appears at the end of this section):

Check out the source from the appropriate SVN repository, as mentioned earlier.
Set the TOMCAT_HOME environment variable to the root of your Apache Tomcat install directory.
Navigate to the root of the source directory, then run the default Ant target in build.xml:

    ant

It will take a few minutes to build everything. When it is done, run the distribute task like this:

    ant -f continuous.xml distribute

Again, it may take several minutes for this to run. When it is done, you should see several archives in the build|dist directory. For example, running this Ant task for Alfresco 3.0 Labs produces several archives. The subset relevant to the article includes:

alfresco-labs-sdk-*.tar.gz
alfresco-labs-sdk-*.zip
alfresco-labs-tomcat-*.tar.gz
alfresco-labs-tomcat-*.zip
alfresco-labs-war-*.tar.gz
alfresco-labs-war-*.zip
alfresco-labs-wcm-*.tar.gz
alfresco-labs-wcm-*.zip

You should extract the SDK archive somewhere handy. The next step will be to import the SDK into Eclipse.
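Putting the steps above together, a minimal command-line session for a Labs build might look like the following. This is only a sketch: the exact repository path under svn://svn.alfresco.com, the local checkout directory, and the Tomcat location are placeholders you will need to adjust for your own setup.

    # check out the Labs source (HEAD, since nothing is tagged in the Labs repository)
    svn checkout svn://svn.alfresco.com/<path-to-HEAD> alfresco-src
    # point the build at the root of an existing Tomcat install
    export TOMCAT_HOME=/opt/apache-tomcat
    cd alfresco-src
    # default Ant target in build.xml builds everything
    ant
    # package the distributable archives, including the SDK, under build|dist
    ant -f continuous.xml distribute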
Setting up the SDK in Eclipse

Nothing about Alfresco requires you to use Eclipse or any other IDE. But Eclipse is very widely used and the Alfresco SDK distribution includes Eclipse projects that can easily be imported into Eclipse, so that's what these instructions will cover. In addition to the Alfresco JARs, dependent JARs, Javadocs, and source code, the SDK bundle has several Eclipse projects. Most of the Eclipse projects are sample projects showing how to write code for a particular area of Alfresco. Two are special, however. The SDK AlfrescoEmbedded project and the SDK AlfrescoRemote project reference all of the JARs needed for the Java API and the Web Services API respectively. The easiest way to make sure your own Eclipse project has everything it needs to compile is to import the projects bundled with the SDK into your Eclipse workspace, and then add the appropriate SDK projects to your project's build path.

Step-by-Step: Importing the SDK into Eclipse

Every developer has his or her own favorite way of configuring tools. If you are going to work with multiple versions of Alfresco, you should use version-specific Eclipse workspaces. For example, you might want to have a workspace-alfresco-2.2 workspace as well as a workspace-alfresco-3.0 workspace, each with the corresponding Alfresco SDK projects imported. Then, if you need to test customizations against a different version of the Alfresco SDK, all you have to do is switch your workspace, import your customization project if it isn't in the workspace already, and build it. Let's go ahead and set this up. Follow these steps:

In Eclipse, select File|Switch Workspace and specify a new workspace location. This will be your workspace for a specific version of the Alfresco SDK, so use a name such as workspace-alfresco-3.0. Eclipse will restart with an empty workspace.
Make sure the Java compiler compliance level preference is set to 5.0 (Window|Preferences|Java|Compiler). If you forget to do that, Eclipse won't be able to build the projects after they are imported.
Select File|Import|Existing Projects into Workspace. For the root directory, specify the directory where the SDK was uncompressed. You want the root SDK directory, not the Samples directory.
Select all of the projects that are listed and click Import.

After the import, Eclipse should be able to build all projects cleanly. If not, double-check the compiler compliance level. If that is set but there are still errors, make sure you imported all SDK projects including SDK AlfrescoEmbedded and SDK AlfrescoRemote.

Now that the files are in the workspace, take a look at the Embedded project. That's quite a list of dependent JAR files! The Alfresco-specific JARs all start with alfresco-. It depends on what you are doing, of course, but the JAR that is referenced most often is likely to be alfresco-repository.jar because that's where the bulk of the API resides.

The SDK comes with zipped source code and Javadocs, which are both useful references (although the Javadocs are pretty sparse). It's a good idea to tell Eclipse where those files are, so you can drill into the Alfresco source when debugging. To do that, right-click on the Alfresco JAR, and then select Properties. You'll see panels for Java Source Attachment and Javadoc Location that you can use to associate the JAR with the appropriate source and Javadoc archives. The following image shows the Java Source Attachment for alfresco-repository.jar. The following image shows the Javadoc Location panel for alfresco-repository.jar.

Source and Javadoc are provided for each of the Alfresco JARs, as shown in the following table. Note that source and Javadoc for everything is available (this is open source software, after all), just not all of it is bundled with the SDK:

Alfresco JAR              Source archive              Javadoc archive
alfresco-core.jar         src|core-src.zip            doc|api|core-doc.zip
alfresco-remote-api.jar   src|remote-api-src.zip      doc|api|remote-api-doc.zip
alfresco-web-client.jar   src|web-client-src.zip      doc|api|web-client-doc.zip
alfresco-repository.jar   src|repository-src.zip      doc|api|repository-doc.zip

Working with the Report Builder in Microsoft SQL Server 2008: Part 1

Packt
28 Oct 2009
16 min read
The Microsoft SQL Server 2008 Reporting Services Report Builder 2.0 tool can be installed from a standalone installer available at this Microsoft site, http://download.microsoft.com/download/a/f/6/af64f194-8b7e-4118-b040-4c515a7dbc46/ReportBuilder.msi. The same file is also available from a collection of download files when you access the Microsoft SQL Server 2008 Feature Pack, October 2008 at http://www.microsoft.com/downloads/details.aspx?FamilyId=228DE03F-3B5A-428A-923F-58A033D316E1&displaylang=en.

Report Builder overview

In the present version of SQL Server 2008 [Enterprise Evaluation edition] there are two Report Builders available: Report Builder 1.0, which has remained as a program that can be launched from the Report Manager, and the new Report Builder 2.0, which is a standalone report authoring tool that needs to be independently launched. Although Report Builder 1.0 can access Report Models built with Visual Studio 2008 and the Report Manager, it cannot be used to create reports using those models. It also does not work with reports generated by Visual Studio 2008/BIDS/Report Builder 2.0. The errors can be summarized as follows:

When you try to access the Report Server 2008 from the link provided on the Report Builder 1.0 interface, you get the following error message: Specifying credentials in a URL is not supported
When you try to open a report created using VS2008/BIDS/Report Builder 2.0 using the Open Report… and Open File… navigational items in Report Builder 1.0, you get the following error message: System.IO.StreamReader: The Report element was not found
Report Builder 1.0 allows you to access Report Models created with VS2008/BIDS/Report Manager and even allows you to create a report in design view, but this report cannot be processed on the Report Server. If you try to do so, you get the following error message: MemoryStream length must be non-negative and less than 2^31-1-origin. Parameter name: offset; Remote GDI stream version: ?. Expected version: 11.0.1

In this article the Report Builder 2.0 interface will be described along with the new features that are incorporated into this version. Report Builder 2.0 is admirably suited to address all items in the Report Definition Language of 2008. One of the important features of Report Builder 2.0 is the empowerment it provides to business users to create ad hoc reports using the Report Models built on the databases they use. In this article you will be learning mostly about the Report Builder 2.0 interface details and working with it to create reports or modify them. It may be noted that Report Builder generates 2008-compliant RDL files as described in http://download.microsoft.com/download/6/5/7/6575f1c8-4607-48d2-941d-c69622e11c32/RDL_spec_08.pdf and therefore cannot work with reports generated using 2005 technology.

Report Builder 2.0 user interface description

Report Builder is a report authoring tool and the basic procedure for authoring a report consists of the following steps:

Report planning
Connecting to a source of data
Extracting a dataset from the source
Designing the report and data binding
Previewing the report

Although deploying the report is not included in the above, Report Builder can deploy the report as well. It is not always necessary to deploy a completed report, as any part of a report definition file can be deployed. This makes modifying a report on the server very flexible.
In the following sections, the various parts of the Report Builder interface will be described, starting at the very top and going to the bottom of the interface.

The menu for file operations

Report Builder 2.0 can be accessed from Start | All Programs | Microsoft SQL Server 2008 Report Builder | Report Builder 2.0. This brings up the Report Builder 2.0 interface as shown, with the design area containing two icons: Table or Matrix and Chart. Each of these will launch a related wizard which will step you through the various tasks. The Report Builder 2.0 interface is very similar to Office 2007. More than one instance of Report Builder can be launched.

At the very top of the screen shown next you have the undo and redo controls as well as a save icon. When you click on the save icon the Save as Report window gets displayed as shown. Here you provide a name for the report. The default save extension is *.rdl and the report will be saved to the report server. It may also be persisted to a folder on your machine.

Clicking on the Office Button (top left) opens a drop-down window shown in the following screenshot:

In this window, you can carry out a number of tasks such as creating a new report, opening an existing report, saving a report, and saving a report with a different name. The Save button saves the report to the default location seen earlier, and Save as invokes the same window seen earlier, displaying the report server instance as the Save to location, so that the report can be saved with a different name. The Recent Documents pane shows the more recent reports created with this tool. New allows you to create a new report.

When you click on Open, the following Open Report window gets displayed with the default location http://Hodentek2:8080/ReportServer_SANGAM/My Reports. You will also notice the message: This folder is not available because the My Reports feature is not enabled on the computer. Also, the Open Report window only allows you to look for reports with the extension .rdl. Therefore, unless the My Reports feature is enabled, this window is unusable. This is supposed to be possible from Report Manager, but there are no controls in Report Manager that would do this. An alternative was suggested by one of the MSDN forum moderators (see http://social.msdn.microsoft.com/forums/en-US/sqlreportingservices/thread/6c695160-29e8-4185-be6d-5fe027a6975c/). Hands-on exercise (Part 2) will describe how you may enable My Reports. The idea of My Reports is similar to My Documents, where each user can keep his reports.

When the Options button (in the previous screenshot) is clicked, it opens the Report Builder Options window with two tabbed pages, Settings and Resources, shown as follows: Here you can view, as well as modify, Report Builder settings. The defaults are more than adequate to work with the examples in this book. Clicking on the Resources button brings up an interesting window which enables you to interact with Microsoft regarding SSRS activities, concerns, community, and so on. If you are serious about Reporting Services, these are very valuable links. The About button, when clicked, provides you with Report Builder version information.

The ribbon

The main menu consists of Home, Insert, and View menu items which are part of the "ribbon". The ribbon, introduced by Microsoft in Office 2007, is actually a container for other toolbar items. The ribbon is the replacement for the classic menus and toolbars, and is supposed to be more efficient and discoverable by the user.
In fact you see a lot more on the "ribbon" than in the classic menu.

Home

The next figure shows the Home menu with its toolbar arranged from left to right and divided into sections. The Run toolbar item, under the title Views, when clicked runs the report open in the design view (in fact, the report can be run even without a report open in the design view; the result would be the current date and time displayed in the center of the screen of an untitled report which has just ExecutionTime as the only item in the report). The Font, Paragraph, Border, and Number toolbar sections become enabled if parts of a report need editing. The formatting of textboxes in the report, the formatting of numbers in the report, and the alignment of components in the layout can all be independently managed using these toolbar items.

Insert

When you click on the Insert menu item on the "ribbon", the tabbed page for this item is displayed as shown in the following screenshot: It has four sections: Data Regions, Report Items, Subreports, and Header & Footer. These are all the normal items that are used either individually or together to make up a report. There can be more than one data region in a report.

Data Regions

In the Data Regions section you have both the Tablix (Table, Matrix, and List) and the graphic controls that can be bound to data—the Chart and the Gauge. Gauge is new in SQL Server Reporting Services 2008. Chart and gauge implementations are the offshoot of a collaboration with Dundas (http://www.dundas.com/). Report Builder is built in such a way that the dataset must be defined before any of the data regions are added to the report body. For the purpose of describing the various data regions in this section, it is assumed (in order to get the screenshots shown here) that a dataset has been defined and the default wizards on the design surface have been removed.

Table

The Table is meant for displaying data retrieved from a database, either all detailed, in groups, or a combination of both (some grouped and some detailed). It has a fixed number of columns, which can be adjusted at design time. The table length expands to accommodate the rows. Data can be grouped by a single field or by multiple fields. The expression designer can be used in grouping as well. The grouping is carried out by creating row groups. Static rows can be added for row headings (labels) and totals. Aggregates for groups can be added. Both detailed data as well as grouped data can be hidden initially, and the user can interactively reveal the data needed by drill downs.

When you click on Insert | Table | Insert Table and then click on the design surface you can add a table to the design area. The table appears as shown with handles to adjust its dimensions. The table can be dragged to any other location on the design surface (the body of the report) as well. After placing the table, which by default has three columns and two rows, when you click on any other part of the design area you will see the table as shown. When you hover over the cell marked Data on the table you will see a little icon. This icon is a minimized version of the dataset fields. The grayed-out feature that surrounds the table indicates the position of the rows and columns of the table. It also shows such other features as whether it is a detail, or whether it is a group. In the case of a group within a group, the feature would indicate the nesting schematically as well.
When you want to increase the size of a column or a row you can drag the double-headed arrow that gets displayed when your cursor is placed between two columns or between two cells, as shown. When you click on the dataset icon in the cell Data you get a drop-down list containing the fields in the dataset, as shown. You can choose any of the fields to occupy the cell you clicked, and the corresponding header will be added to the table. In this particular dataset there are nine fields and you can choose any of them to occupy the cell.

When you right-click on a cell, a drop-down menu will be available. It can be used for the following:

Work with the highlighted textbox (each cell of the table is a textbox), including to copy, cut, delete, and paste contents.
Work with the properties of the textbox.
Populate the textbox with an expression using the expression builder. The expression builder gets displayed when fx Expression is clicked.
Use Select to select the body or the Tablix.
Insert a new column or a new row. Columns can be added to the right or the left of the clicked cell, and rows can be added above or below the clicked cell.
Delete columns and rows.
Add a group. Both row and column groups can be added.

When you click on the properties of the textbox, the Text Box Properties window is displayed. The textbox has several properties, which are arranged on the left as a list with each item having its own page, as shown. The Help button on any of the pages will take you directly to the definition of the properties and is extremely useful.

In the General page, you can make changes to the elements in the Name, Value, and Sizing options page, as shown. The Value is one which you choose among the column values (from the drop-down) from the dataset. You may also add a text for the ToolTip, which will be displayed when the report is generated and this cell is accessed by hovering over it in the report. Alternatively, you can set the Value and ToolTip using fx—the button that brings up the Expression window.

In the Number page you can set the number and date data type formatting options for the cell that contains a number or a date. This is what you would normally find in most Microsoft products such as Excel and Access. In the Alignment page you can choose the vertical and horizontal alignments as well as the padding of the textbox content from the edges of the cell. Similarly, the Font and Border properties are the same ones you find in most Microsoft products. The Fill property lets you add or change the background color of the report as well as add a graphic element. The graphic element can be embedded, external, or originate from a database (being one of the fields accessed). Expressions can be developed to set a desired color for the Fill.

The Visibility of the textbox can be any of Show, Hide, or Show or Hide based on an expression. In each of these cases the visibility can be toggled when another table cell (which can be chosen) is clicked. This page also gives access to the Expression window, which is similar to the MS Access expression builder. The Interactive Sorting page allows you to define interactive sorting options on the textbox.
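To give an idea of what the fx buttons accept, here are a few typical Report Builder expressions; the OrderDate field name is only an illustrative example, and any field from your own dataset would do:

    =Fields!OrderDate.Value
    =Format(Fields!OrderDate.Value, "dd-MMM-yyyy")
    ="Ordered on " & Fields!OrderDate.Value

The first simply binds the textbox to a dataset field, the second formats that field as a date, and the third concatenates literal text with the field value.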
Matrix

Matrix provides a similar functionality (roughly speaking, rows against columns) to cross-tab reports in MS Access (http://aspalliance.com/1041_Creating_a_Crosstab_Report_in_Visual_Studio_2005_Using_Crystal_Reports.all) and Pivot Table dynamic views (http://www.aspfree.com/c/a/MS-SQL-Server/On-Accessing-Data-From-An-OLAP-Server-Using-MS-Excel/3/). The matrix should have at least one row group and one column group. The matrix can expand both ways to accommodate the data, horizontally for column groups and vertically for row groups. The matrix cells (intersection of rows and columns) display summary information (aggregates). When you click on Insert Matrix in the Insert menu and drop it on the design area of Report Builder 2.0, it gets displayed as shown in the following figure:

Now if you click inside the boundary of the (2x2) empty matrix you will see more features of the matrix, as shown in the following screenshot. The basic elements are the ColumnGroup (Column Groups), the RowGroup (Row Groups), and the Data. The group information is also displayed, as shown by overlaid lines pointing to them. There needs to be a minimum of one row group and one column group for the matrix, and there could be a hierarchy of column and row groups. The row and column group cells have their own properties, which can be displayed when you right-click on them, as shown in the next screenshot for the row group.

When you right-click on the cell marked Rows, the following drop-down menu pops up. In addition to the properties that you can set for the textbox in that cell, you have additional submenu items that work with the grouping and totaling. These are part of representing data in a matrix. Each of the Tablix cells for the Rows and Columns has the additional submenu items, which are shown here for the Rows. Similar ones apply for the Columns as well. These are useful when you want to create nested groups. With the Matrix design interface in SQL Server 2005 this would not have been possible.

Add Group
    Row Group
        Parent Group...
        Child Group...
        --------------------
        Adjacent Above
        Adjacent Below
Row Group
    Delete Group
    Group Properties
Add Total
    Before
    After

In addition to the above, each of the Rows and Columns cells has the following items as well. These specify how new columns and rows are inserted with reference to the current cell. The differences are due to the geometrical positions that are allowed for the new columns or rows, as shown.

For the "Columns" cell:

Insert Column
    Inside Group-Left
    Inside Group-Right
    ------------------
    Outside Group-Left
    Outside Group-Right
Insert Row
    Inside Group-Above
    Inside Group-Below
    ------------------
    Outside Group-Above

For the "Rows" cell:

Insert Column
    Inside Group-Left
    Inside Group-Right
    ------------------
    Outside Group-Left
Insert Row
    Inside Group-Above
    Inside Group-Below
    ------------------
    Outside Group-Above
    Outside Group-Below

Besides using a cell as a starting point, one could also use the rows as a whole or the columns as a whole to add further structure, as shown in the next figure. Of course, you need to use the proper submenu option to arrive at a particular matrix structure. Clicking at the indicated points would let you choose the structure you want for your matrix. If you click at the location shown for the Tablix you could choose to delete the whole matrix. The Tablix graphical arrangement gives you the maximum flexibility in extending the matrix in two dimensions.

List

The list data region repeats for each row of data. The List element provides a single container for the data, which can be used to generate what are called Free Form Reports. In this kind of report there is no rigid structure, such as a table, for the data. You can also place a list inside another list, or even a chart inside a list. You can drag a column from a dataset and drop it into the list.
You can work with the list using the properties of the Rectangle it contains, as well as its Tablix properties. As described earlier, the design interface is very flexible and you can leverage all the features provided by the Tablix structure, like displaying details and adding groups, either independent or nested. The properties pages described earlier allow you to sort and filter grouped data. When you drop a List on the design surface you will see just a single cell, as shown. You can change its dimensions to suit your needs. When you click on the List you can access its handles, as shown. When you add a List, there is one column and one row (just one cell). This can be extended in both directions by choosing the appropriate submenu items, which can be displayed by right-clicking on the handles as shown.


Installation of FreeNAS

Packt
28 Oct 2009
28 min read
Downloading FreeNAS

Before you can install the FreeNAS server, you will need to download the latest version from the FreeNAS website (http://www.freenas.org). Go to the download section and find the latest "LiveCD" version. The LiveCD version is what is known as an ISO image file and will have the .iso file extension. An ISO image is an exact copy of the structure and data for a CD or DVD disk. Using a CD burning program, you can create a FreeNAS bootable CD. We will look at this in more detail later on.

What Hardware Do I Need?

In this tutorial, we will start exploring FreeNAS, so you will need a machine on which to install the FreeNAS software. At this point in time, it doesn't have to be the final machine you are going to use as the FreeNAS server. You can use a "test" machine now and, having learnt all about FreeNAS, you can build, install, and deploy a production machine (or machines) later. So, what we need now is a PC with at least 96Mb of RAM (but 128Mb or more is recommended), a bootable CD-ROM drive, a network card, one or more hard disks, and either a floppy disk drive (and a blank formatted disk) or a USB flash disk (MS-DOS formatted and empty). The hard disk will be for the data that you want to store and the floppy disk or USB flash disk will be for storing the configuration information. For the installation and initialization stages, you will also need a monitor and keyboard (but not mouse) attached to the PC. You can remove the monitor later, once FreeNAS is up and running.

Warning: FreeNAS boots as a LiveCD, which means that it does not use the disks on the host machine during boot up. However, when you start to configure storage on the FreeNAS server (specifically, when you format drives) all the data on the disk will be LOST. Do NOT use a machine that contains important data or an operating system that you will need afterwards.

Virtualization & VMWare

The average PC runs just one operating system and inside that operating system, you would run your applications like word processing and email. There is a technology (called virtualization) which allows PCs to run more than one operating system, or to be more precise, to allow a guest virtual PC to run inside your actual PC. This virtual PC is an independent software box that can run its own OS and applications as if it were a physical computer. A virtual PC behaves exactly like a physical PC and has its own virtual CPU, RAM, hard disk, and network interface card (NIC). You can install FreeNAS on a virtual PC and FreeNAS can't tell the difference between the virtual PC and any other physical machine; also, it appears on the network just as a real PC would, running FreeNAS. There are lots of virtualization products available for Windows, Linux, and Apple OS X today. You can learn more at Wikipedia: http://en.wikipedia.org/wiki/Virtualization. A very popular virtualization solution is from VMWare (http://www.vmware.com). VMWare have both commercial and freeware offerings and there are pre-configured FreeNAS images available for the VMWare range of products. This makes it an ideal environment for testing the FreeNAS server.

Quick Start Guide For the Impatient

If you are comfortable with burning ISO images to CDs, setting your computer's BIOS to boot from CDROM, disk partitions, and TCP/IP networking, then this little guide should help you get a simple version of the FreeNAS server up and running in just a few minutes.
If, however, some of these things sound daunting, then skip this section and go on to the next one, where we shall go through the installation process one step at a time. For this example, we will use a USB flash disk to store the configuration information. You can use a floppy, but be careful that during the boot process the PC doesn't try to boot from the floppy before it boots from the CDROM.

Burning and Booting

Once you have downloaded the ISO image file from the FreeNAS website, you need to burn it to a CD. Having done that, put the CD into the PC as well as the flash disk and switch it on. Make sure that the BIOS is set to boot from CD. If it isn't, you need to enter into the BIOS and configure it to boot from CD. On many modern PCs, it is possible to select the boot device at start-up by pressing a special key (which is often either F8 or F12) to show a boot device menu. You can then select the CD as the boot device.

The boot process is in four distinct parts:

First, the PC will go through its POST (Power On Self Test) sequence. Here, the PC will check the amount of memory installed (which you can often see being counted on the screen) and which devices are connected (like hard drives and CDROMs).
It should then start to boot from the CD. Here, FreeBSD (the underlying OS of FreeNAS) will start to boot; this is recognizable by the simple spinning wheel (made up of simple text characters like |, -, /, and \, which are animated to give the appearance of spinning).
The third step is the FreeNAS boot menu. This will appear for just a few seconds and you should just let it boot normally, which is the default.
The final stage is when the FreeNAS logo appears and the system boots as a FreeNAS server. You can tell when the system is fully loaded because the PC speaker will make some short but melodious beeps.

To enable access to the web interface, the network of the FreeNAS server must be configured. Press the SPACE bar on the keyboard and the FreeNAS logo will disappear and a simple text menu will appear.

There are two aspects to configuring the network: first, you need to choose which network card to use, and second, you need to assign it an address. If you have only one network card in your machine, then the FreeNAS server should have found it and automatically assigned it to be the LAN (Local Area Network) interface.

What If My Network Card Isn't Found? This probably means that the network card in your machine isn't supported by FreeNAS or, more specifically, by FreeBSD. You will need to replace the card with one supported by FreeBSD. Check the FreeBSD hardware compatibility page for more information: http://www.freebsd.org/releases/6.2R/hardware-i386.html

If you see something like this, then the network has been recognized and assigned automatically by FreeNAS. The default IPv4 address for FreeNAS is 192.168.1.250; if this is good for your network, then you can just leave it unchanged. However, if you need to change it, then press 2 followed by ENTER. If you want the machine to get its address from DHCP (Dynamic Host Configuration Protocol), answer yes (y) to the IPv4 DHCP question, otherwise answer no (n). If you are not using DHCP, you can now enter the desired IP address. Next, you need to enter the subnet mask. For 255.255.255.0, enter 24; for 255.255.0.0, enter 16; and for 255.0.0.0, enter 8 (the number to enter is simply the count of one-bits in the binary form of the mask; 255.255.255.0 is twenty-four ones followed by eight zeros, hence 24). At this point, you can now skip the default gateway and DNS questions (by just pressing ENTER).
If you do want to enter a default gateway and DNS server at this point, they will usually be the IP address of your Internet router. We won't be using IPv6, so the simplest thing to do now is just answer yes to the "Do you want to use AutoConfiguration for IPv6?" question. This will cause a small delay while FreeNAS tries (and probably fails) to get the IPv6 address, but it is simpler than trying to enter the IPv6 address manually! You are now ready to access the web interface. The FreeNAS web interface can be accessed from any machine on the network with a web browser (including Windows, Linux, and OS X machines). On this client machine, type the address of the FreeNAS server with http:// in front of it into your web browser. For example: http://192.168.1.250

Configuring

The first time you access the FreeNAS web interface, you will be asked for the username and password. The default username is admin and the default password is freenas. You should now be in the web interface. To configure some storage space, you need to work with "Disks". The logical order of working is that disks must be added, then formatted (if need be), then mounted. Finally, access is given to the various mounted disks by configuring different system services like CIFS and FTP.

So, to add a disk, go to Disks: Management. There is a + sign in a circle on the right-hand side of the page (it can be easy to miss the first time); click on it to add a disk. On the next page, select the disk you want to add. If you click on the drop-down menu, you should see the hard disks of the machine, the CDROM, and the USB flash disk.

Disk Names in FreeBSD: The disk naming convention in FreeBSD is:
/dev/ad0: the IDE/ATA Primary Master
/dev/ad1: the IDE/ATA Primary Slave
/dev/ad2: the IDE/ATA Secondary Master
/dev/ad3: the IDE/ATA Secondary Slave
/dev/acd0: the first ATA CD/DVD drive detected
/dev/da0: the first SCSI hard drive, /dev/da1 the second, and so on
USB flash disks are controlled using the SCSI driver, so they will appear as /dev/daN drives as well.

Make sure ad0 is selected (which it should be by default). The rest of the page you can leave alone. Click Add to add the disk to the system. You then need to click Apply in order for the changes to take effect. You will now have a table showing you the disk you have added, including its size and a description.

Apply: In FreeNAS, the majority of steps need to be applied (which saves the configuration file to disk) by clicking the Apply button. It is normally found near the top of the page, before any tables or configuration information. If you do not apply the changes, the interface will, on the whole, remember your changes but they will not be enacted in the system. After a reboot, unapplied changes will disappear. It is possible on some pages to make multiple operations and apply them all at the end.

Next, the disk needs to be formatted. In Disks: Format, select the disk ad0 (which you just added above). Leave everything else unchanged and click Format disk. The disk will then be formatted. The low-level output of the format command will be displayed in a box. It should end with Done!

Now the disk needs to be mounted. Go to Disks: Mount Point. Click on the + in the circle (which I shall refer to as the "add circle" from now on). Leave the Type as Disk and select the disk ad0 again. You need to type in a name; store is as good a name as any, but feel free to use whichever descriptive name you want to.
Be Descriptive: In setting up and configuring your FreeNAS server, you will be called upon to invent various names for mount points, share names, and so on. Try to be as descriptive as you can without being long-winded. Temp, scratch, blob, and even zob are OK for testing, but try more meaningful names like storage1, storage60gb, or backupstorage. Don't use spaces in the names; instead use underscores, and in general the names should be no longer than 15 characters. Although filling in the description isn't mandatory in the web interface, it is worth using.

Once you have completed the form, click Add and then apply the changes.

Sharing with Windows Machines

Now that the disk has been added, formatted, and mounted, it is time to share it on the network and give other users the ability to read and write to it. FreeNAS supports many different types of access protocol; for this quick start guide, we will only look at Microsoft's CIFS protocol, which primarily allows Windows machines (but also Apple OS X and Linux machines) to access the storage.

In Services: CIFS/SMB, tick the enable box (in the title of the configuration data table). At this point, you can just about leave everything else as is, with the exception of the workgroup name. We will be leaving the authentication method as "Anonymous" here as this is the easiest to get working and provides unrestricted read/write access to everyone. To make sure that the Windows machines are able to find the shared storage, we need to set the workgroup name on the FreeNAS server to be the same as the workgroup name of the Windows PC that will access the share. The default workgroup name for Windows Vista is WORKGROUP, but note that the default for Windows XP Home Edition was MSHOME. Now click Save and Restart. This will save the changes you have made and restart the CIFS service.

Go to the Shares tab and click on the add circle. Enter a name for the share. Repeating the name of the mount point is probably the safest policy, so in this case store, and also add a comment. Then click ... in the Path section. This will bring up a simple file system browser. The files you are seeing are on the FreeNAS server and NOT on your local PC. Click store and /mnt/store/ will appear in the little edit box. OK it and you will be taken back to the shares page. Now /mnt/store/ has been added as the path. Leave everything else as it is, click Add, and then apply the changes. So now the first hard disk of the computer is formatted, mounted, and shared to the rest of the network. Now, we will access the share from a Windows Vista machine.

Testing the Share

You can perform this test from any machine that supports the CIFS protocol, including Windows 95/98/ME, Windows 2000/XP, Apple OS X, and Linux. Here, we are going to use Windows Vista. Open the Network and Sharing Center by clicking Network on the Start menu. When the window appears, Vista will automatically scan the network for any shared network resources. When it has finished, you will see the available machines on the network, including FREENAS. Open up the FREENAS computer and you will see store, the storage area that you configured. Double-click on that and you are now "inside" the FreeNAS server from within your Windows machine. Try dragging and dropping a few files into the store area. Then try deleting them again. To access the FreeNAS server without using the Network and Sharing Center, click Start, type \\freenas, and then press Enter. This will bring up the shares available on the FreeNAS server directly.
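The same anonymous share can also be reached from a command prompt with a UNC path. A quick sketch, using the example address and share name from above (adjust them to whatever you configured):

    net use Z: \\192.168.1.250\store
    dir Z:

The first command maps the share to drive Z: and the second lists its contents; net use Z: /delete removes the mapping again.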
Detailed Overview of Installation

It is time to get your hands on a working FreeNAS server, and to do that we need to boot it up on a PC. There are several steps to this. First, you must burn a CD of the ISO image file you have downloaded. Then, you need to boot the PC from the CD; this may involve changing your computer's BIOS to make it boot from the optical drive. Then, you can configure the FreeNAS server to make some storage space available on the network. When using the LiveCD to boot FreeNAS, there are two types of storage on FreeNAS: data and configuration information. The data will be held on the hard drive of the PC, but the configuration needs to be held on a floppy disk or a USB flash disk. For this example, we will use a USB flash disk to store the configuration information.

Making the FreeNAS CD

To boot the PC into FreeNAS, you need a CD. The ISO image file you have downloaded contains all the information needed for the CD, but it needs to be written onto a physical CD. This process is often known as burning the CD, as the laser writes to the disk by heating it and marking or scorching the surface layer. You need to use a PC with a CD-RW drive and a blank CD-R disk (I recommend using a good brand name CD-R for best results). Download the FreeNAS ISO image on to that machine. The PC with the CD writer should have some CD writing software on it (for example Roxio Easy CD or Nero). If you are familiar with the CD writing software, go ahead and burn the ISO file to the CD-R disk. If you aren't familiar with the CD writing software, or the PC doesn't have any CD writing software, then I recommend ISO Recorder. You can download it from http://isorecorder.alexfeinman.com/isorecorder.htm.

Booting from CD

Put your newly made FreeNAS CD into the CD drive of the machine on which you want to install FreeNAS, and also put the USB flash disk into a USB port. The flash disk will be used to store the configuration data. (You can also use a floppy disk. If you have both a USB flash disk and a floppy inserted, FreeNAS will save the configuration on the USB device). Now, you need to switch on the PC. When a PC starts, it goes through what is known as the Power On Self Test sequence. Here, the PC will check the amount of memory installed in the PC and find the installed hard drives. After the checks, the PC will try to boot from one of the hard drives, the CDROM, the floppy disk, or even a USB flash disk. Which device the PC chooses first as its boot device can be changed by a built-in setup program. The setup program lets you modify basic system configuration settings. These settings are stored in a special battery-backed area of the computer's memory that retains the settings even when the power is switched off. During the POST sequence, there is normally a message telling you how to enter the built-in setup program. It is normally either the DEL key or F2; on some systems it is also F10. You need to enter the setup to check and/or change the first boot device to be the CDROM, so that the computer will boot into FreeNAS. Each PC has a slightly different setup program, so you will need to search around until you find what you need. The three most popular types of setup programs (also known as the BIOS, Basic Input Output System) are the Phoenix setup program, the Phoenix-Award setup program, and the AMI setup program.
There are many types of BIOS setup programs and each PC manufacturer modifies the setup program for their own use. The information below is really only a "rough guide" to help you feel your way around. Your BIOS setup program may be significantly different from the examples below. The best source of information is the manual that came with your PC or your motherboard. If you don't have one, most PC manufacturers have them available for download on their websites.

Phoenix BIOS

If your machine has a Phoenix BIOS, then normally you need to press F2 to enter the setup program. The top of the setup program has a menu that you can navigate with the left and right arrow keys; you need to select the Boot menu. On the Boot menu page, you can move up and down the available boot devices using the up and down arrow keys. You can expand and collapse sections with the + or - signs using the ENTER key. To change the boot order, you use the + and - keys. You want to make sure that the CDROM is the first device in the list. After you have changed the boot order list, you need to go to the Exit menu (by pressing the right arrow key) and select Exit Saving Changes. The PC will then reboot and, after the POST, it will start to boot from the FreeNAS CD.

Phoenix-Award BIOS

If your PC has a Phoenix-Award BIOS, then normally you need to press DEL to enter the setup program. Once inside, you can use the up, down, left, and right keys to navigate around the menus. Go into Advanced BIOS Features and set the First Boot Device to be CDROM by using the + and - keys. You now need to save your changes and exit. Pressing ESC will bring you back to the main menu, then select Save & Exit Setup. Often, pressing F10 will have the same effect. The PC will then reboot and, if you have made the changes correctly, it will boot from the FreeNAS CD.

AMI BIOS

The American Megatrends, Inc (AMI) BIOS normally displays a message telling you to Hit <DEL> if you want to run setup. Once inside, it is quite different to the setup programs for Phoenix or Award. Here, the Tab key is used to navigate and the arrow keys are used to change values. To go from one page to the next, press the ALT+P keys. This information should also be printed at the bottom of the BIOS setup page. You need to find the variable Boot Sequence and make sure that it is set to boot from the CDROM first.

First Look at FreeNAS

The boot process is in four distinct parts. First, the PC will go through its POST (Power On Self Test) sequence. Here, the PC will check the amount of memory installed (which you can often see being counted on the screen) and which devices are connected (like hard drives and CDROMs). It should then start to boot from the CD. Here, FreeBSD (the underlying OS of FreeNAS) will start to boot; this is recognizable by the simple spinning wheel (made up of simple text characters like |, -, /, and \, which are animated to give the appearance of spinning). The third step is the FreeNAS boot menu. This will appear for just five seconds and you should just let it boot normally, which is the default. The final stage is when the FreeNAS logo appears and the system will boot as a FreeNAS server. You can tell when the system is fully loaded because the PC speaker will make some short but melodious beeps.

Configuring the Network

The majority of the configuration for FreeNAS is done via a web interface, but before you can use the web interface, the FreeNAS server needs to be configured for your network.
This is done via a simple text menu system using the keyboard and monitor attached to the PC with FreeNAS running on it. You probably only need to do this once; after that, the new network information will be saved on the USB flash disk (or floppy disk) and the server will boot into this configuration every time. If you press the SPACE bar on the FreeNAS machine, the FreeNAS logo will disappear and a simple menu will appear.

Here, you have a number of options, including options to reboot or power off the system. The first two options are about configuring the network and they reflect the two parts of configuring the network: first, you need to choose which network card to use (option 1), and second, you need to assign it an address (option 2). If you have only one network card in your machine, then the FreeNAS server should have found it and automatically assigned it to be the LAN (Local Area Network) interface.

What If My Network Card Isn't Found? This probably means that the network card in your machine isn't supported by FreeNAS or, more specifically, by FreeBSD. You will need to replace the card with one supported by FreeBSD. Check the FreeBSD hardware compatibility page for more information: http://www.freebsd.org/releases/6.2R/hardware-i386.html

If you see something like the following screenshot, then the network has been recognized and assigned automatically by FreeNAS.

What is a LAN IP Address? IP stands for Internet Protocol and it is the basic low-level language that computers use to talk to each other on the Internet. It is also used on private networks (in the office or at home) to connect different PCs and even printers to each other. An IPv4 address is made up of four sets of numbers (0 to 255) and is expressed in what is known as dot notation (meaning that the numbers are separated by dots). So 192.168.1.250 is an IP address; it also happens to be the default IP address for the FreeNAS server. Like email, the postal service, and the telephone, each destination (email account, mailbox, or handset) needs a unique way of being identified. This is what IP addresses do; they allow each piece of equipment on the network to have a unique identifier so that messages can be addressed to the right place on the network.

Pronouncing IP Addresses: If you need to speak to someone about an IP address, the simplest way is to speak about each digit separately, so 192.168.1.250 isn't "one hundred and ninety two dot..." but rather "one nine two dot one six eight dot one dot two five zero".

There are two ways in which you can obtain an IP address for the FreeNAS server. The first is to have the address assigned automatically via the DHCP service (Dynamic Host Configuration Protocol), and the second is to assign it manually.

What is DHCP? The Dynamic Host Configuration Protocol (DHCP) automates the assignment of IP addresses and other IP parameters (like subnet masks and the default gateway). A computer that needs an IP address will send a request to the DHCP server and the server will reply with an IP address from a pool of addresses that have been set aside for this purpose. A DHCP server can be a PC or server (running Windows, OS X, or Linux) as well as a small device like a modern DSL modem or firewall.

The advantage of the DHCP method is that the IP address assignment all happens in the background and you don't need to worry about setting it yourself.
The disadvantages are that, first, you need to have an already configured and running DHCP server on your network; and second, DHCP assigns addresses from a pool of available addresses. This means that every time the FreeNAS server boots, it is not guaranteed to have the same address as it had previously. This isn't a problem when using the CIFS protocol; however, for accessing the web interface or using protocols like FTP, it is desirable to have a stable IP address to refer to. However, for testing the FreeNAS server and learning about how it works, using a DHCP-assigned address could be acceptable for now. It is actually possible to assign fixed, permanent IP addresses to certain pieces of hardware, including a FreeNAS server, over DHCP, but that requires extra advanced configuration changes in the DHCP server that cannot be covered in this tutorial.

So, opting for the manual IP address, you now need to obtain two pieces of information. The first is the actual IP address for the FreeNAS server and the second is what is known as the subnet mask. The subnet mask will also be expressed in dot notation and is normally something like 255.255.255.0. If you are in an office environment, you need to speak to the network administrator and he/she will be able to give you the information you need. If you are administering your own network, you need to choose an IP that isn't currently allocated to any other machine on your network (and also isn't part of the address pool of any DHCP server on your network).

Having obtained the IP address and subnet mask, you can now configure the FreeNAS server for your network. Select option 2 on the console menu. If you have chosen to have DHCP assign the address, answer yes (y) to the first question about using DHCP for IPv4. Otherwise answer no (n). If you are setting the address manually, you can now enter the address in dot notation, for example 192.168.1.240. Next comes the subnet mask. If your subnet mask is 255.255.255.0, enter 24; for 255.255.0.0, enter 16; and for 255.0.0.0, enter 8 (as noted earlier, the number is the count of one-bits in the binary form of the mask). At this point, you can now skip the default gateway and DNS questions (by just pressing ENTER). We won't be using IPv6, so the simplest thing to do now is just answer yes to the "Do you want to use AutoConfiguration for IPv6?" question. This will cause a small delay while FreeNAS tries (and probably fails) to get the IPv6 address, but it is simpler than trying to enter the IPv6 address manually!

After you have successfully set the IP address, there will be a small message on the screen inviting you to access the web interface by opening the listed URL in your web browser. If you have used DHCP, note down the URL listed. If you set the IP address manually, check that the URL listed is the same as the IP address you set, with http:// in front of it. You are now ready to access the web interface.

What is IPv4 and IPv6? The Internet Protocol has been around since the mid-1980s and, when it was designed, the popularity of the Internet was not envisaged. The number of computers connected to the Internet is quickly growing beyond the addressing capabilities of the original protocol. As an answer to this, a new version of the IP protocol has been designed and given the name IP version 6, or IPv6 for short, and the older version has taken the name IP version 4, or IPv4 for short. FreeNAS supports both versions of the Internet Protocol. In this tutorial, we will concentrate just on IPv4 as it remains the more popular of the two protocols.
Basic Configuration

With your FreeNAS server now up and running, it is time to access the web interface. Open a web browser on a computer on the same network as the FreeNAS server. Enter the URL of the FreeNAS server. This should be the same as the IP address of the server with http:// in front of it. The default URL is http://192.168.1.250

The first time you access the FreeNAS web interface, you will be asked for the username and password. The default username is admin and the default password is freenas.

FreeNAS Web Interface

You should now have the web interface in your browser. The interface is split into two main sections. Down the left-hand side are the menus, and the right-hand side contains the pages for configuration. The menus are split into various sections: System, Interfaces, Disks, Services, Access, Status, Diagnostics, and Advanced. When talking about a particular menu item, we shall use the notation Subsection: Menu Item to help you find the right menu option easily. So, the Management option, which is in the Disks subsection, will be referred to as Disks: Management.

System
This section is for system-level configuration and operations; here, for example, you can change the username and password, back up and restore the configuration data, and shut down or reboot the server.

Interfaces
Here, you can configure the network of the FreeNAS server much like you did via the console menu. You can change the network card that is used for the web interface and assign permanent or automatic IP addresses. Be careful when you change things here, as some changes won't take effect until you reboot. If you have changed any of the addressing, you will need to access the web interface with the new IP address.

Disks
This section of the menu is for administering the disks on the server. Here, you can set up disk redundancy (RAID), control encryption, format disks, and mount the disks on the server.

Services
The various access protocols like CIFS, NFS, and FTP are controlled from here. Each service is administered individually and, by default, NONE of the services are enabled, so before you can access files stored on the FreeNAS server, you need to enable at least one of these services.

Access
Most of the services offered by FreeNAS use some form of list of users to control who has access and who does not. This section is for defining these users and the groups they belong to, as well as connecting the FreeNAS server to other directory services.

Status
The status menu has several reporting tools for you to see the current state of your FreeNAS server, including a general overview, memory usage, disk usage, and network usage. You can also configure emails to be sent periodically about the status of the server.

Diagnostics
The diagnostics menu contains different tools to help diagnose any problem with the FreeNAS server, including logs of all the important services and diagnostic information from the hard disks and other system modules.

Advanced
The advanced section provides some simple tools for executing commands at the operating system level and should not be used by those unfamiliar with FreeBSD.