
How-To Tutorials - Web Development


PHP Web 2.0 Mashup Projects: Your Own Video Jukebox: Part 2

Packt
19 Feb 2010
19 min read
Parsing With PEAR

If we were to start mashing up right now, we would have to create three different parsers to handle our three response formats: XSPF, YouTube's XML response, and RSS. We would have to comb through the documentation and write a flexible parser for each format, and if the XML response for any of these formats changed, we would also be responsible for updating our parser code. This isn't a difficult task, but we should be aware that someone else has already done the work for us. Someone else has already dissected the XML, and to save time, we can leverage that work for our mashup.

We used PEAR earlier, in Chapter 1, to help with XML-RPC parsing. For this project, we will once again use PEAR to save us the trouble of writing parsers for the three XML formats we will encounter. We will take a look at three packages for our mashup. File_XSPF is a package for extracting and setting up XSPF playlists. Services_YouTube is a web services package created specifically for handling the YouTube API. Finally, XML_RSS is a package for working with RSS feeds. It works out well for this project that there are three specific packages that fit our XML and RSS formats. If you need to work with an XML format that does not have a specific PEAR package, you can use the XML_Unserializer package, which takes arbitrary XML and returns its contents as PHP data structures.

Is PEAR Right For You?

Before we start installing PEAR packages, we should consider whether it is even feasible to use them for a project. PEAR packages are installed with a command line package manager that is included with every core installation of PHP. To install PEAR packages, you need administrative access to the server. If you are in a shared hosting environment and your hosting company is stingy, or if you are in a strict corporate environment where getting a server change is more hassle than it is worth, PEAR installation may not be allowed.
You could get around this by downloading the PEAR files and installing them in your web documents directory. However, you will then have to manage package dependencies and package updates yourself. This hassle may be more trouble than it's worth, and you may be better off writing your own code to handle the functionality.

On the other hand, PEAR packages are often great time savers. The purpose of the packages is either to simplify tedious tasks or to interface with complex systems; the PEAR developer has done the difficult work for you already. Moreover, because they are written in PHP and not C, as a PHP extension would be, a competent PHP developer should be able to read the code itself if the documentation is lacking. Finally, one key benefit of many packages, including the ones we will be looking at, is that they are object-oriented representations of whatever they interface with. Values can be extracted simply by reading an object's properties, and complex interactions can be triggered by a single method call. This helps keep our code clean and modular. Whether the benefits of PEAR outweigh the potential obstacles depends on your specific situation.

Package Installation and Usage

Just as when we installed the XML-RPC package, we will use the pear command-line installer to install our three packages. To install a package, simply type install on the command line followed by the name of the package. In this case, though, we need to set a few more flags to force the installer to grab dependencies and code in beta status. To install File_XSPF, switch to the root user of the machine and use this command:

    [Blossom:~] shuchow# /usr/local/php5/bin/pear install -f --alldeps File_XSPF

This command will download and install the package. The --alldeps flag tells PEAR to also check for required dependencies and install them if necessary. The progress and outcome of the downloads will be reported.
Run a similar command for Services_YouTube:

    [Blossom:~] shuchow# /usr/local/php5/bin/pear install -f --alldeps Services_YouTube

Usually, you will not need the -f flag. By default, PEAR downloads the latest stable release of a package. The -f (force) flag forces PEAR to download the most current version, regardless of its release state. As of this writing, File_XSPF and Services_YouTube do not have stable releases, only beta and alpha versions respectively. Therefore, we must use -f to grab and install these packages; otherwise, PEAR will complain that the latest version is not available. If the package you want to download is in a stable release state, you will not need the -f flag. This is the case with XML_RSS, which has a stable version available:

    [Blossom:~] shuchow# /usr/local/php5/bin/pear install --alldeps XML_RSS

After this, sending a list-all command to PEAR will show the three new packages along with the packages you had before. PEAR packages are basically self-contained PHP files that PEAR installs into your PHP includes directory. The includes directory is set by a directive in your php.ini file. Navigate to this directory to see the PEAR packages' source files.

To use a PEAR package, you will need to include the package's source file at the top of your code. Consult the package's documentation on how to include the main package file. For example, File_XSPF is activated by including a file named XSPF.php. PEAR places XSPF.php in a directory named File, and that directory is inside your includes directory:

    <?php
    require_once 'File/XSPF.php';
    //File_XSPF is now available.

File_XSPF

The documentation for the latest version of File_XSPF is located at http://pear.php.net/package/File_XSPF/docs/latest/File_XSPF/File_XSPF.html. The package is simple to use. The heart of the package is an object called XSPF. You instantiate and use this object to interact with a playlist.
It has methods to retrieve and modify values from a playlist, as well as utility methods to load a playlist into memory, write a playlist from memory to a file, and convert an XSPF file to other formats.

Getting information from a playlist consists of two straightforward steps. First, the location of the XSPF file is passed to the XSPF object's parse method, which loads the file into memory. After the file is loaded, you can use the object's various getter methods to extract values from the list. Most of the XSPF getter methods return metadata about the playlist itself. To get information about the tracks in the playlist, use the getTracks method. This method returns an array of XSPF_Track objects; each track in the playlist is represented by one XSPF_Track object in this array. You can then use the XSPF_Track object's methods to grab information about the individual tracks.

We can grab a playlist from Last.fm to illustrate how this works. The web service has a playlist of a member's most played songs. Named Top Tracks, the playlist is located at http://ws.audioscrobbler.com/1.0/user/USERNAME/toptracks.xspf, where USERNAME is the name of the Last.fm user you want to query. This page is named XSPFPEARTest.php in the examples. It uses File_XSPF to display my Top Tracks playlist from Last.fm:

    <?php
    require_once 'File/XSPF.php';

    $xspfObj =& new File_XSPF();

    //Load the playlist into the XSPF object.
    $xspfObj->parseFile('http://ws.audioscrobbler.com/1.0/user/ShuTheMoody/toptracks.xspf');

    //Get all tracks in the playlist.
    $tracks = $xspfObj->getTracks();
    ?>

This first section creates the XSPF object and loads the playlist. First, we bring the File_XSPF package into the script. Then, we instantiate the object. The parseFile method is used to load an XSPF file across a network; this ties the playlist to the XSPF object. We then use the getTracks method to transform the songs on the playlist into XSPF_Track objects.
    <html>
    <head>
    <title>Shu Chow's Last.fm Top Tracks</title>
    </head>
    <body>
    Title: <?= $xspfObj->getTitle() ?><br />
    Created By: <?= $xspfObj->getCreator() ?>

Next, we prepare to display the playlist. Before we do that, we extract some information about the playlist itself. The XSPF object's getTitle method returns the XSPF file's title element, and getCreator returns the creator element of the file.

    <?php foreach ($tracks as $track) { ?>
    <p>
    Title: <?= $track->getTitle() ?><br />
    Artist: <?= $track->getCreator() ?><br />
    </p>
    <?php } ?>
    </body>
    </html>

Finally, we loop through the tracks array, assigning each of its elements, which are XSPF_Track objects, to the $track variable. XSPF_Track also has getTitle and getCreator methods. Unlike XSPF's methods of the same names, getTitle returns the title of the track, and getCreator returns the track's artist. Running this file in your web browser will return a list populated with data from Last.fm.

Services_YouTube

Services_YouTube works in a manner very similar to File_XSPF. Like File_XSPF, it is an object-oriented abstraction layer on top of a more complicated system; in this case, the system is the YouTube API. Using Services_YouTube is a lot like using File_XSPF: include the package in your code, instantiate a Services_YouTube object, and use this object's methods to interact with the service. The official documentation for the latest release of Services_YouTube is located at http://pear.php.net/package/Services_YouTube/docs/latest/. The package also has working online examples at http://pear.php.net/manual/en/package.webservices.services-youtube.php. Many of the methods deal with getting members' information, such as their profiles and the videos they've uploaded. A smaller, but very important, subset is used to query YouTube for videos. We will use this subset in our mashup. To get a list of videos that have been tagged with a specific tag, use the object's listByTag method.
listByTag will query the YouTube service and store the XML response in memory. It does not return an array of video objects we can directly manage, but with one additional function call, we can achieve this. From there, we can loop through an array of videos much as we did for XSPF tracks. The example file YouTubePearTest.php illustrates this process:

    <?php
    require_once 'Services/YouTube.php';

    $dev_id = 'Your YouTube DeveloperID';
    $tag = 'Social Distortion';

    $youtube = new Services_YouTube($dev_id, array('usesCache' => true));
    $videos = $youtube->listByTag($tag);
    ?>

First, we load the Services_YouTube file into our script. As YouTube's web service requires a Developer ID, we store that information in a local variable. After that, we place the tag we want to search for in another local variable named $tag. In this example, we are going to check out which videos YouTube has for one of the greatest bands of all time, Social Distortion.

Services_YouTube's constructor takes the Developer ID and uses it whenever it queries the YouTube web service. The constructor can also take an array of options as a parameter. One of the options is to use a local cache of the queries. It is considered good practice to use a cache, so as not to slam the YouTube server and run up your request quota. Another option is to specify either REST or XML-RPC as the protocol via the driver key in the options array. By default, Services_YouTube uses REST; unless you have a burning requirement to use XML-RPC, you can leave it as is. Once the object is instantiated, you can call listByTag to get the response from YouTube. listByTag takes only one parameter: the tag of our desire. Services_YouTube now has the results from YouTube, and we can begin displaying them:

    <html>
    <head>
    <title>Social Distortion Videos</title>
    </head>
    <body>
    <h1>YouTube Query Results for Social Distortion</h1>

Next, we will loop through the videos.
To get an array of video objects, we first need to parse the XML response. We do that using Services_YouTube's xpath method, which uses the powerful XPath query language to go through the XML and convert it into PHP objects. We pass an XPath query into the method, which gives us back an array of useful objects. We will take a closer look at XPath and XPath queries in a later project. For now, trust that the query //video will return an array of video objects we can examine. Within the loop, we display each video's title, a thumbnail image of the video, and a hyperlink to the video itself:

    <?php foreach ($videos->xpath('//video') as $i => $video) { ?>
    <p>
    Title: <?= $video->title ?><br />
    <img src='<?= $video->thumbnail_url ?>' alt='<?= $video->title ?>' /><br />
    <a href='<?= $video->url ?>'>URL</a>
    </p>
    <?php } ?>
    </body>
    </html>

Running this query in our web browser will give us a results page of videos that match the search term we submitted.

XML_RSS

Like the other PEAR packages, XML_RSS turns something very complex, RSS, into something very simple and easy to use: PHP objects. The complete documentation for this package is at http://pear.php.net/package/XML_RSS/docs/XML_RSS. There is a small difference in basic philosophy between XML_RSS and the other two packages. Services_YouTube and File_XSPF take information about whatever we're interested in and place it into PHP object properties. For example, File_XSPF puts track names into a Track object, and you use a getTitle() getter method to get the title of the track. Services_YouTube follows the same principle, but the properties are public, so there are no getter methods; you access the video's properties directly on the video object. In XML_RSS, the values we're interested in are stored in associative arrays. The methods in this package return those arrays, and you then manipulate them directly.
It's a small difference, but you should be aware of it in case you want to look at the code. It also means that you will have to check the package's documentation to see which array keys are available to you. Let's take a look at how this works in an example. The file is named RSSPEARTest.php in the example code.

One of Audioscrobbler's feeds gives us an RSS file of songs that a user recently played. The feed isn't always populated, because after a few hours, played songs are no longer considered recent. In other words, songs will eventually drop off the feed simply because they are too old. Therefore, it's best to use this feed with a heavy user of Last.fm. RJ is a good example; he seems to always be listening to something. We'll grab his feed from Audioscrobbler:

    <?php
    include ("XML/RSS.php");

    $rss =& new XML_RSS("http://ws.audioscrobbler.com/1.0/user/RJ/recenttracks.rss");
    $rss->parse();

We start off by including the module and creating an XML_RSS object. XML_RSS is where all of the array getter methods reside, and it is the heart of this package. Its constructor takes one parameter: the path to the RSS file. At instantiation, the package loads the RSS file into memory. parse() is the method that actually does the RSS parsing. After it runs, the getter methods will return data about the feed. Needless to say, parse() must be called before you do anything constructive with the file.

    $channelInfo = $rss->getChannelInfo();
    ?>

The package's getChannelInfo() method returns an array that holds the metadata of the file: the channel. This array holds the title, description, and link elements of the RSS file. Each of these elements is stored in the array under a key of the same name as the element.

    <?= '<?xml version="1.0" encoding="UTF-8" ?>' ?>

The data that comes back will be UTF-8 encoded. Therefore, we need to force the page into UTF-8 encoding mode.
This line outputs the XML declaration at the top of the web page to ensure proper rendering. Putting a literal <?xml declaration in the file would trigger the PHP engine to parse the declaration; PHP would not recognize it and would halt the page with an error, which is why we echo it from a string instead.

    <html>
    <head>
    <title><?= $channelInfo['title'] ?></title>
    </head>
    <body>
    <h1><?= $channelInfo['description'] ?></h1>

Here we begin the actual output of the page. We start by using the array returned from getChannelInfo() to output the title and description elements of the feed.

    <ol>
    <?php foreach ($rss->getItems() as $item) { ?>
    <li>
    <?= $item['title'] ?>: <a href="<?= $item['link'] ?>"><?= $item['link'] ?></a>
    </li>
    <?php } ?>
    </ol>

Next, we output the items in the RSS file. We use getItems() to grab information about the items in the feed. The return value is an array that we loop through with a foreach statement. Here, we extract each item's title and link elements: we show the title, then create a hyperlink to the song's page on Last.fm. The description and pubDate elements in the RSS are also available to us in getItems's returned array.

    Link to User: <a href="<?= $channelInfo['link'] ?>"><?= $channelInfo['link'] ?></a>
    </body>
    </html>

Finally, we use the channel's link property to create a hyperlink to the user's Last.fm page before we close off the page's body and html tags.

Using More Elements

In this example, the available elements in the channel and item arrays are a bit limited. getChannelInfo() returns an array with only the title, description, and link properties. The array from getItems() has only the title, description, link, and pubDate properties. This is because we are using the latest release version of XML_RSS, which at the time of writing this book is version 0.9.2. Later versions of XML_RSS, currently in beta, handle many more elements; RSS 2.0 elements like category and author are available.
To upgrade to a beta version of XML_RSS, use the command pear upgrade -f XML_RSS on the command line. The -f flag is the same flag we used to force the beta and alpha installations of Services_YouTube and File_XSPF. Alternatively, you can install the beta version of XML_RSS from the start using the same -f flag. If we run this page in our web browser, we can see the successful results of our work.

At this point, we know how to use the Audioscrobbler feeds to get information; the majority of the feeds are in either XSPF or RSS format. We know generally how the YouTube API works. Most importantly, we know how to use the respective PEAR packages to extract information from each web service. It's time to start coding our application.

Mashing Up

If you haven't already, you should, at the very least, create a YouTube account and sign up for a developer key. You should also create a Last.fm account, install the client software, and start listening to some music on your computer. This will personalize the video jukebox to your music tastes. All examples here assume that you are using your own YouTube key. I will use my own Last.fm account for the examples; as the feeds are open and free, you can use the same feeds if you choose not to create a Last.fm account.

Mashup Architecture

There are obviously many ways in which we could set up our application. However, we're going to keep the functionality fairly simple. The interface will be a framed web page. The top pane is the navigation pane; it will be used for song selection. The bottom pane is the content pane, and it will display and play the video. In the navigation pane, we will create a select menu with all of our songs. The value, and label, for each option will be the artist name followed by a dash, followed by the name of the song (for example, "April Smith—Bright White Jackets"). Providing both pieces of information will help YouTube narrow down the selection.
When the user selects a song and presses a "Go" button, the application will load the content page into the content pane. The form will pass the artist and song information to the content page via a GET parameter, and the content page will use this GET parameter to query YouTube. The page will pull up the first, most relevant result from the list of videos and display it.

Main Page

The main page is named jukebox.html in the example code. This is our frameset page, and it will be quite simple: all it does is define the frameset that we will use.

    <html>
    <head>
    <title>My Video Jukebox</title>
    </head>
    <frameset rows="10%,90%">
    <frame src="navigation.php" name="Navigation" />
    <frame src="" name="Content" />
    </frameset>
    </html>

This code defines our page as two frame rows. The navigation section, named Navigation, is 10% of the height, and the content section, named Content, is the remaining 90%. When first loaded, the mashup will show the list of songs in the navigation pane and nothing else.


Installing Mahara

Packt
19 Feb 2010
7 min read
What will you need?

Before you can install Mahara, you will need access to a Linux server. You may run Linux on a laptop or desktop at home, or your company or institution may have its own Linux servers, in which case, great! If not, there are many hosting services available on the Internet that will give you access to a Linux server and therefore let you run Mahara. It is important that you get a server to which you have root access. It is also important that you set your server up with the following features:

Database: Mahara must have a database to work. The supported databases are PostgreSQL Version 8.1 or later and MySQL Version 5.0.25 or later. The Mahara developers recommend that you use PostgreSQL if possible, but for most installations, MySQL will work just as well.

PHP: Mahara requires PHP Version 5.1.3 or later.

Web Server: The preferred web server is Apache.

PHP extensions: Compulsory extensions: GD, JSON, cURL, libxml, SimpleXML, Session, pgSQL or MySQLi, EXIF, and OpenSSL or XML-RPC (for networking support). Optional extension: Imagick.

Ask your resident IT expert about the features listed above if you don't understand what they mean. A quick way to install some of the software listed above is to use the apt-get install command if you are using an Ubuntu/Debian Linux system. See http://www.debian.org/doc/manuals/apt-howto/ to find out more.

Downloading Mahara

It's time for action. Let's start by seeing how easy it is to get a copy of Mahara for ourselves, and the best part is... it's free!

Time for action – downloading Mahara

Go to http://mahara.org. Click on the download button on the Mahara home page. The button will be labeled with the name of the current version of Mahara. You will now see a web page that lists all the various versions of Mahara, both previous and forthcoming versions in alpha and beta. Choose the most recent version from the list, in the format you prefer.
We recommend that you use the .tar.gz type because it is faster to download than .zip. You will be asked whether you would like to open or save the file. Select Save File, and click OK. That's all there is to it. Go to your Internet downloads folder; in there, you should see your newly downloaded Mahara package.

What Just Happened?

You have just taken your first step on the road to installing Mahara. We have seen the website to visit to download the most recent version, and learned how to download the package in the format we prefer.

Using the command line

The best way to install and administer your Mahara is to use the command line. This is a way of typing text commands to perform specific tasks, rather than using a graphical user interface. There are many things you can do from the command line, from common tasks such as copying and deleting files to more advanced ones such as downloading and installing software from the Internet. A lot of what we will do in this section assumes that you have Secure Shell (SSH) access to your server through the terminal command line. If you have a Linux or a Mac computer, you can use the terminal on your machine to SSH into your web server. Windows users can achieve the same functionality by downloading a free terminal client called PuTTY from http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html. Speak to your resident IT expert for more information on how to use the terminal, or see http://www.physics.ubc.ca/mbelab/computer/linuxintro/html/ for an introduction to the Linux command line. For now, let's just learn how to get the contents of our downloaded package into the correct place on our server.

Time for action – creating your Mahara file structure

Copy the mahara-1.2.0.tar.gz package you downloaded into your home directory on your web server.
If you are copying the file to the server from your own computer, you can do this using the scp command (on Linux or Mac):

    scp mahara-1.2.0.tar.gz servername:pathtohomedirectory

On Windows, you may prefer to use a free FTP utility such as FileZilla (http://filezilla-project.org/). Unpack the contents of the Mahara package on the Linux server. On the terminal, you can do this using the tar command:

    tar xvzf mahara-1.2.0.tar.gz

You will now see a new folder called mahara-1.2.0; you will need to rename this to public. To do this on the terminal, you can use the mv command:

    mv mahara-1.2.0 public

That's it! The Mahara code is now in place.

What Just Happened?

You just learned where to copy the Mahara package on your server and how to extract its contents.

Creating the database

A lot of the information created in your Mahara will be stored in a database. Mahara offers support for both PostgreSQL and MySQL databases; however, we prefer to use PostgreSQL. If you are interested, see http://mahara.org/interaction/forum/topic.php?id=302 for a discussion of why PostgreSQL is preferred to MySQL. The way you create your database will depend on who you have chosen to host your Mahara. Sometimes, your web host will provide a graphical user interface to access your server's database; get in touch with your local IT expert to find out how. For smaller Mahara installations, however, we often prefer to use something like phpPgAdmin, a software application that allows you to manage PostgreSQL databases over the Internet. See http://phppgadmin.sourceforge.net for more information on setting up phpPgAdmin on your server. Also see http://www.phpmyadmin.net/ for phpMyAdmin, which works in a very similar way to phpPgAdmin but operates on MySQL databases. For now, let's get on with creating a PostgreSQL database using our phpPgAdmin panel.

Time for action – creating the Mahara database

Open up your phpPgAdmin panel from your Internet browser and log in.
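The unpack-and-rename steps above can be run end to end as one short session. The sketch below fabricates a stand-in package under a temporary directory so it can run anywhere; on a real server you would skip that part and start from the mahara-1.2.0.tar.gz you actually downloaded, so the paths here are illustrative assumptions only.

```shell
# Fabricate a throwaway stand-in for the downloaded package so this
# demo is self-contained; on a real server, skip this block.
demo=$(mktemp -d)
cd "$demo"
mkdir mahara-1.2.0
echo '<?php // placeholder' > mahara-1.2.0/index.php
tar czf mahara-1.2.0.tar.gz mahara-1.2.0
rm -r mahara-1.2.0

# The two commands from the text: unpack the archive, then rename the
# resulting folder to "public" so it can serve as the web root.
tar xzf mahara-1.2.0.tar.gz
mv mahara-1.2.0 public

ls public
```

The rename matters because the web server will be pointed at public as its document root, while the original tarball and any later upgrades keep their versioned folder names.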
The username is usually postgres; contact your admin if you are unsure of the database password or how to locate the phpPgAdmin panel. On the front page there is a section that invites you to create a database; click there. Give your database a relevant name, such as mysite_Mahara. Make sure you select the UTF8 encoding from the drop-down box. Finally, click Create. It is also a good idea to create a new user for each database you create; you can use phpPgAdmin to do this. That's it, you're done!

What Just Happened?

We just created the database for our Mahara installation using the open source phpPgAdmin tool available for Linux. Another way to create the database on your server is to use the database command line tool.

Have a go hero – using the command line to create your database

Using the command line is a much more elegant way to create the database, and quicker once you get the hang of it. Why not have a go at creating the database using the command line? For instructions on how to do this, see the database section of the Mahara installation guide: http://wiki.mahara.org/System_Administrator%27s_Guide/Installing_Mahara

Setting up the data directory

Most of the data created in your Mahara is stored in the database. However, all the files uploaded by your users, such as their personal photos or documents, need to be stored in a separate place. This is where the data directory comes in. The data directory is simply a folder that holds all of the "stuff" belonging to your users. Everything is kept safe by the data directory living outside of the public web directory. This setup also makes it easy for you to migrate your Mahara to another server at some point in the future. The data directory is often referred to as the dataroot.
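Creating the dataroot folder itself is a one-liner plus a permissions tweak. The sketch below uses a throwaway path under /tmp so it can run anywhere; the path and the 700 permission bits are illustrative assumptions, and on a real server you would pick a location outside the public web directory (for example something like /var/lib/maharadata) and make it owned by the web-server user.

```shell
# Illustrative only: a real dataroot would live outside the web root
# and be owned by the account the web server runs as.
DATAROOT=/tmp/mahara-dataroot-demo

mkdir -p "$DATAROOT"      # create the folder (and parents) if missing
chmod 700 "$DATAROOT"     # keep uploaded user files private to the owner

ls -ld "$DATAROOT"
```

Mahara is later told about this location in its config file, so whatever path you choose here just needs to be writable by the web server and kept out of the document root.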


Drupal 6 Performance Optimization Using Views and Panels Caching

Packt
19 Feb 2010
5 min read
Views caching

The Views 2 module allows you to cache your Views data and content, and you can configure caching per View. We're going to enable caching on one of our existing Views, and then create a brand new View, using the test content we just generated, and set caching for it as well. This will show a nice integration of the Devel module's functionality with the Views module, and demonstrate how caching works with Views.

Go to your Site building | Views configuration page and you'll see many of your default and custom Views listed. We have a View on this site for our main photo gallery, named photo_gallery in the View listing. Go ahead and click on one of your Views' edit links to enter edit mode for that View. In the Views 2 interface, we'll see tabs for the Default, Page, and/or Block View displays. I'm going to click on my Page tab to see my View's page settings. Under the Basic settings configuration, there is a link for Caching. Currently, the Caching link states None, meaning that no caching has been configured for this View. Click on the None link, select the Time-based radio button to enable time-based caching for the View page, and click the Update default display button.

The next caching options screen will ask you to set the amount of time for both your View query results and your View rendered output. Query results refers to how long the raw query results should be cached; Rendered output is how long the View's HTML output should be cached. So basically, you can cache both your data and your frontend HTML output. Set them both to the default of 1 hour. You can also set one to a specific time and the other to None; go ahead and tweak these settings to your own requirements. I'm leaving both set to the default of 1 hour. Click on the Update button to save your caching options. You are now caching your View. Save the View by clicking on the Save button.
The next time you look at your View interface, you should see the caching time noted under your Basic settings; it will say 1 hour/1 hour. Be aware that once you enable Views caching, any change you make to the View's settings and configuration may not show up in its results and output while caching is enabled. So, while developing a View, you may want to disable caching by setting it back to None.

To see the performance results of this, you can use the Devel module's functionality again. When you load your View after enabling caching, you should see a decrease in the number of milliseconds (ms) needed to build your Views plugins, data, and handlers. For example, if your Views plugin build loaded in 27.1 ms before you enabled caching, you may notice that it changes to something less; in my case, it now loads in 2.8 ms. You can immediately see a slight performance increase in your View build.

Let's go ahead and build a brand new View using the test content we generated with the Devel module, and enable caching for this View as well. I'm going to create a View that filters my blog entries and lists the newest blog entries in post date order, using the Devel content I generated. Go to your Views admin page and follow these steps:

1. Add a new View. Name the View, and add a description and a tag if applicable. Click on Next.
2. Add a Page display to your new View.
3. Name the page View and give it a title.
4. Give your View an HTML list style.
5. Set the View to display 5 posts and to use a full pager.
6. Set your caching to Time-based (following the instructions from the first View we edited).
7. Give the View a path.
8. Add a Node: Title field and set the field to be linked to its node.
9. Add a filter to filter by Node: Type, and select Blog entry.
10. Set your Sort criteria to sort by Node: Post date in ascending order by hour.
Your settings should look similar to this:

Save your View by clicking on the Save button. Your new View will be visible at the Page path you gave it, and it will also be caching the content and data it presents. Again, if you refresh your View page each time, you should notice that the plugin, data, and handler build times decrease or stay very similar and consistent in load times. You should also notice that the Devel database queries status is telling you that it's using the cached results and cached output for the View build times and the MySQL statements. You should see the following code sitting below your page content on the View page you are looking at. It will resemble this:

Views plugins build time: 23.509979248 ms
Views data build time: 55.7069778442 ms
Views handlers build time: 1.95503234863 ms
SELECT node.nid AS nid,
  node_data_field_photo_gallery_photo.field_photo_gallery_photo_fid
    AS node_data_field_photo_gallery_photo_field_photo_gallery_photo_fid,
  node_data_field_photo_gallery_photo.field_photo_gallery_photo_list
    AS node_data_field_photo_gallery_photo_field_photo_gallery_photo_list,
  node_data_field_photo_gallery_photo.field_photo_gallery_photo_data
    AS node_data_field_photo_gallery_photo_field_photo_gallery_photo_data,
  node.type AS node_type,
  node.vid AS node_vid,
  node.title AS node_title,
  node.created AS node_created
FROM {node} node
LEFT JOIN {content_type_photo} node_data_field_photo_gallery_photo
  ON node.vid = node_data_field_photo_gallery_photo.vid
WHERE (node.status <> 0) AND (node.type in ('%s'))
ORDER BY node_created ASC
Used cached results
Used cached output
AJAX Form Validation: Part 1

Packt
18 Feb 2010
4 min read
The server is the last line of defense against invalid data, so even if you implement client-side validation, server-side validation is mandatory. The JavaScript code that runs on the client can be disabled permanently from the browser's settings and/or it can be easily modified or bypassed.

Implementing AJAX form validation

The form validation application we will build in this article validates the form at the server side on the classic form submit, implementing AJAX validation while the user navigates through the form. The final validation is performed at the server, as shown in Figure 5-1.

Doing a final server-side validation when the form is submitted should never be considered optional. If someone disables JavaScript in the browser settings, AJAX validation on the client side clearly won't work, exposing sensitive data, and thereby allowing an evil-intentioned visitor to harm important data on the server (for example, through SQL injection). Always validate user input on the server.

As shown in the preceding figure, the application you are about to build validates a registration form using both AJAX validation (client side) and typical server-side validation:

- AJAX-style (client side): It happens when each form field loses focus (onblur). The field's value is immediately sent to and evaluated by the server, which then returns a result (0 for failure, 1 for success). If validation fails, an error message will appear and notify the user about the failed validation, as shown in Figure 5-3.
- PHP-style (server side): This is the usual validation you would do on the server—checking user input against certain rules after the entire form is submitted. If no errors are found and the input data is valid, the browser is redirected to a success page, as shown in Figure 5-4. If validation fails, however, the user is sent back to the form page with the invalid fields highlighted, as shown in Figure 5-3.
Both AJAX validation and PHP validation check the entered data against our application's rules:

- Username must not already exist in the database
- Name field cannot be empty
- A gender must be selected
- Month of birth must be selected
- Birthday must be a valid date (between 1-31)
- Year of birth must be a valid year (between 1900-2000)
- The date must exist in the number of days for each month (that is, there's no February 31)
- E-mail address must be written in a valid email format
- Phone number must be written in standard US form: xxx-xxx-xxxx
- The I've read the Terms of Use checkbox must be selected

Watch the application in action in the following screenshots.

XMLHttpRequest, version 2

Because we do our best to combine theory and practice, before moving on to implementing the AJAX form validation script we'll have another quick look at our favorite AJAX object—XMLHttpRequest. On this occasion, we will step up the complexity (and functionality) a bit and use everything we have learned until now. We will continue to build on what has come before as we move on, so again, it's important that you take the time to be sure you've understood what we are doing here. Time spent digging into the materials really pays off when you begin to build your own applications in the real world.

Our OOP JavaScript skills will be put to work improving the existing script that we used to make AJAX requests. In addition to the design that we've already discussed, we're creating the following features as well:

- A flexible design, so that the object can be easily extended for future needs and purposes
- The ability to set all the required properties via a JSON object

We'll package this improved XMLHttpRequest functionality in a class named XmlHttp that we'll be able to use in other exercises as well.
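The article implements these rules in PHP on the server. As a language-agnostic illustration (a Python sketch, not the book's code), the trickier rules — the phone format and the "no February 31" date check — can be expressed like this:

```python
import calendar
import re

def valid_phone(phone):
    """Standard US form xxx-xxx-xxxx, as required by the form rules."""
    return re.fullmatch(r'\d{3}-\d{3}-\d{4}', phone) is not None

def valid_birth_date(year, month, day):
    """Year 1900-2000, day 1-31, and the day must exist in that month."""
    if not (1900 <= year <= 2000 and 1 <= month <= 12 and 1 <= day <= 31):
        return False
    # monthrange returns (weekday of first day, number of days in month)
    return day <= calendar.monthrange(year, month)[1]  # no February 31
```

The same checks run twice in the application: once per field as the user tabs through the form (returning 0 or 1 to the client), and once more over the whole submission on the classic form submit.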
You can see the class diagram in the following screenshot, along with the diagrams of two helper classes:

- settings is the class we use to create the call settings; we supply an instance of this class as a parameter to the constructor of XmlHttp
- complete is a callback delegate, pointing to the function we want executed when the call completes

The final purpose of this exercise is to create a class named XmlHttp that we can easily use in other projects to perform AJAX calls. With our goals in mind, let's get to it!

Time for action – the XmlHttp object

In the ajax folder, create a folder named validate, which will host the exercises in this article.
Forms in Grok 1.0

Packt
12 Feb 2010
13 min read
A quick demonstration of automatic forms

Let's start by showing how this works, before getting into the details. To do that, we'll add a project model to our application. A project can have any number of lists associated with it, so that related to-do lists can be grouped together. For now, let's consider the project model by itself. Add the following lines to the app.py file, just after the Todo application class definition. We'll worry later about how this fits into the application as a whole.

class IProject(interface.Interface):
    name = schema.TextLine(title=u'Name', required=True)
    kind = schema.Choice(title=u'Kind of project',
                         values=['personal', 'business'])
    description = schema.Text(title=u'Description')

class AddProject(grok.Form):
    grok.context(Todo)
    form_fields = grok.AutoFields(IProject)

We'll also need to add a couple of imports at the top of the file:

from zope import interface
from zope import schema

Save the file, restart the server, and go to the URL http://localhost:8080/todo/addproject. The result should be similar to the following screenshot:

OK, where did the HTML for the form come from? We know that AddProject is some sort of a view, because we used the grok.context class annotation to set its context and name. Also, the name of the class, but in lowercase, was used in the URL, like in previous view examples. The important new thing is how the form fields were created and used. First, a class named IProject was defined. The interface defines the fields on the form, and the grok.AutoFields method assigns them to the Form view class. That's how the view knows which HTML form controls to generate when the form is rendered. We have three fields: name, description, and kind. Later in the code, the grok.AutoFields line takes this IProject class and turns these fields into form fields. That's it. There's no need for a template or a render method.
The grok.Form view takes care of generating the HTML required to present the form, taking the information from the value of the form_fields attribute that the grok.AutoFields call generated.

Interfaces

The I in the class name stands for Interface. We imported the zope.interface package at the top of the file, and the Interface class that we have used as a base class for IProject comes from this package.

Example of an interface

An interface is an object that is used to specify and describe the external behavior of objects. In a sense, the interface is like a contract. A class is said to implement an interface when it includes all of the methods and attributes defined in an interface class. Let's see a simple example:

from zope import interface

class ICaveman(interface.Interface):
    weapon = interface.Attribute('weapon')

    def hunt(animal):
        """Hunt an animal to get food"""

    def eat(animal):
        """Eat hunted animal"""

    def sleep():
        """Rest before getting up to hunt again"""

Here, we are describing how cavemen behave. A caveman will have a weapon, and he can hunt, eat, and sleep. Notice that the weapon is an attribute—something that belongs to the object, whereas hunt, eat, and sleep are methods. Once the interface is defined, we can create classes that implement it. These classes are committed to include all of the attributes and methods of their interface class. Thus, if we say:

class Caveman(object):
    interface.implements(ICaveman)

Then we are promising that the Caveman class will implement the methods and attributes described in the ICaveman interface:

    weapon = 'ax'

    def hunt(self, animal):
        find(animal)
        hit(animal, self.weapon)

    def eat(self, animal):
        cut(animal)
        bite()

    def sleep(self):
        snore()
        rest()

Note that though our example class implements all of the interface methods, there is no enforcement of any kind made by the Python interpreter. We could define a class that does not include any of the methods or attributes defined, and it would still work.
Interfaces in Grok

In Grok, a model can implement an interface by using the grok.implements method. For example, if we decided to add a project model, it could implement the IProject interface as follows:

class Project(grok.Container):
    grok.implements(IProject)

Due to their descriptive nature, interfaces can be used for documentation. They can also be used for enabling component architectures, but we'll see about that later on. What is of more interest to us right now is that they can be used for generating forms automatically.

Schemas

The way to define the form fields is to use the zope.schema package. This package includes many kinds of field definitions that can be used to populate a form. Basically, a schema permits detailed descriptions of class attributes, using fields. In terms of a form—which is what is of interest to us here—a schema represents the data that will be passed to the server when the user submits the form. Each field in the form corresponds to a field in the schema. Let's take a closer look at the schema we defined in the last section:

class IProject(interface.Interface):
    name = schema.TextLine(title=u'Name', required=True)
    kind = schema.Choice(title=u'Kind of project',
                         required=False,
                         values=['personal', 'business'])
    description = schema.Text(title=u'Description',
                              required=False)

The schema that we are defining for IProject has three fields. There are several kinds of fields, which are listed in the following table. In our example, we have defined a name field, which will be a required field, and will have the label Name beside it. We also have a kind field, which is a list of options from which the user must pick one. Note that the default value for required is True, but it's usually best to specify it explicitly, to avoid confusion. You can see how the list of possible values is passed statically by using the values parameter. Finally, description is a text field, which means it will have multiple lines of text.
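Since the interpreter does not enforce interface declarations, a missing method only surfaces when something tries to call it. A minimal plain-Python sketch of checking an object by hand (zope.interface ships a real version of this kind of check; the helper and class names below are illustrative):

```python
def missing_methods(obj, required):
    """Return the names from `required` that obj does not provide as callables."""
    return [name for name in required if not callable(getattr(obj, name, None))]

# A class that "promises" to be a caveman but only implements sleep
class LazyCaveman:
    def sleep(self):
        return 'zzz'

gaps = missing_methods(LazyCaveman(), ['hunt', 'eat', 'sleep'])
# gaps lists the interface methods the class failed to supply
```

This is the kind of verification an interface contract implies but Python alone never performs: the declaration is documentation unless something checks it.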
Available schema attributes and field types

In addition to title, values, and required, each schema field can have a number of properties, as detailed in the following list:

- title: A short summary or label.
- description: A description of the field.
- required: Indicates whether a field requires a value to exist.
- readonly: If True, the field's value cannot be changed.
- default: The field's default value may be None, or a valid field value.
- missing_value: If input for this field is missing, and that's OK, then this is the value to use.
- order: The order attribute can be used to determine the order in which fields in a schema are defined. If one field is created after another (in the same thread), its order will be greater.

In addition to the field attributes described in the preceding list, some field types provide additional attributes. In the previous example, we saw that there are various field types, such as Text, TextLine, and Choice. There are several other field types available, as shown in the following list. We can create very sophisticated forms just by defining a schema in this way, and letting Grok generate them.

- Bool: Boolean field.
- Bytes: Field containing a byte string (such as the Python str). The value might be constrained to be within length limits.
- ASCII: Field containing a 7-bit ASCII string. No characters > DEL (chr(127)) are allowed. The value might be constrained to be within length limits.
- BytesLine: Field containing a byte string without new lines.
- ASCIILine: Field containing a 7-bit ASCII string without new lines.
- Text: Field containing a Unicode string.
- SourceText: Field for the source text of an object.
- TextLine: Field containing a Unicode string without new lines.
- Password: Field containing a Unicode string without new lines, which is set as the password.
- Int: Field containing an Integer value.
- Float: Field containing a Float.
- Decimal: Field containing a Decimal.
- DateTime: Field containing a DateTime.
- Date: Field containing a date.
- Timedelta: Field containing a timedelta.
- Time: Field containing time.
- URI: A field containing an absolute URI.
- Id: A field containing a unique identifier. A unique identifier is either an absolute URI or a dotted name. If it's a dotted name, it should have a module or package name as a prefix.
- Choice: Field whose value is contained in a predefined set. Parameters: values (a list of text choices for the field), vocabulary (a Vocabulary object that will dynamically produce the choices), or source (a different, newer way to produce dynamic choices). Note: only one of the three should be provided. More information about sources and vocabularies is provided later in this book.
- Tuple: Field containing a value that implements the API of a conventional Python tuple. Parameters: value_type (field value items must conform to the given type, expressed via a field) and unique (specifies whether the members of the collection must be unique).
- List: Field containing a value that implements the API of a conventional Python list. Parameters: value_type and unique, as for Tuple.
- Set: Field containing a value that implements the API of a conventional Python standard library sets.Set or a Python 2.4+ set. Parameter: value_type.
- FrozenSet: Field containing a value that implements the API of a conventional Python 2.4+ frozenset. Parameter: value_type.
- Object: Field containing an object value. Parameter: schema (the interface that defines the fields comprising the object).
- Dict: Field containing a conventional dictionary. The key_type and value_type parameters allow specification of restrictions for keys and values contained in the dictionary.

Form fields and widgets

Schema fields are perfect for defining data structures, but when dealing with forms, sometimes they are not enough. In fact, once you generate a form using a schema as a base, Grok turns the schema fields into form fields. A form field is like a schema field but has an extended set of methods and attributes. It also has a default associated widget that is responsible for the appearance of the field inside the form. Rendering forms requires more than the fields and their types. A form field needs to have a user interface, and that is what a widget provides. A Choice field, for example, could be rendered as a <select> box on the form, but it could also use a collection of checkboxes, or perhaps radio buttons. Sometimes, a field may not need to be displayed on a form, or a writable field may need to be displayed as text instead of allowing users to set the field's value.

Form components

Grok offers four different components that automatically generate forms. We have already worked with the first one of these, grok.Form. The other three are specializations of this one:

- grok.AddForm is used to add new model instances.
- grok.EditForm is used for editing an already existing instance.
- grok.DisplayForm simply displays the values of the fields.

A Grok form is itself a specialization of grok.View, which means that it gets the same methods as those that are available to a view. It also means that a model does not actually need a view assignment if it already has a form. In fact, simple applications can get away with using a form as a view for their objects. Of course, there are times when a more complex view template is needed, or even when fields from multiple forms need to be shown in the same view. Grok can handle these cases as well, which we will see later on.
Adding a project container at the root of the site

To get to know Grok's form components, let's properly integrate our project model into our to-do list application. We'll have to restructure the code a little bit, as currently the to-do list container is the root object of the application. We need to have a project container as the root object, and then add a to-do list container to it. To begin, let's modify the top of app.py, immediately before the TodoList class definition, to look like this:

import grok
from zope import interface, schema

class Todo(grok.Application, grok.Container):
    def __init__(self):
        super(Todo, self).__init__()
        self.title = 'To-Do list manager'
        self.next_id = 0

    def deleteProject(self, project):
        del self[project]

First, we import zope.interface and zope.schema. Notice how we keep the Todo class as the root application class, but now it can contain projects instead of lists. We also omitted the addProject method, because the grok.AddForm instance is going to take care of that. Other than that, the Todo class is almost the same.

class IProject(interface.Interface):
    title = schema.TextLine(title=u'Title', required=True)
    kind = schema.Choice(title=u'Kind of project',
                         values=['personal', 'business'])
    description = schema.Text(title=u'Description', required=False)
    next_id = schema.Int(title=u'Next id', default=0)

We then have the interface definition for IProject, where we add the title, kind, description, and next_id fields. These were the fields that we previously added during the call to the __init__ method at the time of product initialization.

class Project(grok.Container):
    grok.implements(IProject)

    def addList(self, title, description):
        id = str(self.next_id)
        self.next_id = self.next_id + 1
        self[id] = TodoList(title, description)

    def deleteList(self, list):
        del self[list]

The key thing to notice in the Project class definition is that we use the grok.implements class declaration to state that this class will implement the schema that we have just defined.
class AddProjectForm(grok.AddForm):
    grok.context(Todo)
    grok.name('index')
    form_fields = grok.AutoFields(Project)
    label = "To begin, add a new project"

    @grok.action('Add project')
    def add(self, **data):
        project = Project()
        self.applyData(project, **data)
        id = str(self.context.next_id)
        self.context.next_id = self.context.next_id + 1
        self.context[id] = project
        return self.redirect(self.url(self.context[id]))

The actual form view is defined after that, by using grok.AddForm as a base class. We assign this view to the main Todo container by using the grok.context annotation. The name index is used for now, so that the default page for the application will be the 'add form' itself. Next, we create the form fields by calling the grok.AutoFields method. Notice that this time the argument to this method call is the Project class directly, rather than the interface. This is possible because the Project class was associated with the correct interface when we previously used grok.implements. After we have assigned the fields, we set the label attribute of the form to the text: To begin, add a new project. This is the title that will be shown on the form. In addition to this new code, all occurrences of grok.context(Todo) in the rest of the file need to be changed to grok.context(Project), as the to-do lists and their views will now belong to a project and not to the main Todo application. For details, take a look at the source code of this article, from Grok 1.0 Web Development, Chapter 5.
Trunks using 3CX: Part 2

Packt
12 Feb 2010
7 min read
The next wizard screen is for Outbound Call Rules. Let's go over it enough so that you can set up a simple rule. We start off with a name. This can be anything you like, but I prefer something meaningful. For our example, I want to dial 9 to use the analog line, and only allow extensions 100-102 to use this line. I also only want to be able to dial certain phone numbers. Then I have to delete the 9 before it goes out to the phone carrier. Let's have a look at each section of this screen.

Calls to numbers starting with (Prefix)

This is where you specify what you want someone to dial before the line is used. You could enter a string of numbers here to use as a "password" to dial out. You don't just let anyone call an international phone number, so set this to a string of numbers to use as your international password. Give the password only to those who need it. Just make sure you change it occasionally in case it slips out.

Calls from extension(s)

Now, you can specify who (by extension number) can use this gateway. Just enter the extension number(s) you want to allow, either as a range (100-110), individually (100, 101, 104), or as a mix (100-103, 110). Usually, you will leave this open for everyone to use; otherwise, you will restrict extensions that were allowed to use the gateway, which will have repercussions on forwarding rules to external numbers.

Calls to numbers with a length of

This setting can be left blank if you want all calls to be able to go out on this gateway. In the next screenshot, I specified 3, 7, 10, and 11. This covers calls to 911, 411, 555-1234, 800-555-1234, and 1-800-555-1234, respectively. You can control what phone numbers go out based on the number of digits that are dialed.

Route and strip options

Since this is our only gateway right now, we will have it route the calls to the Patton gateway. The Strip Digits option needs to be set to 1. This will strip out the "9" that we specified above to dial out with.
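The matching logic this wizard configures can be modelled in a few lines. The following is an illustrative Python sketch (not 3CX code; the defaults mirror the example rule above): check the prefix, the caller's extension, strip the leading digit, and check the remaining length before handing the number to the gateway.

```python
def route_call(dialed, extension,
               prefix='9',
               allowed_exts={'100', '101', '102'},
               allowed_lengths={3, 7, 10, 11},
               strip_digits=1):
    """Return the number to send to the carrier, or None if the rule rejects it."""
    if not dialed.startswith(prefix):
        return None            # caller did not dial 9 first
    if extension not in allowed_exts:
        return None            # only extensions 100-102 may use this line
    number = dialed[strip_digits:]           # drop the leading 9
    if len(number) not in allowed_lengths:   # 911, 555-1234, 800-555-1234, ...
        return None
    return number

print(route_call('98005551234', '100'))  # -> 8005551234
```

In 3CX the rule is evaluated by the PBX itself; the sketch just makes the order of the checks explicit: prefix first, then extension, then the length of what remains after stripping.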
We can leave the Prepend section blank for now. Now, go ahead and click Finish: Once you click Finish, you will see a gateway wizard summary, as shown in the next screenshot. This shows you that the gateway is created, and it also gives an overview of the settings. Your next step is to get those settings configured on your gateway. There is a list of links for various supported gateways on the bottom of the summary page with up-to-date instructions. Feel free to visit those links. These links will take you to the 3CX website and explain how to configure that particular gateway. With Patton this is easy; click the Generate config file button. The only other information you need for the configuration file is the Subnet mask for the Patton gateway. Enter your network subnet mask in the box. Here, I entered a standard Class C subnet mask. This matches my 192.168.X.X network. Click OK when you are done: Once you click OK, your browser will prompt you to save the file, as shown in the following screenshot. Click Save: The following screenshot shows a familiar Save As Windows screen. I like to put this file in an easy-to-remember location on my hard drive. As I already have a 3CX folder created, I'm going to save the file there. You can change the name of the file if you wish. Click Save: Now that your file is saved, let's take a look at modifying those settings. Open the administration web interface and, on the left-hand side, click PSTN Devices. Go ahead and expand this by clicking the + sign next to it. Now, you will see our newly created Patton SN4114A gateway listed. Click the + sign again and expand that gateway. Next, click the Patton SN4114A name, and you will see the right-hand side window pane fill up with five separate tabs. The first tab is General. This is where you can change the gateway IP address, SIP port, and all the account details. If you change anything, you will need a new configuration file. 
So click the Generate config file button at the bottom of the screen. If you forgot to save the file previously, here's your chance to generate and save it again: On the Advanced tab, we have some Provider Capabilities. Leave these settings alone for now: We will leave the rest of the tabs for now. Go ahead and click the 10000 line information in the navigation pane on the left. These are the settings for that particular phone port (10000). The first group of settings that we can change is the authentication username and password. Remember, this is to register the line with 3CX and not to use the phone line. The next two sections are about what to do with an inbound call during Office Hours and Outside Office Hours. I didn't change anything from the gateway wizard but, on this screen, you can see that we selected Ring group 800 MainRingGroup. This is the Ring group that we configured previously. We also see similar drop-down boxes for Outside Office Hours. As no one will be in the office to answer the phone, I've selected a Digital Receptionist 801 DR1. In the section Other Options, the Outbound Caller ID box is used to enter what you would like to have presented to the outside world as caller ID information. If your phone carrier supports this, you can enter a phone number or a name. If the carrier does not support this, just leave it blank and talk to your carrier as to what you would require to have it assigned as your caller ID. The Allow outbound calls on this line and Allow incoming calls on this line checkboxes are used to limit calls in or out. Depending on your environment, you might want to leave one line selected as no outbound calls. This will always leave an incoming line for customers to call. Otherwise, unless you have other lines that they can call on, they will get a busy signal. Maximum simultaneous calls cannot be changed here as analog lines only support one call at a time. 
If you changed anything, click Apply and then go back and generate a new configuration file.

For the most up-to-date information on configuring your gateway, visit the 3CX site: http://www.3cx.com/voip-gateways/index.html

We will go over a summary of it here. Since nothing was changed, it is now time to configure the Patton device with the config file that we generated from the 3CX template. If you know the IP address of the device, go ahead and open a browser and navigate to that IP address. Mine would be http://192.168.2.10. If you do not know the IP address of your device, you will need the SmartNode discovery tool. The easiest place to get this tool is the CD that came with the device. You can also download it from http://www.3cx.com/downloads/misc/sndiscovery.zip, or search the Patton website for it. Go ahead and install the SmartNode discovery tool and run it. You will get a screen that tells you all the SmartNodes on your network with their IP address, MAC address, and firmware version. Double-click on the SmartNode to open the web interface in a browser. The default username is administrator, and the password field is left blank. Click Import/Export on the left and Import Configuration on the right. Click Browse to find the configuration file that we generated. Click Import and then Reload to restart the gateway with the new configuration. That's it! We can now receive incoming calls and make outbound calls.
Trunks using 3CX: Part 1

Packt
12 Feb 2010
11 min read
PSTN trunks

A Public Switched Telephone Network (PSTN) trunk is an old-fashioned analog, Basic Rate Interface (BRI) ISDN, or Primary Rate Interface (PRI) phone line. 3CX can use any of these with the correct analog-to-SIP gateway. Usually these come into your home or business through a pair of copper lines. Depending on where you live, this may be the only means of connecting 3CX and communicating outside of your network. One of the advantages of a PSTN line is reliability and great call quality. Unless the wires break, you will almost always have phone service. However, what about call quality? After all, many people would like a comparison between VoIP and PSTN. Analog hardware for BRI ISDN and PRIs will be discussed in greater detail in Chapter 9.

For using an analog PSTN line, you will need an FXO gateway. There are many external ones available. Until Sangoma introduced a new line at the end of 2008, there had not been any gateway which worked inside a Windows PC with 3CX. There are many manufacturers of analog gateways, such as Linksys, AudioCodes, Patton Electronics, GrandStream, and Sangoma. What these FXO gateways do is convert the analog phone line into IP signaling. Then the IP signaling gets passed over your network to the 3CX server and your phones. My personal preference is Patton Electronics. They are probably the most expensive FXOs out there, but in this case, you get what you pay for. I have tried all of them and they all work. Some have issues with echo, which can be hard to get rid of without support or lots of trial and error, and some cannot support high demands (40 calls/hour) without needing to be reset every day; so if you are just testing, get a low-end one. For a high-demand business, my preference is Patton. Not only do they make great products, but their support is top notch too. We will configure a Patton SmartNode SN4114 later in this article.

SIP trunks

What is a SIP trunk?
A SIP trunk is a call that is routed by IP over the Internet through an Internet Telephony Service Provider (ITSP). For enterprises wanting to make full use of their installed IP PBXs and communicate over IP not only within the enterprise but also outside it, a SIP trunk provided by an ITSP that connects to the traditional PSTN network is the solution. Unlike traditional telephony, where bundles of physical wires were once delivered from the service provider to a business, a SIP trunk allows a company to replace traditional fixed PSTN lines with PSTN connectivity via a SIP trunking service provider on the Internet. SIP trunks can offer significant cost savings for enterprises, eliminating the need for local PSTN gateways and costly ISDN BRIs or PRIs.

The following figure is an example of how our phone system operates:

You can see that we have a local area network containing our desktops, servers, phones, and our 3CX Phone System. To reach the outside world using a SIP trunk, we have to go through our firewall or router. Depending on your network, you could be using a private IP address (10.x.x.x, 172.16.x.x, or 192.168.x.x), which is not allowed on the public Internet, so it has to get translated to the public IP address. This translation process is called Network Address Translation (NAT). Once we get outside the local network, we are in the public realm. Our ITSP uses the Internet to get our phone call to/from the various carriers' PSTN (analog) lines, where our phone call is connected/terminated.

There are three components necessary to successfully deploy SIP trunks:

- A PBX with a SIP-enabled trunk side
- An enterprise edge device understanding SIP
- An Internet Telephony or SIP trunking service provider

The PBX

In most cases, the PBX is an IP-based PBX, communicating with all endpoints over IP. However, it may just as well be a traditional digital or analog PBX. The sole requirement is an interface for SIP trunking connectivity.
The enterprise border element

The PBX on the LAN connects to the ITSP via the enterprise border element. The enterprise edge component can either be a firewall with complete support for SIP, or an edge device connected to the firewall that handles traversal of the SIP traffic.

The ITSP

On the Internet, the ITSP provides connectivity to the PSTN for communication with mobile and fixed phones.

Choosing a VoIP carrier—more than just price

I feel the two most important features to look for when choosing a VoIP carrier are support and call quality. Usually, once everything is set up and working, you won't need support. I always tell clients that there is no "boxed" solution I can sell; every installation is a little different. Internet connections all differ, even with the same provider. If you have a rock-solid T1 or something better, this shouldn't be a problem, but DSL seems to vary from building to building, even in the same area.

So how do you test support before giving them your credit card? Call them! Try calling support at the worst times, such as Monday afternoons when everyone is back to work and online, and also after business hours. How long does it take to reach a live person, and can you understand them once you do? Where is their support located? Tell them you are thinking about signing up with their service and ask for help. If they go out of their way before they have your money, chances are they will be good to work with later on. Some carriers only offer chat or email support in exchange for lower prices. While this may work fine for your business, it certainly won't work for those who need answers right away. I know I seem to be stressing support a lot, but it's for good reason: if your business depends on phone service and it goes down, you need answers! I will pay more for a product if the support is worth it. Part of this is your Return On Investment (ROI).
For example, if you have 3 lawyers billing at $200/hour and they need phones to work, that's $600/hour of lost time. Does the extra $50 or $100 upfront cover that?

Now back to the topic at hand. Once you have connected 3CX to the carrier, how is the call quality? If it sounds like a bad cell phone, you probably don't want it, unless the price is so cheap that you can live with the low quality. Certain carriers even change the way your call gets routed through the Internet based on the lowest cost for that particular call. They don't care about quality as long as the call connects and they make money on it.

Concurrent calls are a feature you may want to look for when choosing an ITSP. Some accounts are a one-to-one ratio of lines per call: if you want 5 people on the phone at the same time (inbound or outbound), you pay for 5 lines, much like PSTN lines. You may see some savings here over a PSTN, but that depends on what is available in your area. Some ITSPs support concurrent calls, where you can carry more than one call per line. Not many carriers have this feature, but for a small business it can be a great cost saver. I use a couple of different carriers that offer it. One carrier lets you have 3 simultaneous calls on the same line; if you need more than 3 calls, you're a higher-use customer and they want you to buy several lines.

VoIP signaling uses special algorithms to compress your voice into IP packets. This compression uses a codec. There are several available, but the most common one is G.711 (u-law or A-law), which uses about 80 kbps of upload and download bandwidth per call. Another popular codec is G.729, which uses about 36 kbps. So for the same bandwidth, you can have roughly twice the number of calls using G.729 as with G.711. You will need to check with your ITSP to see which codecs they support.
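As a rough sanity check, the per-call bandwidth figures above let you estimate how many concurrent calls a given upload speed can carry. The helper below is our own illustration; the 80 kbps and 36 kbps values are the approximate per-call loads quoted above, not exact wire rates:

```python
# Rough concurrent-call capacity from the per-call codec bandwidth
# figures quoted above (approximations, excluding extra IP/RTP overhead).
CODEC_KBPS = {
    'G.711': 80,   # ~80 kbps per call, upload and download
    'G.729': 36,   # ~36 kbps per call
}

def max_concurrent_calls(upload_kbps, codec):
    """Estimate how many simultaneous calls fit in the given upload bandwidth."""
    return upload_kbps // CODEC_KBPS[codec]
```

For a 1 Mbps (1000 kbps) upload link, this estimates 12 G.711 calls or 27 G.729 calls; in practice, packet overhead reduces that, which is why about 10 G.711 calls per 1 Mbps is a safer planning figure.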
Another carrier I use bills purely on how much Internet bandwidth you have. If you have 1 Mbps of upload speed (usually the slowest part of your Internet connection), you can support about 10 simultaneous, or concurrent, calls using G.711. You then pay for the minutes you use. This works very well for a small office, as your monthly bill is very low and you don't have to maintain a bunch of lines that don't get used.

Cable Internet providers are also offering VoIP service to your home or business. These are usually single-use lines, but they terminate at your office with an FXS plug. To integrate this with 3CX, you will need an FXO, just as with a PSTN line: the same setup, but you get the advantages of a VoIP line.

Another great benefit of a SIP trunk is expandability. You can easily start out with one line, which can usually be provisioned in one day. As you grow you can add more, usually in minutes, since the plan is already set up. Time to consolidate lines? You can even drop them later without contracts (most of the time). Try doing that with the local phone company! Call them for a new business line and it can take 1-2 weeks to get set up, plus contracts to worry about. No wonder they are jumping on the VoIP bandwagon.

Disaster recovery

What do you do when your Internet goes down? Some of you might be saying, "Ha! It never goes down." In my experience, it will eventually, and at the worst time. So what do you do: go home for the day, or plan for a backup? Most VoIP carriers provide some kind of disaster recovery option. When they try to send you a call and can't get a connection to your 3CX box, they re-route the call to another phone number. This could be a PSTN line or even a cell phone. It can be a free feature, or there can be a small monthly fee on the account. It's worth having, especially if you rely on phones. Okay, so that covers inbound disaster recovery. What about outbound?
Yes, just about everyone has a cell phone these days. If that isn't enough, I'd suggest you invest in a pay-per-use PSTN line. This keeps the monthly cost very low, but it's there when you need it. Whether it's an emergency pizza order for that Friday afternoon party or a true emergency when someone panics and dials 911, you want that call to go out.

Speaking of emergency numbers, make sure you have your carrier register that phone number to your local address. Let's say you are in New York and you have a Californian phone number to give you some local presence in that part of the country. Your co-worker grabs his chest and falls down, and someone dials 911 from the closest phone they see. Emergency services see your Californian number and contact California for help for your New York office. That's not what you want when someone is clutching their chest, even if it turns out to be just heartburn from that pepperoni pizza.

Mixing VoIP and PSTN

Some of my clients even mix VoIP and PSTN together. Why mix? Local calls and inbound calls use the PSTN lines for the best call quality (and don't use any VoIP minutes, if you have to pay for those), while long distance calls use the cheaper-rate VoIP line. Another scenario is using PSTN lines for all your incoming and outgoing calls and using VoIP to talk to your other offices. Your own offices can live with slightly lower call quality, and management will appreciate the lower cost. These types of setups can be controlled using a dial plan.
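To make the mixed setup concrete, here is a toy dial-plan sketch in Python. The prefix rules (North American numbering, with "011" for international dialing) and the trunk names are our own assumptions for illustration only; a real 3CX dial plan is configured in the management console, not in code:

```python
def pick_trunk(dialed):
    """Toy least-cost routing: local calls go out over the PSTN for quality,
    while long distance and international calls use the cheaper VoIP trunk.

    Assumes North American numbering; purely illustrative.
    """
    digits = ''.join(ch for ch in dialed if ch.isdigit())
    if digits.startswith('011'):                       # international prefix
        return 'voip'
    if len(digits) == 11 and digits.startswith('1'):   # domestic long distance
        return 'voip'
    return 'pstn'                                      # local call
```

For example, `pick_trunk('1-555-123-4567')` selects the VoIP trunk, while a seven-digit local number stays on the PSTN line.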
Testing and Debugging in Grok 1.0: Part 2

Packt
11 Feb 2010
8 min read
Adding unit tests

Apart from functional tests, we can also create pure Python test cases that the test runner can find. While functional tests cover application behavior, unit tests focus on program correctness. Ideally, every single Python method in the application should be tested. The unit test layer does not load the Grok infrastructure, so tests should not take anything that comes with it for granted; just the basic Python behavior. To add our unit tests, we'll create a module named unit_tests.py. Remember, in order for the test runner to find our test modules, their names have to end with 'tests'. Here's what we will put in this file:

"""
Do a Python test on the app.

:unittest:
"""
import unittest

from todo.app import Todo


class InitializationTest(unittest.TestCase):

    todoapp = None

    def setUp(self):
        self.todoapp = Todo()

    def test_title_set(self):
        self.assertEqual(self.todoapp.title, u'To-do list manager')

    def test_next_id_set(self):
        self.assertEqual(self.todoapp.next_id, 0)

The :unittest: comment at the top is very important. Without it, the test runner will not know in which layer your tests should be executed, and will simply ignore them. Unit tests are composed of test cases, and in theory, each should contain several related tests based on a specific area of the application's functionality. The test cases use the TestCase class from the Python unittest module. In these tests, we define a single test case that contains two very simple tests. We are not getting into the details here. Just notice that the test case can include a setUp and a tearDown method, which can be used to perform any common initialization and destruction tasks needed to get the tests working and finishing cleanly. Every test inside a test case needs to have the prefix 'test' in its name, and we have exactly two tests that fulfill this condition.
Both of the tests need an instance of the Todo class to be executed, so we assign it as a class variable to the test case and create it inside the setUp method. The tests are very simple: they just verify that the default property values are set on instance creation. Both of the tests use the assertEqual method to tell the test runner that if the two values passed are different, the test should fail. To see them in action, we just run the bin/test command once more:

$ bin/test
Running tests at level 1
Running todo.FunctionalLayer tests:
  Set up in 2.691 seconds.
  Running:
.......2009-09-30 22:00:50,703 INFO sqlalchemy.engine.base.Engine.0x...684c PRAGMA table_info("users")
2009-09-30 22:00:50,703 INFO sqlalchemy.engine.base.Engine.0x...684c ()
  Ran 7 tests with 0 failures and 0 errors in 0.420 seconds.
Running zope.testing.testrunner.layer.UnitTests tests:
  Tear down todo.FunctionalLayer ... not supported
  Running in a subprocess.
  Set up zope.testing.testrunner.layer.UnitTests in 0.000 seconds.
  Ran 2 tests with 0 failures and 0 errors in 0.000 seconds.
  Tear down zope.testing.testrunner.layer.UnitTests in 0.000 seconds.
Total: 9 tests, 0 failures, 0 errors in 5.795 seconds

Now both the functional and unit test layers contain some tests, and both are run one after the other. We can see the subtotal for each layer at the end of its tests, as well as the grand total of nine passed tests when the test runner finishes its work.

Extending the test suite

Of course, we have just scratched the surface of which tests should be added to our application. If we continue to add tests, hundreds may be there by the time we finish. However, this article is not the place to do so. As mentioned earlier, it's much easier to have tests for each part of our application if we add them as we code. There's no hiding from the fact that testing is a lot of work, but there is great value in having a complete test suite for our applications.
More so when third parties might use our work product independently.

Debugging

We will now take a quick look at the debugging facilities offered by Grok. Even if we have a very thorough test suite, chances are that we will find a fair number of bugs in our application. When that happens, we need a quick and effective way to inspect the code as it runs and find the problem spots easily. Often, developers will place print statements at key lines throughout the code, in the hope of finding the problem spot. While this is usually a good way to begin locating sore spots in the code, we often need some way to follow the code line by line to really find out what's wrong. In the next section, we'll see how to use the Python debugger to step through the code and find the problem spots. We'll also take a quick look at how to do post-mortem debugging in Grok, which means jumping into the debugger to analyze program state immediately after an exception has occurred.

Debugging in Grok

For regular debugging, where we need to step through the code to see what's going on inside, the Python debugger is an excellent tool. To use it, you just have to add the next line at the point where you wish to start debugging:

import pdb; pdb.set_trace()

Let's try it out. Open the app.py module and change the add method of the AddProjectForm class (line 108) to look like this:

@grok.action('Add project')
def add(self, **data):
    import pdb; pdb.set_trace()
    project = Project()
    project.creator = self.request.principal.title
    project.creation_date = datetime.datetime.now()
    project.modification_date = datetime.datetime.now()
    self.applyData(project, **data)
    id = str(self.context.next_id)
    self.context.next_id = self.context.next_id + 1
    self.context[id] = project
    return self.redirect(self.url(self.context[id]))

Notice that we invoke the debugger at the beginning of the method. Now, start the instance, go to the 'add project' form, fill it up, and submit it.
Instead of seeing the new project view, the browser will stay at the 'add form' page and display the "waiting for..." message. This is because control has been transferred to the console for the debugger to act. Your console will look like this:

> /home/cguardia/work/virtual/grok1/todo/src/todo/app.py(109)add()
-> project = Project()
(Pdb)

The debugger is now active and waiting for input. Notice that the line number where debugging started appears right beside the path of the module where we are located. After the line number comes the name of the method, add(). Below that, the next line of code to be executed is shown. The debugger commands are simple. To execute the current line, type n:

(Pdb) n
> /home/cguardia/work/virtual/grok1/todo/src/todo/app.py(110)add()
-> project.creator = self.request.principal.title
(Pdb)

You can see the available commands if you type h:

(Pdb) h

Documented commands (type help <topic>):
========================================
EOF    break      condition  disable  help    list  q       step     w
a      bt         cont       down     ignore  n     quit    tbreak   whatis
alias  c          continue   enable   j       next  r       u        where
args   cl         d          exit     jump    p     return  unalias
b      clear      debug      h        l       pp    s       up

Miscellaneous help topics:
==========================
exec  pdb

Undocumented commands:
======================
retval  rv

(Pdb)

The list command is used for getting a bird's eye view of where in the code we are:

(Pdb) list
105
106         @grok.action('Add project')
107         def add(self, **data):
108             import pdb; pdb.set_trace()
109             project = Project()
110  ->         project.creator = self.request.principal.title
111             project.creation_date = datetime.datetime.now()
112             project.modification_date = datetime.datetime.now()
113             self.applyData(project, **data)
114             id = str(self.context.next_id)
115             self.context.next_id = self.context.next_id + 1
(Pdb)

As you can see, the current line is shown with an arrow.
It's possible to type the names of objects within the current execution context and find out their values:

(Pdb) project
<todo.app.Project object at 0xa0ef72c>
(Pdb) data
{'kind': 'personal', 'description': u'Nothing', 'title': u'Project about nothing'}
(Pdb)

We can, of course, continue stepping line by line through all of the code in the application, including Grok's own code, checking values as we proceed. When we are through reviewing, we can type c to return control to the browser. At this point, we will see the project view. The Python debugger is very easy to use, and it can be invaluable for finding obscure bugs in your code.
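Besides set_trace(), the pdb module also supports the post-mortem style mentioned earlier: instead of stepping from a known point, you open the debugger on the traceback of an exception that has already happened. The small helper below is our own sketch of the pattern, not part of Grok; in a real console session you would call pdb.post_mortem() on the captured traceback:

```python
import sys
import traceback

def capture_traceback(func, *args):
    """Run func; on an exception, return the traceback object that
    pdb.post_mortem() would let you inspect interactively."""
    try:
        func(*args)
        return None
    except Exception:
        tb = sys.exc_info()[2]
        # In an interactive session you would now run:
        #     import pdb; pdb.post_mortem(tb)
        return tb

def buggy_divide(a, b):
    # Deliberately crashes when b is zero.
    return a / b

tb = capture_traceback(buggy_divide, 1, 0)
```

Calling pdb.post_mortem(tb) drops you into the debugger at the frame where the exception was raised, with all local variables intact, so you can inspect the program state exactly as it was at the moment of the crash.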
Testing and Debugging in Grok 1.0: Part 1

Packt
11 Feb 2010
12 min read
Grok offers some tools for testing, and in fact, a project created by grokproject (like the one we have been extending) includes a functional test suite. In this article, we are going to discuss testing a bit and then write some tests for the functionality that our application has so far. Testing helps us avoid bugs, but of course it does not eliminate them completely. There are times when we will have to dive into the code to find out what's going wrong, and a good set of debugging aids becomes very valuable in this situation. We'll see that there are several ways of debugging a Grok application, and we will also try out a couple of them.

Testing

It's important to understand that testing should not be treated as an afterthought. As mentioned earlier, agile methodologies place a lot of emphasis on testing. In fact, there's even a methodology called Test Driven Development (TDD), which not only encourages writing tests for our code, but writing them before any other line of code. There are various kinds of testing, but here we'll briefly describe only two:

Unit testing
Integration or functional tests

Unit testing

The idea of unit testing is to break a program into its constituent parts and test each one of them in isolation. Every method or function call can be tested separately to make sure that it returns the expected results and handles all of the possible inputs correctly. An application with unit tests that cover the majority of its lines of code allows its developers to constantly run the tests after a change, making sure that modifications to the code do not break existing functionality.

Functional tests

Functional tests are concerned with how the application behaves as a whole. In a web application, this means how it responds to a browser request and whether it returns the expected HTML for a given call. Ideally, the customer himself has a hand in defining these tests, usually through explicit functionality requirements or acceptance criteria.
The more formal the requirements from the customer are, the easier it is to define appropriate functional tests.

Testing in Grok

Grok highly encourages the use of both kinds of tests, and in fact includes a powerful testing tool that is automatically configured with every project. In the Zope world, from which Grok originated, a lot of value is placed on a kind of test known as "doctests", so Grok comes with a sample test suite of this kind.

Doctests

A doctest is a test that's written as a text file, with lines of code mixed with explanations of what the code is doing. The code is written in a way that simulates a Python interpreter session. As tests exercise large portions of the code (ideally 100%), they usually offer a good way of finding out what an application does and how. So, if an application has no written documentation, its tests would be the next obvious place to find out what it does. Doctests take this idea further by allowing the developer to explain, in the text file, exactly what each test is doing. Doctests are especially useful for functional testing, because it makes more sense to document the high-level operations of a program. Unit tests, on the other hand, are expected to evaluate the program bit by bit, and it can be cumbersome to write a text explanation for every little piece of code.

A possible drawback of doctests is that they can make the developer think that no other documentation is needed for the project. In almost all cases, this is not true. Documenting an application or package makes it immediately more accessible and useful, so it is strongly recommended that doctests not be used as a replacement for good documentation. We'll show an example of using doctests in the Looking at the test code section of this article.

Default test setup for Grok projects

As mentioned above, Grok projects that are started with the grokproject tool already include a simple functional test suite by default. Let's examine it in detail.
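As a minimal illustration of the format, a doctest embeds interpreter-style sessions next to the prose that explains them; the runner executes each >>> line and compares the actual output against the expected lines. This snippet is our own example, not part of the todo application, and shows the same idea applied to a docstring rather than a text file:

```python
def add_task(tasks, title):
    """Append a task title and return the updated list.

    The lines below look like an interpreter session; the doctest
    runner executes them and fails if the real output differs.

    >>> add_task([], 'Buy milk')
    ['Buy milk']
    >>> add_task(['Buy milk'], 'Walk dog')
    ['Buy milk', 'Walk dog']
    """
    tasks.append(title)
    return tasks
```

Running this module through the doctest runner (for example, `python -m doctest thismodule.py`) executes both examples and reports any mismatch as a test failure.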
Test configuration

The default test configuration looks for packages or modules that have the word 'tests' in their name and tries to run the tests inside. For functional tests, any files ending with .txt or .rst are considered. Functional tests that need to simulate a browser require a special configuration telling Grok which packages to initialize in addition to the Grok infrastructure (usually the ones being worked on). The ftesting.zcml file in the package directory holds this configuration. It also includes a couple of user definitions that are used by certain tests to examine functionality specific to a certain role, such as manager.

Test files

Besides the already mentioned ftesting.zcml file, the same directory contains a tests.py file added by grokproject, which basically loads the ZCML declarations and registers all of the tests in the package. The actual tests included with the default project files are contained in the app.txt file. These are doctests that do a functional test run by loading the entire Grok environment and imitating a browser. We'll take a look at the contents of the file soon, but first let's run the tests.

Running the tests

As part of the project's build process, a script named test is included in the bin directory when you create a new project. This is the test runner; calling it without arguments finds and executes all of the tests in the packages included in the configuration. We haven't added a single test so far, so if we type bin/test in our project directory, we'll see more or less the same thing that doing so on a new project would show:

$ bin/test
Running tests at level 1
Running todo.FunctionalLayer tests:
  Set up in 12.319 seconds.
  Running:
...2009-09-30 15:00:47,490 INFO sqlalchemy.engine.base.Engine.0x...782c PRAGMA table_info("users")
2009-09-30 15:00:47,490 INFO sqlalchemy.engine.base.Engine.0x...782c ()
  Ran 3 tests with 0 failures and 0 errors in 0.465 seconds.
Tearing down left over layers:
  Tear down todo.FunctionalLayer ... not supported

The only difference between our output and that of a newly created Grok package is in the sqlalchemy lines. Of course, the most important part of the output is the penultimate line, which shows the number of tests that were run and whether there were any failures or errors. A failure means that some test didn't pass, which means that the code is not doing what it's supposed to do and needs to be checked. An error signifies that the code crashed unexpectedly at some point and the test couldn't even be executed, so it's necessary to find the error and correct it before worrying about the tests.

The test runner

The test runner program looks for modules that contain tests. The tests can be of three different types: Python tests, simple doctests, and full functionality doctests. To let the test runner know which kind of tests a test file includes, a comment similar to the following is placed at the top of the file:

Do a Python test on the app.

:unittest:

In this case, the Python unit test layer will be used to run the tests. The other value that we are going to use is "doctest", when we learn how to write doctests. The test runner then finds all of the test modules and runs them in the corresponding layer. Although unit tests are considered very important in regular development, we may find functional tests more necessary for a Grok web application, as we will usually be testing views and forms, which require the full Zope/Grok stack to be loaded in order to work. That's the reason why we find only functional doctests in the default setup.

Test layers

A test layer is a specific test setup which is used to differentiate the tests that are executed. By default, there is a test layer for each of the three types of tests handled by the test runner.
It's possible to run a test layer without running the others, and also to name new test layers in order to cluster together tests that require a specific setup.

Invoking the test runner

As shown above, running bin/test will start the test runner with the default options. It's also possible to specify a number of options; the most important ones are summarized below. Most options can be expressed in a short form (one dash) or a long form (two dashes), and arguments for each option are shown in uppercase.

-s PACKAGE, --package=PACKAGE, --dir=PACKAGE
Search the given package's directories for tests. This can be specified more than once, to run tests in multiple parts of the source tree. For example, when refactoring interfaces, you don't want to see how you have broken setups for tests in other packages; you just want to run the interface tests. Packages are supplied as dotted names. For compatibility with the old test runner, forward and backward slashes in package names are converted to dots. (In the special case of packages spread over multiple directories, only directories within the test search path are searched.)

-m MODULE, --module=MODULE
Specify a test-module filter as a regular expression. This is a case-sensitive regular expression, used in search (not match) mode, to limit which test modules are searched for tests. The regular expressions are checked against dotted module names. In an extension of Python regexp notation, a leading "!" is stripped and causes the sense of the remaining regexp to be negated (so "!bc" matches any string that does not match "bc", and vice versa). The option can specify multiple test-module filters; test modules matching any of the filters are searched. If no test-module filter is specified, all test modules are used.

-t TEST, --test=TEST
Specify a test filter as a regular expression. This is a case-sensitive regular expression, used in search (not match) mode, to limit which tests are run. The leading "!" negation described above also applies. The option can specify multiple test filters; tests matching any of the filters are included. If no test filter is specified, all tests are executed.

--layer=LAYER
Specify a test layer to run. The option can be given multiple times to specify more than one layer. If not specified, all layers are executed. It is common for the running script to provide default values for this option. Layers are specified as regular expressions, used in search mode, against the dotted names of objects that define a layer. The leading "!" negation described above also applies. The layer named 'unit' is reserved for unit tests; however, take note of the --unit and --non-unit options.

-u, --unit
Execute only unit tests, ignoring any layer options.

-f, --non-unit
Execute tests other than unit tests.

-v, --verbose
Make the output more verbose. Increment the verbosity level.

-q, --quiet
Make the output minimal by overriding any verbosity options.

Looking at the test code

Let's take a look at the three default test files of a Grok project, to see what each one does.

ftesting.zcml

As we explained earlier, ftesting.zcml is a configuration file for the test runner. Its main objective is to help us set up the test instance with users, so that we can test different roles according to our needs.
<configure
    i18n_domain="todo"
    package="todo">

  <include package="todo" />
  <include package="todo_plus" />

  <!-- Typical functional testing security setup -->
  <securityPolicy
      component="zope.securitypolicy.zopepolicy.ZopeSecurityPolicy" />

  <unauthenticatedPrincipal
      id="zope.anybody"
      title="Unauthenticated User" />
  <grant
      permission="zope.View"
      principal="zope.anybody" />

  <principal
      id="zope.mgr"
      title="Manager"
      login="mgr"
      password="mgrpw" />

  <role id="zope.Manager" title="Site Manager" />
  <grantAll role="zope.Manager" />
  <grant role="zope.Manager" principal="zope.mgr" />

</configure>

As shown in the preceding code, the configuration simply includes a security policy, complete with users and roles, and the packages that should be loaded by the instance in addition to the regular Grok infrastructure. If we run any tests that require an authenticated user, we'll use these special users. The includes at the top of the file just make sure that all of the Zope Component Architecture setup needed by our application is performed prior to running the tests.

tests.py

The default test module is very simple. It defines the functional layer and registers the tests for our package:

import os.path

import z3c.testsetup
import todo
from zope.app.testing.functional import ZCMLLayer

ftesting_zcml = os.path.join(
    os.path.dirname(todo.__file__), 'ftesting.zcml')
FunctionalLayer = ZCMLLayer(ftesting_zcml, __name__, 'FunctionalLayer',
                            allow_teardown=True)

test_suite = z3c.testsetup.register_all_tests('todo')

After the imports, the first line gets the path of the ftesting.zcml file, which is then passed to the ZCMLLayer layer definition. The final line in the module tells the test runner to find and register all of the tests in the package. This will be enough for our testing needs in this article, but if we needed to create another non-Grok package for our application, we would have to add a line like the last one to it, so that all of its tests are found by the test runner.
This is pretty much boilerplate code, as only the package name has to be changed.
Call Control using 3CX

Packt
11 Feb 2010
9 min read
Let's get started!

Ring groups

Ring groups are designed to direct calls to a group of extensions so that a person can answer the call. An incoming call will ring at several extensions at once, and whoever picks up the phone gets control of that call. At that point, he/she can transfer the call, send it to voicemail, or hang up.

Ring groups are my preferred call routing method. Does anyone really like those automated greetings? I don't. We will of course set those up, because they do have some great uses. However, if you like your customers to get a real live voice when they call, you have two choices: either direct the call to an extension, or use a ring group and have a few phones ring at once.

To create a ring group, we will use the 3CX web interface. There are several ways to do this. From the top toolbar menu, click Add | Ring Group. In the following screenshot, I chose Add | Ring Group:

The following screenshot shows another way of adding a ring group, using the Ring Groups section in the navigation pane on the left-hand side. Then click on the Add Ring Group button on the toolbar:

Once we click Add Ring Group, 3CX will automatically create a Virtual machine number for this ring group, as shown in the next screenshot. This helps the system keep track of calls and where they are. This number can be changed to any unused number that you like. As a reseller, I like to keep them the same from client to client. This creates some standardization among all the systems.

Now it's time to give the ring group a Name. Here I use MainRingGroup, as it lets me know that when a call comes in, it should go to the Main Ring Group. After you create the first one, you can make more, such as SalesRingGroup, SupportRingGroup, and so on.

We now have three choices for the Ring Strategy:

Prioritized Hunt: Starts hunting for a member from the top of the Ring Group Members list and works down until someone picks up the phone, or goes to the Destination if no answer section.
Ring All: All the phones in the Ring Group Members section ring at the same time, and the first person to pick up gets the call.

Paging: This is a paid feature that will open the speakerphone on the Ring Group Members' phones.

Now you will need to select your Ring Time (Seconds) to determine how long you want the phones to ring before giving up. The default ring time is 20 seconds, which all my clients agree is too long. I'd recommend 10-15 seconds, but remember: if no one picks up the phone, the caller goes to the next step, such as a Digital Receptionist. If the next step also makes the caller wait another 10-20 seconds, he/she may just hang up. You also need to be sure that you do not exceed the phone company's timeout for diverting calls to their voicemail (which could be turned off) or returning a busy signal.

Adding ring group members

Ring Group Members are the extensions that you would like the system to call or page in a ring group. If you select the Prioritized Hunt strategy, it will hunt from the top of the list and go down. Ring All and Paging will reach everyone at once. The listbox on the left shows a list of available extensions. Select the ones you want and click the Add button. If you are using Prioritized Hunt, you can change the order of the hunt by using the Up and Down buttons.

Destination if no answer

The last setting, as shown in the next screenshot, determines what to do when no one answers the call. The options are as follows:

End Call: Just drop the call, with no chance for the caller to talk to someone.

Connect to Extension: Ring the extension of your choice.

Connect to Queue / Ring Group: This sends the caller to a call queue (discussed later in the Call queues section) or to another ring group. A second ring group could be created for stage two that calls the same group plus additional extensions.

Connect to Digital Receptionist: As a person didn't pick up the call, we can now send it to an automated greeting/menu system.
Voicemail box for Extension: As the caller has already heard phones ringing, you may just want to put him/her straight to someone's voicemail. Forward to Outside Number: If you have had all the phones in the building ringing and no one has picked up, then you might want to send the caller to a different phone outside of your PBX system. Just make sure that you enter the correct phone number and any area codes that may be required. This will use another simultaneous call license and another phone line. If you have one line only, then this is not the option you can use. Digital Receptionist setup A Digital Receptionist (DR) is not a voicemail box; it's an automated greeting with a menu of choices to choose from. A DR will answer the phone for you if no one is available to answer the phone (directly to an extension or hunt group) or if it is after office hours. You need to set up a DR unless you want all incoming calls to go to someone's voicemail. You will also need it if you want to present the caller with a menu of options. Let's see how to create a DR. Recording a menu prompt The first thing you need to do in order to create a DR is record a greeting. There are a couple of ways to do this. However, first let's create the greeting script. In this greeting, you will be defining your phone menu; that is, you will be directing calls to extensions, hunts, agent groups, and the dial by name directory. Following is an example: Thank you for calling. If you know your party's extension, you may dial it at any time. Or else, please listen to the following options: For Rob, dial 1 For the sales group, dial 2 For Zachary, dial 4 Solicitors, please dial 8 For a dial by name directory, dial 9 I suggest having it written down. This makes it easier to record and also gives the person setting up the DR in 3CX a copy of the menu map. Now that you know what you want your callers to hear when they call, it's time to get it recorded so that we can import it into 3CX. 
You have a couple of options for recording the greeting script. It doesn't matter which option you use or how you obtain this greeting file, as long as the end format is correct. You can hire a professional announcer, put it to music, and obtain the file from him/her. Or you can record it yourself using any audio software you like, such as Windows Sound Recorder. The file needs to be a .wav or an .mp3 file saved in PCM, 8KHz, 16 bit, Mono format.

If you have only Windows Sound Recorder, I'd suggest that you try out Audacity. Audacity is an open source audio program available at http://audacity.sourceforge.net/. Audacity gives you a lot more power, such as controlling volume, combining several audio tracks (a music track to go with the announcer), using special effects, and many other cool audio tools. I'm not an expert in it, but the basics are easy to do.

First, hit the Audacity website and download it, then install it using the defaults. Now let's launch Audacity and set it up to use the correct file format, which will save us any issues later:

1. Start by clicking Edit | Preferences.
2. On the Quality tab, select the Default Sample Rate as 8000 Hz. Then change the Default Sample Format to 16-bit as shown in the following screenshot:
3. Now, on the File Formats tab, select WAV (Microsoft 16 bit PCM) from the drop-down list and click OK:

Now that those settings are saved, you can record your greeting without having to change any formats.

Now it's time to record your greeting. Click on the red Record button as shown in the following screenshot. It will now use your PC's microphone to record the announcer's voice, and when the recording is done, click on the Stop button. Press Play to hear it, and if you don't like it, start over again:

If you like the way your greeting sounds, then you will need to save it. Click File | Export As WAV... or Export As MP3.... Save it to a location that you will remember (for example, c:\3CX prompts is a good place) with a descriptive filename. While you are recording this greeting, you might as well record a few more if you have plans for creating multiple DRs:

Creating the Digital Receptionist

With your greeting script in hand, it's time to create your first DR. In the navigation pane on the left side, click Digital Receptionist, then click Add Digital Receptionist as shown in the following screenshot:

Or, on the top menu toolbar, click Add | Digital Receptionist:

Just like your ring group, the DR gets a Virtual extension number by default. Feel free to change it or stick with it. Give it a Name (I like to use the same name as the audio greeting filename). Now, click Browse... and then Add. Browse to your c:\3CX prompts directory and select your .wav or .mp3 file as shown in the following screenshot:

Next, we need to create the menu system as shown in the following screenshot. We have lots of options available. You can connect to an extension or ring group, transfer directly to someone's voicemail, end the call (my solicitors' option), or start the call by name feature (discussed in the Call by name setup section). At any time during playback, callers can dial the extension number; they don't have to hear all the options. I usually explain this in the DR recorded greeting.
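Getting the audio format wrong (say, 44.1KHz stereo instead of 8KHz mono) is a common reason a prompt sounds wrong or fails to import. If you ever want to sanity-check a .wav prompt before uploading it, the PCM/8KHz/16-bit/mono parameters live at fixed offsets in a canonical WAV header. The following Node.js sketch is illustrative only (it assumes a canonical 44-byte header with no extra chunks before the fmt chunk, and it is not part of 3CX):

```javascript
// Read the fmt chunk of a canonical WAV header and check it matches
// the prompt format described above: PCM, 8000 Hz, 16-bit, mono.
function checkPromptFormat(buf) {
  if (buf.toString('ascii', 0, 4) !== 'RIFF' || buf.toString('ascii', 8, 12) !== 'WAVE') {
    return { ok: false, reason: 'not a RIFF/WAVE file' };
  }
  // In a canonical header the "fmt " chunk starts at byte 12
  const audioFormat   = buf.readUInt16LE(20); // 1 = PCM
  const channels      = buf.readUInt16LE(22); // 1 = mono
  const sampleRate    = buf.readUInt32LE(24);
  const bitsPerSample = buf.readUInt16LE(34);
  const ok = audioFormat === 1 && channels === 1 &&
             sampleRate === 8000 && bitsPerSample === 16;
  return { ok, audioFormat, channels, sampleRate, bitsPerSample };
}

// Build a minimal valid header in memory just to demonstrate the check
const header = Buffer.alloc(44);
header.write('RIFF', 0); header.write('WAVE', 8); header.write('fmt ', 12);
header.writeUInt32LE(16, 16);   // fmt chunk size
header.writeUInt16LE(1, 20);    // PCM
header.writeUInt16LE(1, 22);    // mono
header.writeUInt32LE(8000, 24); // 8 KHz sample rate
header.writeUInt32LE(16000, 28);// byte rate = 8000 * 1 channel * 16/8
header.writeUInt16LE(2, 32);    // block align
header.writeUInt16LE(16, 34);   // 16-bit samples
header.write('data', 36);

console.log(checkPromptFormat(header).ok); // true
```

In practice you would pass `fs.readFileSync('greeting.wav')` instead of the synthetic buffer; files with extra metadata chunks would need a proper chunk walk rather than these fixed offsets.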
Implementing AJAX Grid using jQuery data grid plugin jqGrid

Packt
05 Feb 2010
In this article by Audra Hendrix, Bogdan Brinzarea, and Cristian Darie, authors of AJAX and PHP: Building Modern Web Applications 2nd Edition, we will discuss the usage of an AJAX-enabled data grid plugin, jqGrid.

One of the most common ways to render data is in the form of a data grid. Grids are used for a wide range of tasks, from displaying address books to controlling inventories and logistics management. Because centralizing data in repositories has multiple advantages for organizations, it wasn't long before a large number of applications were being built to manage data through Internet and intranet applications by using data grids.

But compared to their desktop cousins, online applications using data grids were less than stellar: they felt cumbersome and time consuming, were not always the easiest things to implement (especially when you had to control varying access levels across multiple servers), and from a usability standpoint, time lags during page reloads, sorts, and edits made online data grids a bit of a pain to use, not to mention the resources that all of this consumed.

As you are a clever reader, you have undoubtedly surmised that you can use AJAX to update the grid content; we are about to show you how to do it! Your grids can update without refreshing the page, cache data for manipulation on the client (rather than asking the server to do it over and over again), and change their looks with just a few keystrokes! Gone forever are the blinking pages of partial data and sessions that time out just before you finish your edits. Enjoy!

In this article, we're going to use a jQuery data grid plugin named jqGrid. jqGrid is freely available for private and commercial use (although your support is appreciated) and can be found at http://www.trirand.com/blog/. You may have guessed that we'll be using PHP on the server side, but jqGrid can be used with any of several server-side technologies.

On the client side, the grid is implemented using JavaScript's jQuery library and JSON. The look and style of the data grid will be controlled via CSS using themes, which make changing the appearance of your grid easy and very fast. Let's start looking at the plugin and see how easily your newly acquired AJAX skills enable you to quickly add functionality to any website. Our finished grid will look like the one in Figure 9-1:

Figure 9-1: AJAX Grid using jQuery

Let's take a look at the code for the grid and get started building it.

Implementing the AJAX data grid

The files and folders for this project can be obtained directly from the code download (chapter 9) for this article, or can be created by typing them in. We encourage you to use the code download to save time and for accuracy. If you choose to do so, there are just a few steps you need to follow:

1. Copy the grid folder from the code download to your ajax folder.
2. Connect to your ajax database and execute the product.sql script.
3. Update config.php with the correct database username and password.
4. Load http://localhost/ajax/grid to verify the grid works fine - it should look just like Figure 9-1.

You can test the editing feature by clicking on a row, making changes, and hitting the Enter key. Figure 9-2 shows a row in editing mode:

Figure 9-2: Editing a row

Code overview

If you prefer to type the code yourself, you'll find a complete step-by-step exercise a bit later in this article. Before then, though, let's quickly review what our grid is made of. We'll review the code in greater detail at the end of this article.
The editable grid feature is made up of a few components:

- product.sql is the script that creates the grid database
- config.php and error_handler.php are our standard helper scripts
- grid.php and grid.class.php make up the server-side functionality
- index.html contains the client-side part of our project
- The scripts folder contains the jQuery scripts that we use in index.html

Figure 9-3: The components of the AJAX grid

The database

Our editable grid displays a fictional database with products. On the server side, we store the data in a table named product, which contains the following fields:

- product_id: A unique number automatically generated by auto-increment in the database and used as the Primary Key
- name: The actual name of the product
- price: The price of the product for sale
- on_promotion: A numeric field that we use to store 0/1 (or true/false) values. In the user interface, the value is expressed via a checkbox

The Primary Key is defined as product_id; as it will be unique for each product, it is a logical choice. This field cannot be empty and is set to auto-increment as entries are added to the database:

CREATE TABLE product(
  product_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  name VARCHAR(50) NOT NULL DEFAULT '',
  price DECIMAL(10,2) NOT NULL DEFAULT '0.00',
  on_promotion TINYINT NOT NULL DEFAULT '0',
  PRIMARY KEY (product_id));

The other fields are rather self-explanatory—none of the fields may be left empty, and each field, with the exception of product_id, has been assigned a default value. The on_promotion field is set to tinyint, as it only needs to hold a true (1) or false (0) value; it will be shown in our grid as a checkbox that the user can simply set on or off.

Styles and colors

Leaving the database aside, it's useful to look at the more pertinent and immediate aspects of the application code so as to get a general overview of what's going on here.
We mentioned earlier that control of the look of the grid is accomplished through CSS. Looking at the index.html file's head region, we find the following code:

<link rel="stylesheet" type="text/css" href="scripts/themes/coffee/grid.css" title="coffee" media="screen" />
<link rel="stylesheet" type="text/css" media="screen" href="themes/jqModal.css" />

Several themes have been included in the themes folder; coffee is the theme being used in the code above. To change the look of the grid, you need only change the theme name to another theme, green, for example, to modify the color theme for the entire grid. Creating a custom theme is possible by creating your own images for the grid (following the naming convention of the existing images), collecting them in a folder under the themes folder, and changing this line to reflect your new theme name.

There is one exception here though, and it affects which buttons will be used. The buttons' appearance is controlled by imgpath: 'scripts/themes/green/images', found in index.html; you must alter this to reflect the path to the proper theme. Changing the theme name in two different places is error prone, and we should do this carefully. By using jQuery and a nifty trick, we will be able to define the theme as a simple variable: the CSS file is loaded dynamically based on the current theme, and imgpath is composed dynamically as well. The nifty trick involves dynamically creating the <link> tag inside head and setting the appropriate href attribute to the chosen theme. Changing the current theme then simply consists of changing the theme JavaScript variable.

jqModal.css controls the style of our pop-up or overlay window and is a part of the jqModal plugin. (Its functionality is controlled by the file jqModal.js found in the scripts/js folder.)
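Before moving on, the theme trick just described can be sketched as follows: define the theme once as a JavaScript variable, then derive both the stylesheet href and imgpath from it. The helper name themeLinkHtml below is our own, not part of jqGrid:

```javascript
// Define the theme in one place...
var theme = 'coffee';

// ...derive the image path the grid buttons need from it...
var imgpath = 'scripts/themes/' + theme + '/images';

// ...and build the <link> tag for the theme stylesheet dynamically.
function themeLinkHtml(theme) {
  return '<link rel="stylesheet" type="text/css" media="screen" ' +
         'href="scripts/themes/' + theme + '/grid.css" />';
}

// With jQuery loaded, this would be appended to the document head:
//   $('head').append(themeLinkHtml(theme));
console.log(themeLinkHtml(theme));
console.log(imgpath); // scripts/themes/coffee/images
```

Switching the whole grid to the green theme is now a one-line change to the theme variable, with both the stylesheet and imgpath following along.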
You can find the plugin and its associated CSS file at http://dev.iceburg.net/jquery/jqModal/.

In addition, in the head region of index.html, there are several script src declarations for the files used to build the grid (and jqModal.js for the overlay):

<script src="scripts/jquery-1.3.2.js" type="text/javascript"></script>
<script src="scripts/jquery.jqGrid.js" type="text/javascript"></script>
<script src="scripts/js/jqModal.js" type="text/javascript"></script>
<script src="scripts/js/jqDnR.js" type="text/javascript"></script>

There are a number of files that are used to make our grid function, and we will talk about these scripts in more detail later. Looking at the body of our index page, we find the declaration of the table that will house our grid, and the code for getting the grid on the page and populated with our product data:

<script type="text/javascript">
var lastSelectedId;
$('#list').jqGrid({
  url: 'grid.php', // name of our server-side script
  datatype: 'json',
  mtype: 'POST', // specifies whether we are using POST or GET
  // define the columns the grid should expect to use (table columns)
  colNames: ['ID', 'Name', 'Price', 'Promotion'],
  // define the data of each column, and whether the data is editable
  colModel: [
    {name:'product_id', index:'product_id', width:55, editable:false},
    // text data that is editable gets defined
    {name:'name', index:'name', width:100, editable:true,
     edittype:'text', editoptions:{size:30, maxlength:50}},
    // editable currency
    {name:'price', index:'price', width:80, align:'right',
     formatter:'currency', editable:true},
    // T/F checkbox for on_promotion
    {name:'on_promotion', index:'on_promotion', width:80,
     formatter:'checkbox', editable:true, edittype:'checkbox'}
  ],
  // define how pages are displayed and paged
  rowNum: 10,
  rowList: [5,10,20,30],
  imgpath: 'scripts/themes/green/images',
  pager: $('#pager'),
  sortname: 'product_id', // initially sorted on product_id
  viewrecords: true,
  sortorder: "desc",
  caption: "JSON Example",
  width: 600,
  height: 250,
  // what we will display based on whether a row is selected
  onSelectRow: function(id){
    if(id && id !== lastSelectedId){
      $('#list').restoreRow(lastSelectedId);
      $('#list').editRow(id, true, null, onSaveSuccess);
      lastSelectedId = id;
    }
  },
  // what to call for saving edits
  editurl: 'grid.php?action=save'
});

// indicate if/when the save was successful
function onSaveSuccess(xhr){
  response = xhr.responseText;
  if(response == 1)
    return true;
  return false;
}
</script>
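Because the grid declares datatype: 'json', grid.php must return each page of products in the structure jqGrid's default JSON reader expects. A sketch of one page of that response is shown below; the product values are invented for illustration, and on the PHP side the array would be built from the product table and emitted with json_encode():

```javascript
// Sketch of the JSON structure jqGrid's default reader expects from
// grid.php. The two products shown are hypothetical sample rows.
const response = {
  page: 1,      // the page of data being returned
  total: 2,     // total number of pages available
  records: 15,  // total number of rows in the product table
  rows: [
    // one object per grid row: an id, plus a cell array whose entries
    // follow the colModel order (product_id, name, price, on_promotion)
    { id: 1, cell: [1, 'Santa Claus suit', '55.99', 1] },
    { id: 2, cell: [2, 'Medieval flags', '31.99', 0] }
  ]
};

// This is the serialized form the grid actually receives over the wire:
console.log(JSON.stringify(response));
```

If your server uses different key names, jqGrid's jsonReader option can remap them, but sticking to the default shape keeps grid.php simple.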

Advanced Blog Management with Apache Roller 4.0: Part 1

Packt
03 Feb 2010
So let's get on with it.

Managing group blogs

Suddenly, your boss bursts into your office and shouts: "Well, let's give Roller a try for our company's blog!" And now, you have to enable group blogs in your Roller installation.

Time for action – creating another user

The first thing you need to do in order to enable group blogging is create another user, as shown in the following exercise:

1. Open your web browser and type your Roller's dynamic hostname in the Address bar (for example, mine is http://alromero.no-ip.org). Now click on the Login link on your weblog's main page:
2. The Welcome to Roller page will appear. Instead of logging in, click on the Register link in order to create a new user:
3. The New User Registration screen will show up next. Fill in the fields for your new user, as shown in the following screenshot:
4. Click on Register User when finished. If all goes well, you'll be taken back to the Welcome to Roller screen, and the following success message will appear:
5. Select the Click here link to continue. Type your new Username and Password, and click on the Login button. The Main Menu page will appear:
6. Click on the Create new weblog link under the Actions panel. Roller will take you to the Create Weblog page. Fill in the required fields to create your new weblog. Use the following data for the Name, Description, and Handle fields:
7. The Email Address field will already contain the e-mail address you used when creating your new user. Leave the default values for Locale, Timezone, and Theme, and click on the Create Weblog button to continue. The following page will appear, indicating that your weblog was successfully created:
8. Now click on the New Entry link in order to create the following new entry in your weblog:
9. Scroll down the page and click on the Post to Weblog button to post your entry.

What just happened?

Well, now there's another user in your Roller server, how about that?
Your boss is going to be proud of you and very happy, because your company will have a multiuser blog! The next step is to invite other people to create user accounts and weblogs in the Roller blog server. If you're using Roller in your office, just start spreading the word to your colleagues. Or, if you're experimenting with Roller at home, you can invite some friends to blog with you, create a family group blog, and so on.

Have a go hero – inviting members to write in your weblog

Now that you've learned how other people can register and get a user account in Roller, it would be a good idea to start exploring the Preferences: Members page, where you can invite other Roller users to collaborate in your weblog by posting entries. Roller has three user levels:

- Administrator: Can create/edit weblog entries and publish them in your weblog. An administrator can also change the Roller theme and templates, and manage weblog users.
- Author: Can create/edit weblog entries and upload files, but cannot change themes or templates, and cannot manage users.
- Limited: Can create/edit entries and save them as drafts, but cannot publish them.

Go on and create several test user accounts, and try out the three Roller user levels by inviting the test users to collaborate in your weblog. To invite a user, use the Invite new member link under the Actions panel on the Preferences: Members page.

Enabling a front page blog

Up until now, you've been using your main weblog as the front page for your Roller blog server. Now that you've enabled group blogging, each user can promote his/her weblog(s) individually, or you can create a community front page to show recent posts from all of your users' weblogs. The next exercise will show you how to create and use a front page blog to show posts from all the other weblogs in your Roller blog server.
Time for action – enabling a front page blog

In this exercise, we're going to create a new weblog to serve as the front page of your entire Roller weblog server. The front page blog will show a list of recent entries from all your other weblogs, and from all the other users' weblogs in your Roller blog server.

1. Log into Roller (in case you're not already logged in) with your administrator account, go to the Main Menu page, and then click on the Create new weblog link under the Actions panel:
2. Type My Roller Community in the Name field, The best Roller blog community in the Description field, and frontpage in the Handle field:
3. Scroll down the page until you locate the Theme field, select the Frontpage theme, and click on the Create Weblog button:
4. The following page will appear, indicating that your frontpage weblog was created successfully:
5. Now click on the Server administration link located in the Actions panel. The Roller Configuration page will show up. Scroll down until you locate the Handle of weblog to serve as frontpage blog field, and replace its contents with frontpage. Then click on the Enable aggregated site-wide frontpage option to enable it:
6. Scroll down the page until you locate the Save button and click on it to save your changes. Now click on the Front Page link in Roller's menu bar:

Rendering Images in TYPO3 4.3: Part 1

Packt
03 Feb 2010
Rendering images using content elements

Content elements offer a variety of ways for editors to include images. We will examine these here. Here is a typical selection menu that an editor is presented with:

A great way to start is to assemble pages from the Regular text element and the Text with image elements.

Getting Ready

Make sure Content (default) is selected in the Include static field, and the CSS Styled Content template is included in the Include static (from extensions) field of the template record of the current page or any page above it in the hierarchy (page tree). To verify, go to the Template module, select the appropriate page, and click edit the whole template record.

How to do it...

1. Create the Text with image element.
2. Under the Text tab, enter the text you want to appear on the page. You can use the RTE (Rich Text Editor) to apply formatting, or disable it. We will cover the RTE in more detail later in this article.
3. Under the Media tab, select your image settings. If you want to upload the image, use the first field. If you want to use an existing image, use the second field.
4. Under Position, you are able to select where the image will appear in relation to the text.

How it works...

When the page is rendered in the frontend, the images will be placed next to the text you entered, in the position that you specify. The specific look will depend on the template that you are using.

There's more...

An alternative to Text with image is an Images only content element. This element gives you similar options, except that it limits them to just a display of images. The rest of the options are the same. You can also resize the image; add a caption and alt tags for accessibility and search engine optimization; and change the default processing options. See the official TYPO3 documentation for details of how these fields work (http://typo3.org/documentation/document-library/).

See also

- Render video and audio using content elements and the rgmediaimages extension

Embedding images in RTE

The Rich Text Editor is great for text entry. By default, TYPO3 ships with htmlArea RTE as a system extension. Other editors are available and can be installed if needed. Images can be embedded and manipulated within the RTE. This gives content editors one place in which to arrange content how they want it to appear at the frontend of the site. In this recipe, we will see how this can be accomplished. The instructions apply to all forms that have RTE-enabled fields, but we will use the text content element for a simple demonstration.

In the Extension Manager, click on the htmlArea RTE extension to bring up its options. Make sure that the Enable images in the RTE [enableImages] setting is enabled. If you have a recent version of DAM installed (at least 1.1.0), make sure that the Enable the DAM media browser [enableDAMBrowser] setting is unchecked. This setting is deprecated, and is there for installations using older versions of DAM.

How to do it...

1. Create a new Regular text element content element.
2. In the RTE, click on the icon to insert an image as shown in the following screenshot:
3. Choose a file, and click on the icon to insert it into the Text area. You should see the image as it will appear at the frontend of the site.
4. Save and preview. The output should appear similar to the following screenshot:

How it works...

When you insert an image through the RTE, the image is copied to the uploads folder, and included from there. The new file will be resampled and sized down, so it usually occupies less space and is downloaded faster than the original file. TYPO3 will automatically determine if the original file has changed, and update the file used in the RTE—but you should still be aware of this behaviour. Furthermore, if you have DAM installed and you have included an image from DAM, you can see the updated record usage.
If you view the record information, you should see the Content Element where the image is used:
Advanced Blog Management with Apache Roller 4.0: Part 3

Packt
03 Feb 2010
Weblog clients

There are times when logging into your Roller weblog to post a new entry can be a tedious process, especially when you have two or more weblogs about different subjects. Let's say that you have to write stuff in your company's blog, and you also write in your personal Roller blog. You can open two web browser windows and log into each blog separately, but it would be better to use a weblog client, as I'll show you in the next exercise.

Time for action – using Google Docs as your weblog client

In this exercise, you'll learn to use Google Docs as your weblog client to post entries in your Roller weblogs without having to log in:

1. Open your web browser, go to http://docs.google.com, and log in with your username and password (if you don't have a Google account, this is your chance to get one!). Then click on the New button and select the Document option:
2. Your browser will open a new tab for the new Google Docs document. Type This is my first post to my Roller weblog from Google Docs! in the word processor writing area, as shown in the following screenshot:
3. Now click on the File menu and select the Save option to save your draft in Google Docs:
4. Google Docs assigns the title for your document automatically, based on its content. To change the title of your post, click on it:
5. Type Posting to Roller from Google Docs in the dialog that will show up next, and click on OK to continue:
6. Google Docs will show the new title for your post:
7. Now click on the Share button and select the Publish as web page option:
8. The Publish this document dialog will appear. Click on the change your blog site settings link to enter your Roller weblog information:
9. The Blog Site Settings dialog will appear next. Choose the My own server / custom option and select MetaWeblog API in the API field. In the URL field, you need to type the complete path to Roller's web services—http://alromero.no-ip.org/roller/roller-services/xmlrpc, in my case. You just need to replace the alromero.no-ip.org part with your own dynamic hostname. Then type your Roller username, password, and weblog name, and select the Include the document title when posting option, as shown in the following screenshot:
10. Click on the OK button to save your weblog settings, and then click on the Post to blog button in the Publish this document dialog:
11. A confirmation dialog will pop up, asking if you want to post the document to your blog now. Click on OK to continue:
12. Google Docs will show the This document has been published to your blog success message:
13. Click on the Save & Close button at the upper-right part of the screen to save your document and return to the Google Docs main page, then click on Sign out to exit Google Docs.
14. Now go to your Roller weblog's main page to see the post you published from Google Docs:

What just happened?

See how easy it is to use a weblog client, so that you don't need to log into your Roller weblog to post a new entry? And if you want to post to a different Roller weblog, you just need to change your username, blog ID, or URL. There are several other weblog clients available that you can use, depending on your operating system, but all weblog clients work in a similar way.

Have a go hero – try out other weblog clients

Go and try out some other weblog clients to see which one is best for you. On Windows, you can use Windows Live Writer (http://download.live.com/writer) and w.bloggar (http://bloggar.com/). On Linux, you can try out BloGTK (https://launchpad.net/blogtk/).
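Under the hood, the Blog Site Settings you entered make the weblog client call Roller's /roller-services/xmlrpc endpoint using the MetaWeblog API over XML-RPC. The sketch below shows roughly what a metaWeblog.newPost request body looks like; the blog handle, credentials, and content are placeholders, and the helper function is our own illustration, not part of Roller or Google Docs:

```javascript
// Build a metaWeblog.newPost XML-RPC request body by hand.
// Parameters, in order: blog handle, username, password, content struct,
// and a publish flag. All values here are placeholders.
function newPostXml(blogHandle, username, password, title, description, publish) {
  const esc = s => String(s)
    .replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');
  return [
    '<?xml version="1.0"?>',
    '<methodCall>',
    '  <methodName>metaWeblog.newPost</methodName>',
    '  <params>',
    '    <param><value><string>' + esc(blogHandle) + '</string></value></param>',
    '    <param><value><string>' + esc(username) + '</string></value></param>',
    '    <param><value><string>' + esc(password) + '</string></value></param>',
    '    <param><value><struct>',
    '      <member><name>title</name><value><string>' + esc(title) + '</string></value></member>',
    '      <member><name>description</name><value><string>' + esc(description) + '</string></value></member>',
    '    </struct></value></param>',
    '    <param><value><boolean>' + (publish ? 1 : 0) + '</boolean></value></param>',
    '  </params>',
    '</methodCall>'
  ].join('\n');
}

const body = newPostXml('frontpage', 'alromero', 'secret',
  'Posting to Roller from Google Docs',
  'This is my first post to my Roller weblog from Google Docs!', true);
console.log(body);
```

A client POSTs this body to the XML-RPC URL you configured, which is why that URL plus your username and password are all a weblog client needs.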

Advanced Blog Management with Apache Roller 4.0: Part 2

Packt
03 Feb 2010
Enabling weblog pings

Now that you have a Technorati account, let's enable your Roller weblog so that it can ping Technorati automatically each time you post a new entry or edit a previously posted entry.

Time for action – enabling automatic pings in your weblog

This exercise will show you how to enable automatic pinging in your weblog, so that every time you post a new entry or update an entry you posted before, Technorati will receive a ping and will update your blog status:

1. Go to the Front Page: Weblog Settings tab in your web browser, click on the Preferences tab to see your weblog's configuration page, and click on the Pings link:
2. The Configure Automatic Weblog Pings page will appear next. Scroll down the page until you locate the Technorati row under the Common Ping Targets section:
3. Click on Technorati's Enable link to enable automatic pinging for your weblog, so it can send automatic pings to Technorati:
4. Click on the Send Ping Now button to test whether everything works correctly. Roller will show the following success message:
5. Now you just have to wait until Technorati grabs your blog's most recent information, as shown in the following screenshot:

What just happened?

Now Technorati will keep your weblog information updated every time you post a new entry in your weblog. Once you register with an aggregator, it's very easy to configure automatic pinging in Roller, as you saw in the previous exercise. Now all you need to do is configure all the pings you can to other aggregators and blog search engines, so that people from everywhere can see your weblog!

Have a go hero – configure more ping targets

Now that you have learned how to configure automatic pings to Technorati for your Roller weblog, check out the other ping targets available in the Common Ping Targets list. Go on and enable all the ping targets that you can in order to promote your weblog in all the available blog search engines and aggregators. You can also register with Digg, StumbleUpon, and the other popular aggregators/blog search engines, and add new ping targets for them if you click on the Custom Ping Targets link on the Configure Automatic Weblog Pings page. So what are you waiting for? Go and promote your new Roller weblog!

Google webmaster tools

Now that you have a cool weblog, it would be great if it showed up in Google every time someone searches for a subject related to the things you're writing about, don't you think? That's why Google invented the webmaster tools—a great resource to help you find out how your weblog is interacting with the Google bot. With these tools you can get detailed information about broken links, popular keywords, and basically all the stuff you need to have a successful weblog!

Time for action – enabling Google webmaster tools

This exercise will show you how to configure Google webmaster tools for your Roller weblog, so you can start receiving important information about visitors and how your weblog interacts with Google:

1. Open your web browser and type https://www.google.com/webmasters/tools to go to the Google webmaster tools website:
2. If you created a Gmail account when installing Roller, you can use it to sign in to Google webmaster tools. Or you can create a new Gmail account in case you don't have one already. Click on the Sign In button when ready.
3. The Google webmaster tools Home page will appear next. Click on the Add a Site button at the bottom to add your Roller weblog:
4. Now enter your weblog's URL in the pop-up box and click on Continue:
5. Google will show you a meta tag that you need to copy and paste into your Roller weblog. Select the meta tag, right-click on it, and click on Copy:
6. Now open a new tab in your web browser, log into your Roller weblog, and go to the Design tab. The Weblog Theme page will appear. Click on the Custom Theme option and then on the Update Theme button:
7. Roller will show the Successfully set theme to – custom message. Click on the Templates link and then select the Weblog template:
8. Scroll down the page until you locate the </head> HTML tag and paste the Google webmaster tools meta tag right before it, as shown in the following screenshot:
9. Scroll down the page until you locate the Save button and click on it. Roller will show the Template updated successfully message.
10. Return to the Google webmaster tools tab in your web browser and click on the Verify button to verify your weblog:
11. If all goes well, Google will verify your weblog and take you to the Dashboard:
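For the curious, the automatic pings configured earlier are XML-RPC calls as well: Roller sends a weblogUpdates.ping request carrying your blog's name and URL to each enabled ping target. A minimal sketch of that payload follows; the blog name and URL are taken from the examples in this article, and the exact details each ping target accepts may vary, so treat this as illustrative:

```javascript
// Sketch of a weblogUpdates.ping XML-RPC payload, the kind of call a
// blog server makes to each enabled ping target when an entry is
// posted or updated. The blog name and URL below are example values.
function pingXml(blogName, blogUrl) {
  return '<?xml version="1.0"?>' +
    '<methodCall><methodName>weblogUpdates.ping</methodName><params>' +
    '<param><value><string>' + blogName + '</string></value></param>' +
    '<param><value><string>' + blogUrl + '</string></value></param>' +
    '</params></methodCall>';
}

const payload = pingXml('My Roller Community',
                        'http://alromero.no-ip.org/roller/frontpage');
console.log(payload);
```

The ping target answers with a small XML-RPC response indicating whether the ping was accepted, which is what Roller reports after you click Send Ping Now.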