How-To Tutorials - Web Development

1802 Articles

Building a Simple Address Book Application with jQuery and PHP

Packt
19 Feb 2010
14 min read
Let's get started. The application folder will be made up of five files:

addressbook.css
addressbook.html
addressbook.php
addressbook.js
jquery.js

addressbook.css contains the CSS for the interface styling, addressbook.html contains the HTML source, addressbook.js contains the JavaScript code, and addressbook.php contains the server-side code that stores contacts in the database, deletes contacts, provides updates, and fetches the list of contacts.

Let's look through the HTML. We include the scripts and the CSS file in the head tag of the addressbook.html file:

<title>sample address book</title>
<link rel="stylesheet" type="text/css" href="addressbook.css">
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" src="addressbook.js"></script>

The code above includes the CSS for styling the application, the jQuery library for cross-browser JavaScript and easy DOM access, and addressbook.js, which contains the functions that translate user actions into JavaScript and AJAX calls. The body tag should contain this:

<div id="Layer1">
  <h1>Simple Address Book</h1>
  <div id="addContact">
    <a href="#add-contact" id="add-contact-btn">Add Contact</a>
    <table id="add-contact-form">
      <tr>
        <td>Names:</td><td><input type="text" name="names" id="names" /></td>
      </tr>
      <tr>
        <td>Phone Number:</td><td><input type="text" name="phone" id="phone" /></td>
      </tr>
      <tr>
        <td>&nbsp;</td>
        <td>
          <a href="#save-contact" id="save-contact-btn">Save Contact</a>
          <a href="#cancel" id="cancel-btn">Cancel</a>
        </td>
      </tr>
    </table>
  </div>
  <div id="notice">
    notice box
  </div>
  <div id="list-title">My Contact List</div>
  <ul id="contacts-lists">
    <li>mambe nanje [+23777545907] - <a href="#delete-id" class="deletebtn" contactid='1'> delete contact </a></li>
    <li>mambe nanje [+23777545907] - <a href="#delete-id" class="deletebtn" contactid='2'> delete contact</a></li>
    <li>mambe nanje [+23777545907] - <a href="#delete-id" class="deletebtn" contactid='3'> delete contact</a></li>
  </ul>
</div>

The above code creates an HTML form that provides input fields for new address book entries, together with a button that shows the form via JavaScript. It also creates a notification div and displays the contact list, with a delete link on each entry.
With the above markup in place, we can style the interface. The following goes in addressbook.css:

/* CSS Document */
body { background-color: #000000; }
#Layer1 { margin: auto; width: 484px; height: 308px; z-index: 1; }
#add-contact-form { color: #FF9900; font-weight: bold; font-family: Verdana, Arial, Helvetica, sans-serif; background-color: #333333; margin-top: 5px; padding: 10px; }
#add-contact-btn { background-color: #FF9900; font-weight: bold; font-family: Verdana, Arial, Helvetica, sans-serif; border: 1px solid #666666; color: #000; text-decoration: none; padding: 2px; }
#save-contact-btn { background-color: #FF9900; font-weight: bold; font-family: Verdana, Arial, Helvetica, sans-serif; border: 1px solid #666666; color: #000; text-decoration: none; padding: 2px; }
#cancel-btn { background-color: #FF9900; font-weight: bold; font-family: Verdana, Arial, Helvetica, sans-serif; border: 1px solid #666666; color: #000; text-decoration: none; padding: 2px; }
h1 { color: #FFFFFF; font-family: Arial, Helvetica, sans-serif; }
#list-title { color: #FFFFFF; font-weight: bold; font-size: 14px; font-family: Arial, Helvetica, sans-serif; margin-top: 10px; }
#contacts-lists { color: #FF6600; font-weight: bold; font-family: Verdana, Arial, Helvetica, sans-serif; font-size: 12px; }
#contacts-lists a { background-color: #FF9900; text-decoration: none; padding: 2px; color: #000; margin-bottom: 2px; }
#contacts-lists li { list-style: none; border-bottom: 1px dashed #666666; margin-bottom: 10px; padding-bottom: 5px; }
#notice { width: 400px; margin: auto; background-color: #FFFF99; border: 1px solid #FFCC99; font-weight: bold; font-family: verdana; margin-top: 10px; padding: 4px; }

The CSS code styles the HTML above. Now that we have our HTML and CSS working, we need to set up the database and the PHP server code that will handle the AJAX requests from the jQuery functions. Create a MySQL database, then execute the following SQL to create the contacts table; this is the only table the application needs:

CREATE TABLE `contacts` (
  `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  `names` VARCHAR(200) NOT NULL,
  `phone` VARCHAR(100) NOT NULL
);

Let's analyse the PHP code. Remember, this code will be located in addressbook.php.
The database connection code:

# FileName="Connection_php_mysql.htm"
# Type="MYSQL"
# HTTP="true"
//configure the database parameters
$hostname_packpub_addressbook = "YOUR-DATABASE-HOST";
$database_packpub_addressbook = "YOUR-DATABASE-NAME";
$username_packpub_addressbook = "YOUR-DATABASE-USERNAME";
$password_packpub_addressbook = "YOUR-DATABASE-PASSWORD";
//connect to the database server
$packpub_addressbook = mysql_pconnect($hostname_packpub_addressbook, $username_packpub_addressbook, $password_packpub_addressbook) or trigger_error(mysql_error(), E_USER_ERROR);
//select the database
mysql_select_db($database_packpub_addressbook);

The above code sets the parameters required for the database connection, establishes a connection to the server, and selects your database. addressbook.php then defines the functions saveContact, deleteContact, and getContacts. These functions do exactly what their names imply: save a contact sent by an AJAX call to the database, delete a contact via an AJAX request, or fetch the list of contacts. The functions are shown below:

//function to save new contact
/**
 * @param <string> $name //name of the contact
 * @param <string> $phone //the telephone number of the contact
 */
function saveContact($name, $phone){
    $sql = "INSERT INTO contacts (names, phone) VALUES ('".$name."','".$phone."');";
    $result = mysql_query($sql) or die(mysql_error());
}

//lets write a function to delete contact
/**
 * @param <int> id //the contact id in database we wish to delete
 */
function deleteContact($id){
    $sql = "DELETE FROM contacts where id=".$id;
    $result = mysql_query($sql);
}

//lets get all the contacts
function getContacts(){
    //execute the sql to get all the contacts in db
    $sql = "SELECT * FROM contacts";
    $result = mysql_query($sql);
    //store the contacts in an array of objects
    $contacts = array();
    while($record = mysql_fetch_object($result)){
        array_push($contacts, $record);
    }
    //return the contacts
    return $contacts;
}

The code above defines the functions, but they are not called until the following code executes:

//lets handle the Ajax calls now
$action = $_POST['action'];
//the action for now is either add or delete
if($action == "add"){
    //get the post variables for the new contact
    $name = $_POST['name'];
    $phone = $_POST['phone'];
    //save the new contact
    saveContact($name, $phone);
    $output['msg'] = $name." has been saved successfully";
    //reload the contacts
    $output['contacts'] = getContacts();
    echo json_encode($output);
}else if($action == "delete"){
    //collect the id we wish to delete
    $id = $_POST['id'];
    //delete the contact with that id
    deleteContact($id);
    $output['msg'] = "one entry has been deleted successfully";
    //reload the contacts
    $output['contacts'] = getContacts();
    echo json_encode($output);
}else{
    $output['contacts'] = getContacts();
    $output['msg'] = "list of all contacts";
    echo json_encode($output);
}

The above code is the heart of addressbook.php.
It reads the action from the POST variables sent by the AJAX call in addressbook.js, interprets it, and executes the appropriate function: add, delete, or neither, in which case it simply returns the list of contacts. The json_encode() function encodes the response data into JavaScript Object Notation (JSON) format so that it can be easily interpreted by the JavaScript code.
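A note on the database code: the mysql_* functions used above are long deprecated and were removed entirely in PHP 7, and building SQL strings by concatenating raw POST values leaves the script open to SQL injection. Below is a minimal sketch of how saveContact() and deleteContact() could be rewritten with mysqli prepared statements; the $mysqli connection object is an assumption for illustration and is not part of the original application.

<?php
// Hedged sketch, not from the original article: assumes $mysqli is a mysqli
// connection created elsewhere, e.g. new mysqli($host, $user, $pass, $db).
function saveContact(mysqli $mysqli, $name, $phone) {
    // placeholders keep user input out of the SQL string
    $stmt = $mysqli->prepare("INSERT INTO contacts (names, phone) VALUES (?, ?)");
    $stmt->bind_param("ss", $name, $phone);
    $stmt->execute();
    $stmt->close();
}

function deleteContact(mysqli $mysqli, $id) {
    $stmt = $mysqli->prepare("DELETE FROM contacts WHERE id = ?");
    $stmt->bind_param("i", $id);
    $stmt->execute();
    $stmt->close();
}
?>

The rest of the article's logic (the action handler and the json_encode() responses) would stay the same; only the query building changes.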

AJAX Form Validation: Part 2

Packt
19 Feb 2010
11 min read
How to implement AJAX form validation

In this article, we redesigned the code for making AJAX requests when creating the XmlHttp class. The AJAX form validation application makes use of these techniques. The application contains three pages: one page renders the form to be validated, another page validates the input, and a third page is displayed if the validation is successful.

The application has a standard structure, composed of these files:

index.php: The file loaded initially by the user. It contains references to the necessary JavaScript files and makes asynchronous requests for validation to validate.php.
index_top.php: A helper file loaded by index.php; it contains several objects for rendering the HTML form.
validate.css: The file containing the CSS styles for the application.
json2.js: The JavaScript file used for handling JSON objects.
xhr.js: The JavaScript file that contains our XmlHttp object used for making AJAX requests.
validate.js: The JavaScript file loaded together with index.php on the client side. It makes asynchronous requests to a PHP script called validate.php to perform the AJAX validation.
validate.php: A PHP script residing on the same server as index.php; it offers the server-side functionality requested asynchronously by the JavaScript code in index.php.
validate.class.php: A PHP script that contains a class called Validate, which contains the business logic and database operations to support the functionality of validate.php.
config.php: Used to store global configuration options for your application, such as database connection data.
error_handler.php: Contains the error-handling mechanism that changes the text of an error message into a human-readable format.
allok.php: The page to be displayed if the validation is successful.

Time for action – AJAX form validation

Connect to the ajax database and create a table named users with the following code:

CREATE TABLE users
(
  user_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  user_name VARCHAR(32) NOT NULL,
  PRIMARY KEY (user_id)
);

Execute the following INSERT commands to populate your users table with some sample data:

INSERT INTO users (user_name) VALUES ('bogdan');
INSERT INTO users (user_name) VALUES ('audra');
INSERT INTO users (user_name) VALUES ('cristian');

Let's start writing the code with the presentation tier. We'll define the styles for our form by creating a file named validate.css and adding the following code to it:

body { font-family: Arial, Helvetica, sans-serif; font-size: 0.8em; color: #000000; }
label { float: left; width: 150px; font-weight: bold; }
input, select { margin-bottom: 3px; }
.button { font-size: 2em; }
.left { margin-left: 150px; }
.txtFormLegend { color: #777777; font-weight: bold; font-size: large; }
.txtSmall { color: #999999; font-size: smaller; }
.hidden { display: none; }
.error { display: block; margin-left: 150px; color: #ff0000; }

Now create a new file named index_top.php, and add the following code to it. This script will be loaded from the main page, index.php:

<?php
// enable PHP session
session_start();

// Build HTML <option> tags
function buildOptions($options, $selectedOption)
{
  foreach ($options as $value => $text)
  {
    if ($value == $selectedOption)
    {
      echo '<option value="' . $value . '" selected="selected">' . $text . '</option>';
    }
    else
    {
      echo '<option value="' . $value . '">' . $text . '</option>';
    }
  }
}

// initialize gender options array
$genderOptions = array("0" => "[Select]", "1" => "Male", "2" => "Female");

// initialize month options array
$monthOptions = array("0" => "[Select]", "1" => "January", "2" => "February",
                      "3" => "March", "4" => "April", "5" => "May", "6" => "June",
                      "7" => "July", "8" => "August", "9" => "September",
                      "10" => "October", "11" => "November", "12" => "December");

// initialize some session variables to prevent PHP throwing Notices
if (!isset($_SESSION['values']))
{
  $_SESSION['values']['txtUsername'] = '';
  $_SESSION['values']['txtName'] = '';
  $_SESSION['values']['selGender'] = '';
  $_SESSION['values']['selBthMonth'] = '';
  $_SESSION['values']['txtBthDay'] = '';
  $_SESSION['values']['txtBthYear'] = '';
  $_SESSION['values']['txtEmail'] = '';
  $_SESSION['values']['txtPhone'] = '';
  $_SESSION['values']['chkReadTerms'] = '';
}
if (!isset($_SESSION['errors']))
{
  $_SESSION['errors']['txtUsername'] = 'hidden';
  $_SESSION['errors']['txtName'] = 'hidden';
  $_SESSION['errors']['selGender'] = 'hidden';
  $_SESSION['errors']['selBthMonth'] = 'hidden';
  $_SESSION['errors']['txtBthDay'] = 'hidden';
  $_SESSION['errors']['txtBthYear'] = 'hidden';
  $_SESSION['errors']['txtEmail'] = 'hidden';
  $_SESSION['errors']['txtPhone'] = 'hidden';
  $_SESSION['errors']['chkReadTerms'] = 'hidden';
}
?>

Now create index.php, and add the following code to it:

<?php require_once ('index_top.php'); ?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html>
<head>
  <title>Degradable AJAX Form Validation with PHP and MySQL</title>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <link href="validate.css" rel="stylesheet" type="text/css" />
</head>
<body onload="setFocus();">
<script type="text/javascript" src="json2.js"></script>
<script type="text/javascript" src="xhr.js"></script>
<script type="text/javascript" src="validate.js"></script>
<fieldset>
  <legend class="txtFormLegend">New User Registration Form</legend>
  <br />
  <form name="frmRegistration" method="post" action="validate.php">
    <input type="hidden" name="validationType" value="php"/>

    <!-- Username -->
    <label for="txtUsername">Desired username:</label>
    <input id="txtUsername" name="txtUsername" type="text" onblur="validate(this.value, this.id)" value="<?php echo $_SESSION['values']['txtUsername'] ?>" />
    <span id="txtUsernameFailed" class="<?php echo $_SESSION['errors']['txtUsername'] ?>">This username is in use, or empty username field.</span>
    <br />

    <!-- Name -->
    <label for="txtName">Your name:</label>
    <input id="txtName" name="txtName" type="text" onblur="validate(this.value, this.id)" value="<?php echo $_SESSION['values']['txtName'] ?>" />
    <span id="txtNameFailed" class="<?php echo $_SESSION['errors']['txtName'] ?>">Please enter your name.</span>
    <br />

    <!-- Gender -->
    <label for="selGender">Gender:</label>
    <select name="selGender" id="selGender" onblur="validate(this.value, this.id)">
      <?php buildOptions($genderOptions, $_SESSION['values']['selGender']); ?>
    </select>
    <span id="selGenderFailed" class="<?php echo $_SESSION['errors']['selGender'] ?>">Please select your gender.</span>
    <br />

    <!-- Birthday -->
    <label for="selBthMonth">Birthday:</label>
    <!-- Month -->
    <select name="selBthMonth" id="selBthMonth" onblur="validate(this.value, this.id)">
      <?php buildOptions($monthOptions, $_SESSION['values']['selBthMonth']); ?>
    </select>
    &nbsp;-&nbsp;
    <!-- Day -->
    <input type="text" name="txtBthDay" id="txtBthDay" maxlength="2" size="2" onblur="validate(this.value, this.id)" value="<?php echo $_SESSION['values']['txtBthDay'] ?>" />
    &nbsp;-&nbsp;
    <!-- Year -->
    <input type="text" name="txtBthYear" id="txtBthYear" maxlength="4" size="2" onblur="validate(document.getElementById('selBthMonth').options[document.getElementById('selBthMonth').selectedIndex].value + '#' + document.getElementById('txtBthDay').value + '#' + this.value, this.id)" value="<?php echo $_SESSION['values']['txtBthYear'] ?>" />
    <!-- Month, Day, Year validation -->
    <span id="selBthMonthFailed" class="<?php echo $_SESSION['errors']['selBthMonth'] ?>">Please select your birth month.</span>
    <span id="txtBthDayFailed" class="<?php echo $_SESSION['errors']['txtBthDay'] ?>">Please enter your birth day.</span>
    <span id="txtBthYearFailed" class="<?php echo $_SESSION['errors']['txtBthYear'] ?>">Please enter a valid date.</span>
    <br />

    <!-- Email -->
    <label for="txtEmail">E-mail:</label>
    <input id="txtEmail" name="txtEmail" type="text" onblur="validate(this.value, this.id)" value="<?php echo $_SESSION['values']['txtEmail'] ?>" />
    <span id="txtEmailFailed" class="<?php echo $_SESSION['errors']['txtEmail'] ?>">Invalid e-mail address.</span>
    <br />

    <!-- Phone number -->
    <label for="txtPhone">Phone number:</label>
    <input id="txtPhone" name="txtPhone" type="text" onblur="validate(this.value, this.id)" value="<?php echo $_SESSION['values']['txtPhone'] ?>" />
    <span id="txtPhoneFailed" class="<?php echo $_SESSION['errors']['txtPhone'] ?>">Please insert a valid US phone number (xxx-xxx-xxxx).</span>
    <br />

    <!-- Read terms checkbox -->
    <input type="checkbox" id="chkReadTerms" name="chkReadTerms" class="left" onblur="validate(this.checked, this.id)" <?php if ($_SESSION['values']['chkReadTerms'] == 'on') echo 'checked="checked"' ?> />
    I've read the Terms of Use
    <span id="chkReadTermsFailed" class="<?php echo $_SESSION['errors']['chkReadTerms'] ?>">Please make sure you read the Terms of Use.</span>

    <!-- End of form -->
    <hr />
    <span class="txtSmall">Note: All fields are required.</span>
    <br /><br />
    <input type="submit" name="submitbutton" value="Register" class="left button" />
  </form>
</fieldset>
</body>
</html>

Create a new file named allok.php, and add the following code to it:

<?php
// clear any data saved in the session
session_start();
session_destroy();
?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html>
<head>
  <title>AJAX Form Validation</title>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <link href="validate.css" rel="stylesheet" type="text/css" />
</head>
<body>
  Registration Successful!<br />
  <a href="index.php" title="Go back">&lt;&lt; Go back</a>
</body>
</html>

Copy json2.js (which you downloaded in a previous exercise from http://json.org/json2.js) to your ajax/validate folder.

Create a file named validate.js. This file performs the client-side functionality, including the AJAX requests:

// holds the remote server address
var serverAddress = "validate.php";
// when set to true, display detailed error messages
var showErrors = true;

// the function handles the validation for any form field
function validate(inputValue, fieldID)
{
  // the data to be sent to the server through POST
  var data = "validationType=ajax&inputValue=" + inputValue + "&fieldID=" + fieldID;
  // build the settings object for the XmlHttp object
  var settings =
  {
    url: serverAddress,
    type: "POST",
    async: true,
    complete: function (xhr, response, status)
    {
      if (xhr.responseText.indexOf("ERRNO") >= 0
          || xhr.responseText.indexOf("error:") >= 0
          || xhr.responseText.length == 0)
      {
        alert(xhr.responseText.length == 0 ? "Server error." : response);
      }
      result = response.result;
      fieldID = response.fieldid;
      // find the HTML element that displays the error
      message = document.getElementById(fieldID + "Failed");
      // show or hide the error message
      message.className = (result == "0") ? "error" : "hidden";
    },
    data: data,
    showErrors: showErrors
  };
  // make a server request to validate the input data
  var xmlHttp = new XmlHttp(settings);
}

// sets focus on the first field of the form
function setFocus()
{
  document.getElementById("txtUsername").focus();
}

Now it's time to add the server-side logic. Start by creating config.php, with the following code in it:

<?php
// defines database connection data
define('DB_HOST', 'localhost');
define('DB_USER', 'ajaxuser');
define('DB_PASSWORD', 'practical');
define('DB_DATABASE', 'ajax');
?>

Now create the error handler code in a file named error_handler.php:

<?php
// set the user error handler method to be error_handler
set_error_handler('error_handler', E_ALL);

// error handler function
function error_handler($errNo, $errStr, $errFile, $errLine)
{
  // clear any output that has already been generated
  if (ob_get_length()) ob_clean();
  // output the error message
  $error_message = 'ERRNO: ' . $errNo . chr(10) . 'TEXT: ' . $errStr . chr(10) . 'LOCATION: ' . $errFile . ', line ' . $errLine;
  echo $error_message;
  // prevent processing any more PHP scripts
  exit;
}
?>

The PHP script that handles the client's AJAX calls, and also handles the validation on form submit, is validate.php:

<?php
// start PHP session
session_start();
// load error handling script and validation class
require_once ('error_handler.php');
require_once ('validate.class.php');

// create new validator object
$validator = new Validate();

// read validation type (PHP or AJAX?)
$validationType = '';
if (isset($_POST['validationType']))
{
  $validationType = $_POST['validationType'];
}

// AJAX validation or PHP validation?
if ($validationType == 'php')
{
  // PHP validation is performed by the ValidatePHP method, which returns
  // the page the visitor should be redirected to (allok.php if all the
  // data is valid, or back to index.php if not)
  header('Location:' . $validator->ValidatePHP());
}
else
{
  // AJAX validation is performed by the ValidateAJAX method. The results
  // are used to form a JSON document that is sent back to the client
  $response = array('result' => $validator->ValidateAJAX($_POST['inputValue'], $_POST['fieldID']),
                    'fieldid' => $_POST['fieldID']);
  // generate the response
  if (ob_get_length()) ob_clean();
  header('Content-Type: application/json');
  echo json_encode($response);
}
?>
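One file the excerpt references but never lists is validate.class.php. Purely as a hedged sketch, the skeleton below is inferred from how validate.php calls the class: ValidatePHP() validates the whole submitted form and returns the page to redirect to, while ValidateAJAX() validates a single field and returns a result the client treats as 0 or 1. The method bodies here are illustrative placeholders, not the book's actual implementation, which also performs the database lookup for the username check.

<?php
// Hedged skeleton of validate.class.php, inferred from the calls made in
// validate.php above; the real class implements the full rule set.
class Validate
{
    // validates the whole form on submit; returns the page to redirect to
    public function ValidatePHP()
    {
        // ... check every $_POST field against the validation rules ...
        $allFieldsValid = true; // placeholder result
        return $allFieldsValid ? 'allok.php' : 'index.php';
    }

    // validates a single field sent by the AJAX call; returns '1' or '0'
    public function ValidateAJAX($inputValue, $fieldID)
    {
        switch ($fieldID)
        {
            case 'txtName':
                // the name field simply cannot be empty
                return (trim($inputValue) != '') ? '1' : '0';
            case 'txtUsername':
                // the real implementation also queries the users table here
                return (trim($inputValue) != '') ? '1' : '0';
            // ... one case per form field ...
            default:
                return '1';
        }
    }
}
?>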

Working with Data Application Components in SQL Server 2008 R2

Packt
19 Feb 2010
4 min read
A Data Application Component (DAC) is an entity that integrates all data-tier-related objects used in authoring, deploying, and managing into a single unit, instead of working with them separately. Programmatically, DACs correspond to classes found in the Microsoft.SqlServer.Management.Dac namespace. DACs are stored in a DacStore and managed centrally. DACs can be authored and built using the SQL Server Data-Tier Application templates in VS2010 (now in Beta 2), or using SQL Server Management Studio. This article describes creating a DAC using SQL Server 2008 R2 Nov-CTP (the R2 server in this article), a new feature in this version.

Overview of the article

In order to work through this example you need to download SQL Server 2008 R2 Nov-CTP. The ease with which it installs depends on the Windows OS on your machine. I encountered problems installing it on my Windows XP SP3 machine, where only partial files were installed. On Windows 7 Ultimate it installed very easily. This article uses the R2 Server installed on Windows 7 Ultimate. You can download the R2 Server from this link after registering at the site. Download the x32 (32-bit) version, a 1.2 GB file.

In order to work with Data-Tier Applications in Visual Studio you need to install Visual Studio 2010, now in Beta 2. If you have installed Beta 1, remove it (use Add/Remove Programs) before you install Beta 2. You would create a Database Project as shown in the next figure.

In the following sections we will look at how to extract a DAC from an existing database using tools in SSMS and the R2 Server. This will be followed by deploying the DAC to a SQL Server 2008 instance (before the R2 version). In a future article we will see how to create and work with DACs in Visual Studio.

Extracting a DAC

We will use the Extract a Data-Tier Application wizard to create a DAC file. Connect to the SQL Server 2008 Server in the Management Studio as shown. We will create a DAC package that will create a DAC file for us on completing this task. Right-click the Pubs database and click on Tasks | Extract Data-Tier Application.... You may also use any other database for this exercise. This brings up the wizard as shown in the next figure. Read the notes on this window and review the database icons shown on it. Click Next.

The Set Properties page of the wizard gets displayed. The Application name will be the database name with which you started; you can change it if you like. The package file name will reflect the application name. The version is set at 1.0.0.0, but you may specify any version you like. You can create different DACs with different version numbers. Click Next.

The program cranks up and, after validation, the Validation & Summary page gets displayed as shown. The file path of the package, the name of the package, and the DAC objects that went into the package are all shown here. All database objects (tables, views, stored procedures, and so on) are included in the package. Click the Save Report button to save the packaging info to a file. This saves the HTML file ExtractDACSummary_HODENTEK3_pubs_20100125 to the SQL Server Management Studio folder. The report shows which objects were validated during this process. Click Next.

The Build the Package step opens up and, after the build process is completed, you will be able to save the package as shown in the next picture. At the package location shown earlier you will see a package object as shown.

This file can be unpacked to a destination, as well as opened with the Microsoft SQL Server DAC Package File Unpack wizard. These actions can be accessed by right-clicking the package file.

AJAX Form Validation: Part 1

Packt
18 Feb 2010
4 min read
The server is the last line of defense against invalid data, so even if you implement client-side validation, server-side validation is mandatory. The JavaScript code that runs on the client can be disabled permanently from the browser's settings, and it can be easily modified or bypassed.

Implementing AJAX form validation

The form validation application we will build in this article validates the form at the server side on the classic form submit, while also implementing AJAX validation as the user navigates through the form. The final validation is performed at the server, as shown in Figure 5-1. Doing a final server-side validation when the form is submitted should never be considered optional. If someone disables JavaScript in the browser settings, AJAX validation on the client side clearly won't work, exposing sensitive data and thereby allowing an evil-intentioned visitor to harm important data on the server (for example, through SQL injection). Always validate user input on the server.

As shown in the preceding figure, the application you are about to build validates a registration form using both AJAX validation (client side) and typical server-side validation:

AJAX-style (client side): This happens when each form field loses focus (onblur). The field's value is immediately sent to and evaluated by the server, which then returns a result (0 for failure, 1 for success). If validation fails, an error message appears and notifies the user about the failed validation, as shown in Figure 5-3.

PHP-style (server side): This is the usual validation you would do on the server, checking user input against certain rules after the entire form is submitted. If no errors are found and the input data is valid, the browser is redirected to a success page, as shown in Figure 5-4. If validation fails, however, the user is sent back to the form page with the invalid fields highlighted, as shown in Figure 5-3.

Both AJAX validation and PHP validation check the entered data against our application's rules:

Username must not already exist in the database
Name field cannot be empty
A gender must be selected
Month of birth must be selected
Birthday must be a valid day of the month (between 1 and 31)
Year of birth must be a valid year (between 1900 and 2000)
The day must exist in the given month (that is, there's no February 31)
E-mail address must be written in a valid email format
Phone number must be written in standard US form: xxx-xxx-xxxx
The I've read the Terms of Use checkbox must be selected

A sketch of how a few of these rules translate into plain PHP appears at the end of this excerpt. Watch the application in action in the following screenshots.

XMLHttpRequest, version 2

We do our best to combine theory and practice, so before moving on to implementing the AJAX form validation script, we'll have another quick look at our favorite AJAX object: XMLHttpRequest. On this occasion, we will step up the complexity (and functionality) a bit and use everything we have learned until now. We will continue to build on what has come before as we move on; so again, it's important that you take the time to be sure you've understood what we are doing here. Time spent digging into the material really pays off when you begin to build your own applications in the real world. Our OOP JavaScript skills will be put to work improving the existing script that we used to make AJAX requests.
In addition to the design that we've already discussed, we're adding the following features as well:

A flexible design, so that the object can be easily extended for future needs and purposes
The ability to set all the required properties via a JSON object

We'll package this improved XMLHttpRequest functionality in a class named XmlHttp that we'll be able to use in other exercises as well. You can see the class diagram in the following screenshot, along with the diagrams of two helper classes:

settings is the class we use to create the call settings; we supply an instance of this class as a parameter to the constructor of XmlHttp
complete is a callback delegate, pointing to the function we want executed when the call completes

The final purpose of this exercise is to create a class named XmlHttp that we can easily use in other projects to perform AJAX calls. With our goals in mind, let's get to it!

Time for action – the XmlHttp object

In the ajax folder, create a folder named validate, which will host the exercises in this article.
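Before that, here is the sketch promised above of how a few of the validation rules might look in plain PHP. These helper functions are illustrative only; in the book the checks live inside the Validate class used by validate.php (shown in the Part 2 excerpt above), and the names here are not taken from that class.

<?php
// Hedged, standalone sketch of a few of the rules listed earlier.
function isValidEmail($email) {
    // PHP's built-in e-mail format check
    return filter_var($email, FILTER_VALIDATE_EMAIL) !== false;
}

function isValidUsPhone($phone) {
    // standard US form: xxx-xxx-xxxx
    return preg_match('/^\d{3}-\d{3}-\d{4}$/', $phone) === 1;
}

function isValidBirthday($month, $day, $year) {
    // year must be between 1900 and 2000, and the date must actually exist
    if ($year < 1900 || $year > 2000) {
        return false;
    }
    return checkdate((int)$month, (int)$day, (int)$year);
}

// example usage
var_dump(isValidEmail('someone@example.com')); // bool(true)
var_dump(isValidUsPhone('555-123-4567'));      // bool(true)
var_dump(isValidBirthday(2, 31, 1985));        // bool(false): February 31 does not exist
?>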

Trunks using 3CX: Part 2

Packt
12 Feb 2010
7 min read
The next wizard screen is for Outbound Call Rules. Let's go over it in enough detail that you can set up a simple rule. We start off with a name. This can be anything you like, but I prefer something meaningful. For our example, I want to dial 9 to use the analog line, and only allow extensions 100-102 to use this line. I also only want to be able to dial certain phone numbers. Then I have to delete the 9 before it goes out to the phone carrier. Let's have a look at each section of this screen:

Calls to numbers starting with (Prefix)

This is where you specify what you want someone to dial before the line is used. You could enter a string of numbers here to use as a "password" to dial out. You don't just let anyone call an international phone number, so set this to a string of numbers to use as your international password. Give the password only to those who need it. Just make sure you change it occasionally in case it slips out.

Calls from extension(s)

Now you can specify who (by extension number) can use this gateway. Just enter the extension number(s) you want to allow, either as a range (100-110), individually (100, 101, 104), or as a mix (100-103, 110). Usually, you will leave this open for everyone to use; otherwise, you will restrict extensions that were allowed to use the gateway, which will have repercussions for forwarding rules to external numbers.

Calls to numbers with a length of

This setting can be left blank if you want all calls to be able to go out on this gateway. In the next screenshot, I specified 3, 7, 10, and 11. This covers calls to 911, 411, 555-1234, 800-555-1234, and 1-800-555-1234, respectively. You can control which phone numbers go out based on the number of digits that are dialed.

Route and strip options

Since this is our only gateway right now, we will have it route the calls to the Patton gateway. The Strip Digits option needs to be set to 1. This will strip out the "9" that we specified above to dial out with. We can leave the Prepend section blank for now. Now, go ahead and click Finish.

Once you click Finish, you will see a gateway wizard summary, as shown in the next screenshot. This shows you that the gateway is created, and it also gives an overview of the settings. Your next step is to get those settings configured on your gateway. At the bottom of the summary page there is a list of links for various supported gateways, with up-to-date instructions. Feel free to visit those links; they will take you to the 3CX website and explain how to configure that particular gateway. With Patton this is easy: click the Generate config file button. The only other information you need for the configuration file is the subnet mask for the Patton gateway. Enter your network subnet mask in the box. Here, I entered a standard Class C subnet mask. This matches my 192.168.X.X network. Click OK when you are done.

Once you click OK, your browser will prompt you to save the file, as shown in the following screenshot. Click Save. The next screenshot shows a familiar Windows Save As dialog. I like to put this file in an easy-to-remember location on my hard drive. As I already have a 3CX folder created, I'm going to save the file there. You can change the name of the file if you wish. Click Save.

Now that your file is saved, let's take a look at modifying those settings. Open the administration web interface and, on the left-hand side, click PSTN Devices. Go ahead and expand this by clicking the + sign next to it.
Now, you will see our newly created Patton SN4114A gateway listed. Click the + sign again and expand that gateway. Next, click the Patton SN4114A name, and you will see the right-hand side window pane fill up with five separate tabs.

The first tab is General. This is where you can change the gateway IP address, SIP port, and all the account details. If you change anything, you will need a new configuration file, so click the Generate config file button at the bottom of the screen. If you forgot to save the file previously, here's your chance to generate and save it again.

On the Advanced tab, we have some Provider Capabilities. Leave these settings alone for now. We will leave the rest of the tabs alone as well.

Go ahead and click the 10000 line information in the navigation pane on the left. These are the settings for that particular phone port (10000). The first group of settings that we can change is the authentication username and password. Remember, this is to register the line with 3CX, not to use the phone line.

The next two sections are about what to do with an inbound call during Office Hours and Outside Office Hours. I didn't change anything from the gateway wizard but, on this screen, you can see that we selected Ring group 800 MainRingGroup. This is the Ring group that we configured previously. We also see similar drop-down boxes for Outside Office Hours. As no one will be in the office to answer the phone, I've selected a Digital Receptionist 801 DR1.

In the section Other Options, the Outbound Caller ID box is used to enter what you would like to have presented to the outside world as caller ID information. If your phone carrier supports this, you can enter a phone number or a name. If the carrier does not support this, just leave it blank and talk to your carrier about what you would need to have it assigned as your caller ID.

The Allow outbound calls on this line and Allow incoming calls on this line checkboxes are used to limit calls in or out. Depending on your environment, you might want to leave one line set to no outbound calls. This will always leave an incoming line for customers to call; otherwise, unless you have other lines that they can call on, they will get a busy signal. Maximum simultaneous calls cannot be changed here, as analog lines only support one call at a time. If you changed anything, click Apply and then go back and generate a new configuration file.

For the most up-to-date information on configuring your gateway, visit the 3CX site: http://www.3cx.com/voip-gateways/index.html. We will go over a summary of it here.

Since nothing was changed, it is now time to configure the Patton device with the config file that we generated from the 3CX template. If you know the IP address of the device, go ahead and open a browser and navigate to that IP address; mine would be http://192.168.2.10. If you do not know the IP address of your device, you will need the SmartNode discovery tool. The easiest place to get this tool is the CD that came with the device. You can also download it from http://www.3cx.com/downloads/misc/sndiscovery.zip, or search the Patton website for it.

Go ahead and install the SmartNode discovery tool and run it. You will get a screen that lists all the SmartNodes on your network with their IP address, MAC address, and firmware version. Double-click on the SmartNode to open the web interface in a browser. The default username is administrator, and the password field is left blank.
Click Import/Export on the left and Import Configuration on the right. Click Browse to find the configuration file that we generated. Click Import and then Reload to restart the gateway with the new configuration. That's it. We can now receive incoming calls and make outbound calls.

Trunks using 3CX: Part 1

Packt
12 Feb 2010
11 min read
PSTN trunks

A Public Switched Telephone Network (PSTN) trunk is an old-fashioned analog, Basic Rate Interface (BRI) ISDN, or Primary Rate Interface (PRI) phone line. 3CX can use any of these with the correct analog-to-SIP gateway. Usually these come into your home or business through a pair of copper lines. Depending on where you live, this may be the only means of connecting 3CX and communicating outside of your network.

One of the advantages of a PSTN line is reliability and great call quality. Unless the wires break, you will almost always have phone service. However, what about call quality? After all, many people would like to have a comparison between VoIP and PSTN. Analog hardware for BRI ISDN and PRIs will be discussed in greater detail in Chapter 9.

For using an analog PSTN line, you will need an FXO gateway. There are many external ones available. Until Sangoma introduced a new line at the end of 2008, there had not been any gateway that worked inside a Windows PC with 3CX. There are many manufacturers of analog gateways, such as Linksys, AudioCodes, Patton Electronics, Grandstream, and Sangoma. What these FXO gateways do is convert the analog phone line into IP signaling. The IP signaling then gets passed over your network to the 3CX server and your phones.

My personal preference is Patton Electronics. They are probably the most expensive FXOs out there, but in this case, you get what you pay for. I have tried all of them and they all work. Some have issues with echo, which can be hard to get rid of without support or lots of trial and error, whereas some cannot support high demands (40 calls/hour) without needing to be reset every day. So if you are just testing, get a low-end one. For a high-demand business, my preference is Patton. Not only do they make great products, but their support is top notch too. We will configure a Patton SmartNode SN4114 later in this article.

SIP trunks

What is a SIP trunk? A SIP trunk is a call that is routed by IP over the Internet through an Internet Telephony Service Provider (ITSP). For enterprises wanting to make full use of their installed IP PBXs and communicate over IP not only within the enterprise, but also outside it, a SIP trunk provided by an ITSP that connects to the traditional PSTN network is the solution. Unlike traditional telephony, where bundles of physical wires were once delivered from the service provider to a business, a SIP trunk allows a company to replace traditional fixed PSTN lines with PSTN connectivity via a SIP trunking service provider on the Internet. SIP trunks can offer significant cost savings for enterprises, eliminating the need for local PSTN gateways and costly ISDN BRIs or PRIs.

The following figure is an example of how our phone system operates. You can see that we have a local area network containing our desktops, servers, phones, and our 3CX Phone System. To reach the outside world using a SIP trunk, we have to go through our firewall or router. Depending on your network, you could be using a private IP address (10.x.x.x, 172.16.x.x, or 192.168.x.x), which is not allowed on the public Internet, so it has to get translated to the public IP address. This translation process is called Network Address Translation (NAT). Once we get outside the local network, we are in the public realm. Our ITSP uses the Internet to get our phone call to/from the various carriers' PSTN (analog) lines, where our phone call is connected/terminated.
There are three components necessary to successfully deploy SIP trunks: a PBX with a SIP-enabled trunk side, an enterprise edge device that understands SIP, and an Internet Telephony or SIP trunking service provider.

The PBX

In most cases, the PBX is an IP-based PBX, communicating with all endpoints over IP. However, it may just as well be a traditional digital or analog PBX. The sole requirement is an interface for SIP trunking connectivity.

The enterprise border element

The PBX on the LAN connects to the ITSP via the enterprise border element. The enterprise edge component can either be a firewall with complete support for SIP, or an edge device connected to the firewall that handles the traversal of the SIP traffic.

The ITSP

On the Internet, the ITSP provides connectivity to the PSTN for communication with mobile and fixed phones.

Choosing a VoIP carrier: more than just price

I feel two of the most important features to look for when choosing a VoIP carrier are support and call quality. Usually, once you are set up and everything is working, you won't need support. I always tell clients that there is no "boxed" solution that I can sell; every installation is a little different. Internet connections are all different, even with the same provider. If you have a rock-solid T1 or something better, then this shouldn't be a problem. DSL seems different from building to building, even in the same area.

So how do you test support before giving them your credit card? Call them! Try calling support at the worst times, such as Monday afternoons when everyone is back to work and online, and also try calling after business hours. See how long it takes to connect to a live person, and whether you can understand them once you speak to them. Find out where their support is located. Try talking to them, tell them you are thinking about signing up with their service, and ask them for help. If they go out of their way before they have your money, chances are they will be good to work with later on. Some carriers only offer chat or email support in favor of lower prices. While this may work fine for your business, it certainly won't work for those who need answers right away.

I know I seem to be stressing support a lot, but it's for good reason. If your business depends on phone service and it goes down, then you need answers! I will pay more for a product if the support is worth it. Part of this is your Return On Investment (ROI). For example, if you have 3 lawyers billing at $200/hour and they need phones to work, that's $600/hour of lost time. Does the extra $50 or $100 upfront cover that?

Now back to the topic at hand. Once you have connected 3CX to the carrier, how is the call quality? If it sounds like a bad cell phone, you probably don't want it, unless the price is so cheap that you can live with the low quality. Certain carriers even change the way your call gets routed through the Internet, based on the lowest cost for the particular call. They don't care about quality as long as you get that connection and they make money on it.

Concurrent calls are a feature that you may want to look for when choosing an ITSP. Some accounts are a one-to-one ratio of lines per call. If you want to have 5 people on the phone at the same time (inbound or outbound), you would need to pay for 5 lines; this is similar to a PSTN line. You may get some savings here over a PSTN line, but that depends on what is available in your area. Some ITSPs offer concurrent calls, where you can carry more than one call per line.
Not many carriers have this feature, but for a small business it can be a great cost-saving feature to look for. I use a couple of different carriers that have this feature. One carrier that I use lets you have 3 concurrent calls simultaneously on the same line. If you need more than 3 calls, you're a higher-use customer and they want you to buy several lines.

VoIP signaling uses special algorithms to compress your voice into IP packets. This compression uses a codec. There are several available, but the most common one is G.711 (u-law or A-law). This uses about 80 kbps of upload and download bandwidth. Another popular codec is G.729, which uses about 36 kbps. So for the same bandwidth you can have roughly twice the number of calls using G.729 as you can using G.711. You will need to check with your ITSP to see which codecs they support.

Another carrier I use is priced purely on how much Internet bandwidth you have. If you have 1 Mbps of upload speed (usually the slowest part of your Internet connection), you can support about 10 simultaneous or concurrent calls using G.711. You then pay for the minutes you use. This works very well for a small office, as your monthly bill is very low and you don't have to maintain a bunch of lines that don't get used.

Cable Internet providers are also offering VoIP service to your home or business. These are usually single-use lines, but they terminate at your office with an FXS plug. To integrate this with 3CX, you will need an FXO, just as if it were a PSTN line: same setup, but you get the advantage of a VoIP line.

Another great benefit of a SIP trunk is expandability. You can easily start out with one line, which can usually be set up in one day. As you grow you can add more, usually in minutes, as you already have the plan in place. Time to consolidate lines? You can even drop them later on without having contracts (most of the time). Try doing that with the local phone company! Call for a new business line and it can take 1-2 weeks to get set up, plus contracts to worry about. No wonder they are jumping on the VoIP band wagon.

Disaster recovery

What do you do when your Internet goes down? Some of you might be saying, "Ha! It never goes down". In my experience, it will eventually, and at the worst time. So what do you do? Go home for the day, or plan for a backup? Most VoIP carriers provide some kind of disaster recovery option. They try to send you a call and, when they don't get a connection to your 3CX box, they then re-route the call to another phone number. This could be a PSTN line or even a cell phone. It can be a free feature or there can be a small monthly fee on the account. It's worth having, especially if you rely on phones.

Okay, so that covers inbound disaster recovery. What about outbound? Yes, just about everyone has a cell phone these days; if that isn't enough, I'd suggest you invest in a pay-per-use PSTN line. This keeps the monthly cost very low, but it's there when you need it. Whether it's an emergency pizza order for that Friday afternoon party or a true emergency when someone panics and dials 911, you want that call to go out.

Speaking of emergency numbers, make sure you have your carrier register that phone number to your local address. Let's say you are in New York and you have a Californian phone number to give you some local presence in that part of the country. Your co-worker grabs his chest and falls down, and someone dials 911 from the closest phone they see.
Emergency services see your Californian number and contact California for help for your New York office. That's not what you want when someone is clutching their chest, even if it turns out to be just heartburn from that pepperoni pizza.

Mixing VoIP and PSTN

Some of my clients even mix VoIP and PSTN together. Why would you mix? Local calls and inbound calls use the PSTN lines for the best call quality (and do not use any VoIP minutes if they have to pay for those), while long distance calls use the cheaper-rate VoIP line. Another scenario is using PSTN lines for all your incoming and outgoing calls and using VoIP to talk to your other offices. Your own office can deal with a lower call quality, and management will appreciate the lower cost. These types of setups can be controlled using a dial plan.

Forms in Grok 1.0

Packt
12 Feb 2010
13 min read
A quick demonstration of automatic forms

Let's start by showing how this works, before getting into the details. To do that, we'll add a project model to our application. A project can have any number of lists associated with it, so that related to-do lists can be grouped together. For now, let's consider the project model by itself. Add the following lines to the app.py file, just after the Todo application class definition. We'll worry later about how this fits into the application as a whole.

class IProject(interface.Interface):
    name = schema.TextLine(title=u'Name', required=True)
    kind = schema.Choice(title=u'Kind of project', values=['personal', 'business'])
    description = schema.Text(title=u'Description')

class AddProject(grok.Form):
    grok.context(Todo)
    form_fields = grok.AutoFields(IProject)

We'll also need to add a couple of imports at the top of the file:

from zope import interface
from zope import schema

Save the file, restart the server, and go to the URL http://localhost:8080/todo/addproject. The result should be similar to the following screenshot.

OK, where did the HTML for the form come from? We know that AddProject is some sort of view, because we used the grok.context class annotation to set its context and name. Also, the name of the class, in lowercase, was used in the URL, as in previous view examples. The important new thing is how the form fields were created and used. First, a class named IProject was defined. The interface defines the fields on the form, and the grok.AutoFields method assigns them to the Form view class. That's how the view knows which HTML form controls to generate when the form is rendered. We have three fields: name, description, and kind. Later in the code, the grok.AutoFields line takes this IProject class and turns these fields into form fields.

That's it. There's no need for a template or a render method. The grok.Form view takes care of generating the HTML required to present the form, taking the information from the value of the form_fields attribute that the grok.AutoFields call generated.

Interfaces

The I in the class name stands for Interface. We imported the zope.interface package at the top of the file, and the Interface class that we have used as a base class for IProject comes from this package.

Example of an interface

An interface is an object that is used to specify and describe the external behavior of objects. In a sense, the interface is like a contract. A class is said to implement an interface when it includes all of the methods and attributes defined in an interface class. Let's see a simple example:

from zope import interface

class ICaveman(interface.Interface):
    weapon = interface.Attribute('weapon')

    def hunt(animal):
        """Hunt an animal to get food"""

    def eat(animal):
        """Eat hunted animal"""

    def sleep():
        """Rest before getting up to hunt again"""

Here, we are describing how cavemen behave. A caveman will have a weapon, and he can hunt, eat, and sleep. Notice that the weapon is an attribute, something that belongs to the object, whereas hunt, eat, and sleep are methods. Once the interface is defined, we can create classes that implement it. These classes are committed to include all of the attributes and methods of their interface class.
Thus, if we say:

class Caveman(object):
    interface.implements(ICaveman)

then we are promising that the Caveman class will implement the methods and attributes described in the ICaveman interface:

    weapon = 'ax'

    def hunt(self, animal):
        find(animal)
        hit(animal, self.weapon)

    def eat(self, animal):
        cut(animal)
        bite()

    def sleep(self):
        snore()
        rest()

Note that though our example class implements all of the interface methods, there is no enforcement of any kind made by the Python interpreter. We could define a class that does not include any of the methods or attributes defined, and it would still work.

Interfaces in Grok

In Grok, a model can implement an interface by using the grok.implements method. For example, if we decided to add a project model, it could implement the IProject interface as follows:

class Project(grok.Container):
    grok.implements(IProject)

Due to their descriptive nature, interfaces can be used for documentation. They can also be used for enabling component architectures, but we'll see about that later on. What is of more interest to us right now is that they can be used for generating forms automatically.

Schemas

The way to define the form fields is to use the zope.schema package. This package includes many kinds of field definitions that can be used to populate a form. Basically, a schema permits detailed descriptions of class attributes using fields. In terms of a form, which is what interests us here, a schema represents the data that will be passed to the server when the user submits the form. Each field in the form corresponds to a field in the schema. Let's take a closer look at the schema we defined in the last section:

class IProject(interface.Interface):
    name = schema.TextLine(title=u'Name', required=True)
    kind = schema.Choice(title=u'Kind of project', required=False, values=['personal', 'business'])
    description = schema.Text(title=u'Description', required=False)

The schema that we are defining for IProject has three fields. There are several kinds of fields, which are listed later in this section. In our example, we have defined a name field, which will be required and will have the label Name beside it. We also have a kind field, which is a list of options from which the user must pick one. Note that the default value for required is True, but it's usually best to specify it explicitly, to avoid confusion. You can see how the list of possible values is passed statically by using the values parameter. Finally, description is a text field, which means it will have multiple lines of text.

Available schema attributes and field types

In addition to title, values, and required, each schema field can have a number of properties, as detailed in the following list:

title: A short summary or label.
description: A description of the field.
required: Indicates whether a field requires a value to exist.
readonly: If True, the field's value cannot be changed.
default: The field's default value; may be None or a valid field value.
missing_value: If input for this field is missing, and that's OK, then this is the value to use.
order: The order attribute can be used to determine the order in which fields in a schema are defined. If one field is created after another (in the same thread), its order will be greater.

In addition to the field attributes described in the preceding list, some field types provide additional attributes. In the previous example, we saw that there are various field types, such as Text, TextLine, and Choice.
There are several other field types available, as shown in the following list. We can create very sophisticated forms just by defining a schema in this way, and letting Grok generate them.

Bool: Boolean field.
Bytes: Field containing a byte string (such as the Python str). The value might be constrained to be within length limits.
ASCII: Field containing a 7-bit ASCII string. No characters > DEL (chr(127)) are allowed. The value might be constrained to be within length limits.
BytesLine: Field containing a byte string without new lines.
ASCIILine: Field containing a 7-bit ASCII string without new lines.
Text: Field containing a Unicode string.
SourceText: Field for the source text of an object.
TextLine: Field containing a Unicode string without new lines.
Password: Field containing a Unicode string without new lines, which is set as the password.
Int: Field containing an integer value.
Float: Field containing a float.
Decimal: Field containing a decimal.
DateTime: Field containing a datetime.
Date: Field containing a date.
Timedelta: Field containing a timedelta.
Time: Field containing a time.
URI: A field containing an absolute URI.
Id: A field containing a unique identifier. A unique identifier is either an absolute URI or a dotted name. If it's a dotted name, it should have a module or package name as a prefix.
Choice: Field whose value is contained in a predefined set. Parameters: values (a list of text choices for the field), vocabulary (a Vocabulary object that will dynamically produce the choices), and source (a different, newer way to produce dynamic choices). Only one of the three should be provided. More information about sources and vocabularies is provided later in this book.
Tuple: Field containing a value that implements the API of a conventional Python tuple. Parameters: value_type (field value items must conform to the given type, expressed via a field) and unique (specifies whether the members of the collection must be unique).
List: Field containing a value that implements the API of a conventional Python list. Parameters: value_type and unique, as for Tuple.
Set: Field containing a value that implements the API of a conventional Python standard library sets.Set or a Python 2.4+ set. Parameter: value_type.
FrozenSet: Field containing a value that implements the API of a conventional Python 2.4+ frozenset. Parameter: value_type.
Object: Field containing an object value. Parameter: schema (the interface that defines the fields comprising the object).
Dict: Field containing a conventional dictionary. Parameters: key_type (field keys must conform to the given type, expressed via a field) and value_type (field value items must conform to the given type, expressed via a field), which allow restrictions to be placed on the keys and values contained in the dictionary.

Form fields and widgets

Schema fields are perfect for defining data structures, but when dealing with forms, sometimes they are not enough. In fact, once you generate a form using a schema as a base, Grok turns the schema fields into form fields. A form field is like a schema field, but has an extended set of methods and attributes. It also has a default associated widget that is responsible for the appearance of the field inside the form.
Rendering forms requires more than the fields and their types. A form field needs to have a user interface, and that is what a widget provides. A Choice field, for example, could be rendered as a <select> box on the form, but it could also use a collection of checkboxes, or perhaps radio buttons. Sometimes, a field may not need to be displayed on a form, or a writable field may need to be displayed as text instead of allowing users to set the field's value.

Form components

Grok offers four different components that automatically generate forms. We have already worked with the first one of these, grok.Form. The other three are specializations of it:

grok.AddForm is used to add new model instances.
grok.EditForm is used for editing an already existing instance.
grok.DisplayForm simply displays the values of the fields.

A Grok form is itself a specialization of grok.View, which means that it gets the same methods as those that are available to a view. It also means that a model does not actually need a view assignment if it already has a form. In fact, simple applications can get away with using a form as a view for their objects. Of course, there are times when a more complex view template is needed, or even when fields from multiple forms need to be shown in the same view. Grok can handle these cases as well, as we will see later on.

Adding a project container at the root of the site

To get to know Grok's form components, let's properly integrate our project model into our to-do list application. We'll have to restructure the code a little bit, as currently the to-do list container is the root object of the application. We need to have a project container as the root object, and then add a to-do list container to it. To begin, let's modify the top of app.py, immediately before the TodoList class definition, to look like this:

import grok
from zope import interface, schema

class Todo(grok.Application, grok.Container):
    def __init__(self):
        super(Todo, self).__init__()
        self.title = 'To-Do list manager'
        self.next_id = 0

    def deleteProject(self, project):
        del self[project]

First, we import zope.interface and zope.schema. Notice how we keep the Todo class as the root application class, but now it can contain projects instead of lists. We also omitted the addProject method, because the grok.AddForm instance is going to take care of that. Other than that, the Todo class is almost the same.

class IProject(interface.Interface):
    title = schema.TextLine(title=u'Title', required=True)
    kind = schema.Choice(title=u'Kind of project',
                         values=['personal', 'business'])
    description = schema.Text(title=u'Description', required=False)
    next_id = schema.Int(title=u'Next id', default=0)

We then have the interface definition for IProject, where we add the title, kind, description, and next_id fields. These were the fields that we previously set in the __init__ method when the object was initialized.

class Project(grok.Container):
    grok.implements(IProject)

    def addList(self, title, description):
        id = str(self.next_id)
        self.next_id = self.next_id + 1
        self[id] = TodoList(title, description)

    def deleteList(self, list):
        del self[list]

The key thing to notice in the Project class definition is that we use the grok.implements class declaration to state that this class will implement the schema that we have just defined.
class AddProjectForm(grok.AddForm):
    grok.context(Todo)
    grok.name('index')
    form_fields = grok.AutoFields(Project)
    label = "To begin, add a new project"

    @grok.action('Add project')
    def add(self, **data):
        project = Project()
        self.applyData(project, **data)
        id = str(self.context.next_id)
        self.context.next_id = self.context.next_id + 1
        self.context[id] = project
        return self.redirect(self.url(self.context[id]))

The actual form view is defined after that, using grok.AddForm as a base class. We assign this view to the main Todo container by using the grok.context annotation. The name index is used for now, so that the default page for the application will be the 'add form' itself. Next, we create the form fields by calling the grok.AutoFields method. Notice that this time the argument to this method call is the Project class directly, rather than the interface. This is possible because the Project class was associated with the correct interface when we previously used grok.implements. After we have assigned the fields, we set the label attribute of the form to the text: To begin, add a new project. This is the title that will be shown on the form. In addition to this new code, all occurrences of grok.context(Todo) in the rest of the file need to be changed to grok.context(Project), as the to-do lists and their views will now belong to a project and not to the main Todo application. For details, take a look at the source code of this article for Grok 1.0 Web Development, Chapter 5.
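For comparison, an edit form for an existing project follows the same pattern. The class below is a sketch rather than code from the book: the class name, view name, and button label are invented, but grok.EditForm, grok.AutoFields, and applyData are the same machinery used by the add form above:

class EditProjectForm(grok.EditForm):
    grok.context(Project)   # works on an existing Project, not the Todo root
    grok.name('edit')
    form_fields = grok.AutoFields(Project)
    label = "Edit this project"

    @grok.action('Save changes')
    def save(self, **data):
        # copy the submitted values onto the context object
        self.applyData(self.context, **data)
        return self.redirect(self.url(self.context))

Because the context is a Project, such a form would be reachable at the project's URL followed by /edit once it is registered.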

Testing and Debugging in Grok 1.0: Part 1

Packt
11 Feb 2010
12 min read
Grok offers some tools for testing, and in fact, a project created by grokproject (like the one we have been extending) includes a functional test suite. In this article, we are going to discuss testing a bit and then write some tests for the functionality that our application has so far. Testing helps us avoid bugs, but it does not eliminate them completely, of course. There are times when we will have to dive into the code to find out what's going wrong. A good set of debugging aids becomes very valuable in this situation. We'll see that there are several ways of debugging a Grok application and also try out a couple of them.

Testing

It's important to understand that testing should not be treated as an afterthought. As mentioned earlier, agile methodologies place a lot of emphasis on testing. In fact, there's even a methodology called Test Driven Development (TDD), which not only encourages writing tests for our code, but also writing tests before any other line of code. There are various kinds of testing, but here we'll briefly describe only two:

Unit testing
Integration or functional tests

Unit testing

The idea of unit testing is to break a program into its constituent parts and test each one of them in isolation. Every method or function call can be tested separately to make sure that it returns the expected results and handles all of the possible inputs correctly. An application with unit tests that cover the majority of its lines of code allows its developers to run the tests constantly after a change, making sure that modifications to the code do not break existing functionality.

Functional tests

Functional tests are concerned with how the application behaves as a whole. In a web application, this means how it responds to a browser request and whether it returns the expected HTML for a given call. Ideally, the customer himself has a hand in defining these tests, usually through explicit functionality requirements or acceptance criteria. The more formal the requirements from the customer are, the easier it is to define appropriate functional tests.

Testing in Grok

Grok highly encourages the use of both kinds of tests, and in fact, includes a powerful testing tool that is automatically configured with every project. In the Zope world, from where Grok originated, a lot of value is placed on a kind of test known as "doctests", so Grok comes with a sample test suite of this kind.

Doctests

A doctest is a test that's written as a text file, with lines of code mixed with explanations of what the code is doing. The code is written in a way that simulates a Python interpreter session. As tests exercise large portions of the code (ideally 100%), they usually offer a good way of finding out what an application does and how. So, if an application has no written documentation, its tests would be the next obvious way of finding out what it does. Doctests take this idea further by allowing the developer to explain in the text file exactly what each test is doing. Doctests are especially useful for functional testing, because it makes more sense to document the high-level operations of a program. Unit tests, on the other hand, are expected to evaluate the program bit by bit, and it can be cumbersome to write a text explanation for every little piece of code. A possible drawback of doctests is that they can make the developer think that he needs no other documentation for his project. In almost all cases, this is not true.
Documenting an application or package makes it immediately more accessible and useful, so it is strongly recommended that doctests should not be used as a replacement for good documentation. We'll show an example of using doctests in the Looking at the test code section of this article.

Default test setup for Grok projects

As mentioned above, Grok projects that are started with the grokproject tool already include a simple functional test suite by default. Let's examine it in detail.

Test configuration

The default test configuration looks for packages or modules that have the word 'tests' in their name and tries to run the tests inside. For functional tests, any files ending with .txt or .rst are considered. For functional tests that need to simulate a browser, a special configuration is needed to tell Grok which packages to initialize in addition to the Grok infrastructure (usually the ones that are being worked on). The ftesting.zcml file in the package directory holds this configuration. It also includes a couple of user definitions that are used by certain tests to examine functionality specific to a certain role, such as manager.

Test files

Besides the already mentioned ftesting.zcml file, in the same directory there is a tests.py file added by grokproject, which basically loads the ZCML declarations and registers all of the tests in the package. The actual tests that are included with the default project files are contained in the app.txt file. These are doctests that do a functional test run by loading the entire Grok environment and imitating a browser. We'll take a look at the contents of the file soon, but first let's run the tests.

Running the tests

As part of the project's build process, a script named test is included in the bin directory when you create a new project. This is the test runner; calling it without arguments finds and executes all of the tests in the packages that are included in the configuration. We haven't added a single test so far, so if we type bin/test in our project directory, we'll see more or less the same thing that doing this on a new project would show:

$ bin/test
Running tests at level 1
Running todo.FunctionalLayer tests:
  Set up in 12.319 seconds.
  Running:
...2009-09-30 15:00:47,490 INFO sqlalchemy.engine.base.Engine.0x...782c PRAGMA table_info("users")
2009-09-30 15:00:47,490 INFO sqlalchemy.engine.base.Engine.0x...782c ()
  Ran 3 tests with 0 failures and 0 errors in 0.465 seconds.
Tearing down left over layers:
  Tear down todo.FunctionalLayer ... not supported

The only difference between our output and that of a newly created Grok package is in the sqlalchemy lines. Of course, the most important part of the output is the line that shows the number of tests that were run and whether there were any failures or errors. A failure means that some test didn't pass, which means that the code is not doing what it's supposed to do and needs to be checked. An error signifies that the code crashed unexpectedly at some point, and the test couldn't even be executed, so it's necessary to find the error and correct it before worrying about the tests.

The test runner

The test runner program looks for modules that contain tests. The tests can be of three different types: Python tests, simple doctests, and full functionality doctests. To let the test runner know which test file includes which kind of tests, a comment similar to the following is placed at the top of the file:

Do a Python test on the app.
:unittest:

In this case, the Python unit test layer will be used to run the tests. The other value that we are going to use is "doctest", when we learn how to write doctests. The test runner then finds all of the test modules and runs them in the corresponding layer. Although unit tests are considered very important in regular development, we may find functional tests more necessary for a Grok web application, as we will usually be testing views and forms, which require the full Zope/Grok stack to be loaded in order to work. That's the reason why we find only functional doctests in the default setup.

Test layers

A test layer is a specific test setup which is used to differentiate the tests that are executed. By default, there is a test layer for each of the three types of tests handled by the test runner. It's possible to run a test layer without running the others, and also to name new test layers in order to cluster together tests that require a specific setup.

Invoking the test runner

As shown above, running bin/test will start the test runner with the default options. It's also possible to specify a number of options; the most important ones are summarized below. Most options can be expressed with a short form (one dash) or a long form (two dashes). Arguments for the option in question are shown in uppercase.

-s PACKAGE, --package=PACKAGE, --dir=PACKAGE
Search the given package's directories for tests. This can be specified more than once, to run tests in multiple parts of the source tree. For example, when refactoring interfaces, you don't want to see the way you have broken setups for tests in other packages; you just want to run the interface tests. Packages are supplied as dotted names. For compatibility with the old test runner, forward and backward slashes in package names are converted to dots. (In the special case of packages that are spread over multiple directories, only directories within the test search path are searched.)

-m MODULE, --module=MODULE
Specify a test-module filter as a regular expression. This is a case-sensitive regular expression, used in search (not match) mode, to limit which test modules are searched for tests. The regular expressions are checked against dotted module names. In an extension of Python regexp notation, a leading "!" is stripped and causes the sense of the remaining regexp to be negated (so "!bc" matches any string that does not match "bc", and vice versa). The option can specify multiple test-module filters. Test modules matching any of the filters are searched. If no test-module filter is specified, then all of the test modules are used.

-t TEST, --test=TEST
Specify a test filter as a regular expression. This is a case-sensitive regular expression, used in search (not match) mode, to limit which tests are run. In an extension of Python regexp notation, a leading "!" is stripped and causes the sense of the remaining regexp to be negated (so "!bc" matches any string that does not match "bc", and vice versa). The option can specify multiple test filters. Tests matching any of the filters are included. If no test filter is specified, then all of the tests are executed.

--layer=LAYER
Specify a test layer to run. The option can be given multiple times to specify more than one layer. If not specified, all of the layers are executed. It is common for the running script to provide default values for this option.
Layers are specified as regular expressions that are used in search mode, for dotted names of objects that define a layer. In an extension of Python regexp notation, a leading "!" is stripped and causes the sense of the remaining regexp to be negated (so "!bc" matches any string that does not match "bc", and vice versa). The layer named 'unit' is reserved for unit tests; however, take note of the --unit and --non-unit options.

-u, --unit
Executes only unit tests, ignoring any layer options.

-f, --non-unit
Executes tests other than unit tests.

-v, --verbose
Makes the output more verbose. Increments the verbosity level.

-q, --quiet
Makes the output minimal by overriding any verbosity options.

Looking at the test code

Let's take a look at the three default test files of a Grok project, to see what each one does.

ftesting.zcml

As we explained earlier, ftesting.zcml is a configuration file for the test runner. Its main objective is to help us set up the test instance with users, so that we can test different roles according to our needs.

<configure i18n_domain="todo"
           package="todo">

  <include package="todo" />
  <include package="todo_plus" />

  <!-- Typical functional testing security setup -->
  <securityPolicy
      component="zope.securitypolicy.zopepolicy.ZopeSecurityPolicy" />

  <unauthenticatedPrincipal
      id="zope.anybody"
      title="Unauthenticated User" />
  <grant permission="zope.View" principal="zope.anybody" />

  <principal
      id="zope.mgr"
      title="Manager"
      login="mgr"
      password="mgrpw" />

  <role id="zope.Manager" title="Site Manager" />
  <grantAll role="zope.Manager" />
  <grant role="zope.Manager" principal="zope.mgr" />

</configure>

As shown in the preceding code, the configuration simply includes a security policy, complete with users and roles, and the packages that should be loaded by the instance in addition to the regular Grok infrastructure. If we run any tests that require an authenticated user to work, we'll use these special users. The includes at the top of the file just make sure that all of the Zope Component Architecture setup needed by our application is performed prior to running the tests.

tests.py

The default test module is very simple. It defines the functional layer and registers the tests for our package:

import os.path
import z3c.testsetup
import todo
from zope.app.testing.functional import ZCMLLayer

ftesting_zcml = os.path.join(
    os.path.dirname(todo.__file__), 'ftesting.zcml')
FunctionalLayer = ZCMLLayer(ftesting_zcml, __name__, 'FunctionalLayer',
                            allow_teardown=True)

test_suite = z3c.testsetup.register_all_tests('todo')

After the imports, the first line gets the path of the ftesting.zcml file, which is then passed to the ZCMLLayer layer definition. The final line in the module tells the test runner to find and register all of the tests in the package. This will be enough for our testing needs in this article, but if we needed to create another non-Grok package for our application, we would need to add a line like the last one to it, so that all of its tests are found by the test runner. This is pretty much boilerplate code, as only the package name has to be changed.
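The default app.txt is not reproduced here, so to make the doctest idea concrete, here is a minimal, hypothetical example of the kind of file the functional layer runs. The file name, URL, and assertions are illustrative only; the simulated browser comes from zope.testbrowser and the mgr/mgrpw credentials from the ftesting.zcml shown above, but depending on the z3c.testsetup version an additional layer marker may be needed, so treat your project's generated app.txt as the authoritative reference:

Do a functional test on the app.

:doctest:

We simulate a browser, authenticate as the manager user defined in
ftesting.zcml, and check that the server answers:

    >>> from zope.testbrowser.testing import Browser
    >>> browser = Browser()
    >>> browser.addHeader('Authorization', 'Basic mgr:mgrpw')
    >>> browser.open('http://localhost/')
    >>> browser.headers['status']
    '200 Ok'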

Testing and Debugging in Grok 1.0: Part 2

Packt
11 Feb 2010
8 min read
Adding unit tests

Apart from functional tests, we can also create pure Python test cases which the test runner can find. While functional tests cover application behavior, unit tests focus on program correctness. Ideally, every single Python method in the application should be tested. The unit test layer does not load the Grok infrastructure, so tests should not take anything that comes with it for granted; just the basic Python behavior. To add our unit tests, we'll create a module named unit_tests.py. Remember, in order for the test runner to find our test modules, their names have to end with 'tests'. Here's what we will put in this file:

"""
Do a Python test on the app.

:unittest:
"""
import unittest
from todo.app import Todo

class InitializationTest(unittest.TestCase):

    todoapp = None

    def setUp(self):
        self.todoapp = Todo()

    def test_title_set(self):
        self.assertEqual(self.todoapp.title, u'To-Do list manager')

    def test_next_id_set(self):
        self.assertEqual(self.todoapp.next_id, 0)

The :unittest: comment at the top is very important. Without it, the test runner will not know in which layer your tests should be executed, and will simply ignore them. Unit tests are composed of test cases, and in theory, each should contain several related tests based on a specific area of the application's functionality. The test cases use the TestCase class from the Python unittest module. In these tests, we define a single test case that contains two very simple tests. We are not getting into the details here. Just notice that the test case can include a setUp and a tearDown method that can be used to perform any common initialization and destruction tasks which are needed to get the tests working and finishing cleanly. Every test inside a test case needs to have the prefix 'test' in its name, so we have exactly two tests that fulfill this condition. Both of the tests need an instance of the Todo class to be executed, so we assign it as a class variable to the test case, and create it inside the setUp method. The tests are very simple; they just verify that the default property values are set on instance creation. Both of the tests use the assertEqual method to tell the test runner that if the two values passed are different, the test should fail. To see them in action, we just run the bin/test command once more:

$ bin/test
Running tests at level 1
Running todo.FunctionalLayer tests:
  Set up in 2.691 seconds.
  Running:
.......2009-09-30 22:00:50,703 INFO sqlalchemy.engine.base.Engine.0x...684c PRAGMA table_info("users")
2009-09-30 22:00:50,703 INFO sqlalchemy.engine.base.Engine.0x...684c ()
  Ran 7 tests with 0 failures and 0 errors in 0.420 seconds.
Running zope.testing.testrunner.layer.UnitTests tests:
  Tear down todo.FunctionalLayer ... not supported
  Running in a subprocess.
  Set up zope.testing.testrunner.layer.UnitTests in 0.000 seconds.
  Ran 2 tests with 0 failures and 0 errors in 0.000 seconds.
  Tear down zope.testing.testrunner.layer.UnitTests in 0.000 seconds.
Total: 9 tests, 0 failures, 0 errors in 5.795 seconds

Now, both the functional and unit test layers contain some tests, and both are run one after the other. We can see the subtotal for each layer at the end of its tests, as well as the grand total of nine passed tests when the test runner finishes its work.

Extending the test suite

Of course, we have just scratched the surface of the tests that should be added to our application. If we continue to add tests, there may be hundreds of them by the time we finish.
However, this article is not the place to do so. As mentioned earlier, it's much easier to have tests for each part of our application if we add them as we code. There's no hiding from the fact that testing is a lot of work, but there is great value in having a complete test suite for our applications, even more so when third parties might use our work independently.

Debugging

We will now take a quick look at the debugging facilities offered by Grok. Even if we have a very thorough test suite, chances are that we will find a fair number of bugs in our application. When that happens, we need a quick and effective way to inspect the code as it runs and find the problem spots easily. Often, developers will use print statements (placed at key lines) throughout the code, in the hope of finding the problem spot. While this is usually a good way to begin locating sore spots in the code, we often need some way to follow the code line by line to really find out what's wrong. In the next section, we'll see how to use the Python debugger to step through the code and find the problem spots. We'll also take a quick look at how to do post-mortem debugging in Grok, which means jumping into the debugger to analyze program state immediately after an exception has occurred.

Debugging in Grok

For regular debugging, where we need to step through the code to see what's going on inside, the Python debugger is an excellent tool. To use it, you just have to add the following line at the point where you wish to start debugging:

import pdb; pdb.set_trace()

Let's try it out. Open the app.py module and change the add method of the AddProjectForm class (line 108) to look like this:

    @grok.action('Add project')
    def add(self, **data):
        import pdb; pdb.set_trace()
        project = Project()
        project.creator = self.request.principal.title
        project.creation_date = datetime.datetime.now()
        project.modification_date = datetime.datetime.now()
        self.applyData(project, **data)
        id = str(self.context.next_id)
        self.context.next_id = self.context.next_id + 1
        self.context[id] = project
        return self.redirect(self.url(self.context[id]))

Notice that we invoke the debugger at the beginning of the method. Now, start the instance, go to the 'add project' form, fill it up, and submit it. Instead of seeing the new project view, the browser will stay at the 'add form' page and display the waiting for... message. This is because control has been transferred to the console for the debugger to act. Your console will look like this:

> /home/cguardia/work/virtual/grok1/todo/src/todo/app.py(109)add()
-> project = Project()
(Pdb)

The debugger is now active and waiting for input. Notice that the line number where debugging started appears right beside the path of the module where we are located. After the line number comes the name of the method, add(). Below that, the next line of code to be executed is shown. The debugger commands are simple.
To execute the current line, type n:

(Pdb) n
> /home/cguardia/work/virtual/grok1/todo/src/todo/app.py(110)add()
-> project.creator = self.request.principal.title
(Pdb)

You can see the available commands if you type h:

(Pdb) h

Documented commands (type help <topic>):
========================================
EOF    break  condition  disable  help    list  q       step     w
a      bt     cont       down     ignore  n     quit    tbreak   whatis
alias  c      continue   enable   j       next  r       u        where
args   cl     d          exit     jump    p     return  unalias
b      clear  debug      h        l       pp    s       up

Miscellaneous help topics:
==========================
exec  pdb

Undocumented commands:
======================
retval  rv

(Pdb)

The list command is used for getting a bird's eye view of where we are in the code:

(Pdb) list
105
106         @grok.action('Add project')
107         def add(self,**data):
108             import pdb; pdb.set_trace()
109             project = Project()
110  ->         project.creator = self.request.principal.title
111             project.creation_date = datetime.datetime.now()
112             project.modification_date = datetime.datetime.now()
113             self.applyData(project,**data)
114             id = str(self.context.next_id)
115             self.context.next_id = self.context.next_id+1
(Pdb)

As you can see, the current line is shown with an arrow. It's possible to type in the names of objects within the current execution context and find out their values:

(Pdb) project
<todo.app.Project object at 0xa0ef72c>
(Pdb) data
{'kind': 'personal', 'description': u'Nothing', 'title': u'Project about nothing'}
(Pdb)

We can, of course, continue stepping line by line through all of the code in the application, including Grok's own code, checking values as we proceed. When we are through reviewing, we can type c to return control to the browser. At this point, we will see the project view. The Python debugger is very easy to use, and it can be invaluable for finding obscure bugs in your code.
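The section above mentions post-mortem debugging, that is, jumping into the debugger right after an exception occurs. Independently of any Grok-specific helpers, the standard library supports this directly through pdb.post_mortem. The helper below is a sketch, not code from the book; the function name is made up for illustration, but the pdb and sys calls are standard:

import pdb
import sys

def call_with_post_mortem(func, *args, **kwargs):
    """Run func; if it raises, open pdb at the frame where it failed."""
    try:
        return func(*args, **kwargs)
    except Exception:
        # sys.exc_info()[2] is the traceback of the exception just caught
        pdb.post_mortem(sys.exc_info()[2])
        raise

Wrapping a suspect call this way drops you at the exact frame where the exception was raised, with the same (Pdb) prompt and commands shown above.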

Call Control using 3CX

Packt
11 Feb 2010
9 min read
Let's get started! Ring groups Ring groups are designed to direct calls to a group of extensions so that a person can answer the call. An incoming call will ring at several extensions at once, and the one who picks up the phone gets control of that call. At that point, he/she can transfer the call, send it to voicemail, or hang up. Ring groups are my preferred call routing method. Does anyone really like those automated greetings? I don't. We will of course, set those up because they do have some great uses. However, if you like your customers to get a real live voice when they call, you have two choices—either direct the call to an extension or use a ring group and have a few phones ring at once. To create a ring group, we will use the 3CX web interface. There are several ways to do this. From the top toolbar menu, click Add | Ring Group. In the following screenshot, I chose Add | Ring Group: The following screenshot shows another way of adding a ring group using the Ring Groups section in the navigation pane on the left-hand side. Then click on the Add Ring Group button on the toolbar: Once we click Add Ring Group, 3CX will automatically create a Virtual machine number for this ring group as shown in the next screenshot. This helps the system keep track of calls and where they are. This number can be changed to any unused number that you like. As a reseller, I like to keep them the same from client to client. This creates some standardization among all the systems. Now it's time to give the ring group a Name. Here I use MainRingGroup as it lets me know that when a call comes in, it should go to the Main Ring Group. After you create the first one, you can make more such as SalesRingGroup, SupportRingGroup, and so on. We now have three choices for the Ring Strategy: Prioritized Hunt: Starts hunting for a member from the top of the Ring Group Members list and works down until someone picks up the phone or goes to the Destination if no answer section. Ring All: If all the phones in the Ring Group Members section ring at the same time then the first person to pick up gets the call. Paging: This is a paid feature that will open the speakerphone on Ring Group Members. Now you will need to select your Ring Time (Seconds) to determine how long you want the phones to ring before giving up. The default ring time is 20 seconds, which all my clients agree is too long. I'd recommend 10-15 seconds, but remember, if no one picks up the phone, then the caller goes to the next step, such as a Digital Receptionist. If the next step also makes the caller wait another 10-20 seconds, he/she may just hang up. You also need to be sure that you do not exceed the phone company's timeout of diverting calls to their voicemail (which could be turned off) or returning a busy signal. Adding ring group members Ring Group Members are the extensions that you would like the system to call or page in a ring group. If you select the Prioritized Hunt strategy, it will hunt from the top and go down the list. Ring All and Paging will get everyone at once. The listbox on the left will show you a list of available extensions. Select the ones you want and click the Add button. If you are using Prioritized Hunt, you can change the order of the hunt by using the Up and Down buttons. Destination if no answer The last setting as shown in the next screenshot illustrates what to do when no one answers the call. The options are as follows: End Call: Just drop the call, no chance for the caller to talk to someone. 
Connect to Extension: Ring the extension of your choice.
Connect to Queue / Ring Group: This sends the caller to a call queue (discussed later in the Call queues section) or to another ring group. A second ring group could be created for stage two that calls the same group plus additional extensions.
Connect to Digital Receptionist: As a person didn't pick up the call, we can now send it to an automated greeting/menu system.
Voicemail box for Extension: As the caller has already heard phones ringing, you may just want to put him/her straight through to someone's voicemail.
Forward to Outside Number: If you have had all the phones in the building ringing and no one has picked up, then you might want to send the caller to a different phone outside of your PBX system. Just make sure that you enter the correct phone number and any area codes that may be required. This will use another simultaneous call license and another phone line. If you have only one line, then this is not an option you can use.

Digital Receptionist setup

A Digital Receptionist (DR) is not a voicemail box; it's an automated greeting with a menu of choices to choose from. A DR will answer the phone for you if no one is available to answer the phone (directly to an extension or hunt group) or if it is after office hours. You need to set up a DR unless you want all incoming calls to go to someone's voicemail. You will also need it if you want to present the caller with a menu of options. Let's see how to create a DR.

Recording a menu prompt

The first thing you need to do in order to create a DR is record a greeting. There are a couple of ways to do this. However, first let's create the greeting script. In this greeting, you will be defining your phone menu; that is, you will be directing calls to extensions, hunts, agent groups, and the dial by name directory. Following is an example:

Thank you for calling. If you know your party's extension, you may dial it at any time. Or else, please listen to the following options:
For Rob, dial 1
For the sales group, dial 2
For Zachary, dial 4
Solicitors, please dial 8
For a dial by name directory, dial 9

I suggest having it written down. This makes it easier to record and also gives the person setting up the DR in 3CX a copy of the menu map. Now that you know what you want your callers to hear when they call, it's time to get it recorded so that we can import it into 3CX. You have a couple of options for recording the greeting script, and it doesn't matter which one you use or how you obtain the greeting file, as long as the end format is correct. You can hire a professional announcer, put it to music, and obtain the file from him/her, or you can record it yourself using any audio recording software you like, such as Windows Sound Recorder. The file needs to be a .wav or an .mp3 file saved in PCM, 8 kHz, 16-bit, mono format (a script for converting an existing recording to this format is sketched at the end of this section). If you have only Windows Sound Recorder, I'd suggest that you try out Audacity. Audacity is an open source audio program available at http://audacity.sourceforge.net/. Audacity gives you a lot more power, such as controlling volume, combining several audio tracks (a music track to go with the announcer), using special effects, and many other cool audio tools. I'm not an expert in it, but the basics are easy to do. First, hit the Audacity website and download it, then install it using the defaults. Now let's launch Audacity and set it up to use the correct file format, which will save us any issues later. Start by clicking Edit | Preferences.
On the Quality tab, select the Default Sample Rate as 8000 Hz. Then change the Default Sample Format to 16-bit, as shown in the following screenshot:

Now, on the File Formats tab, select WAV (Microsoft 16 bit PCM) from the drop-down list and click OK:

Now that those settings are saved, you can record your greeting without having to change any formats. Now it's time to record your greeting. Click on the red Record button as shown in the following screenshot. It will now use your PC's microphone to record the announcer's voice; when the recording is done, click on the Stop button. Press Play to hear it, and if you don't like it, start over again:

If you like the way your greeting sounds, then you will need to save it. Click File | Export As WAV... or Export As MP3.... Save it to a location that you will remember (for example, c:\3CX prompts is a good place) with a descriptive filename. While you are recording this greeting, you might as well record a few more if you have plans for creating multiple DRs:

Creating the Digital Receptionist

With your greeting script in hand, it's time to create your first DR. In the navigation pane on the left side, click Digital Receptionist, then click Add Digital Receptionist as shown in the following screenshot:

Or on the top menu toolbar, click Add | Digital Receptionist:

Just like your ring group, the DR gets a Virtual extension number by default; feel free to change it or stick with it. Give it a Name (I like to use the same name as the audio greeting filename). Now, click Browse... and then Add. Browse to your c:\3CX prompts directory and select your .wav or .mp3 file as shown in the following screenshot:

Next, we need to create the menu system as shown in the following screenshot. We have lots of options available. You can connect to an extension or ring group, transfer directly to someone's voicemail, end the call (my solicitors' option), or start the call by name feature (discussed in the Call by name setup section). At any time during playback, callers can dial the extension number; they don't have to hear all the options. I usually explain this in the DR recorded greeting.
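If a greeting has already been recorded in another format, it can also be converted to the PCM, 8 kHz, 16-bit, mono WAV that 3CX expects with a short script instead of re-exporting it from Audacity. The following is only a sketch: the filenames are placeholders, it assumes the source is already an uncompressed 16-bit WAV (decode MP3 or other formats to WAV first), and it relies on Python's standard wave and audioop modules (audioop ships with Python 3.12 and earlier):

import wave
import audioop

SRC = 'greeting_original.wav'   # placeholder: an existing 16-bit WAV recording
DST = 'greeting_3cx.wav'        # output: PCM, 8 kHz, 16-bit, mono

with wave.open(SRC, 'rb') as src:
    nchannels = src.getnchannels()
    width = src.getsampwidth()      # bytes per sample; assumed to be 2 (16-bit)
    rate = src.getframerate()
    frames = src.readframes(src.getnframes())

if nchannels == 2:
    # mix the two stereo channels down to one mono channel
    frames = audioop.tomono(frames, width, 0.5, 0.5)

# resample to 8000 Hz
frames, _ = audioop.ratecv(frames, width, 1, rate, 8000, None)

with wave.open(DST, 'wb') as dst:
    dst.setnchannels(1)
    dst.setsampwidth(2)
    dst.setframerate(8000)
    dst.writeframes(frames)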

Implementing AJAX Grid using jQuery data grid plugin jqGrid

Packt
05 Feb 2010
9 min read
In this article by Audra Hendrix, Bogdan Brinzarea and Cristian Darie, authors of AJAX and PHP: Building Modern Web Applications 2nd Edition, we will discuss the usage of an AJAX-enabled data grid plugin, jqGrid. One of the most common ways to render data is in the form of a data grid. Grids are used for a wide range of tasks from displaying address books to controlling inventories and logistics management. Because centralizing data in repositories has multiple advantages for organizations, it wasn't long before a large number of applications were being built to manage data through the Internet and intranet applications by using data grids. But compared to their desktop cousins, online applications using data grids were less than stellar - they felt cumbersome and time consuming, were not always the easiest things to implement (especially when you had to control varying access levels across multiple servers), and from a usability standpoint, time lags during page reloads, sorts, and edits made online data grids a bit of a pain to use, not to mention the resources that all of this consumed. As you are a clever reader, you have undoubtedly surmised that you can use AJAX to update the grid content; we are about to show you how to do it! Your grids can update without refreshing the page, cache data for manipulation on the client (rather than asking the server to do it over and over again), and change their looks with just a few keystrokes! Gone forever are the blinking pages of partial data and sessions that time out just before you finish your edits. Enjoy! In this article, we're going to use a jQuery data grid plugin named jqGrid. jqGrid is freely available for private and commercial use (although your support is appreciated) and can be found at: http://www.trirand.com/blog/. You may have guessed that we'll be using PHP on the server side but jqGrid can be used with any of the several server-side technologies. On the client side, the grid is implemented using JavaScript's jQuery library and JSON. The look and style of the data grid will be controlled via CSS using themes, which make changing the appearance of your grid easy and very fast. Let's start looking at the plugin and how easily your newly acquired AJAX skills enable you to quickly add functionality to any website. Our finished grid will look like the one in Figure 9-1:   Figure 9-1: AJAX Grid using jQuery Let's take a look at the code for the grid and get started building it. Implementing the AJAX data grid The files and folders for this project can be obtained directly from the code download(chap:9) for this article, or can be created by typing them in. We encourage you to use the code download to save time and for accuracy. If you choose to do so, there are just a few steps you need to follow: Copy the grid folder from the code download to your ajax folder. Connect to your ajax database and execute the product.sql script. Update config.php with the correct database username and password. Load http://localhost/ajax/grid to verify the grid works fine - it should look just like Figure 9-1. You can test the editing feature by clicking on a row, making changes, and hitting the Enter key. Figure 9-2 shows a row in editing mode:     Figure 9-2: Editing a row Code overview If you prefer to type the code yourself, you'll find a complete step-by-step exercise a bit later in this article. Before then, though, let's quickly review what our grid is made of. We'll review the code in greater detail at the end of this article. 
The editable grid feature is made up of a few components:

product.sql is the script that creates the grid database
config.php and error_handler.php are our standard helper scripts
grid.php and grid.class.php make up the server-side functionality
index.html contains the client-side part of our project
The scripts folder contains the jQuery scripts that we use in index.html

Figure 9-3: The components of the AJAX grid

The database

Our editable grid displays a fictional database with products. On the server side, we store the data in a table named product, which contains the following fields:

product_id: A unique number automatically generated by auto-increment in the database and used as the Primary Key
name: The actual name of the product
price: The price of the product for sale
on_promotion: A numeric field that we use to store 0/1 (or true/false) values. In the user interface, the value is expressed via a checkbox

The Primary Key is defined as product_id; as this will be unique for each product, it is a logical choice. This field cannot be empty and is set to auto-increment as entries are added to the database:

CREATE TABLE product(
  product_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  name VARCHAR(50) NOT NULL DEFAULT '',
  price DECIMAL(10,2) NOT NULL DEFAULT '0.00',
  on_promotion TINYINT NOT NULL DEFAULT '0',
  PRIMARY KEY (product_id));

The other fields are rather self-explanatory: none of the fields may be left empty, and each field, with the exception of product_id, has been assigned a default value. The tinyint field will be shown as a checkbox in our grid that the user can simply turn on or off. The on_promotion field is set to tinyint, as it only needs to hold a true (1) or false (0) value.

Styles and colors

Leaving the database aside, it's useful to look at the more pertinent and immediate aspects of the application code so as to get a general overview of what's going on here. We mentioned earlier that control of the look of the grid is accomplished through CSS. Looking at the index.html file's head region, we find the following code:

<link rel="stylesheet" type="text/css" href="scripts/themes/coffee/grid.css" title="coffee" media="screen" />
<link rel="stylesheet" type="text/css" media="screen" href="themes/jqModal.css" />

Several themes have been included in the themes folder; coffee is the theme being used in the code above. To change the look of the grid, you need only change the theme name to another theme (green, for example) to modify the color scheme for the entire grid. Creating a custom theme is possible by creating your own images for the grid (following the naming convention of the existing images), collecting them in a folder under the themes folder, and changing this line to reflect your new theme name. There is one exception here though, and it affects which buttons will be used. The buttons' appearance is controlled by imgpath: 'scripts/themes/green/images', found in index.html; you must alter this to reflect the path to the proper theme. Changing the theme name in two different places is error prone, and we should do this carefully. By using jQuery and a nifty trick, we will be able to define the theme as a simple variable. We will be able to dynamically load the CSS file based on the current theme, and imgpath will also be composed dynamically. The nifty trick involves dynamically creating the <link> tag inside head and setting the appropriate href attribute for the chosen theme. Changing the current theme simply consists of changing the theme JavaScript variable.
JqModal.css controls the style of our pop-up or overlay window and is a part of the jqModal plugin. (Its functionality is controlled by the file jqModal.js found in the scripts/js folder.) You can find the plugin and its associated CSS file at: http://dev.iceburg.net/jquery/jqModal/

In addition, in the head region of index.html, there are several script src declarations for the files used to build the grid (and jqModal.js for the overlay):

<script src="scripts/jquery-1.3.2.js" type="text/javascript"></script>
<script src="scripts/jquery.jqGrid.js" type="text/javascript"></script>
<script src="scripts/js/jqModal.js" type="text/javascript"></script>
<script src="scripts/js/jqDnR.js" type="text/javascript"></script>

There are a number of files that are used to make our grid function, and we will talk about these scripts in more detail later. Looking at the body of our index page, we find the declaration of the table that will house our grid and the code for getting the grid on the page and populated with our product data.

<script type="text/javascript">
var lastSelectedId;

$('#list').jqGrid({
  url:'grid.php', //name of our server side script
  datatype: 'json',
  mtype: 'POST', //specifies whether using post or get
  //define columns grid should expect to use (table columns)
  colNames:['ID','Name', 'Price', 'Promotion'],
  //define data of each column and is data editable?
  colModel:[
    {name:'product_id',index:'product_id', width:55, editable:false},
    //text data that is editable gets defined
    {name:'name',index:'name', width:100, editable:true,
      edittype:'text', editoptions:{size:30,maxlength:50}},
    //editable currency
    {name:'price',index:'price', width:80, align:'right',
      formatter:'currency', editable:true},
    // T/F checkbox for on_promotion
    {name:'on_promotion',index:'on_promotion', width:80,
      formatter:'checkbox', editable:true, edittype:'checkbox'}
  ],
  //define how pages are displayed and paged
  rowNum:10,
  rowList:[5,10,20,30],
  imgpath: 'scripts/themes/green/images',
  pager: $('#pager'),
  sortname: 'product_id', //initially sorted on product_id
  viewrecords: true,
  sortorder: "desc",
  caption:"JSON Example",
  width:600,
  height:250,
  //what will we display based on if row is selected
  onSelectRow: function(id){
    if(id && id!==lastSelectedId){
      $('#list').restoreRow(lastSelectedId);
      $('#list').editRow(id,true,null,onSaveSuccess);
      lastSelectedId=id;
    }
  },
  //what to call for saving edits
  editurl:'grid.php?action=save'
});

//indicate if/when save was successful
function onSaveSuccess(xhr){
  response = xhr.responseText;
  if(response == 1)
    return true;
  return false;
}
</script>

Advanced Blog Management with Apache Roller 4.0: Part 1

Packt
03 Feb 2010
5 min read
So let's get on with it. Managing group blogs Suddenly, your boss bursts into your office and shouts: "Well, let's give Roller a try for our company's blog!" And now, you have to enable group blogs in your Roller installation. Time for action – creating another user The first thing you need to do in order to enable group blogging is create another user, as shown in the following exercise: Open your web browser and type your Roller's dynamic hostname in the Address bar (for example, mine is http://alromero.no-ip.org). Now click on the Login link on your weblog's main page: The Welcome to Roller page will appear. Instead of logging in, click on the Register link in order to create a new user: The New User Registration screen will show up next. Fill in the fields for your new user, as shown in the following screenshot: Click on Register User when finished. If all goes well, you'll be taken back to the Welcome to Roller screen, and the following success message will appear: Select the Click here link to continue. Type your new Username and Password, and click on the Login button. The Main Menu page will appear: Click on the Create new weblog link, under the Actions panel. Roller will take you to the Create Weblog page. Fill in the required fields to create your new weblog. Use the following data for the Name, Description, and Handle fields: The Email Address field will already contain the e-mail address you used when creating your new user. Leave the default values for Locale, Timezone, and Theme, and click on the Create Weblog button to continue. The following page will appear indicating that your weblog was successfully created: Now click on the New Entry link in order to create the following new entry in your weblog: Scroll down the page and click on the Post to Weblog button to post your entry. What just happened? Well, now there's another user in your Roller server, how about that? Your boss is going to be proud of you and very happy, because your company will have a multiuser blog! The next step is to invite other people to create user accounts and weblogs in the Roller blog server. If you're using Roller in your office, just start spreading the word to your colleagues. Or if you're experimenting with Roller in your home, you can invite some friends to blog with you, create a family group blog, and so on. Have a go hero – inviting members to write in your weblog Now that you've learned how other people can register and get a user account in Roller, it would be a good idea to start exploring the Preferences: Members page, where you can invite other Roller users to collaborate in your weblog by posting entries. Roller has three user levels: Administrator: Can create/edit weblog entries and publish them in your weblog. An administrator can also change the Roller theme and templates, and manage weblog users. Author: Can create/edit weblog entries and upload files, but cannot change themes or templates, and cannot manage users. Limited: Can create/edit entries and save them as drafts, but cannot publish them. Go on and create several test user accounts, and try out the three Roller user levels by inviting the test users to collaborate in your weblog. To invite a user, use the Invite new member link under the Actions panel in the Preferences: Members page. Enabling a front page blog Up until now, you've been using your main weblog as the front page for your Roller blog server. 
Now that you've enabled group blogging, each user can promote his/her weblog(s) individually, or you can create a community front page to show recent posts from all of your user's weblogs. The next exercise will show you how to create and use a front page blog to show posts from all the other weblogs in your Roller blog server. Time for action – enabling a front page blog In this exercise, we're going to create a new weblog to serve as the front page of your entire Roller weblog server. The front page blog will show a list of recent entries from all your other weblogs, and from all the other users' weblogs in your Roller blog server. Log into Roller (in case you're not already logged in) with your administrator account, go to the Main Menu page, and then click on the Create new weblog link under the Actions panel: Type My Roller Community in the Name field, The best Roller blog community in the Description field, and frontpage in the Handle field: Scroll down the page until you locate the Theme field, select the Frontpage theme, and click on the Create Weblog button: The following page will appear, indicating that your frontpage weblog was created successfully: Now click on the Server administration link located in the Actions panel. The Roller Configuration page will show up. Scroll down until you locate the Handle of weblog to serve as frontpage blog field, and replace its contents with frontpage. Then click on the Enable aggregated site-wide frontpage option to enable it: Scroll down the page until you locate the Save button and click on it to save your changes. Now click on the Front Page link in Roller's menu bar:

Advanced Blog Management with Apache Roller 4.0: Part 2

Packt
03 Feb 2010
4 min read
Enabling weblog pings

Now that you have a Technorati account, let's enable your Roller weblog so that it can ping Technorati automatically each time you post a new entry or edit a previously posted entry.

Time for action – enabling automatic pings in your weblog

This exercise will show you how to enable automatic pinging in your weblog, so that every time you post a new entry or update an entry you posted before, Technorati will receive a ping and will update your blog status:

1. Go to the Front Page: Weblog Settings tab in your web browser, click on the Preferences tab to see your weblog's configuration page, and click on the Pings link.
2. The Configure Automatic Weblog Pings page will appear next. Scroll down the page until you locate the Technorati row under the Common Ping Targets section.
3. Click on Technorati's Enable link to enable automatic pinging for your weblog, so that it can send automatic pings to Technorati.
4. Click on the Send Ping Now button to test whether everything works correctly. Roller will show a success message.
5. Now you just have to wait until Technorati grabs your blog's most recent information.

What just happened?

Now Technorati will keep your weblog information updated every time you post a new entry in your weblog. Once you register with an aggregator, it's very easy to configure automatic pinging in Roller, as you saw in the previous exercise. Now all you need to do is configure pings to the other aggregators and blog search engines, so that people from everywhere can find your weblog!

Have a go hero – configure more ping targets

Now that you have learned how to configure automatic pings to Technorati for your Roller weblog, check out the other ping targets available in the Common Ping Targets list. Go on and enable all the ping targets that you can in order to promote your weblog in all the available blog search engines and aggregators. You can also register with Digg, StumbleUpon, and other popular aggregators and blog search engines, and add new ping targets for them by clicking on the Custom Ping Targets link on the Configure Automatic Weblog Pings page. So what are you waiting for? Go and promote your new Roller weblog!

Google webmaster tools

Now that you have a cool weblog, it would be great if it showed up in Google every time someone searches for a subject related to the things you're writing about, don't you think? That's why Google created the webmaster tools: a great resource to help you find out how your weblog is interacting with the Google bot. With these tools you can get detailed information about broken links, popular keywords, and basically all the information you need to have a successful weblog!

Time for action – enabling Google webmaster tools

This exercise will show you how to configure Google webmaster tools for your Roller weblog, so you can start receiving important information about visitors and about how your weblog interacts with Google:

1. Open your web browser and go to https://www.google.com/webmasters/tools to reach the Google webmaster tools website.
2. If you created a Gmail account when installing Roller, you can use it to sign in to Google webmaster tools, or you can create a new Gmail account in case you don't have one already. Click on the Sign In button when ready. The Google webmaster tools Home page will appear next.
3. Click on the Add a Site button at the bottom to add your Roller weblog. Enter your weblog's URL in the pop-up box and click on Continue.
4. Google will show you a meta tag that you need to copy and paste into your Roller weblog. Select the meta tag, right-click on it, and click on Copy.
5. Now open a new tab in your web browser, log into your Roller weblog, and go to the Design tab. The Weblog Theme page will appear. Click on the Custom Theme option and then on the Update Theme button.
6. Roller will show the Successfully set theme to – custom message. Click on the Templates link and then select the Weblog template.
7. Scroll down the page until you locate the </head> HTML tag and paste the Google webmaster tools meta tag right before it.
8. Scroll down the page until you locate the Save button and click on it. Roller will show the Template updated successfully message.
9. Return to the Google webmaster tools tab in your web browser and click on the Verify button to verify your weblog. If all goes well, Google will verify your weblog and take you to the Dashboard.
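Before leaving the topic of pings, it can help to see what "sending a ping" actually means on the wire. Each ping is a tiny XML-RPC call that Roller issues for you; you never need to run anything yourself. The following PHP sketch is only an illustration, not Roller's own code: the Technorati endpoint URL, blog name, and blog URL are assumptions/placeholders, and it assumes the PHP xmlrpc and curl extensions are available.

<?php
// Illustration only: the kind of weblogUpdates.ping call an aggregator expects.
// Endpoint and blog details below are placeholders; use the target listed on
// Roller's Common Ping Targets page if you want to experiment.
$pingUrl  = 'http://rpc.technorati.com/rpc/ping';            // assumed Technorati endpoint
$blogName = 'My Roller Weblog';                              // placeholder blog name
$blogUrl  = 'http://alromero.no-ip.org/roller/myblog/';      // placeholder blog URL

// weblogUpdates.ping(weblogName, weblogUrl)
$request = xmlrpc_encode_request('weblogUpdates.ping', array($blogName, $blogUrl));

$ch = curl_init($pingUrl);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $request);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: text/xml'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = xmlrpc_decode(curl_exec($ch));
curl_close($ch);

// Aggregators typically answer with a struct containing flerror and message
if (is_array($response)) {
    echo empty($response['flerror']) ? 'Ping accepted: ' : 'Ping rejected: ';
    echo isset($response['message']) ? $response['message'] : '';
}
?>

Once you click Enable for a ping target, Roller performs the equivalent of this call automatically every time you publish or update an entry, which is why the Send Ping Now button is only needed for testing.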

Advanced Blog Management with Apache Roller 4.0: Part 3

Packt
03 Feb 2010
4 min read
Weblog clients

There are times when logging into your Roller weblog to post a new entry can be a tedious process, especially when you have two or more weblogs about different subjects. Let's say that you have to write in your company's blog, and you also write in your personal Roller blog. You could open two web browser windows and log into each blog separately, but it is more convenient to use a weblog client, as the next exercise shows.

Time for action – using Google Docs as your weblog client

In this exercise, you'll learn to use Google Docs as your weblog client to post entries in your Roller weblogs without having to log in:

1. Open your web browser, go to http://docs.google.com, and log in with your username and password (if you don't have a Google account, this is your chance to get one!). Then click on the New button and select the Document option.
2. Your browser will open a new tab for the new Google Docs document. Type This is my first post to my Roller weblog from Google Docs! in the word processor writing area.
3. Now click on the File menu and select the Save option to save your draft in Google Docs.
4. Google Docs assigns the title for your document automatically, based on its content. To change the title of your post, click on it.
5. Type Posting to Roller from Google Docs in the dialog that shows up next, and click on OK to continue. Google Docs will show the new title for your post.
6. Now click on the Share button and select the Publish as web page option.
7. The Publish this document dialog will appear. Click on the change your blog site settings link to enter your Roller weblog information.
8. The Blog Site Settings dialog will appear next. Choose the My own server / custom option and select MetaWeblog API in the API field. In the URL field you need to type the complete path to Roller's web services, which is http://alromero.no-ip.org/roller/roller-services/xmlrpc in my case. You just need to replace the alromero.no-ip.org part with your own dynamic hostname. Then type your Roller username, password, and weblog name, and select the Include the document title when posting option.
9. Click on the OK button to save your weblog settings, and then click on the Post to blog button in the Publish this document dialog.
10. A confirmation dialog will pop up, asking whether you want to post the document to your blog now. Click on OK to continue. Google Docs will show the This document has been published to your blog success message.
11. Click on the Save & Close button at the upper-right part of the screen to save your document and return to the Google Docs main page, then click on Sign out to exit Google Docs.
12. Now go to your Roller weblog's main page to see the post you published from Google Docs.

What just happened?

See how easy it is to use a weblog client, so that you don't need to log into your Roller weblog to post a new entry? If you want to post to a different Roller weblog, you just need to change your username, blog ID, or URL. There are several other weblog clients available that you can use, depending on your operating system, but all weblog clients work in a similar way.

Have a go hero – try out other weblog clients

Go and try out some other weblog clients to see which one works best for you. On Windows, you can use Windows Live Writer (http://download.live.com/writer) and w.bloggar (http://bloggar.com/). On Linux, you can try out BloGTK (https://launchpad.net/blogtk/).
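If you would rather script your posts than use a graphical client, the same MetaWeblog API that Google Docs talks to can be called directly. Below is a rough PHP sketch, not a definitive implementation: it assumes the PHP xmlrpc and curl extensions are available, the host, weblog handle, and credentials are placeholders you must replace, and it assumes Roller accepts the weblog handle as the blog id.

<?php
// Rough sketch: post an entry to Roller through the MetaWeblog API, the same
// API used by Google Docs in the exercise above. All values are placeholders.
$endpoint = 'http://alromero.no-ip.org/roller/roller-services/xmlrpc';
$blogId   = 'myblog';   // assumed to be your Roller weblog handle
$username = 'romero';   // your Roller username
$password = 'secret';   // your Roller password

$entry = array(
    'title'       => 'Posted over XML-RPC',
    'description' => 'This entry was created by a small PHP script instead of a weblog client.'
);

// metaWeblog.newPost(blogid, username, password, struct, publish)
$request = xmlrpc_encode_request('metaWeblog.newPost',
    array($blogId, $username, $password, $entry, true));

$ch = curl_init($endpoint);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $request);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: text/xml'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

// On success the server returns the id of the new entry
$result = xmlrpc_decode($response);
if (is_array($result) && xmlrpc_is_fault($result)) {
    echo 'Error: ' . $result['faultString'];
} else {
    echo 'New entry created with id: ' . $result;
}
?>

When you click Post to blog in Google Docs, a request of this general shape is what travels to Roller's XML-RPC endpoint; any other MetaWeblog-capable client works the same way.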


Rendering Images in TYPO3 4.3: Part 1

Packt
03 Feb 2010
4 min read
Rendering images using content elements

Content elements offer a variety of ways for editors to include images; we will examine them here. A typical selection menu presents the editor with several element types. A great way to start is to assemble pages from the Regular text element and the Text with image element.

Getting Ready

Make sure Content (default) is selected in the Include static field, and the CSS Styled Content template is included in the Include static (from extensions) field of the template record of the current page or of any page above it in the hierarchy (page tree). To verify, go to the Template module, select the appropriate page, and click edit the whole template record.

How to do it...

1. Create the Text with image element.
2. Under the Text tab, enter the text you want to appear on the page. You can use the RTE (Rich Text Editor) to apply formatting, or disable it. We will cover the RTE in more detail later in this article.
3. Under the Media tab, select your image settings. If you want to upload the image, use the first field. If you want to use an existing image, use the second field.
4. Under Position, you can select where the image will appear in relation to the text.

How it works...

When the page is rendered in the frontend, the images are placed next to the text you entered, in the position that you specify. The specific look will depend on the template that you are using.

There's more

An alternative to Text with image is the Images only content element. This element gives you similar options, except that it limits them to a display of images only. The rest of the options are the same. You can also resize the image, add a caption and alt tags for accessibility and search engine optimization, and change the default processing options. See the official TYPO3 documentation (http://typo3.org/documentation/document-library/) for details of how these fields work.

See also

Render video and audio using content elements and the rgmediaimages extension.

Embedding images in RTE

The Rich Text Editor is great for text entry. By default, TYPO3 ships with htmlArea RTE as a system extension; other editors are available and can be installed if needed. Images can be embedded and manipulated within the RTE. This gives content editors one place to arrange content the way they want it to appear on the frontend of the site. In this recipe, we will see how this can be accomplished. The instructions apply to all forms that have RTE-enabled fields, but we will use the text content element for a simple demonstration.

In the Extension Manager, click on the htmlArea RTE extension to bring up its options. Make sure that the Enable images in the RTE [enableImages] setting is enabled. If you have a recent version of DAM installed (at least 1.1.0), make sure that the Enable the DAM media browser [enableDAMBrowser] setting is unchecked; this setting is deprecated and is only there for installations using older versions of DAM.

How to do it...

1. Create a new Regular text element content element.
2. In the RTE, click on the icon to insert an image.
3. Choose a file, and click on the icon to insert it into the Text area. You should see the image as it will appear on the frontend of the site.
4. Save and preview.

How it works...

When you insert an image through the RTE, the image is copied to the uploads folder and included from there. The new file is resampled and sized down, so it usually occupies less space and downloads faster than the original file. TYPO3 will automatically determine if the original file has changed and update the file used in the RTE, but you should still be aware of this behaviour. Furthermore, if you have DAM installed and you have included an image from DAM, you can see the updated record usage: if you view the record information, you should see the content element where the image is used.
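The "resampled and sized down" step is easy to picture with a few lines of plain PHP GD. This is not TYPO3's actual image-processing code (TYPO3 typically delegates this work to its configured image processor, such as GD or ImageMagick, through its own API); it is only a hand-rolled illustration of what scaling a copy of an image down to a maximum width involves, with the file names and the 300-pixel limit chosen arbitrarily.

<?php
// Illustration only: shrink a JPEG to a maximum width, roughly what happens
// to an RTE image copy. Paths and the 300px limit are arbitrary examples.
$source   = 'fileadmin/original.jpg';      // hypothetical original image
$target   = 'uploads/original_small.jpg';  // hypothetical resampled copy
$maxWidth = 300;

$image  = imagecreatefromjpeg($source);
$width  = imagesx($image);
$height = imagesy($image);

if ($width > $maxWidth) {
    // Keep the aspect ratio while shrinking to the maximum width
    $newWidth  = $maxWidth;
    $newHeight = (int) round($height * ($maxWidth / $width));

    $resized = imagecreatetruecolor($newWidth, $newHeight);
    imagecopyresampled($resized, $image, 0, 0, 0, 0,
                       $newWidth, $newHeight, $width, $height);

    imagejpeg($resized, $target, 80);   // 80 = JPEG quality
    imagedestroy($resized);
} else {
    // Images already small enough are simply copied as they are
    copy($source, $target);
}
imagedestroy($image);
?>

Editors never call anything like this themselves; the point is simply that the file served to visitors is a smaller derivative, so replacing the original on disk does not immediately change what the RTE shows until TYPO3 regenerates the copy.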