
How-To Tutorials - Web Development

1802 Articles

Integrating Twitter and YouTube with MediaWiki

Packt
23 Oct 2009
5 min read
Twitter in MediaWiki

Twitter (http://www.twitter.com) is a micro-blogging service that allows users to tell the world (or at least the small portion of it on Twitter) what they are doing, in messages of 140 characters or less. It is possible to embed these messages in external websites, which is what we will be doing for JazzMeet. We can use the updates to inform our wiki's visitors of the latest JazzMeet being held across the world, and they can send a response to the JazzMeet Twitter account.

Shorter Links

Because Twitter only allows posts of up to 140 characters, many Twitter users make use of URL-shortening services such as TinyURL (http://tinyurl.com) and notlong (http://notlong.com) to turn long web addresses into short, more manageable URLs. TinyURL assigns a random URL such as http://tinyurl.com/3ut9p4, while notlong allows you to pick a free subdomain to redirect to your chosen address, such as http://asgkasdgadg.notlong.com. Twitter automatically shortens web addresses in your posts.

Creating a Twitter Account

Creating a Twitter account is quite easy. Just fill in the username, password, and email address fields, and submit the registration form once you have read and accepted the terms and conditions. If your chosen username is free, your account is created instantly. Once your account has been created, you can change settings such as your display name and your profile's background image to help blur the distinction between your website and your Twitter profile. Colors can be specified as "hex" values under the Design tab of your Twitter account's settings section. The following color codes change the link colors to JazzMeet's palette of browns and reds:
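The original list of color values did not survive extraction; illustrative settings consistent with the palette used in the CSS later in this article (the exact values are an assumption) might be:

```
Background: #FFFFFF
Text:       #38230C
Links:      #8D1425
Sidebar:    #BEB798
```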
As you can see in the screenshot, JazzMeet's Twitter profile now looks a little more like the JazzMeet wiki. By doing this, visitors catching up with JazzMeet's events on Twitter will not be confused by a sudden change in color scheme.

Embedding Twitter Feeds in MediaWiki

Twitter provides a few ways to embed your latest posts into your own website(s); simply log in and go to http://www.twitter.com/badges:

- Flash: With this option you can show just your posts, or your posts and your friends' most recent posts on Twitter.
- HTML and JavaScript: You can configure the code to show between 1 and 20 of your most recent Twitter posts.

As JazzMeet isn't really the sort of wiki whose visitors would expect to find Flash on it, we will be using the HTML and JavaScript version. You are provided with the necessary code to embed in your website or wiki. We will add it to the JazzMeet skin template, as we want it to be displayed on every page of our wiki, just beneath our sponsor links. Refer to the following code:

```html
<div id="twitter_div">
  <h2 class="twitter-title">Twitter Updates</h2>
  <ul id="twitter_update_list"></ul>
</div>
<script type="text/javascript" src="http://twitter.com/javascripts/blogger.js"></script>
<script type="text/javascript" src="http://twitter.com/statuses/user_timeline/jazzmeet.json?callback=twitterCallback2&count=5"></script>
```

The JavaScript given at the bottom of the code can be moved just above the </body> tag of your wiki's skin template. This lets your wiki load its other important elements before the Twitter status. You will need to replace "jazzmeet" in the code with your own Twitter username, otherwise you will receive JazzMeet's Twitter updates rather than your own.

It is important to leave the unordered list with the ID twitter_update_list as it is, as this is the element the JavaScript code looks for when inserting a list item containing each of your Twitter messages into the page.

Styling Twitter's HTML

We need to style the Twitter HTML by adding some CSS to change the colors and style of the Twitter status code:

```css
#twitter_div {
  background: #FFF;
  border: 3px #BEB798 solid;
  color: #BEB798;
  margin: 0;
  padding: 5px;
  width: 165px;
}
#twitter_div a {
  color: #8D1425 !important;
}
ul#twitter_update_list {
  list-style-type: none;
  margin: 0;
  padding: 0;
}
#twitter_update_list li {
  color: #38230C;
  display: block;
}
h2.twitter-title {
  color: #BEB798;
  font-size: 100%;
}
```

There are only a few CSS IDs and classes that need to be taken care of:

- #twitter_div is the element that contains the Twitter feed.
- #twitter_update_list is the ID applied to the unordered list. Styling this affects how your Twitter feeds are displayed.
- .twitter-title is the class applied to the Twitter feed's heading (which you can remove, if necessary).

Our JazzMeet skin now has JazzMeet's Twitter feed embedded in the right-hand column, allowing visitors to keep up to date with the latest JazzMeet news.

Inserting Twitter as Page Content

MediaWiki does not allow JavaScript to be embedded in a page via the "edit" function, so you won't be able to insert a Twitter status feed directly in a page unless it is in the template itself. Even if you inserted the relevant JavaScript links into your MediaWiki skin template, they would be relevant only for one Twitter profile ("jazzmeet", in our case).
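One way around this restriction, not covered in the original article, is a small extension that registers a custom parser tag and emits the embed markup itself. A minimal sketch, assuming MediaWiki's standard parser tag-hook API; the file name, function names, and the <twitterfeed> tag are all hypothetical:

```php
<?php
// TwitterFeedTag.php — hypothetical extension file; include it from
// LocalSettings.php with: require_once("extensions/TwitterFeedTag.php");

$wgHooks['ParserFirstCallInit'][] = 'efTwitterFeedInit';

function efTwitterFeedInit( $parser ) {
    // Register a <twitterfeed user="..." /> tag for wiki editors.
    $parser->setHook( 'twitterfeed', 'efTwitterFeedRender' );
    return true;
}

function efTwitterFeedRender( $input, $args, $parser ) {
    // Escape the username so editors cannot inject arbitrary markup.
    $user = isset( $args['user'] ) ? htmlspecialchars( $args['user'] ) : 'jazzmeet';

    return '<div id="twitter_div"><ul id="twitter_update_list"></ul></div>'
        . '<script type="text/javascript" src="http://twitter.com/javascripts/blogger.js"></script>'
        . '<script type="text/javascript" src="http://twitter.com/statuses/user_timeline/'
        . $user . '.json?callback=twitterCallback2&amp;count=5"></script>';
}
```

A wiki editor could then embed a feed for any account in page content by writing <twitterfeed user="jazzmeet" />, without touching the skin template.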


Debugging Java Programs using JDB

Packt
23 Jun 2010
6 min read
In this article by Nataraju Neeluru, we will learn how to debug a Java program using a simple command-line debugging tool called JDB. JDB is one of several debuggers available for debugging Java programs, and it comes as part of Sun's JDK. JDB is used by a lot of people for debugging purposes, for the main reason that it is very simple to use, lightweight, and, being a command-line tool, very fast. Those who are familiar with debugging C programs with gdb will be more inclined to use JDB for debugging Java programs. We will cover most of the commonly used and needed JDB commands for debugging Java programs. Not much is assumed of the reader, other than some familiarity with Java programming and general debugging concepts such as breakpoints, stepping through code, and examining variables. Beginners may learn quite a few things here, and experts may revise their knowledge.

Introduction

JDB is a debugging tool that comes along with Sun's JDK. The executable exists in JAVA_HOME/bin as 'jdb' on Linux and 'jdb.exe' on Windows (where JAVA_HOME is the root directory of the JDK installation). A few notes about the tools and notation used in this article:

- We will use 'jdb' on Linux for illustration throughout this article, though the JDB command set is more or less the same on all platforms.
- All the tools (like jdb, java) used in this article are from JDK 5, though most of the material presented here holds true and works in other versions also.
- '$' is the command prompt on the Linux machine on which the illustration is carried out.
- We will use 'JDB' to denote the tool in general, and 'jdb' to denote the particular executable in the JDK on Linux.
- JDB commands are explained in a particular sequence. If that sequence is changed, the output obtained may differ from what is shown in this article.

Throughout this article, we will use the following simple Java program for debugging:

```java
public class A {
    private int x;
    private int y;

    public A(int a, int b) {
        x = a;
        y = b;
    }

    public static void main(String[] args) {
        System.out.println("Hi, I'm main.. and I'm going to call f1");
        f1();
        f2(3, 4);
        f3(4, 5);
        f4();
        f5();
    }

    public static void f1() {
        System.out.println("I'm f1...");
        System.out.println("I'm still f1...");
        System.out.println("I'm still f1...");
    }

    public static int f2(int a, int b) {
        return a + b;
    }

    public static A f3(int a, int b) {
        A obj = new A(a, b);
        obj.reset();
        return obj;
    }

    public static void f4() {
        System.out.println("I'm f4 ");
    }

    public static void f5() {
        A a = new A(5, 6);
        synchronized (a) {
            System.out.println("I'm f5, accessing a's fields " + a.x + " " + a.y);
        }
    }

    private void reset() {
        x = 0;
        y = 0;
    }
}
```

Let us put this code in a file called A.java in the current working directory, compile it using 'javac -g A.java' (note the '-g' option, which makes the Java compiler generate some extra debugging information in the class file), and even run it once using 'java A' to see what the output is. Apparently, there is no bug in this program to debug, but we will see, using JDB, how control flows through it. Recall that this program, being a Java program, runs on a Java Virtual Machine (JVM). Before we actually debug the Java program, we need to see that a connection is established between JDB and the JVM on which the Java program is running. Depending on the way JDB connects to the JVM, there are a few ways in which we can use JDB.
No matter how the connection is established, once JDB is connected to the JVM we can use the same set of commands for debugging. The JVM on which the Java program to be debugged is running is called the 'debuggee' here.

Establishing the connection between JDB and the JVM

In this section, we will see a few ways of establishing the connection between JDB and the JVM.

JDB launching the JVM: In this option, we do not see two separate things as the debugger (JDB) and the debuggee (JVM); rather, we just invoke JDB giving the initial class (that is, the class that has the main() method) as an argument, and internally JDB 'launches' the JVM:

```
$ jdb A
Initializing jdb ...
```

At this point, the JVM is not yet started. We need to give the 'run' command at the JDB prompt for the JVM to be started.

JDB connecting to a running JVM: In this option, first start the JVM by using a command of the form:

```
$ java -Xdebug -Xrunjdwp:transport=dt_socket,server=y,address=6000 A
Listening for transport dt_socket at address: 6000
```

This says that the JVM is listening at port 6000 for a connection. Now, start JDB (in another terminal) as:

```
$ jdb -attach 6000
Set uncaught java.lang.Throwable
Set deferred uncaught java.lang.Throwable
Initializing jdb ...
>
VM Started: No frames on the current call stack

main[1]
```

At this point, JDB is connected to the JVM. It is possible to do remote debugging with JDB. If the JVM is running on machine M1 and we want to run JDB on M2, then we can start JDB on M2 as:

```
$ jdb -attach M1:6000
```

JDB listening for a JVM to connect: In this option, JDB is started first, with a command of the form:

```
$ jdb -listen 6000
Listening at address: adc2180852:6000
```

This makes JDB listen at port 6000 for a connection from the JVM. Now, start the JVM (from another terminal) as:

```
$ java -Xdebug -Xrunjdwp:transport=dt_socket,server=n,address=6000 A
```

Once the above command is run, we see the following in the JDB terminal:

```
Set uncaught java.lang.Throwable
Set deferred uncaught java.lang.Throwable
Initializing jdb ...
>
VM Started: No frames on the current call stack

main[1]
```

At this point, JDB has accepted the connection from the JVM. Here also, we can make a JVM running on machine M1 connect to a remote JDB running on machine M2, by starting the JVM as:

```
$ java -Xdebug -Xrunjdwp:transport=dt_socket,server=n,address=M2:6000 A
```
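The excerpt ends here, before any debugging commands are issued. As an illustrative taste of what a first session looks like using the 'jdb A' launch option above (output abridged; exact messages and line numbers depend on JDK version and source formatting), the core commands stop in, run, where, next, and cont would be used like this:

```
$ jdb A
Initializing jdb ...
> stop in A.main
Deferred breakpoint A.main.
It will be set after the class is loaded.
> run
run A
...
Breakpoint hit: "thread=main", A.main(), line=11 bci=0
11            System.out.println("Hi, I'm main.. and I'm going to call f1");

main[1] where
  [1] A.main (A.java:11)
main[1] next
...
main[1] cont
```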


Working with JSON in PHP jQuery

Packt
20 Dec 2010
5 min read
In this article by Vijay Joshi, author of PHP jQuery Cookbook, we will cover:

- Creating JSON in PHP
- Reading JSON in PHP
- Catching JSON parsing errors
- Accessing data from JSON in jQuery

Introduction

Recently, JSON (JavaScript Object Notation) has become a very popular data interchange format, with more and more developers opting for it over XML. Even many web services nowadays provide JSON as the default output format. JSON is a text format that is programming-language independent and is a native data form of JavaScript. It is lighter and faster than XML because it needs less markup. Because JSON is the native data form of JavaScript, it can be used on the client side in an AJAX application more easily than XML.

A JSON object starts with { and ends with }. According to the JSON specification, the following types are allowed in JSON:

- Object: An object is a collection of key-value pairs enclosed between { and } and separated by commas. The key and the value themselves are separated using a colon (:). Think of objects as associative arrays or hash tables. Keys are simple strings, and values can be an array, string, number, boolean, or null.
- Array: Like in other languages, an array is an ordered collection of values. For representing an array, values are comma separated and enclosed between [ and ].
- String: A string must be enclosed in double quotes.
- Number: The last type is a number.

A JSON document can be as simple as:

```json
{ "name": "Superman", "address": "anywhere" }
```

An example using an array is as follows:

```json
{ "name": "Superman", "phoneNumbers": ["8010367150", "9898989898", "1234567890"] }
```

A more complex example that demonstrates the use of objects, arrays, and values is as follows:

```json
{
  "people": [
    { "name": "Vijay Joshi", "age": 28, "isAdult": true },
    { "name": "Charles Simms", "age": 13, "isAdult": false }
  ]
}
```

An important point to note: the following is a valid JavaScript object, but not valid JSON:

```
{ 'name': 'Superman', 'address': 'anywhere' }
```

JSON requires that names and values be enclosed in double quotes; single quotes are not allowed. Another important thing is to remember the proper charset of data: JSON expects the data to be UTF-8, whereas PHP adheres to ISO-8859-1 encoding by default. Also note that JSON is not JavaScript; it is basically a specification, or a subset, derived from JavaScript.

Now that we are familiar with JSON, let us proceed to the recipes, where we will learn how we can use JSON along with PHP and jQuery. Create a new folder and name it Chapter4. We will put all the recipes of this article together in this folder. Also put the jquery.js file inside this folder. To be able to use PHP's built-in JSON functions, you should have PHP version 5.2 or higher installed.

Creating JSON in PHP

This recipe will explain how JSON can be created from PHP arrays and objects.

Getting ready

Create a new folder inside the Chapter4 directory and name it Recipe1.

How to do it...

Create a file and save it by the name index.php in the Recipe1 folder. Write the PHP code that creates a JSON string from an array.
```php
<?php
$travelDetails = array(
    'origin' => 'Delhi',
    'destination' => 'London',
    'passengers' => array(
        array('name' => 'Mr. Perry Mason', 'type' => 'Adult', 'age' => 28),
        array('name' => 'Miss Irene Adler', 'type' => 'Adult', 'age' => 28)
    ),
    'travelDate' => '17-Dec-2010'
);

echo json_encode($travelDetails);
?>
```

Run the file in your browser. It will show a JSON string as output on screen. After indenting, the result will look like the following:

```json
{
  "origin": "Delhi",
  "destination": "London",
  "passengers": [
    { "name": "Mr. Perry Mason", "type": "Adult", "age": 28 },
    { "name": "Miss Irene Adler", "type": "Adult", "age": 28 }
  ],
  "travelDate": "17-Dec-2010"
}
```

How it works...

PHP provides the function json_encode() to create JSON strings from objects and arrays. This function accepts two parameters. The first is the value to be encoded; the second parameter includes options that control how certain special characters are encoded, and is optional. In the previous code, we created a somewhat complex associative array that contains travel information for two passengers. Passing this array to json_encode() creates a JSON string.

There's more...

Predefined constants

Any of the following constants can be passed as the second parameter to json_encode():

- JSON_HEX_TAG: Converts < and > to \u003C and \u003E
- JSON_HEX_AMP: Converts & to \u0026
- JSON_HEX_APOS: Converts ' to \u0027
- JSON_HEX_QUOT: Converts " to \u0022
- JSON_FORCE_OBJECT: Forces the return value in the JSON string to be an object instead of an array

These constants require PHP version 5.3 or higher.
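The excerpt ends before the "Reading JSON in PHP" recipe, but as a brief sketch of that direction: PHP's built-in json_decode() (PHP 5.2+) reverses json_encode(), and json_last_error() (PHP 5.3+) catches parsing problems. The sample string below is illustrative:

```php
<?php
// Decode a JSON string back into PHP data.
$json = '{"origin":"Delhi","destination":"London","travelDate":"17-Dec-2010"}';

// Passing true as the second argument returns associative arrays
// instead of stdClass objects.
$details = json_decode($json, true);

if (json_last_error() !== JSON_ERROR_NONE) {
    exit('JSON parsing failed');
}

echo $details['origin'] . ' to ' . $details['destination']; // Delhi to London
?>
```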


WebSphere MQ Sample Programs

Packt
25 Aug 2010
5 min read
WebSphere MQ sample programs—server

There is a whole set of sample programs available to the WebSphere MQ administrator. We are interested in only a couple of them: the program to put a message onto a queue and the program to retrieve a message from a queue. The names of these sample programs depend on whether we are running WebSphere MQ as a server or as a client.

There is a server sample program called amqsput, which puts messages onto a queue using the MQPUT call. This sample program comes as part of the WebSphere MQ samples installation package, not as part of the base package. The maximum length of message that we can put onto a queue using the amqsput sample program is 100 bytes. There is a corresponding server sample program to retrieve messages from a queue, called amqsget.

Use the sample programs only when testing that the queues are set up correctly—do NOT use the programs once Q Capture and Q Apply have been started. This is because Q replication uses dense numbering between Q Capture and Q Apply, and if we insert or retrieve a message, then the dense numbering will not be maintained and Q Apply will stop. The usual response to this is to cold start Q Capture!

To put a message onto a Queue (amqsput)

The amqsput utility can be invoked from the command line or from within a batch program. If we are invoking the utility from the command line, the format of the command is:

```
$ amqsput <Queue> <QM name> < <message>
```

We would issue this command from the system on which the Queue Manager sits. We have to specify the queue (<Queue>) that we want to put the message on, and the Queue Manager (<QM name>) which controls this queue. We then redirect (<) the message (<message>) into the command. An example of the command is:

```
$ amqsput CAPA.TO.APPB.SENDQ.REMOTE QMA < hello
```

We put these commands into an appropriately named batch file (say SYSA_QMA_TESTP_UNI_AB.BAT), which would contain the following.

Batch file—Windows example:

```
call "C:\Program Files\IBM\WebSphere MQ\bin\amqsput" CAPA.TO.APPB.SENDQ.REMOTE QMA < QMA_TEST1.TXT
```

Batch file—UNIX example:

```
"/opt/mqm/samp/bin/amqsput" CAPA.TO.APPB.SENDQ.REMOTE QMA < QMA_TEST1.TXT
```

Here, the QMA_TEST1.TXT file contains the message we want to send. Once we have put a message onto the Send Queue, we need to be able to retrieve it.

To retrieve a message from a Queue (amqsget)

The amqsget utility can be invoked from the command line or from within a batch program. The utility takes 15 seconds to run. We need to specify the Receive Queue that we want to read from and the Queue Manager that the queue belongs to:

```
$ amqsget <Queue> <QM name>
```

An example of the command is shown here:

```
$ amqsget CAPA.TO.APPB.RECVQ QMB
```

If we have correctly set up all the queues, and the Listeners and Channels are running, then when we issue the preceding command, we should see the message we put onto the Send Queue.
We can put the amqsget command into a batch file, as shown next.

Batch file—Windows example:

```
@ECHO This takes 15 seconds to run
call "C:\Program Files\IBM\WebSphere MQ\bin\amqsget" CAPA.TO.APPB.RECVQ QMB
@ECHO You should see: test1
```

Batch file—UNIX example:

```
echo This takes 15 seconds to run
"/opt/mqm/samp/bin/amqsget" CAPA.TO.APPB.RECVQ QMB
echo You should see: test1
```

Using these examples and putting messages onto the queues in a unidirectional scenario, the "get" message batch file for QMA (SYSA_QMA_TESTG_UNI_BA.BAT) contains:

```
@ECHO This takes 15 seconds to run
call "C:\Program Files\IBM\WebSphere MQ\bin\amqsget" CAPA.ADMINQ QMA
@ECHO You should see: test2
```

From CLP-A, run the file as:

```
$ SYSA_QMA_TESTG_UNI_BA.BAT
```

The "get" message batch file for QMB (SYSB_QMB_TESTG_UNI_AB.BAT) contains:

```
@ECHO This takes 15 seconds to run
call "C:\Program Files\IBM\WebSphere MQ\bin\amqsget" CAPA.TO.APPB.RECVQ QMB
@ECHO You should see: test1
```

From CLP-B, run the file as:

```
$ SYSB_QMB_TESTG_UNI_AB.BAT
```

To browse a message on a Queue

It is useful to be able to browse a queue, especially when setting up Event Publishing. There are three ways to browse the messages on a Local Queue. We can use the rfhutil utility or the amqsbcg sample program, both of which are WebSphere MQ entities, or we can use the asnqmfmt Q replication command.

Using the WebSphere MQ rfhutil utility: The rfhutil utility is part of a WebSphere MQ support pack available to download from the web—to find the current download website, simply type rfhutil into an internet search engine. The installation is very simple—unzip the file and run the rfhutil.exe file.

Using the WebSphere MQ amqsbcg sample program: The amqsbcg sample program displays messages including message descriptors:

```
$ amqsbcg CAPA.TO.APPB.RECVQ QMB
```


Create a User Profile System and use the Null Coalesce Operator

Packt
12 Oct 2016
15 min read
In this article by Jose Palala and Martin Helmich, authors of PHP 7 Programming Blueprints, we will show you how to build a simple profiles page with listed users which you can click on, and create a simple CRUD-like system which will enable us to register new users to the system, and delete users for banning purposes.

You will learn to use the PHP 7 null coalesce operator so that you can show data if there is any, or just display a simple message if there isn't any.

Let's create a simple UserProfile class. The ability to create classes has been available since PHP 5. A class in PHP starts with the word class, and the name of the class:

```php
class UserProfile
{
    private $table = 'user_profiles';
}
```

We've made the table private and added a private variable, where we define which table it will be related to. Let's add two functions, also known as methods, inside the class to simply fetch the data from the database:

```php
function fetch_one($id)
{
    $link = mysqli_connect('127.0.0.1', 'root', 'apassword', 'my_database');
    $query = "SELECT * FROM " . $this->table . " WHERE `id` = '" . $id . "'";
    $results = mysqli_query($link, $query);
    return $results;
}

function fetch_all()
{
    $link = mysqli_connect('127.0.0.1', 'root', 'apassword', 'my_database');
    $query = "SELECT * FROM " . $this->table;
    $results = mysqli_query($link, $query);
    return $results;
}
```

The null coalesce operator

We can use PHP 7's null coalesce operator to check whether our results contain anything, or return a defined text which we can check on the views—this will be responsible for displaying any data. Let's put the defined text in a file which will contain all the define statements, and call it definitions.php:

```php
//definitions.php
define('NO_RESULTS_MESSAGE', 'No results found');
```

We can then fall back to it in fetch_all:

```php
require('definitions.php');

function fetch_all()
{
    // ...same lines as before...
    $results = $results ?? NO_RESULTS_MESSAGE;
    return $results;
}
```

On the client side, we'll need to come up with a template to show the list of user profiles. Let's create a basic HTML block to show that each profile can be a div element with several list item elements to output each table. In the following function, we need to make sure that all values have been filled in, with at least the name and the age. Then we simply return the entire string when the function is called:

```php
function profile_template($name, $age, $country)
{
    $name = $name ?? null;
    $age = $age ?? null;
    if ($name === null || $age === null) {
        return 'Name or Age need to be set';
    } else {
        return '<div>
            <li>Name: ' . $name . '</li>
            <li>Age: ' . $age . '</li>
            <li>Country: ' . $country . '</li>
        </div>';
    }
}
```

Separation of concerns

In a proper MVC architecture, we need to separate the view from the models that get our data, and the controllers will be responsible for handling business logic. In our simple app, we will skip the controller layer, since we just want to display the user profiles in one public-facing page. The preceding function is also known as the template render part in an MVC architecture.

While there are frameworks available for PHP that use the MVC architecture out of the box, for now we can stick to what we have and make it work. PHP frameworks can benefit a lot from the null coalesce operator. In some code that I've worked with, we used the ternary operator a lot, but still had to add more checks to ensure a value was not falsy. Furthermore, the ternary operator can get confusing, and takes some getting used to. The other alternative is to use the isset function. However, due to the nature of the isset function, some falsy values will be interpreted by PHP as being set.
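To make the trade-offs concrete, here is a quick comparison, not from the original article, of the three approaches just mentioned; $_GET['page'] is a hypothetical input:

```php
<?php
// Classic isset() + ternary:
$page = isset($_GET['page']) ? $_GET['page'] : 1;

// Short ternary (?:) — rejects legitimate falsy values such as "0"
// or an empty string, and raises a notice if the index is unset:
$page = $_GET['page'] ?: 1;

// PHP 7 null coalesce — no notice, and only null/unset fall through:
$page = $_GET['page'] ?? 1;
```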
Creating views

Now that we have our model complete and a template render function, we just need to create the view with which we can look at each profile. Our view will be put inside a foreach block, and we'll use the template we wrote to render the right values:

```php
//listprofiles.php
<!doctype html>
<html>
<head>
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.6/css/bootstrap.min.css">
</head>
<body>
<?php
foreach ($results as $item) {
    echo profile_template($item->name, $item->age, $item->country);
}
?>
</body>
</html>
```

Let's put the code above into index.php. While we could install the Apache server, configure it to run PHP, install new virtual hosts and the other necessary features, and put our PHP code into an Apache folder, this would take time. So, for the purposes of testing this out, we can just run PHP's built-in development server (read more at http://php.net/manual/en/features.commandline.webserver.php). Inside the folder we are working in, run the following from a terminal:

```
php -S localhost:8000
```

If we open up our browser, we should see nothing yet—No results found. This means we need to populate our database.

If you have an error with your database connection, be sure to replace the correct database credentials in each of the mysqli_connect calls that we made.

To supply data to our database, we can create a simple SQL script like this:

```sql
INSERT INTO user_profiles (name, age, country) VALUES ('Chin Wu', 30, 'Mongolia');
INSERT INTO user_profiles (name, age, country) VALUES ('Erik Schmidt', 22, 'Germany');
INSERT INTO user_profiles (name, age, country) VALUES ('Rashma Naru', 33, 'India');
```

Let's save it in a file such as insert_profiles.sql. In the same directory as the SQL file, log on to the MySQL client by using the following command:

```
mysql -u root -p
```

Then type use <name of database>:

```
mysql> use <database>;
```

Import the script by running the source command:

```
mysql> source insert_profiles.sql
```

Now our user profiles page should show the profiles we just inserted.

Create a profile input form

Now let's create the HTML form for users to enter their profile data. Our profiles app would be no use if we didn't have a simple way for a user to enter their user profile details. We'll create the profile input form like this:

```php
//create_profile.php
<html>
<body>
<form action="post_profile.php" method="POST">
    <label>Name</label><input name="name">
    <label>Age</label><input name="age">
    <label>Country</label><input name="country">
    <input type="submit" value="Submit">
</form>
</body>
</html>
```

In this profile post, we'll need to create a PHP script to take care of anything the user posts. It will create an SQL statement from the input values and output whether or not they were inserted. We can use the null coalesce operator again to verify that the user has inputted all values and left nothing undefined or null:

```php
$name = $_POST['name'] ?? "";
$age = $_POST['age'] ?? "";
$country = $_POST['country'] ?? "";
```

This prevents us from accumulating errors while inserting data into our database. First, let's create a variable to hold each of the inputs in one array:

```php
$input_values = [
    'name' => $name,
    'age' => $age,
    'country' => $country
];
```

The preceding code is the new PHP 5.4+ way to write arrays. In PHP 5.4+, it is no longer necessary to use the longer array() syntax; the author personally likes the new syntax better.
We should create a new method in our UserProfile class to accept these values:

```php
class UserProfile
{
    public function insert_profile($values)
    {
        $link = mysqli_connect('127.0.0.1', 'username', 'password', 'databasename');
        $q = "INSERT INTO " . $this->table . " VALUES ('" . $values['name'] . "', '"
            . $values['age'] . "', '" . $values['country'] . "')";
        return mysqli_query($link, $q);
    }
}
```

Instead of creating a parameter in our function to hold each argument, as we did with our profile template render function, we can simply use an array to hold our values. This way, if a new field needs to be inserted into our database, we can just add another field to the SQL insert statement.
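One caveat worth flagging: concatenating raw request values into SQL, as insert_profile does above, is open to SQL injection. A safer sketch, not from the original article, using mysqli prepared statements — the method name insert_profile_safe and the credentials are hypothetical:

```php
public function insert_profile_safe($values)
{
    $link = mysqli_connect('127.0.0.1', 'username', 'password', 'databasename');

    // Placeholders keep user input out of the SQL string entirely.
    $stmt = $link->prepare(
        "INSERT INTO " . $this->table . " (name, age, country) VALUES (?, ?, ?)"
    );
    // 'sis' = string, integer, string — one type code per placeholder.
    $stmt->bind_param('sis', $values['name'], $values['age'], $values['country']);

    return $stmt->execute();
}
```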
While we are at it, let's create the edit profile section. For now, we'll assume that whoever is using this edit profile page is the administrator of the site. We'll need to create a page where, provided $_GET['id'] has been set, we fetch that user from the database and display their details in the form:

```php
<?php
require('class/userprofile.php'); //contains the class UserProfile

$id = $_GET['id'] ?? 'No ID';
//if id was a string, i.e. "No ID", it fails the numeric check below

if (is_numeric($id)) {
    $profile = new UserProfile();
    //get data from our database
    $results = $profile->fetch_one($id);
    if ($results && $results->num_rows > 0) {
        while ($obj = $results->fetch_object()) {
            $name = $obj->name;
            $age = $obj->age;
            $country = $obj->country;
        }
        //display form with a hidden field containing the value of the ID
?>
<form action="post_update_profile.php" method="post">
    <input type="hidden" name="id" value="<?=$id?>">
    <label>Name</label><input name="name" value="<?=$name?>">
    <label>Age</label><input name="age" value="<?=$age?>">
    <label>Country</label><input name="country" value="<?=$country?>">
</form>
<?php
    } else {
        exit('No such user');
    }
} else {
    echo $id; //this should be 'No ID'
    exit;
}
```

Notice that we're using what is known as the shortcut echo statement (<?=) in the form. It makes our code simpler and easier to read. Since we're using PHP 7, this feature comes out of the box.

Once someone submits the form, it goes into our $_POST variable, and we'll create a new update function in our UserProfile class.

Admin system

Let's finish off by creating a simple grid for an admin dashboard portal that will be used with our user profiles database. Our requirement for this is simple: we can just set up a table-based layout that displays each user profile in rows. From the grid, we will add links to be able to edit a profile, or delete it if we want to. The code to display a table in our HTML view would look like this:

```html
<table>
    <tr>
        <td>John Doe</td>
        <td>21</td>
        <td>USA</td>
        <td><a href="edit_profile.php?id=1">Edit</a></td>
        <td><a href="profileview.php?id=1">View</a></td>
        <td><a href="delete_profile.php?id=1">Delete</a></td>
    </tr>
</table>
```

The script for this is the following:

```php
//listprofiles.php
$sql = "SELECT * FROM userprofiles LIMIT $start, $limit";
$rs_result = mysqli_query($connection, $sql); //run the query
while ($row = mysqli_fetch_assoc($rs_result)) {
?>
    <tr>
        <td><?=$row['name'];?></td>
        <td><?=$row['age'];?></td>
        <td><?=$row['country'];?></td>
        <td><a href="edit_profile.php?id=<?=$row['id']?>">Edit</a></td>
        <td><a href="profileview.php?id=<?=$row['id']?>">View</a></td>
        <td><a href="delete_profile.php?id=<?=$row['id']?>">Delete</a></td>
    </tr>
<?php
}
```

There's one thing that we haven't yet created: a delete_profile.php page (the view and edit pages have been discussed already). Here's how the delete_profile.php page would look:

```php
<?php
//delete_profile.php
$connection = mysqli_connect('localhost', '<username>', '<password>', '<databasename>');

$id = $_GET['id'] ?? 'No ID';

if (is_numeric($id)) {
    mysqli_query($connection, "DELETE FROM userprofiles WHERE id = '" . $id . "'");
    echo "Profile #" . $id . " has been deleted";
} else {
    exit('Error: non-numeric $id'); //$id here is the string 'No ID'
}
?>
```

Of course, since we might have a lot of user profiles in our database, we have to create a simple pagination. In any pagination system, you just need to figure out the total number of rows and how many rows you want displayed per page. We can create a function that will be able to return a URL that contains the page number and how many to view per page.

For our queries, we first create a new function that selects the total number of items in our database:

```php
class UserProfile
{
    // ... etc ...
    function count_rows($table)
    {
        $dbconn = new mysqli('localhost', 'root', 'somepass', 'databasename');
        $query = $dbconn->query("SELECT COUNT(*) as num FROM " . $table);
        $total_pages = mysqli_fetch_array($query);
        return $total_pages['num']; //fetching by array, so element 'num' = count
    }
}
```

For our pagination, we can create a simple paginate function which accepts the base_url of the page where we have pagination, the rows per page—also known as the number of records we want each page to have—and the total number of records found:

```php
require('definitions.php');
require('db.php'); //our database class

function paginate($base_url, $rows_per_page, $total_rows)
{
    $pagination_links = array(); //instantiate an array to hold our html page links

    //exit with an error message if this function is called incorrectly
    if (!$total_rows || !$rows_per_page) {
        exit('Error: no rows per page and total rows');
    }

    $pages = ceil($total_rows / $rows_per_page);
    for ($pagenum = 1; $pagenum <= $pages; $pagenum++) {
        $pagination_links[] = '<a href="http://' . $base_url . '?pagenum=' . $pagenum
            . '&rpp=' . $rows_per_page . '">' . $pagenum . '</a>';
    }

    return $pagination_links;
}
```

This function will help display the page links in a table:

```php
function display_pagination($links)
{
    $display = '<div class="pagination"><table><tr>';
    foreach ($links as $link) {
        $display .= '<td>' . $link . '</td>';
    }
    $display .= '</tr></table></div>';

    return $display;
}
```

Notice that we're following the principle that there should rarely be any echo statements inside a function. This is because we want to make sure that other users of these functions are not confused when they debug some mysterious output on their page. By requiring the programmer to echo out whatever the functions return, it becomes easier to debug our program. Also, we're following separation of concerns—our code doesn't output the display, it just formats the display. So any future programmer can just update the function's internal code and return something else. It also makes our function reusable; imagine that in the future someone uses our function—this way, they won't have to double-check that there's some misplaced echo statement within our functions.

A note on alternative short tags

As you know, another way to echo is to use the <?= tag. You can use it like so: <?="helloworld"?>. These are known as short tags. In PHP 7, alternative PHP tags have been removed. The RFC states that <%, <%=, %> and <script language=php> have been deprecated.
The RFC at https://wiki.php.net/rfc/remove_alternative_php_tags says that the RFC does not remove short opening tags (<?) or short opening tags with echo (<?=).

Since we have laid out the groundwork of creating paginated links, we now just have to invoke our functions. The following script is all that is needed to create a paginated page using the preceding functions:

```php
$mysqli = mysqli_connect('localhost', '<username>', '<password>', '<dbname>');

$limit = $_GET['rpp'] ?? 10;      //how many items to show per page, default 10
$pagenum = $_GET['pagenum'] ?? 0; //what page we are on

if ($pagenum) {
    $start = ($pagenum - 1) * $limit; //first item to display on this page
} else {
    $start = 0; //if no page var is given, set start to 0
}

/* Display records here */
$sql = "SELECT * FROM userprofiles LIMIT $start, $limit";
$rs_result = mysqli_query($mysqli, $sql); //run the query
while ($row = mysqli_fetch_assoc($rs_result)) {
?>
    <tr>
        <td><?php echo $row['name']; ?></td>
        <td><?php echo $row['age']; ?></td>
        <td><?php echo $row['country']; ?></td>
    </tr>
<?php
}

/* Let's show our page links */
/* get the number of records */
$db = new UserProfile();
$record_count = $db->count_rows('userprofiles');
$pagination_links = paginate('listprofiles.php', $limit, $record_count);
echo display_pagination($pagination_links);
```

The HTML output of our page links in listprofiles.php will look something like this:

```html
<div class="pagination">
    <table>
        <tr>
            <td><a href="listprofiles.php?pagenum=1&rpp=10">1</a></td>
            <td><a href="listprofiles.php?pagenum=2&rpp=10">2</a></td>
            <td><a href="listprofiles.php?pagenum=3&rpp=10">3</a></td>
        </tr>
    </table>
</div>
```

Summary

As you can see, we have a lot of use cases for the null coalesce operator. We learned how to make a simple user profile system, and how to use PHP 7's null coalesce feature when fetching data from the database, which returns null if there are no records. We also learned that the null coalesce operator is similar to a ternary operator, except that it returns null by default if there is no data.


Creating a RESTful API

Packt
19 Sep 2014
24 min read
In this article by Jason Krol, the author of Web Development with MongoDB and NodeJS, we will review the following topics:

- Introducing RESTful APIs
- Installing a few basic tools
- Creating a basic API server and sample JSON data
- Responding to GET requests
- Updating data with POST and PUT
- Removing data with DELETE
- Consuming external APIs from Node

What is an API?

An Application Programming Interface (API) is a set of tools that a computer system makes available to provide unrelated systems or software the ability to interact with each other. Typically, a developer uses an API when writing software that will interact with a closed, external software system. The external software system provides an API as a standard set of tools that all developers can use. Many popular social networking sites provide developers access to APIs to build tools to support those sites. The most obvious examples are Facebook and Twitter. Both have robust APIs that provide developers with the ability to build plugins and work with data directly, without being granted full access, as a general security precaution.

As you will see in this article, providing your own API is not only fairly simple, but it also empowers you to provide your users with access to your data. You also have the added peace of mind of knowing that you are in complete control over what level of access you grant, what sets of data you make read-only, and what data can be inserted and updated.

What is a RESTful API?

Representational State Transfer (REST) is a fancy way of saying CRUD over HTTP. What this means is that when you use a REST API, you have a uniform means to create, read, and update data using simple HTTP URLs with a standard set of HTTP verbs. The most basic form of a REST API will accept one of the HTTP verbs at a URL and return some kind of data as a response.

Typically, a REST API GET request will return some kind of data, such as JSON, XML, HTML, or plain text. A POST or PUT request to a RESTful API URL will accept data to create or update. The URL for a RESTful API is known as an endpoint, and while working with these endpoints, it is typically said that you are consuming them. The standard HTTP verbs used while interfacing with REST APIs include:

- GET: This retrieves data
- POST: This submits data for a new record
- PUT: This submits data to update an existing record
- PATCH: This submits data to update only specific parts of an existing record
- DELETE: This deletes a specific record

Typically, RESTful API endpoints are defined in a way that mimics the data models, with semantic URLs that are somewhat representative of those models. What this means is that to request a list of models, for example, you would access an API endpoint of /models. Likewise, to retrieve a specific model by its ID, you would include that in the endpoint URL via /models/:Id.
Some sample RESTful API endpoint URLs are as follows:

- GET http://myapi.com/v1/accounts: This returns a list of accounts
- GET http://myapi.com/v1/accounts/1: This returns a single account by Id: 1
- POST http://myapi.com/v1/accounts: This creates a new account (data submitted as a part of the request)
- PUT http://myapi.com/v1/accounts/1: This updates an existing account by Id: 1 (data submitted as part of the request)
- GET http://myapi.com/v1/accounts/1/orders: This returns a list of orders for account Id: 1
- GET http://myapi.com/v1/accounts/1/orders/21345: This returns the details for a single order by Order Id: 21345 for account Id: 1

It's not a requirement that the URL endpoints match this pattern; it's just common convention.

Introducing Postman REST Client

Before we get started, there are a few tools that will make life much easier when you're working directly with APIs. The first of these tools is called Postman REST Client, a Google Chrome application that can run right in your browser or as a standalone packaged application. Using this tool, you can easily make any kind of request to any endpoint you want. The tool provides many useful and powerful features that are very easy to use and, best of all, free!

Installation instructions

Postman REST Client can be installed in two different ways, but both require Google Chrome to be installed and running on your system. The easiest way to install the application is by visiting the Chrome Web Store at https://chrome.google.com/webstore/category/apps. Perform a search for Postman REST Client and multiple results will be returned. There is the regular Postman REST Client that runs as an application built into your browser, and a separate Postman REST Client (packaged app) that runs as a standalone application on your system in its own dedicated window. Go ahead and install your preference. If you install the application as the standalone packaged app, an icon to launch it will be added to your dock or taskbar. If you installed it as a regular browser app, you can launch it by opening a new tab in Google Chrome, going to Apps, and finding the Postman REST Client icon. After you've installed and launched the app, you should be presented with an output similar to the following screenshot:

A quick tour of Postman REST Client

Using Postman REST Client, we're able to submit REST API calls to any endpoint we want, as well as modify the type of request. Then, we have complete access to the data that's returned from the API, as well as any errors that might have occurred. To test an API call, enter the URL of your favorite website in the Enter request URL here field and leave the dropdown next to it as GET. This will mimic a standard GET request that your browser performs any time you visit a website. Click on the blue Send button. The request is made and the response is displayed at the bottom half of the screen. In the following screenshot, I sent a simple GET request to http://kroltech.com and the HTML is returned as follows:

If we change this URL to the RSS feed URL for my website, you can see the XML returned:

The XML view has a few more features, as it exposes a sidebar to the right that gives you a handy outline of the tree structure of the XML data. Not only that, you can now see a history of the requests we've made so far along the left sidebar. This is great when we're doing more advanced POST or PUT requests and don't want to repeat the data setup for each request while testing an endpoint.
Here is a sample API endpoint I submitted a GET request to that returns JSON data in its response:

A really nice thing about making API calls to endpoints that return JSON using Postman Client is that it parses and displays the JSON in a very nicely formatted way, and each node in the data is expandable and collapsible. The app is very intuitive, so make sure you spend some time playing around and experimenting with different types of calls to different URLs.

Using the JSONView Chrome extension

There is one other tool I want to let you know about that, while extremely minor, is actually a really big deal. The JSONView Chrome extension is a very small plugin that will instantly convert any JSON you view directly via the browser into a more usable JSON tree (exactly like Postman Client). Here is an example of pointing to a URL that returns JSON from Chrome before JSONView is installed:

And here is that same URL after JSONView has been installed:

You should install the JSONView Google Chrome extension the same way you installed Postman REST Client—access the Chrome Web Store and perform a search for JSONView. Now that you have the tools to easily work with and test API endpoints, let's take a look at writing your own and handling the different request types.

Creating a Basic API server

Let's create a super basic Node.js server using Express that we'll use to create our own API. Then, we can send tests to the API using Postman REST Client to see how it all works. In a new project workspace, first install the npm modules that we're going to need in order to get our server up and running:

```
$ npm init
$ npm install --save express body-parser underscore
```

Now that the package.json file for this project has been initialized and the modules installed, let's create a basic server file to bootstrap an Express server. Create a file named server.js and insert the following block of code:

```javascript
var express = require('express'),
    bodyParser = require('body-parser'),
    _ = require('underscore'),
    json = require('./movies.json'),
    app = express();

app.set('port', process.env.PORT || 3500);

app.use(bodyParser.urlencoded());
app.use(bodyParser.json());

var router = new express.Router();
// TO DO: Setup endpoints ...
app.use('/', router);

var server = app.listen(app.get('port'), function() {
    console.log('Server up: http://localhost:' + app.get('port'));
});
```

Most of this should look familiar to you. In the server.js file, we require the express, body-parser, and underscore modules. We also require a file named movies.json, which we'll create next. After our modules are required, we set up the standard configuration for an Express server with the minimum amount of configuration needed to support an API server. Notice that we didn't set up Handlebars as a view-rendering engine, because we aren't going to be rendering any HTML with this server, just pure JSON responses.
Creating sample JSON data

Let's create the sample movies.json file that will act as our temporary data store (even though the API we build for the purposes of demonstration won't actually persist data beyond the app's life cycle):

```json
[{
    "Id": "1",
    "Title": "Aliens",
    "Director": "James Cameron",
    "Year": "1986",
    "Rating": "8.5"
}, {
    "Id": "2",
    "Title": "Big Trouble in Little China",
    "Director": "John Carpenter",
    "Year": "1986",
    "Rating": "7.3"
}, {
    "Id": "3",
    "Title": "Killer Klowns from Outer Space",
    "Director": "Stephen Chiodo",
    "Year": "1988",
    "Rating": "6.0"
}, {
    "Id": "4",
    "Title": "Heat",
    "Director": "Michael Mann",
    "Year": "1995",
    "Rating": "8.3"
}, {
    "Id": "5",
    "Title": "The Raid: Redemption",
    "Director": "Gareth Evans",
    "Year": "2011",
    "Rating": "7.6"
}]
```

This is just a really simple JSON list of a few of my favorite movies. Feel free to populate it with whatever you like. Boot up the server to make sure you aren't getting any errors (note that we haven't set up any routes yet, so it won't actually do anything if you try to load it via a browser):

```
$ node server.js
Server up: http://localhost:3500
```

Responding to GET requests

Adding simple GET request support is fairly simple, and you've seen this before already in the app we built. Here is some sample code that responds to a GET request and returns a simple JavaScript object as JSON. Insert the following code in the routes section, where we have the // TO DO: Setup endpoints ... comment waiting:

```javascript
router.get('/test', function(req, res) {
    var data = {
        name: 'Jason Krol',
        website: 'http://kroltech.com'
    };

    res.json(data);
});
```

Let's tweak the function a little bit and change it so that it responds to a GET request against the root URL (that is, /) route and returns the JSON data from our movies file. Add this new route after the /test route added previously:

```javascript
router.get('/', function(req, res) {
    res.json(json);
});
```

The res (response) object in Express has a few different methods to send data back to the browser. Each of these ultimately falls back on the base send method, which includes header information, statusCodes, and so on. res.json and res.jsonp will automatically format JavaScript objects into JSON and then send them using res.send. res.render will render a template view as a string and then send it using res.send as well.

With that code in place, if we launch the server.js file, the server will be listening for a GET request to the / URL route and will respond with the JSON data of our movies collection. Let's first test it out using the Postman REST Client tool:

GET requests are nice because we could have just as easily pulled that same URL via our browser and received the same result:

However, we're going to use Postman for the remainder of our endpoint testing, as it's a little more difficult to send POST and PUT requests using a browser.

Receiving data – POST and PUT requests

When we want to allow the users of our API to insert or update data, we need to accept a request with a different HTTP verb. When inserting new data, the POST verb is the preferred method to accept data and know it's for an insert. Let's take a look at code that accepts a POST request and data along with the request, inserts a record into our collection, and returns the updated JSON.
Insert the following block of code after the route you added previously for GET:

```javascript
router.post('/', function(req, res) {
    // insert the new item into the collection (validate first)
    if(req.body.Id && req.body.Title && req.body.Director && req.body.Year && req.body.Rating) {
        json.push(req.body);
        res.json(json);
    } else {
        res.json(500, { error: 'There was an error!' });
    }
});
```

You can see that the first thing we do in the POST function is check to make sure the required fields were submitted along with the actual request. Assuming our data checks out and all the required fields are accounted for (in our case, every field), we insert the entire req.body object into the array as-is using the array's push function. If any of the required fields aren't submitted with the request, we return a 500 error message instead. Let's submit a POST request this time to the same endpoint using the Postman REST Client. (Don't forget to make sure your API server is running with node server.js.):

First, we submitted a POST request with no data, so you can clearly see the 500 error response that was returned. Next, we provided the actual data using the x-www-form-urlencoded option in Postman and provided each of the name/value pairs with some new custom data. You can see from the results that the STATUS was 200, which is a success, and the updated JSON data was returned as a result. Reloading the main GET endpoint in a browser yields our original movies collection with the new one added.

PUT requests will work in almost exactly the same way, except that, traditionally, the Id property of the data is handled a little differently. In our example, we are going to require the Id attribute as a part of the URL and not accept it as a parameter in the data that's submitted (since it's usually not common for an update function to change the actual Id of the object it's updating). Insert the following code for the PUT route after the existing POST route you added earlier:

```javascript
router.put('/:id', function(req, res) {
    // update the item in the collection
    if(req.params.id && req.body.Title && req.body.Director && req.body.Year && req.body.Rating) {
        _.each(json, function(elem, index) {
            // find and update:
            if (elem.Id === req.params.id) {
                elem.Title = req.body.Title;
                elem.Director = req.body.Director;
                elem.Year = req.body.Year;
                elem.Rating = req.body.Rating;
            }
        });

        res.json(json);
    } else {
        res.json(500, { error: 'There was an error!' });
    }
});
```

This code again validates that the required fields are included with the data that was submitted along with the request. Then, it performs an _.each loop (using the underscore module) to look through the collection of movies and find the one whose Id parameter matches the Id included in the URL parameter. Assuming there's a match, the individual fields for that matched object are updated with the new values that were sent with the request. Once the loop is complete, the updated JSON data is sent back as the response. As in the POST request, if any of the required fields are missing, a simple 500 error message is returned. The following screenshot demonstrates a successful PUT request updating an existing record.
The response from Postman after including the value 1 in the URL as the Id parameter, providing the individual fields to update as x-www-form-urlencoded values, and finally sending as PUT shows that the original item in our movies collection is now the original Alien (not Aliens, its sequel, as we originally had).

Removing data – DELETE

The final stop on our whirlwind tour of the different REST API HTTP verbs is DELETE. It should be no surprise that sending a DELETE request should do exactly what it sounds like. Let's add another route that accepts DELETE requests and will delete an item from our movies collection. Here is the code that takes care of DELETE requests; it should be placed after the existing block of code from the previous PUT:

```javascript
router.delete('/:id', function(req, res) {
    var indexToDel = -1;
    _.each(json, function(elem, index) {
        if (elem.Id === req.params.id) {
            indexToDel = index;
        }
    });
    if (~indexToDel) {
        json.splice(indexToDel, 1);
    }
    res.json(json);
});
```

This code will loop through the collection of movies and find a matching item by comparing the values of Id. If a match is found, the array index for the matched item is held until the loop is finished. Using the array.splice function, we can remove an array item at a specific index. Once the data has been updated by removing the requested item, the JSON data is returned. Notice in the following screenshot that the updated JSON that's returned no longer displays the original second item we deleted.

Note that ~ in there! That's a little bit of JavaScript black magic! The tilde (~) in JavaScript will bit flip a value. In other words, it takes a value and returns the negative of that value incremented by one, that is, ~n === -(n+1). Typically, the tilde is used with functions that return -1 as a false response. By using ~ on -1, you are converting it to 0. If you were to perform a Boolean check on -1 in JavaScript, it would return true. You will see ~ used primarily with the indexOf function and jQuery's $.inArray()—both return -1 as a false response.

All of the endpoints defined in this article are extremely rudimentary, and most of these should never see the light of day in a production environment! Whenever you have an API that accepts anything other than GET requests, you need to be sure to enforce extremely strict validation and authentication rules. After all, you are basically giving your users direct access to your data.
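As a quick aside (not from the original article), here is the tilde trick in isolation:

```javascript
// ~n === -(n + 1), so ~(-1) === 0, which is falsy.
var list = ['a', 'b', 'c'];

if (~list.indexOf('b')) {
    console.log('found');      // indexOf returns 1; ~1 === -2 (truthy)
}

if (!~list.indexOf('z')) {
    console.log('not found');  // indexOf returns -1; ~(-1) === 0 (falsy)
}
```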
Consuming external APIs from Node.js

There will undoubtedly be a time when you want to consume an API directly from within your Node.js code. Perhaps your own API endpoint needs to first fetch data from some unrelated third-party API before sending a response. Whatever the reason, the act of sending a request to an external API endpoint and receiving a response can be done fairly easily using a popular and well-known npm module called Request. Request was written by Mikeal Rogers and is currently the third most popular (and most relied-upon) npm module, after async and underscore. Request is basically a super-simple HTTP client, so everything you've been doing with the Postman REST Client so far is what Request can do programmatically, with the resulting data, response status codes, and any errors available to you in your Node code.

Consuming an API endpoint using Request

Let's do a neat trick and consume our own endpoint as if it were some third-party external API. First, we need to ensure we have Request installed so we can include it in our app:

$ npm install --save request

Next, edit server.js and make sure you include Request as a required module at the start of the file:

var express = require('express'),
    bodyParser = require('body-parser'),
    _ = require('underscore'),
    json = require('./movies.json'),
    app = express(),
    request = require('request');

Now let's add a new endpoint after our existing routes, which will be accessible via a GET request to /external-api. This endpoint, however, will actually consume another endpoint on another server; for the purposes of this example, that other server is the very server we're currently running! The Request module accepts an options object with a number of different parameters and settings, but for this particular example, we only care about a few. We're going to pass an object that has a setting for the method (GET, POST, PUT, and so on) and the URL of the endpoint we want to consume. After the request is made and a response is received, we want an inline callback function to execute.

Place the following block of code after your existing list of routes in server.js:

router.get('/external-api', function(req, res) {
    request({
        method: 'GET',
        uri: 'http://localhost:' + (process.env.PORT || 3500),
    }, function(error, response, body) {
        if (error) { throw error; }

        var movies = [];
        _.each(JSON.parse(body), function(elem, index) {
            movies.push({
                Title: elem.Title,
                Rating: elem.Rating
            });
        });
        res.json(_.sortBy(movies, 'Rating').reverse());
    });
});

The callback function accepts three parameters: error, response, and body. The response object is like any other response that Express handles, with all the usual properties. The third parameter, body, is what we're really interested in: it contains the actual result of the request to the endpoint we called. In this case, it is the JSON data from our main GET route defined earlier, which returns our own list of movies. It's important to note that the data returned from the request arrives as a string; we need to use JSON.parse to convert that string into usable JSON data.

Using the data that came back from the request, we transform it a little to suit our needs. In this example, we take the master list of movies and return a new collection consisting of only the title and rating of each movie, sorted by the top scores. Load this new endpoint by pointing your browser to http://localhost:3500/external-api, and you will see the new, transformed JSON output on the screen.
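One thing the example above glosses over is failure handling: it simply throws on any error. A slightly more defensive variant of the same call might look like the following sketch (the /external-api-safe route name and the specific status codes are my own illustrative choices, not part of the original example):

router.get('/external-api-safe', function(req, res) {
    request({
        method: 'GET',
        uri: 'http://localhost:' + (process.env.PORT || 3500)
    }, function(error, response, body) {
        // network-level failure (DNS lookup, connection refused, and so on)
        if (error) { return res.json(500, { error: error.message }); }
        // the server answered, but not with a 200 OK
        if (response.statusCode !== 200) {
            return res.json(500, { error: 'Unexpected status: ' + response.statusCode });
        }
        var movies;
        try {
            movies = JSON.parse(body); // body arrives as a string
        } catch (e) {
            return res.json(500, { error: 'Response was not valid JSON' });
        }
        res.json(movies);
    });
});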
Let's take a look at another example that's a little more real-world. Say we want to display a list of movies similar to each one in our collection, and we want to look that data up somewhere such as www.imdb.com. Here is sample code that sends a GET request to IMDB's JSON endpoint, specifically for the word aliens, and returns a list of related movies by title and year. Go ahead and place this block of code after the previous route for external-api:

router.get('/imdb', function(req, res) {
    request({
        method: 'GET',
        uri: 'http://sg.media-imdb.com/suggests/a/aliens.json',
    }, function(err, response, body) {
        // strip the JSONP function wrapper to get at the raw JSON
        var data = body.substring(body.indexOf('(') + 1);
        data = JSON.parse(data.substring(0, data.length - 1));

        var related = [];
        _.each(data.d, function(movie, index) {
            related.push({
                Title: movie.l,
                Year: movie.y,
                Poster: movie.i ? movie.i[0] : ''
            });
        });

        res.json(related);
    });
});

If we take a look at this new endpoint in a browser, we can see that the JSON data returned from our /imdb endpoint is itself retrieved and transformed from some other API endpoint.

Note that the JSON endpoint I'm using for IMDB isn't actually from their API, but rather what they use on their homepage when you type in the main search box. This isn't really the most appropriate way to use their data; it's more of a hack to demonstrate the example. In reality, to use their API (like most other APIs), you would need to register and get an API key, which you would include with your requests so that they can properly track how much data you are requesting on a daily or hourly basis. Most APIs will require you to use a private key for this same reason.

Summary

In this article, we took a brief look at how APIs work in general and at the RESTful API approach to semantic URL paths and arguments, and we created a bare-bones API. We used the Postman REST Client to interact with the API by consuming endpoints and testing the different types of request methods (GET, POST, PUT, and so on). You also learned how to consume an external API endpoint by using the third-party node module Request.

Resources for Article:

Further resources on this subject:

RESTful Services JAX-RS 2.0 [Article]
REST – Where It Begins [Article]
RESTful Web Services – Server-Sent Events (SSE) [Article]
Enhancing your Site with PHP and jQuery
Packt
29 Dec 2010
12 min read
PHP jQuery Cookbook

Over 60 simple but highly effective recipes to create interactive web applications using PHP with jQuery:
Create rich and interactive web applications with PHP and jQuery
Debug and execute jQuery code on a live site
Design interactive forms and menus
Another title in the Packt Cookbook range, which will help you get to grips with PHP as well as jQuery

(For more resources on this subject, see here.)

Introduction

In this article, we will look at some advanced techniques that can be used to enhance the functionality of web applications. We will create a few examples where we search for images from Flickr and videos from YouTube using their respective APIs. We will parse an RSS feed XML using jQuery and learn to create an endless scrolling page like Google Reader or the new interface of Twitter. Besides this, you will also learn to create a jQuery plugin, which you can use independently in your applications.

Sending cross-domain requests using a server proxy

Browsers do not allow scripts to send cross-domain requests due to security reasons. This means a script at the domain http://www.abc.com cannot send AJAX requests to http://www.xyz.com. This recipe will show how you can overcome this limitation by using a PHP script on the server side. We will create an example that searches Flickr for images. Flickr will return JSON, which will be parsed by jQuery, and the images will be displayed on the page. The following screenshot shows a JSON response from Flickr:

Getting ready

Create a directory for this article and name it Article9. In this directory, create a folder named Recipe1. Also get an API key from Flickr by signing up at http://www.flickr.com/services/api/keys/.

How to do it...

Create a file inside the Recipe1 folder and name it index.html. Write the HTML code to create a form with three fields: tag, number of images, and image size. Also create a ul element inside which the results will be displayed.

<html>
<head>
  <title>Flickr Image Search</title>
  <style type="text/css">
    body { font-family:"Trebuchet MS",verdana,arial; width:900px; }
    fieldset { width:333px; }
    ul { margin:0; padding:0; list-style:none; }
    li { padding:5px; }
    span { display:block; float:left; width:150px; }
    #results li { float:left; }
    .error { font-weight:bold; color:#ff0000; }
  </style>
</head>
<body>
  <form id="searchForm">
    <fieldset>
      <legend>Search Criteria</legend>
      <ul>
        <li>
          <span>Tag</span>
          <input type="text" name="tag" id="tag"/>
        </li>
        <li>
          <span>Number of images</span>
          <select name="numImages" id="numImages">
            <option value="20">20</option>
            <option value="30">30</option>
            <option value="40">40</option>
            <option value="50">50</option>
          </select>
        </li>
        <li>
          <span>Select a size</span>
          <select id="size">
            <option value="s">Small</option>
            <option value="t">Thumbnail</option>
            <option value="-">Medium</option>
            <option value="b">Large</option>
            <option value="o">Original</option>
          </select>
        </li>
        <li>
          <input type="button" value="Search" id="search"/>
        </li>
      </ul>
    </fieldset>
  </form>
  <ul id="results">
  </ul>
</body>
</html>

The following screenshot shows the form created:

Include the jquery.js file. Then, enter the jQuery code that will send the AJAX request to a PHP file, search.php. The values of the form elements will be posted with the AJAX request. A callback function, showImages, is also defined; it reads the JSON response and displays the images on the page.
<script type="text/javascript" src="../jquery.js"></script>
<script type="text/javascript">
$(document).ready(function() {
  $('#search').click(function() {
    if($.trim($('#tag').val()) == '') {
      $('#results').html('<li class="error">Please provide search criteria</li>');
      return;
    }
    $.post(
      'search.php',
      $('#searchForm').serialize(),
      showImages,
      'json'
    );
  });

  function showImages(response) {
    if(response['stat'] == 'ok') {
      var photos = response.photos.photo;
      var str = '';
      $.each(photos, function(index, value) {
        var farmId = value.farm;
        var serverId = value.server;
        var id = value.id;
        var secret = value.secret;
        var size = $('#size').val();
        var title = value.title;
        var imageUrl = 'http://farm' + farmId + '.static.flickr.com/' + serverId + '/' + id + '_' + secret + '_' + size + '.jpg';
        str += '<li>';
        str += '<img src="' + imageUrl + '" alt="' + title + '" />';
        str += '</li>';
      });
      $('#results').html(str);
    } else {
      $('#results').html('<li class="error">an error occurred</li>');
    }
  }
});
</script>

Create another file named search.php. The PHP code in this file will contact the Flickr API with the specified search criteria. Flickr will return a JSON response, which will be sent back to the browser, where jQuery will display it on the page.

<?php
define('API_KEY', 'your-API-key-here');
$url = 'http://api.flickr.com/services/rest/?method=flickr.photos.search';
$url .= '&api_key='.API_KEY;
$url .= '&tags='.$_POST['tag'];
$url .= '&per_page='.$_POST['numImages'];
$url .= '&format=json';
$url .= '&nojsoncallback=1';

header('Content-Type:text/json;');
echo file_get_contents($url);
?>

Now, run the index.html file in your browser, enter a tag to search for in the form, and select the number of images to be retrieved and the image size. Click on the Search button. A few seconds later, you will see the images from Flickr displayed on the page:
How it works...

On clicking the Search button, the form values are sent to the PHP file search.php. Now we have to contact Flickr and search for images. The Flickr API provides several methods for accessing images. We will use the method flickr.photos.search to search by tag name. Along with the method name, we have to send the following parameters in the URL:

api_key: An API key is mandatory. You can get one from http://www.flickr.com/services/api/keys/.
tags: The tags to search for. These can be comma-separated. This value will be the value of the tag textbox.
per_page: The number of images in a page. This can be a maximum of 99. Its value will be the value of the numImages select box.
format: It can be JSON, XML, and so on. For this example, we will use JSON.
nojsoncallback: Its value is set to 1 if we don't want Flickr to wrap the JSON in a function wrapper.

Once the URL is complete, we can contact Flickr to get the results. To get the results, we use the PHP function file_get_contents, which fetches the resulting JSON from the specified URL. This JSON is echoed to the browser. jQuery receives the JSON in the callback function showImages. This function first checks the status of the response. If the response is OK, we get the photo elements from the response and iterate over them using jQuery's $.each method. To display an image, we have to construct its URL first, by combining different values of the photo object. According to the Flickr API specification, an image URL can be constructed in the following manner:

http://farm{farm-id}.static.flickr.com/{server-id}/{id}_{secret}_[size].jpg

So we get the farmId, serverId, id, and secret from the photo element. The size can be one of the following:

s (small square)
t (thumbnail)
- (medium)
b (large)
o (original image)

We have already selected the image size from the select box in the form. By combining all these values, we now have the Flickr image URL. We wrap it in a li element and repeat the process for all images. Finally, we insert the constructed images into the results ul.

Making cross-domain requests with jQuery

The previous recipe demonstrated the use of a PHP file as a proxy for querying cross-domain URLs. This recipe will show the use of JSONP to query cross-domain URLs from jQuery itself. We will create an example that searches for videos on YouTube and displays them in a list. Clicking on a video thumbnail will open a new window that takes the user to the YouTube website to watch that video. The following screenshot shows a sample JSON response from YouTube:

Getting ready

Create a folder named Recipe2 inside the Article9 directory.

How to do it...

Create a file inside the Recipe2 folder and name it index.html. Write the HTML code to create a form with a single field, query, and a DIV with the ID results, inside which the search results will be displayed.

<html>
<head>
  <title>Youtube Video Search</title>
  <style type="text/css">
    body { font-family:"Trebuchet MS",verdana,arial; width:900px; }
    fieldset { width:333px; }
    ul { margin:0; padding:0; list-style:none; }
    li { padding:5px; }
    span { display:block; float:left; width:150px; }
    #results ul li { float:left; background-color:#483D8B; color:#fff; margin:5px; width:120px; }
    .error { font-weight:bold; color:#ff0000; }
    img { border:0 }
  </style>
</head>
<body>
  <form id="searchForm">
    <fieldset>
      <legend>Search Criteria</legend>
      <ul>
        <li>
          <span>Enter query</span>
          <input type="text" id="query"/>
        </li>
        <li>
          <input type="button" value="Search" id="search"/>
        </li>
      </ul>
    </fieldset>
  </form>
  <div id="results">
  </div>
</body>
</html>
Include the jquery.js file before the closing </body> tag. Then write the jQuery code that takes the search query from the textbox and tries to retrieve the results from YouTube. A callback function called showVideoList receives the response and creates the list of videos from it.

<script type="text/javascript" src="../jquery.js"></script>
<script type="text/javascript">
$(document).ready(function() {
  $('#search').click(function() {
    var query = $.trim($('#query').val());
    if(query == '') {
      $('#results').html('<li class="error">Please enter a query.</li>');
      return;
    }
    $.get(
      'http://gdata.youtube.com/feeds/api/videos?q=' + query + '&alt=json-in-script',
      {},
      showVideoList,
      'jsonp'
    );
  });
});

function showVideoList(response) {
  var totalResults = response['feed']['openSearch$totalResults']['$t'];
  if(parseInt(totalResults, 10) > 0) {
    var entries = response.feed.entry;
    var str = '<ul>';
    for(var i = 1; i < entries.length; i++) {
      var value = entries[i];
      var title = value['title']['$t'];
      var mediaGroup = value['media$group'];
      var videoURL = mediaGroup['media$player'][0]['url'];
      var thumbnail = mediaGroup['media$thumbnail'][0]['url'];
      var thumbnailWidth = mediaGroup['media$thumbnail'][0]['width'];
      var thumbnailHeight = mediaGroup['media$thumbnail'][0]['height'];
      var numComments = value['gd$comments']['gd$feedLink']['countHint'];
      var rating = parseFloat(value['gd$rating']['average']).toFixed(2);

      str += '<li>';
      str += '<a href="' + videoURL + '" target="_blank">';
      str += '<img src="' + thumbnail + '" width="' + thumbnailWidth + '" height="' + thumbnailHeight + '" title="' + title + '" />';
      str += '</a>';
      str += '<hr>';
      str += '<p style="width: 120px; font-size: 12px;">Comments: ' + numComments + '';
      str += '<br/>';
      str += 'Rating: ' + rating;
      str += '</p>';
      str += '</li>';
    }
    str += '</ul>';
    $('#results').html(str);
  } else {
    $('#results').html('<li class="error">No results.</li>');
  }
}
</script>

All done, and we are now ready to search YouTube. Run the index.html file in your browser and enter a search query. Click on the Search button and you will see a list of videos, each with its number of comments and rating.

How it works...

script tags are an exception to the browser's same-origin policy. We can take advantage of this by requesting the URL from the src attribute of a script tag and wrapping the raw response in a callback function. In this way, the response becomes JavaScript code instead of data, and this code can then be executed in the browser. The URL for the YouTube video search is as follows:

http://gdata.youtube.com/feeds/api/videos?q=' + query + '&alt=json-in-script

The parameter q is the query that we entered in the textbox, and alt is the type of response we want. Since we are using JSONP instead of JSON, the value for alt is defined as json-in-script, as per the YouTube API specification. On getting the response, the callback function showVideoList executes. It checks whether any results are available. If none are found, an error message is displayed. Otherwise, we get all the entry elements and iterate over them using a for loop. For each video entry, we get the videoURL, thumbnail, thumbnailWidth, thumbnailHeight, numComments, and rating. Then we create the HTML from these variables, with a list item for each video. For each video, an anchor is created with its href set to videoURL. The video thumbnail is put inside the anchor, and a p tag is created in which we display the number of comments and the rating for the video.
After the HTML has been created, it is inserted in the DIV with ID results. There's more... About JSONP You can read more about JSONP at the following websites: http://remysharp.com/2007/10/08/what-is-jsonp/ http://en.wikipedia.org/wiki/JSON#JSONP
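To make the JSONP mechanism concrete, the following sketch contrasts what the browser receives in each case (the payload shown is illustrative, not real YouTube data):

// plain JSON: inert data, which the same-origin policy blocks cross-domain
// {"feed": {"entry": [ ... ]}}

// JSONP: the same data wrapped in a call to our callback and delivered via a
// <script> tag, so the browser simply executes it as JavaScript
showVideoList({"feed": {"entry": [ /* ... */ ]}});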
Testing your Business Rules in JBoss Drools
Packt
12 Oct 2009
9 min read
When we start writing 'real' business rules, there is very little room for mistakes, as mistakes can waste a lot of money. How much money would a company lose if a rule that you wrote gave double the intended discount to a customer? Or what if your airline ticket pricing rule started giving away first-class transatlantic flights for one cent? Of course, mistakes happen. This article makes sure that these costly mistakes don't happen to you. If you're going through the trouble of writing business rules, you will want to make sure that they do what you intend them to, and keep on doing what you intend, even when you or other people make changes, both now and in the future. But first of all, we will see how testing is not a standalone activity, but part of an ongoing cycle.

Testing when building rules

It's a slightly morbid thought, but there's every chance that some of the business rules you write will last longer than you do. Remember the millennium bug, caused by programmers in the 1960s assuming that nobody would be using their work in 40 years' time, and then being surprised when the year 2000 actually came along? Rather than 'play and throw away', we're more likely to create production business rules in the following cycle:

Write your rules (or modify an existing one) based on a specification, or feedback from end users.
Test your rules to make sure that your new rules do what you want them to do, and ensure that you haven't inadvertently broken any existing rules.
Deploy your rules to somewhere other than your local machine, where end users (perhaps via a web page or an enterprise system) can interact with them.

You can repeat steps 1, 2, and 3 as often as required. That means repeating as many times as it takes you to get the first version into production, or deploying now and modifying at any time later, in 1, 2, or 10 years' time.

Making testing interesting

Normal testing, where you inspect everything manually, is booooooooring! You might check everything the first time, but after the hundredth deployment you'll be tempted to skip your tests, and you'll probably get away with it without any problems. You'll then be tempted to skip your tests on the 101st deployment; still no problems. So not testing becomes a bad habit, either because you're bored or because your boss fails to see the value of the tests. The problem then comes one Friday afternoon, or just when you're about to go on vacation, or at some other worst possible time. The whole world will see any mistakes in the rules that are in production, and fixing them consumes a lot more time and money than catching the error at the very start, on your own PC.

What's the solution? Automate the testing. All of your manual checks are very repetitive, which is exactly the sort of thing that computers are good at. The sort of check for our chocolate shipment example would be: 'every time we have an order of 2000 candy bars, we should have 10 shipments of 210 bars and one shipment of 110 bars'.

Testing using Guvnor

There is one very important advantage of testing in Guvnor: we can instantly see whether our tests are correct, without having to wait for our rules to be deployed into the target system.
At a high level, Guvnor has two main screens that deal with testing:

An individual test screen: Here you can edit your test by specifying the values that you want to input and the values that you expect once your rules have fired.
A package or multiple-tests screen: This allows you to run (later on) all of the tests in your package, to catch any rules that you may have inadvertently broken.

Another way of saying this is: you write your tests for selfish reasons, because you need them to ensure that your rules do what you want them to do. By keeping your tests for later, they automatically become a free safety net that catches bugs as soon as you make a change.

Testing using FIT

Guvnor testing is great. But often, a lot of what you are testing for is already specified in the requirements documents for the system. With a bit of thought in specifying various scenarios in your requirements documents, FIT allows you to automatically compare these requirements against your rules. These requirements documents can be written in Microsoft Word, or a similar format, and they will highlight if the outputs aren't what is specified. Like Drools, FIT is an open source project, so there is no charge for either using it or customizing it to fit your needs.

Before you get too excited about this, your requirements documents do have some compromises. The tests must specify the values to be input to the rules and the expected result, similar to the examples, or scenarios, that many specifications already contain. These scenarios have to follow a FIT-specific format. Specification documents should follow a standard format anyway; the FIT scenario piece is often less than 10% of the document, and the document remains highly human-readable! Even better, the document can be written in anything that generates HTML, which includes Microsoft Word, Excel, OpenOffice, Google documents, and most of the myriad of editors that are available today.

Like Guvnor testing, we can use FIT to test whether our individual requirements are being met as we write our rules. It is also possible to run FIT automatically over multiple requirement documents to ensure that nothing has 'accidentally' broken as we update other rules.

Getting FIT

When you download the samples, you will probably notice three strange packages and folders:

fit-testcase: This folder resides just within the main project folder and contains the FIT requirements documents that we're going to test against.
chap7: This is a folder under src/main/java/net/firstpartners, and contains the start point (FitRulesExample.java) that we'll use to kick-start our FIT tests.
FIT: This folder is next to the chap7 folder. It contains some of the 'magic plumbing' that makes FIT work. Most business users won't care how this works (you probably won't need to change what you find here), but we will take a look at it in more detail in case we want to customize exactly how FIT works.

If you built the previous example using Maven, then all of the required FIT software will have been downloaded for you. (Isn't Maven great?) So we're ready to go.

The FIT requirements document

Open the Word document fit-testcase.doc using Word or OpenOffice. Remember that it's in the fit-testcase folder. fit-testcase.doc is a normal document, without any hidden code. The testing magic lies in the way the document is laid out. More specifically, it's in the tables that you see in the document. All of the other text is optional. Let's go through it.
Logo and the first paragraph

At the very top of the document is the Drools logo and a reference to where you can download the FIT for Rules code. It's also worth reading the text here, as it's another explanation of what the FIT project does and how it works. None of this text matters to FIT; it is ignored, as it contains no tables. We can safely replace it (or any other text in this document that isn't in a table) with your company logo, or whatever you normally put at the top of your requirements documents.

FIT is GPL (General Public License) open source software. This means you can modify it (as long as you publish your changes). In this sample, we've modified it to accept global variables passed into the rules (we will use this feature in step 3). The changes are published in the FIT plumbing directory, which is part of the sample. Feel free to use it in your own projects.

First step—setup

The setup table prepares the ground for our test and declares the objects that we want to use in it. These objects are familiar, as they are the Java facts that we've used in our rules. There's a bit of text (worth reading, as it also explains what the table does), but the part FIT reads is the following table:

net.firstpartners.fit.fixture.Setup
net.firstpartners.chap6.domain.CustomerOrder | AcmeOrder
net.firstpartners.chap6.domain.OoompaLoompaDate | nextAvailableShipmentDate

If you're wondering what this does, try the following explanation in the same table format:

Use the piece of plumbing called 'Setup'
Create a CustomerOrder and call it AcmeOrder
Create an OoompaLoompaDate and call it nextAvailableShipmentDate

There is nothing here that we haven't seen before. Note that we will be passing nextAvailableShipmentDate as a global, so it must match the global of the same name in our rules file (the match includes the exact spelling and the same lower- and uppercase).

Second step—values in

The second part also has the usual text explanation (ignored by FIT) and a table (the important bit), which explains how to set the values:

net.firstpartners.fit.fixture.Populate
AcmeOrder | Set initial balance | 2000
AcmeOrder | Set current balance | 2000

It's a little bit clearer than the first table, but we'll explain it again anyway:

Use the piece of plumbing called Populate
Take the AcmeOrder we created earlier, and set it to have an initial balance of 2000
Take the AcmeOrder we created earlier, and set it to have a current balance of 2000

Third step—click on the Go button

The next table starts the rules; or rather, the table tells FIT to invoke the rules. The rest of the text (which is useful for explaining what is going on to us humans) gets ignored:

net.firstpartners.fit.fixture.Engine
Ruleset | src/main/java/net/firstpartners/chap6/shipping-rules.drl
Assert | AcmeOrder
Global | nextAvailableShipmentDate
Execute |

The following is the same again, in English:

Use the piece of plumbing called 'Engine'
Ruleset: use the rules in shipping-rules.drl
Assert: pass our AcmeOrder to the rule engine (as a fact)
Global: pass our nextAvailableShipmentDate to the rule engine (as a global)
Execute: click on the Go button
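If you prefer plain Java, the same rules can also be driven directly from a unit test, without Guvnor or FIT. The following is a minimal sketch against the Drools 5 knowledge API; the class names, the DRL path, and the global name come from the tables above, but the setter and constructor calls on the domain objects are assumptions based on the FIT 'values in' step, so adjust them to match the real classes:

import net.firstpartners.chap6.domain.CustomerOrder;
import net.firstpartners.chap6.domain.OoompaLoompaDate;
import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

public class ShippingRulesTest {
    public static void main(String[] args) {
        // compile the DRL file into a knowledge base
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newFileResource(
                "src/main/java/net/firstpartners/chap6/shipping-rules.drl"),
                ResourceType.DRL);
        if (kbuilder.hasErrors()) {
            throw new IllegalStateException(kbuilder.getErrors().toString());
        }
        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());

        StatefulKnowledgeSession session = kbase.newStatefulKnowledgeSession();
        try {
            // mirror the FIT setup and populate steps (setter names assumed)
            CustomerOrder acmeOrder = new CustomerOrder();
            acmeOrder.setInitialBalance(2000);
            acmeOrder.setCurrentBalance(2000);
            session.setGlobal("nextAvailableShipmentDate", new OoompaLoompaDate());
            session.insert(acmeOrder);
            session.fireAllRules();
            // inspect acmeOrder here and fail the test if it isn't what you expect
        } finally {
            session.dispose();
        }
    }
}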
Interactive Crime Map Using Flask
Packt
12 Jan 2016
18 min read
In this article by Gareth Dwyer, author of the book Flask By Example, we will cover how to set up a MySQL database on our VPS and create a database for the crime data. We'll follow on from this by setting up a basic page containing a map and textbox. We'll see how to link Flask to MySQL by storing data entered into the textbox in our database. We won't be using an ORM for our database queries or a JavaScript framework for user input and interaction. This means that there will be some laborious writing of SQL and vanilla JavaScript, but it's important to fully understand why tools and frameworks exist, and what problems they solve, before diving in and using them blindly.

(For more resources related to this topic, see here.)

We'll cover the following topics:

Introduction to SQL databases
Installing MySQL on our VPS
Connecting to MySQL from Python and creating the database
Connecting to MySQL from Flask and inserting data

Setting up

We'll create a new git repository for our new code base, since although some of the setup will be similar, our new project should be completely unrelated to our first one. If you need more help with this step, head back to the setup of the first project and follow the detailed instructions there. If you're feeling confident, see if you can do it just with the following summary:

Head over to the website for Bitbucket, GitHub, or whichever hosting platform you used for the first project. Log in and use their Create a new repository functionality. Name your repo crimemap, and take note of the URL you're given. On your local machine, fire up a terminal and run the following commands:

mkdir crimemap
cd crimemap
git init
git remote add origin <git repository URL>

We'll leave this repository empty for now, as we need to set up a database on our VPS. Once we have the database installed, we'll come back here to set up our Flask project.

Understanding relational databases

In its simplest form, a relational database management system, such as MySQL, is a glorified spreadsheet program, such as Microsoft Excel: we store data in rows and columns. Every row is a "thing" and every column is a specific piece of information about the thing in the relevant row. I put "thing" in inverted commas because we're not limited to storing objects. In fact, the most common example, both in the real world and in explaining databases, is data about people. A basic database storing information about customers of an e-commerce website could look something like the following:

ID | First Name | Surname | Email Address        | Telephone
1  | Frodo      | Baggins | fbaggins@example.com | +1 111 111 1111
2  | Bilbo      | Baggins | bbaggins@example.com | +1 111 111 1010
3  | Samwise    | Gamgee  | sgamgee@example.com  | +1 111 111 1001

If we look from left to right in a single row, we get all the information about one person. If we look at a single column from top to bottom, we get one piece of information (for example, an e-mail address) for everyone. Both views can be useful: if we want to add a new person or contact a specific person, we're probably interested in a specific row; if we want to send a newsletter to all our customers, we're just interested in the e-mail column. So why can't we just use spreadsheets instead of databases? Well, if we take the example of an e-commerce store further, we quickly see the limitations. If we want to store a list of all the items we have on offer, we can create another table similar to the preceding one, with columns such as "Item name", "Description", "Price", and "Quantity in stock".
Our model continues to be useful. But now, if we want to store a list of all the items Frodo has ever purchased, there's no good place to put the data. We could add 1000 columns to our customer table ("Purchase 1", "Purchase 2", and so on up to "Purchase 1000") and hope that Frodo never buys more than 1000 items. This isn't scalable or easy to work with: how do we get the description for the item Frodo purchased last Tuesday? Do we just store the item's name in our new column? What happens with items that don't have unique names?

Soon, we realise that we need to think about it backwards. Instead of storing the items purchased by a person in the "Customers" table, we create a new table called "Orders" and store a reference to the customer in every order. Thus, an order knows which customer it belongs to, but a customer has no inherent knowledge of what orders belong to them.

While our model still fits into a spreadsheet at the push of a button, as we grow our data model and data size, our spreadsheet becomes cumbersome. We need to perform complicated queries such as "I want to see all the items that are in stock, have been ordered at least once in the last 6 months, and cost more than $10."

Enter relational database management systems (RDBMS). They've been around for decades and are a tried and tested way of solving a common problem: storing data with complicated relations in an organized and accessible manner. We won't be touching on their full capabilities in our crime map (in fact, we could probably store our data in a .txt file if we needed to), but if you're interested in building web applications, you will need a database at some point. So, let's start small and add the powerful MySQL tool to our growing toolbox.

I highly recommend learning more about databases. If the taster you experience while building our current project takes your fancy, go read and learn about them. The history of RDBMS is interesting, and the complexities and subtleties of normalization and database varieties (including NoSQL databases, which we'll see something of in our next project) deserve more study time than we can devote to them in a book that focuses on Python web development.

Installing and configuring MySQL

Installing and configuring MySQL is an extremely common task. You can therefore find it in prebuilt images or in scripts that build entire stacks for you. A common stack is called the LAMP (Linux, Apache, MySQL, and PHP) stack, and many VPS providers offer a one-click LAMP stack image. As we are already using Linux and have already installed Apache manually, after installing MySQL we'll be very close to the traditional LAMP stack, just using the P for Python instead of PHP. In keeping with our goal of "education first", we'll install MySQL manually and configure it through the command line instead of installing a GUI control panel. If you've used MySQL before, feel free to set it up as you see fit.

Installing MySQL on our VPS

Installing MySQL on our server is quite straightforward. SSH into your VPS and run the following commands:

sudo apt-get update
sudo apt-get install mysql-server

You should see an interface prompting you for a root password for MySQL. Enter a password of your choice and repeat it when prompted.
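Although the book doesn't call for it at this point, it is worth knowing that MySQL ships with a hardening helper you can optionally run right after installation; it removes anonymous users and the test database and can disallow remote root login:

sudo mysql_secure_installation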
Once the installation has completed, you can get a live SQL shell by typing the following command and entering the password you chose earlier:

mysql -p

We could create a database and schema using this shell, but we'll be doing that through Python instead, so hit Ctrl + C to terminate the MySQL shell if you opened it.

Installing Python drivers for MySQL

Because we want to use Python to talk to our database, we need to install another package. There are two main MySQL connectors for Python: PyMySQL and MySQLdb. The first is preferable from a simplicity and ease-of-use point of view. It is a pure Python library, meaning that it has no dependencies. MySQLdb is a C extension, and therefore has some dependencies, but is, in theory, a bit faster. They work very similarly once installed. To install PyMySQL, run the following (still on your VPS):

sudo pip install pymysql

Creating our crimemap database in MySQL

Some knowledge of SQL's syntax will be useful for the rest of this article, but you should be able to follow either way. The first thing we need to do is create a database for our web application. If you're comfortable using a command-line editor, you can create the following scripts directly on the VPS, as we won't be running them locally, and this can make them easier to debug. However, developing over an SSH session is far from ideal, so I recommend that you write them locally and use git to transfer them to the server before running them. This can make debugging a bit frustrating, so be extra careful in writing these scripts. If you want, you can get them directly from the code bundle that comes with this book. In that case, you simply need to populate the password field correctly and everything should work.

Creating a database setup script

In the crimemap directory where we initialised our git repo at the beginning, create a Python file called db_setup.py, containing the following code:

import pymysql
import dbconfig

connection = pymysql.connect(host='localhost',
                             user=dbconfig.db_user,
                             passwd=dbconfig.db_password)
try:
    with connection.cursor() as cursor:
        sql = "CREATE DATABASE IF NOT EXISTS crimemap"
        cursor.execute(sql)
        sql = """CREATE TABLE IF NOT EXISTS crimemap.crimes (
            id int NOT NULL AUTO_INCREMENT,
            latitude FLOAT(10,6),
            longitude FLOAT(10,6),
            date DATETIME,
            category VARCHAR(50),
            description VARCHAR(1000),
            updated_at TIMESTAMP,
            PRIMARY KEY (id)
        )"""
        cursor.execute(sql)
    connection.commit()
finally:
    connection.close()

Let's take a look at what this code does. First, we import the pymysql library we just installed. We also import dbconfig, which we'll create locally in a bit and populate with the database credentials (we don't want to store these in our repository). Then, we create a connection to our database using localhost (because our database is installed on the same machine as our code) and the credentials that don't exist yet. Now that we have a connection to our database, we can get a cursor. You can think of a cursor as being a bit like the blinking object in your word processor that indicates where text will appear when you start typing. A database cursor is an object that points to a place in the database where we want to create, read, update, or delete data. Once we start dealing with database operations, there are various exceptions that could occur.
We'll always want to close our connection to the database, so we create a cursor (and do all subsequent operations) inside a try block, with a connection.close() in a finally block (the finally block will get executed whether or not the try block succeeds). The cursor is also a resource, so we'll grab one and use it in a with block so that it'll automatically be closed when we're done with it. With the setup done, we can start executing SQL code.

Creating the database

SQL reads similarly to English, so it's normally quite straightforward to work out what existing SQL does, even if it's a bit more tricky to write new code. Our first SQL statement creates a database (crimemap) if it doesn't already exist (this means that if we come back to this script, we can leave this line in without deleting the entire database every time). We create our first SQL statement as a string and use the variable sql to store it. Then we execute the statement using the cursor we created.

Using the database setup script

We save our script locally and push it to the repository using the following commands:

git add db_setup.py
git commit -m "database setup script"
git push origin master

We then SSH to our VPS and clone the new repository to our /var/www directory using the following commands:

ssh user@123.456.789.123
cd /var/www
git clone <your-git-url>
cd crimemap

Adding credentials to our setup script

Now, we still don't have the credentials that our script relies on. We'll do the following things before using our setup script:

Create the dbconfig.py file with the database user and password.
Add this file to .gitignore to prevent it from being added to our repository.

The following are the steps to do so. Create and edit dbconfig.py using the nano command:

nano dbconfig.py

Then, type the following, using the password you chose when you installed MySQL. Note that the variable names have to match the ones our scripts import, namely db_user and db_password:

db_user = "root"
db_password = "<your-mysql-password>"

Save it by hitting Ctrl + X and entering Y when prompted. Now, use similar nano commands to create, edit, and save .gitignore, which should contain this single line:

dbconfig.py

Running our database setup script

With that done, you can run the following command:

python db_setup.py

Assuming everything goes smoothly, you should now have a database with a table to store crimes. Python will output any SQL errors, allowing you to debug if necessary. If you make changes to the script from the server, run the same git add, git commit, and git push commands that you did from your local machine. That concludes our preliminary database setup! Now we can create a basic Flask project that uses our database.
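If you want to double-check the script's work before writing any Flask code, a throwaway snippet like the following (not part of the project; it just reuses the same dbconfig credentials) will print the schema of the new table:

import pymysql
import dbconfig

# quick sanity check: describe the crimes table we just created
connection = pymysql.connect(host='localhost', user=dbconfig.db_user,
                             passwd=dbconfig.db_password, db='crimemap')
try:
    with connection.cursor() as cursor:
        cursor.execute("DESCRIBE crimes")
        for column in cursor.fetchall():
            print(column)
finally:
    connection.close()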
Creating an outline for our Flask app

We're going to start by building a skeleton of our crime map application. It'll be a basic Flask application with a single page that:

Displays all data in the crimes table of our database
Allows users to input data and stores this data in the database
Has a "clear" button that deletes all the previously input data

Although what we're going to be storing and displaying can't really be described as "crime data" yet, we'll be storing it in the crimes table that we created earlier. We'll just be using the description field for now, ignoring all the other ones. The process to set up the Flask application is very similar to what we used before. We're going to separate out the database logic into a separate file, leaving our main crimemap.py file for the Flask setup and routing.

Setting up our directory structure

On your local machine, change to the crimemap directory. If you created the database setup script on the server or made any changes to it there, then make sure you sync the changes locally. Then, create the templates directory and touch the files we're going to be using, as follows:

cd crimemap
git pull origin master
mkdir templates
touch templates/home.html
touch crimemap.py
touch dbhelper.py

Looking at our application code

The crimemap.py file contains nothing unexpected and should be entirely familiar from our headlines project. The only thing to point out is the DBHelper() instance, whose code we'll see next. We simply create a global DBHelper() instance right after initializing our app and then use it in the relevant methods to grab data from, insert data into, or delete all data from the database.

from dbhelper import DBHelper
from flask import Flask
from flask import render_template
from flask import request

app = Flask(__name__)
DB = DBHelper()

@app.route("/")
def home():
    try:
        data = DB.get_all_inputs()
    except Exception as e:
        print e
        data = None
    return render_template("home.html", data=data)

@app.route("/add", methods=["POST"])
def add():
    try:
        data = request.form.get("userinput")
        DB.add_input(data)
    except Exception as e:
        print e
    return home()

@app.route("/clear")
def clear():
    try:
        DB.clear_all()
    except Exception as e:
        print e
    return home()

if __name__ == '__main__':
    app.run(debug=True)

Looking at our SQL code

There's a little bit more SQL to learn from our database helper code. In dbhelper.py, we need the following:

import pymysql
import dbconfig

class DBHelper:

    def connect(self, database="crimemap"):
        return pymysql.connect(host='localhost',
                               user=dbconfig.db_user,
                               passwd=dbconfig.db_password,
                               db=database)

    def get_all_inputs(self):
        connection = self.connect()
        try:
            query = "SELECT description FROM crimes;"
            with connection.cursor() as cursor:
                cursor.execute(query)
                return cursor.fetchall()
        finally:
            connection.close()

    def add_input(self, data):
        connection = self.connect()
        try:
            # use a parameterized query so the user's input is escaped safely
            query = "INSERT INTO crimes (description) VALUES (%s);"
            with connection.cursor() as cursor:
                cursor.execute(query, (data,))
            connection.commit()
        finally:
            connection.close()

    def clear_all(self):
        connection = self.connect()
        try:
            query = "DELETE FROM crimes;"
            with connection.cursor() as cursor:
                cursor.execute(query)
            connection.commit()
        finally:
            connection.close()

As in our setup script, we need to make a connection to our database and then get a cursor from the connection in order to do anything meaningful. Again, we perform all our operations in try: ... finally: blocks in order to ensure that the connection is closed.

In our helper code, we see three of the four main database operations. CRUD (Create, Read, Update, and Delete) describes the basic database operations. We are either creating and inserting new data, or reading, modifying, or deleting existing data. We have no need to update data in our basic app, but creating, reading, and deleting are certainly useful.
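For completeness, the U in CRUD would follow exactly the same pattern as the helpers above. We don't need it in this app, but a hypothetical update method (the method name is mine, not the book's) would look like this:

def update_input(self, crime_id, description):
    connection = self.connect()
    try:
        # UPDATE is the fourth CRUD operation; parameterized like the others
        query = "UPDATE crimes SET description = %s WHERE id = %s;"
        with connection.cursor() as cursor:
            cursor.execute(query, (description, crime_id))
        connection.commit()
    finally:
        connection.close()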
Creating our view code

Python and SQL code is fun to write, and it is indeed the main part of our application. However, at the moment, we have a house without doors or windows; the difficult and impressive bit is done, but it's unusable. Let's add a few lines of HTML to allow the world to interact with the code we've written. In /templates/home.html, add the following:

<html>
<head>
    <title>Crime Map</title>
</head>
<body>
    <h1>Crime Map</h1>
    <form action="/add" method="POST">
        <input type="text" name="userinput">
        <input type="submit" value="Submit">
    </form>
    <a href="/clear">clear</a>
    {% for userinput in data %}
        <p>{{userinput}}</p>
    {% endfor %}
</body>
</html>

There's nothing we haven't seen before. We have a form with a single text input box that adds data to our database by calling the /add function of our app, and directly below it, we loop through all the existing data and display each piece within <p> tags.

Running the code on our VPS

Finally, we just need to make our code accessible to the world. This means pushing it to our git repo, pulling it onto the VPS, and configuring Apache to serve it. Run the following commands locally:

git add .
git commit -m "Skeleton CrimeMap"
git push origin master
ssh <username>@<vps-ip-address>

And on your VPS, use the following commands:

cd /var/www/crimemap
git pull origin master

Now, we need a .wsgi file to link our Python code to Apache:

nano crimemap.wsgi

The .wsgi file should contain the following:

import sys
sys.path.insert(0, "/var/www/crimemap")
from crimemap import app as application

Hit Ctrl + X and then Y when prompted to save. We also need to create a new Apache .conf file and set this as the default (instead of the headlines.conf file that is our current default), as follows:

cd /etc/apache2/sites-available
nano crimemap.conf

This file should contain the following:

<VirtualHost *>
    ServerName example.com
    WSGIScriptAlias / /var/www/crimemap/crimemap.wsgi
    WSGIDaemonProcess crimemap
    <Directory /var/www/crimemap>
        WSGIProcessGroup crimemap
        WSGIApplicationGroup %{GLOBAL}
        Order deny,allow
        Allow from all
    </Directory>
</VirtualHost>

This is so similar to the headlines.conf file we created for our previous project that you might find it easier to just copy that one and substitute code as necessary. Finally, we need to deactivate the old site (later on, we'll look at how to run multiple sites simultaneously off the same server) and activate the new one:

sudo a2dissite headlines.conf
sudo a2ensite crimemap.conf
sudo service apache2 reload

Now, everything should be working. If you copied the code out manually, it's almost certain that there's a bug or two to deal with. Don't be discouraged by this; remember that debugging is expected to be a large part of development! If necessary, do a tail -f on /var/log/apache2/error.log while you load the site in order to see any errors. If this fails, add some print statements to crimemap.py and dbhelper.py to narrow down where things are breaking. Once everything is working, you should be able to see the following in your browser:

Notice how the data we get from the database is a tuple, which is why it is surrounded by brackets and has a trailing comma. This is because we selected only a single field (description) from our crimes table, when we could, in theory, be dealing with many columns for each crime (and soon will be).

Summary

That's it for the introduction to our crime map project.

Resources for Article:

Further resources on this subject:

Web Scraping with Python [article]
Python 3: Building a Wiki Application [article]
Using memcached with Python [article]
Adding Features to your Joomla! Form using ChronoForms
Packt
20 Aug 2010
11 min read
(For more resources on ChronoForms, see here.)

Introduction

We have so far mostly worked with fairly standard forms where the user is shown some inputs, enters some data, and the results are e-mailed and/or saved to a database table. Many forms are just like this, and some have other features added. These features can be of many different kinds, and the recipes in this article are correspondingly a mixture. Some, like Adding a validated checkbox, change the way the form works. Others, like Signing up to a newsletter service, change what happens after the form is submitted. While you can use these recipes as they are presented, they are just as useful as suggestions for ways to use ChronoForms to solve a wide range of user interactions on your site.

Adding a validated checkbox

Checkboxes are less often used on forms than most of the other elements, and they have some slightly unusual behavior that we need to manage. ChronoForms will do a little to help us, but not everything that we need. In this recipe, we'll look at one of the most common applications: a standalone checkbox that the user is asked to click to confirm that they've accepted some terms and conditions. We want to make sure that the form is not submitted unless the box is checked.

Getting ready

We'll just add one more element to our basic newsletter form. It's probably going to be best to recreate a new version of the form using the Form Wizard to make sure that we have a clean starting point.

How to do it...

In the Form Wizard, create a new form with two TextBox elements. In the Properties box, add the labels "Name" and "Email" and the field names "name" and "email" respectively. Now drag in a CheckBox element. You'll see that ChronoForms inserts the element with three checkboxes, and we only need one. In the Properties box, remove the default values and type in "I agree". While you are there, change the label to "Terms and Conditions". Lastly, we want to make sure that this box is checked, so check the Validation | One Required checkbox and add "please confirm your agreement" in the Validation Message box. Apply the changes to the Properties. To complete the form, add the Button element; then save your form, publish it, and view it in your browser. To test, click the Submit button without entering anything. You should find that the form does not submit and an error message is displayed.

How it works...

The only special thing to notice here is that the validation we used was validate-one-required and not the more familiar required. Checkbox arrays, radio button groups, and select drop-downs will not work with the required option, as they always have a value set, at least from the perspective of the JavaScript that is running the validation.

There's more...

Validating the checkbox server-side

If the checkbox is really important to us, then we may want to confirm that it has been checked using the server-side validation box. We want to check and, if our box isn't checked, create the error message. However, there is a little problem: an unchecked checkbox doesn't return anything at all; there is just no entry in the form results array. Joomla! has some functionality that will help us out though; the JRequest::getVar() function that we use to get the form results allows us to set a default value. If nothing is found in the form results, then the default value will be used instead.
So we can add this code block to the server-side validation box:

<?php
$agree = JRequest::getString('check0[]', 'empty', 'post');
if ( $agree == 'empty' ) {
    return 'Please check the box to confirm your agreement';
}
?>

Note: To test this, we need to remove the validate-one-required class from the input in the Form HTML. Now, when we submit the empty form, we see the ChronoForms error message.

Notice that the input name in the code snippet is check0[]. ChronoForms doesn't give you the option of setting the name of a checkbox element in the Form Wizard | Properties box. It assigns a check0, check1, and so on value for you. (You can edit this in the Form Editor if you like.) And because checkboxes often come in arrays of several linked boxes with the same name, ChronoForms also adds the [] to create an array name. If this isn't done, then only the value of the last checked box will be returned.

Locking the Submit button until the box is checked

If we want to make the point about terms and conditions even more strongly, then we can add some JavaScript to the form to disable the Submit button until the box is checked. We need to make one change to the Form HTML to make this task a little easier. ChronoForms does not add an ID attribute to the Submit button input, so open the form in the Form Editor, find this line near the end of the Form HTML, and alter it to read:

<input value="Submit" name="submit" id='submit' type="submit" />

Now add the following snippet into the Form JavaScript box:

// stop the code executing
// until the page is loaded in the browser
window.addEvent('load', function() {
    // function to enable and disable the submit button
    function agree() {
        if ( $('check00').checked == true ) {
            $('submit').disabled = false;
        } else {
            $('submit').disabled = true;
        }
    };
    // disable the submit button on load
    $('submit').disabled = true;
    // execute the function when the checkbox is clicked
    $('check00').addEvent('click', agree);
});

Apply or save the form and view it in your browser. Now, as you tick or untick the checkbox, the Submit button will be enabled and disabled. This is a simple example of adding a custom script to a form to add a useful feature. If you are reasonably competent in JavaScript, you will find that there is quite a lot more that you can do.

There are different styles of laying out both JavaScript and PHP, and sometimes fierce debates about where line breaks and spaces should go. We've adopted a style here that is hopefully fairly clear, reasonably compact, and more or less the same for both JavaScript and PHP. If it's not the style you are accustomed to, then we're sorry.

Adding an "other" box to a drop-down

Drop-downs are a valuable way of offering a list of choices to your user to select from. And sometimes it just isn't possible to make the list complete; there's always another option that someone will want to add. So we add an "other" option to the drop-down. But that by itself tells us nothing, so we also need to add an input to tell us what "other" means here.

Getting ready

We'll just add one more element to our basic newsletter form. We haven't used a drop-down before, but it is very similar to the checkbox element from the previous recipe.

How to do it...

Use the Form Wizard to create a form with two TextBox elements, a DropDown element, and a Button element.
The changes to make in the element are:

Add "I heard from" in the Label
Change the Field Name to "hearabout"
Add some options to the Options box: "Google", "Newspaper", "Friend", and "Other"
Leave the Add Choose Option box checked and leave Choose Option in the Choose Option Text box

Apply the Properties box. Make any other changes you need to the form elements; then save the form, publish it, and view it in your browser. Notice that, as well as the four options we added, the Choose Option entry is at the top of the list. That comes from the checkbox and text field that we left with their default values. It's important to have a "null" option like this in a drop-down for two reasons. First, so that it is obvious to a user that no choice has been made. Otherwise, it's very easy for them to leave the first option showing, and this value (Google, in this case) will be returned by default. Second, so that we can validate with select-one-required if necessary. The "null" option has no value set and so can be detected by the validation script.

Now we just need one more text box to collect details if Other is selected. Open the form in the Wizard Edit and add one more TextBox element after the DropDown element. Give it the label "please add details" and the name "other". Even though we set the name to "other", ChronoForms will have left the input ID attribute as text_4 or something similar. Open the form in the Form Editor and change the ID to "other" as well. The same is true of the drop-down: the ID there is select_2; change that to "hearabout".

Now we need a script snippet to enable and disable the "other" text box when Other is selected in the drop-down. Here's the code to put in the Form JavaScript box:

window.addEvent('domready', function() {
    $('hearabout').addEvent('change', function() {
        if ($('hearabout').value == 'Other' ) {
            $('other').disabled = false;
        } else {
            $('other').disabled = true;
        }
    });
    $('other').disabled = true;
});

This is very similar to the code in the last recipe, except that it's been condensed a little more by merging the function directly into the addEvent(). When you view the form, you will see that the text box for "please add details" is grayed out and blocked until you select Other in the drop-down.

Make sure that you don't make the "please add details" input required. It's an easy mistake to make, but it stops the form working correctly, as you then have to select Other in the drop-down to be able to submit it.

How it works...

Once again, this is a little JavaScript that checks for changes in one part of the form in order to alter the display of another part of the form.

There's more...

Hiding the whole input

It looks a little untidy to have the disabled box showing on the form when it is not required. Let's change the script a little to hide and unhide the input instead of disabling and enabling it. To make this work, we need a way of recognizing the input together with its label. We could deal with both separately, but let's make our lives simpler.
In the Form Editor, open the Form HTML box and look near the end for the "other" input block:

<div class="form_item">
  <div class="form_element cf_textbox">
    <label class="cf_label" style="width: 150px;">please add details</label>
    <input class="cf_inputbox" maxlength="150" size="30" title="" id="other" name="other" type="text" />
  </div>
  <div class="cfclear">&nbsp;</div>
</div>

That <div class="form_element cf_textbox"> looks like it is just what we need, so let's add an ID attribute to make it visible to the JavaScript:

<div class="form_element cf_textbox" id="other_input">

Now we'll modify our script snippet to use this:

window.addEvent('domready', function() {
  $('hearabout').addEvent('change', function() {
    if ($('hearabout').value == 'Other' ) {
      $('other_input').setStyle('display', 'block');
    } else {
      $('other_input').setStyle('display', 'none');
    }
  });
  // initialise the display
  if ($('hearabout').value == 'Other' ) {
    $('other_input').setStyle('display', 'block');
  } else {
    $('other_input').setStyle('display', 'none');
  }
});

Apply or save the form and view it in your browser. Now the input is invisible (see the screenshot labeled 1) until you select Other from the drop-down (see the screenshot labeled 2). The disadvantage of this approach is that the form can appear to "jump around" as extra fields appear. You can overcome this with a little thought, for example by leaving an empty space.

See also

In some of the scripts here we are using shortcuts from the MooTools JavaScript framework. Version 1.1 of MooTools is installed with Joomla! 1.5 and is usually loaded by ChronoForms. You can find the documentation for MooTools v1.1 at http://docs111.mootools.net/. Version 1.1 is not the latest version of MooTools, and many of the more recent MooTools scripts will not run with the earlier version. Joomla! 1.6 is expected to use the latest release.
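As a closing aside, the initialization at the end of the last snippet repeats the if/else block. If you prefer to keep things tidy, the toggle logic can be factored into a small helper, still using the same MooTools 1.1 idioms; this is a sketch, not code taken from ChronoForms itself:

window.addEvent('domready', function() {
  // show or hide the "other" input so it matches the drop-down
  var toggleOther = function() {
    var show = ($('hearabout').value == 'Other');
    $('other_input').setStyle('display', show ? 'block' : 'none');
  };
  // run once on load, then again on every change
  toggleOther();
  $('hearabout').addEvent('change', toggleOther);
});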
Linux Shell Script: Tips and Tricks

Packt
15 Apr 2011
7 min read
Linux Shell Scripting Cookbook: solve real-world shell scripting problems with over 110 simple but incredibly effective recipes. In this article, we will take a look at some tips and tricks on working with Linux shell scripts.

Successful and unsuccessful commands

Tip: When a command terminates after an error, it returns a non-zero exit status; when it terminates after successful completion, it returns zero. The return status can be read from the special variable $? (run echo $? immediately after the command execution statement to print the exit status).

Fork bomb

:(){ :|:& };:

Tip: This is a recursive function that calls itself. It spawns processes endlessly and ends up in a denial-of-service attack. The & is postfixed to the function call to bring the subprocess into the background. This is dangerous code, as it forks processes; therefore, it is called a fork bomb. You may find it difficult to interpret the above code; see the Wikipedia page http://en.wikipedia.org/wiki/Fork_bomb for more details and an interpretation of the fork bomb. It can be prevented by restricting the maximum number of processes that can be spawned, in the config file /etc/security/limits.conf.

Specify -maxdepth and -mindepth as the third argument

Tip: -maxdepth and -mindepth should be specified as the third argument to find. If they are specified as the fourth or later arguments, it may affect the efficiency of find, as it has to do unnecessary checks. For example, if -maxdepth is specified as the fourth argument and -type as the third argument, find first finds all the files having the specified -type and only then filters the matches by depth. However, if the depth is specified as the third argument and -type as the fourth, find can collect all the files having at most the specified depth and then check for the file type, which is the more efficient way of searching.

-exec with multiple commands

Tip: We cannot use multiple commands along with the -exec parameter. It accepts only a single command, but we can use a trick: write multiple commands in a shell script (for example, commands.sh) and use it with -exec as follows:

-exec ./commands.sh {} \;

-n option for numeric sort

Tip: Always be careful about the -n option for numeric sort. The sort command treats alphabetical sort and numeric sort differently; hence, in order to specify a numeric sort, the -n option should be provided.

The ## operator

Tip: The ## operator is preferred over the # operator for extracting an extension from a filename, since the filename may contain multiple '.' characters. Since ## makes a greedy match, it always extracts the extension only.

Recursively search many files

Tip: To recursively search for a text over many directories of descendants, use:

$ grep "text" . -R -n

This is one of the most frequently used commands by developers. It is used to find the source file in which a certain text exists.

Placing variable assignments

Tip: Usually, in awk, we place initial variable assignments, such as var=0;, and statements that print the file header in the BEGIN block. In the END{} block, we place statements such as printing results and so on.

-d argument

Tip: The -d argument should always be given in quotes. If quotes are not used, & is interpreted by the shell to indicate that this should be a background process.

Excluding a set of files from archiving

Tip: It is possible to exclude a set of files from archiving by specifying patterns. Use --exclude [PATTERN] for excluding files matched by wildcard patterns. For example, to exclude all .txt files from archiving, use:

$ tar -cf arch.tar * --exclude "*.txt"

Note that the pattern should be enclosed in double quotes.
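GNU tar can also read a list of exclusion patterns from a file using the -X option, which keeps long pattern lists out of the command line. A quick sketch (the file name exclude_list is only illustrative):

$ cat exclude_list
*.txt
*.log
$ tar -cf arch.tar * -X exclude_list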
Using the cpio command for absolute paths

Tip: By using cpio, we can also archive files using absolute paths. /usr/somedir is an absolute path, as it contains the full path starting from root (/). A relative path does not start with /; it starts from the current directory. For example, test/file means that there is a directory test and that file is inside the test directory. While extracting, cpio extracts to the absolute path itself. In the case of tar, however, the leading / is removed from the absolute path and it is converted to a relative path.

PATH format

Tip: For the path format, if we use / at the end of the source, rsync will copy the contents of that end directory specified in source_path to the destination. If / is not at the end of the source, rsync will copy that end directory itself to the destination.

/ at the end of destination_path

Tip: If / is at the end of destination_path, rsync will copy the source to the destination directory. If / is not used at the end of the destination path, rsync will create a folder, named after the source directory, at the end of the destination path and copy the source into that directory.

wait command

Tip: The wait command enables a script to be terminated only after all its child processes or background processes terminate or complete.

First argument of the sftp command

Tip: -oPort should be the first argument of the sftp command.

Running du DIRECTORY

Tip: Running du DIRECTORY will output a similar result, but it will show only the size consumed by subdirectories; it does not show the disk usage for each of the files. For printing the disk usage by files, -a is mandatory.

du DIRECTORY command traversal

Tip: du can be restricted to traverse only a single filesystem by using the -x argument. If du DIRECTORY is run, it will traverse through every possible subdirectory of DIRECTORY recursively. A subdirectory in the directory hierarchy may be a mount point (for example, /mnt/sda1 is a subdirectory of /mnt and a mount point for the device /dev/sda1). du will traverse that mount point and calculate the sum of disk usage for that device's filesystem as well. To prevent du from traversing other mount points or filesystems, use the -x flag along with the other du options. du -x / will exclude all mount points in /mnt/ from the disk usage calculation.

Use an absolute path for the executable

Tip: An executable binary of the time command is available at /usr/bin/time, and a shell built-in named time also exists. When we run time, it calls the shell built-in by default. The shell built-in time has limited options; hence, we should use the absolute path of the executable (/usr/bin/time) to perform the additional functionalities.

-x argument

Tip: The -x argument, along with -a, specifies the removal of the TTY restriction imposed by default by ps. Usually, using ps without arguments prints only the processes that are attached to a terminal.

Parameters for -o

Tip: Parameters for -o are delimited by using the comma (,) operator. It should be noted that there is no space between the comma operator and the next parameter. Mostly, the -o option is combined with the -e (every) option (-oe), since it should list every process running in the system. However, when certain filters are used along with -o, such as those used for listing the processes owned by specified users, -e is not used along with -o. Using -e with a filter will nullify the filter, and it will show all process entries.
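As a rough illustration of the last tip (the output columns chosen here are standard -o keywords, but the exact output depends on your ps implementation):

$ ps -eo pid,user,comm     # every process, with the chosen output parameters
$ ps -u root -o pid,comm   # a user filter is used, so -e is omitted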
pgrep command

Tip: pgrep requires only a portion of the command name as its input argument to extract a Bash command; for example, pgrep ash or pgrep bas will also work. But ps requires you to type the exact command.

apropos

Tip: Sometimes we need to search whether some command related to a word exists. We can then search the manpages for strings related to the command. For this we can use:

apropos COMMAND

In this article, we took a look at some tips and tricks on working with Linux shell scripts.

Further resources on this subject:
Linux Email [Book]
Installing VirtualBox on Linux [Article]
Linux Shell Script: Logging Tasks [Article]
Linux Shell Script: Monitoring Activities [Article]
Compression Formats in Linux Shell Script [Article]
Making a Complete yet Small Linux Distribution [Article]

Building a WPF .NET Client

Packt
16 Sep 2015
8 min read
In this article by Einar Ingebrigtsen, author of the book SignalR: Real-time Application Development - Second Edition, we will bring the full feature set of what we've built so far for the web onto the desktop through a WPF .NET client. There are quite a few ways of developing Windows client solutions; WPF was introduced back in 2005 and has become one of the most popular ways of developing software for Windows. In WPF, we have something called XAML, which is what Windows Phone development supports and is also the latest programming model in Windows 10.

In this chapter, the following topics will be covered:
MVVM
A brief introduction to the SOLID principles
XAML
WPF

(For more resources related to this topic, see here.)

Decoupling it all

So you might be asking yourself, what is MVVM? It stands for Model View ViewModel: a pattern for client development that became very popular in the XAML stack, enabled by Microsoft and based on Martin Fowler's presentation model (http://martinfowler.com/eaaDev/PresentationModel.html). Its principle is that you have a ViewModel that holds the state and exposes behavior that can be utilized from a view. The view observes any changes of the state that the ViewModel exposes, leaving the ViewModel totally unaware that there is a View. The ViewModel is decoupled, can be put in isolation, and is perfect for automated testing. Part of the state that the ViewModel typically holds is the model, which it usually gets from the server, and a SignalR hub is the perfect transport to get this. It boils down to recognizing the different concerns that make up the frontend and separating them all. This gives us the following diagram:

Decoupling – the next level

In this chapter, one of the things we will brush up on is the usage of the Dependency Inversion Principle, the D of SOLID. Let's start with the first principle: the S in SOLID, the Single Responsibility Principle, which states that a method or a class should have only one reason to change and only one responsibility. With this, our units can't take on more than one responsibility, and they need help from collaborators to do the entire job. These collaborators are things we now depend on, and we should represent these dependencies clearly to our units so that anyone or anything instantiating them knows what we are depending on. We have now flipped around the way in which we get dependencies: instead of the unit trying to instantiate everything itself, we now clearly state what we need as collaborators, opening up for the calling code to decide which implementations of these dependencies to pass on. This is an important aspect: typically, you'd want the dependencies expressed in the form of interfaces, yielding flexibility for the calling code.

Basically, what this all means is that instead of a unit or system instantiating and managing its dependencies, we decouple and let something called an Inversion of Control container deal with this. In the sample, we will use an IoC (Inversion of Control) container called Ninject, which will deal with this for us. What it basically does is manage which implementations to give to the dependencies specified on the constructor. Often, you'll find that the dependencies are interfaces in C#. This means one is not coupled to a specific implementation and has the flexibility of changing things at runtime based on configuration. Another role of the IoC container is to govern the life cycle of the dependencies. It is responsible for knowing when to create new instances and when to reuse an instance. For instance, in a web application, there are some systems that you want to have a life cycle of per request, meaning that we will get the same instance for the lifetime of a web request. The life cycle is configurable in what is known as a binding: when you explicitly set up the relationship between a contract (interface) and its implementation, you can choose to set up the life cycle behavior as well.
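To make this concrete, here is a minimal sketch of constructor injection resolved through Ninject. The interface and class names (IChatService and friends) are invented for illustration and are not from the chapter's code:

using Ninject;

public interface IChatService
{
    void Send(string message);
}

public class SignalRChatService : IChatService
{
    // In a real client, this would push the message through a hub connection
    public void Send(string message) { }
}

public class ChatViewModel
{
    readonly IChatService _chatService;

    // The dependency is stated clearly on the constructor
    public ChatViewModel(IChatService chatService)
    {
        _chatService = chatService;
    }

    public void SendMessage(string message)
    {
        _chatService.Send(message);
    }
}

public static class Program
{
    public static void Main()
    {
        // The container owns the binding and governs the life cycle
        var kernel = new StandardKernel();
        kernel.Bind<IChatService>().To<SignalRChatService>().InSingletonScope();

        // The container resolves ChatViewModel's dependency for us
        var viewModel = kernel.Get<ChatViewModel>();
        viewModel.SendMessage("Hello from the desktop!");
    }
}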
Building for the desktop

The first thing we will need is a separate project in our solution. Let's add it by right-clicking on the solution in Solution Explorer and navigating to Add | New Project. In the Add New Project dialog box, we want to make sure .NET Framework 4.5.1 is selected. We could have gone with 4.5, but some of the dependencies we're going to use have switched to 4.5.1, and it is the latest version of the .NET Framework at the time of writing, so if you can, use it. Make sure to select Windows Desktop and then select WPF Application. Give the project the name SignalRChat.WPF and then click on the OK button.

Setting up the packages

We will need some packages to get started properly. This process is described in detail in Chapter 1, The Primer. Let's start off by adding SignalR, the primary framework we will be working with, to move on. We will pull this in using NuGet, as described in Chapter 1, The Primer: right-click on References in Solution Explorer, select Manage NuGet Packages, and type Microsoft.AspNet.SignalR.Client in the Search dialog box. Select it and click on Install.

Next, we're going to pull down something called Bifrost. Bifrost is a library that helps us build MVVM-based solutions on WPF; there are a few other solutions out there, but we'll focus on Bifrost. Add a package called Bifrost.Client. Then, we need the package that gives us the IoC container called Ninject, working together with Bifrost. Add a package called Bifrost.Ninject.

Observables

One of the things that is part of WPF and all other XAML-based platforms is the notion of observables, be it properties or collections that notify when they change. The notification is done through well-known interfaces such as INotifyPropertyChanged or INotifyCollectionChanged. Implementing these interfaces quickly becomes tedious in all the places where you want to notify everything when there are changes. Luckily, there are ways to make this pretty much go away: we can generate the code for this instead, either at runtime or at build time. For our project, we will go for a build-time solution. To accomplish this, we will use something called Fody and a plugin for it called PropertyChanged. Add another NuGet package called PropertyChanged.Fody. If you happen to get problems during compilation, it could be the result of the dependency on a package called Fody not being installed. This happens for some versions of the package in combination with the latest Roslyn compiler. To fix this, install the NuGet package called Fody explicitly.
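For contrast, this is roughly the boilerplate that PropertyChanged.Fody weaves in for you at build time. It is a hand-written sketch with an invented ViewModel and property, not the actual generated code:

using System.ComponentModel;

public class ChatViewModel : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    string _message;
    public string Message
    {
        get { return _message; }
        set
        {
            if (_message == value) return;
            _message = value;
            OnPropertyChanged("Message");
        }
    }

    void OnPropertyChanged(string propertyName)
    {
        // Notify any observing view that the property has a new value
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

With Fody and the PropertyChanged plugin installed, you write only the auto-property, and the notification plumbing is injected into the compiled assembly.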
Now that we have all the packages, we will need some configuration in code. Open the App.xaml.cs file and add the following statement:

using Bifrost.Configuration;

The next thing we will need is a constructor for the App class:

public App()
{
    Configure.DiscoverAndConfigure();
}

This will tell Bifrost to discover the implementations of the well-known interfaces to do the configuration. Bifrost uses the IoC container internally all the time, so the next thing we need to do is give it an implementation. Add a class called ContainerCreator at the root of the project. Make it look as follows:

using Bifrost.Configuration;
using Bifrost.Execution;
using Bifrost.Ninject;
using Ninject;

namespace SignalRChat.WPF
{
    public class ContainerCreator : ICanCreateContainer
    {
        public IContainer CreateContainer()
        {
            var kernel = new StandardKernel();
            var container = new Container(kernel);
            return container;
        }
    }
}

We've chosen Ninject among the others that Bifrost supports, mainly out of familiarity and habit. If you happen to have another favorite, Bifrost supports a few; it's also fairly easy to implement your own support. Just go to the source at http://github.com/dolittle/bifrost to find reference implementations. In order for Bifrost to target the desktop, we need to tell it so through configuration. Add a class called Configurator at the root of the project. Make it look as follows:

using Bifrost.Configuration;

namespace SignalRChat.WPF
{
    public class Configurator : ICanConfigure
    {
        public void Configure(IConfigure configure)
        {
            configure.Frontend.Desktop();
        }
    }
}

Summary

Although there are differences between creating a web solution and a desktop client, the differences have faded over time. We can apply the same principles across the different environments; it's just different programming languages. The SignalR API adds the same type of consistency in thinking, although it is not as mature as the JavaScript API, with proxy generation and so on; still, the same ideas and concepts are found in the underlying API.

Resources for Article:
Further resources on this subject:
The Importance of Securing Web Services [article]
Working with WebStart and the Browser Plugin [article]
Microsoft Azure – Developing Web API for Mobile Apps [article]

Data Modeling Naming Standards with IBM InfoSphere Data Architect

Packt
24 Dec 2009
4 min read
The Prime-Class-Modifier Words Pattern

Prime words represent key business entities. In an insurance business, examples of prime words are policy and coverage. A class word is a category that qualifies a prime word; for example, in the name policy code, code is a class word. policy code can be further qualified by a modifier word; for instance, previous policy code, where previous is the modifier word.

You can define your own naming pattern, different from the above modifier-prime-class pattern, for a specific modeling object, including the separator between words and whether the modifier word or class word in the pattern is optional. You can have, for instance, a modifier?_prime_modifier?_class_modifier? pattern for attribute naming in a logical data model. The ? characters indicate that the words are optional, and the separators are _. An example name with that pattern is permanent employee last name, assuming we have defined in our standard permanent as a modifier word, employee as a prime word, last as a modifier word, and name as a class word. Note that we don't have the last optional modifier word in this example. In a different business (not insurance), code might well be a prime word and policy might not be a prime word; hence the need to define your own specific list of prime, class, and modifier words, and that is what you build in the glossary model.

Building the Glossary Model

InfoSphere Data Architect (IDA) allows you to build a glossary model from blank or from a pre-defined enterprise model.

Creating a glossary model and selecting its template, blank or pre-built enterprise template

The enterprise glossary model gives you a head start with its collection of words relevant across various business types, most of which would probably be applicable to your business too. You can customize the glossary: change or delete the existing words, or add new ones.

Selecting an existing word or words in the list and then clicking the cross icon will delete the selected words

Clicking the plus icon allows you to add a new word into the glossary

When you add a new word, in addition to the name, you specify its Abbreviation, Alternate name, and, most importantly, its type (CLASS, PRIME) and whether it is a Modifier word. When the glossary is applied in transforming a logical model to a physical model, the abbreviation is applied to the physical modeling object.

Customizing a word being added

Selecting the type of a word

Before we can apply the words to naming our data model objects, we need to define the naming pattern. You can define the naming pattern for logical and physical modeling objects. The top-to-bottom sequence of the word types in the pattern becomes the left-to-right order when you apply them in names. You can also choose the separator for your naming pattern: space or title case for the logical model, and any character for the physical model (the most preferred choice would be a non-alphanumeric character that is not used in any of the words in the glossary).

Defining the pattern for logical model objects (entity and attribute)

Defining the pattern for physical model objects (table and column)

Specifying the separator for the logical model

Specifying the separator for the physical model

You then choose the glossary model that you want to apply to your data models.

Glossary Model.ndm in the packtpub directory is applied

When you have finished building your glossary model and defining the naming pattern, you can then apply them for naming your modeling objects.
(You can further adjust the words in the glossary when such a need arises.)
Ubuntu Server and WordPress in 15 Minutes Flat

Packt
21 Sep 2010
6 min read
(For more resources on WordPress, see here.)

Introduction

Ubuntu Server is a robust, powerful, and user-friendly distribution engineered by a dedicated team at Canonical as well as hundreds (if not thousands) of volunteers around the world. It powers thousands of server installations, both public and private, and is becoming a very popular and trusted solution for all types of server needs. In this article I will outline how to install Ubuntu Server toward the goal of running and publishing your own blog, using the WordPress blogging software. This can be used to run a personal blog out of your home, or even a corporate blog in a workplace. Hundreds of companies use WordPress as their blogging software of choice; I've even deployed it at my office. I personally maintain about a dozen WordPress installations, all at varying levels of popularity and traffic. WordPress scales well, is easy to maintain, and is very intuitive to use. If you're not familiar with the WordPress blogging software, I'd invite you to go check it out at http://www.wordpress.com.

Requirements

In order to get this whole process started you'll only need a few simple things. First, a copy of Ubuntu Server. At the time of this writing, the latest release is 10.04.1 LTS (Long Term Support), which will be supported with security and errata updates for five years. You can download a free copy of Ubuntu Server here: http://www.ubuntu.com/server

In addition to a copy of Ubuntu Server you'll need, of course, a platform to install it on. This could be a physical server or a virtual machine. Your time (against the 15-minute goal) may vary based on your hardware speed. I based this article on the following platform and specifications:

Dell D630, Core 2 Duo 2.10 GHz, 2 GB RAM
VirtualBox 3.2.8 Open Source Edition

Again, your mileage may vary depending on your hardware and network, but overall this article will quickly get you from zero to blogger in no time! The last requirement, which I mentioned only briefly in the last paragraph, is network access. If you're installing this on a physical machine, make sure that you'll have local network access to that machine. If you're planning on installing this on a virtual machine, make sure that you configure the virtual machine to use bridged networking, making it accessible to your local area network.

To recap, your requirements are:
Ubuntu Server 10.04.1 LTS .iso (or burned CD)
A physical or virtual machine to provision
Local network access to said machine

Getting started

Once you have everything prepared we can jump right in and get started. Start up your virtual machine, or drop in your CD-ROM, and we'll start the installation. I've taken screenshots of each step in the process, so you should be able to follow along closely. In most situations I chose the default configuration. If you are unsure about the configuration requirements during installation, it is generally safe to select the default. Again, just follow my lead and you should be fine!

This is the initial installer screen. You'll notice there are a number of options available. The highlighted option (also the default) of "Install Ubuntu Server" is what you'll want to select here. Next, the installer will prompt you for your preferred or native language. The default here is English, and that was my selection. You'll notice that there is a huge number of available languages here. This is one of the goals and strengths of Ubuntu: that software tools should be usable by people in their local language.
Select your preferred language and move on to the next step. The next step is to select your country. If you selected English as your primary language, you'll then need to select your region. The default is United States, and that was also my selection.

The Ubuntu installer can automatically detect your keyboard layout if you ask it to. The default prompt is no, which then allows you to select your keyboard from a list. I prefer to use the auto-detection, which I find a bit faster. Use your own preference here, but be sure you select the correct layout. There is nothing more frustrating than not being able to type properly on your keyboard!

Next you'll need to assign a hostname to your machine. This is an enjoyable part of the process for me, as I get to assign a unique name to the machine I'll be working with. It always seems to personalize the process, and I've chosen a number of creative names for my machines. Select whatever you like here; just make sure it is unique compared to the other machines on your current network.

To help ensure that your clock is set properly, the Ubuntu installer will auto-detect or prompt you for your time zone. I've found that, when installing on physical hardware, the auto-detection is usually pretty accurate. When installing on virtual hardware, it has a more difficult time. The screenshot above was taken on virtual hardware, which required me to select my time zone manually. If this is the case for you, find your time zone and hit ENTER.

The next step in the installation process is partitioning the disks. Unless you have specific needs here, I'd suggest safely selecting the defaults. If you're wondering whether or not you do have specific needs, you probably don't. For our purposes here, toward the goal of setting up a web server to run WordPress, the default is just fine. Select "Guided – use entire disk and set up LVM" and hit ENTER.

The installer will prompt you with a confirmation dialog before writing partitioning changes to the disk. Because making changes to partitions and filesystems will destroy any existing data on the disk(s), this requires a secondary confirmation. If you are installing on a newly created virtual machine, you should have nothing to worry about here. If you are installing on physical hardware, please note that it will destroy any existing data, and you should be OK with that action. You also have the option of defining the size of the disk made available to your installation. Again, I selected the default here, which is to use 100% of the available space.

Lastly, in regard to the partitioning, there is one final confirmation. This screen outlines the partitions that will be created or changed and the filesystems and formatting that will be done on those partitions. In each of these filesystem-related screens I selected the default values. If you've done the same, and you're OK with losing any existing data that might be on the machine, finalize this change by selecting YES.

At this point the installer will install the base system within the newly created partitions. This will take a few minutes (again, your mileage may vary depending on hardware type). There are no prompts during this process, just a progress bar and an indication of the packages that are being installed and configured.
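The excerpt ends here, but to sketch where the walkthrough is heading once the base system is installed: on Ubuntu 10.04 the remaining work is typically a LAMP stack plus the WordPress files and a database. The following commands are a hedged outline of those steps, not the author's exact procedure; the database name, user, and password are placeholders you should change:

# Install the LAMP stack (Apache, MySQL, PHP)
sudo tasksel install lamp-server

# Download and unpack WordPress into the web root
wget http://wordpress.org/latest.tar.gz
tar -xzf latest.tar.gz
sudo mv wordpress /var/www/

# Create a database and user for WordPress
mysql -u root -p -e "CREATE DATABASE wordpress; GRANT ALL ON wordpress.* TO 'wp'@'localhost' IDENTIFIED BY 'choose-a-password';"

From there, browsing to http://your-server/wordpress runs the familiar five-minute WordPress installer.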

Digging Deep into Requests

Packt
16 Jun 2015
17 min read
In this article by Rakesh Vidya Chandra and Bala Subrahmanyam Varanasi, authors of the book Python Requests Essentials, we are going to deal with advanced topics in the Requests module. There are many more features in the Requests module that make interaction with the web a cakewalk. Let us get to know more about the different ways to use the Requests module, which will help us understand how easy it is to use.

(For more resources related to this topic, see here.)

In a nutshell, we will cover the following topics:
Persisting parameters across requests using Session objects
Revealing the structure of request and response
Using prepared requests
Verifying SSL certificates with Requests
Body content workflow
Using a generator for sending chunk-encoded requests
Getting the request method arguments with event hooks
Iterating over streaming APIs
Self-describing the APIs with link headers
Transport Adapter

Persisting parameters across Requests using Session objects

The Requests module contains a Session object, which has the capability to persist settings across requests. Using this Session object, we can persist cookies, create prepared requests, use the keep-alive feature, and do many more things. The Session object contains all the methods of the Requests API, such as GET, POST, PUT, DELETE, and so on. Before using all the capabilities of the Session object, let us get to know how to use sessions and persist cookies across requests. Let us use the session method to get a resource:

>>> import requests
>>> session = requests.Session()
>>> response = session.get("https://google.co.in", cookies={"new-cookie-identifier": "1234abcd"})

In the preceding example, we created a Session object with Requests, and its get method is used to access a web resource. The cookie value which we set in the previous example is accessible using response.request.headers:

>>> response.request.headers
CaseInsensitiveDict({'Cookie': 'new-cookie-identifier=1234abcd', 'Accept-Encoding': 'gzip, deflate, compress', 'Accept': '*/*', 'User-Agent': 'python-requests/2.2.1 CPython/2.7.5+ Linux/3.13.0-43-generic'})
>>> response.request.headers['Cookie']
'new-cookie-identifier=1234abcd'

With the Session object, we can specify default values for the properties that need to be sent to the server using GET, POST, PUT, and so on. We can achieve this by assigning values to properties like headers, auth, and so on, on the Session object:

>>> session.params = {"key1": "value", "key2": "value2"}
>>> session.auth = ('username', 'password')
>>> session.headers.update({'foo': 'bar'})

In the preceding example, we have set default values for the properties params, auth, and headers using the session object. We can override them in a subsequent request, as shown in the following example, if we want to:

>>> session.get('http://mysite.com/new/url', headers={'foo': 'new-bar'})
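A quick way to see this persistence in action is the classic demonstration from the Requests documentation, which uses the httpbin.org test service (assuming it is reachable from your machine):

>>> import requests
>>> session = requests.Session()
>>> session.get('http://httpbin.org/cookies/set/sessioncookie/123456789')
<Response [200]>
>>> response = session.get('http://httpbin.org/cookies')
>>> response.text
'{"cookies": {"sessioncookie": "123456789"}}'

The cookie set by the first response is sent back automatically on the second request, with no cookies argument needed.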
Revealing the structure of request and response

A Request object is created by the user whenever he or she tries to interact with a web resource. It is sent as a prepared request to the server, and it contains some parameters, which are optional. Let us have an eagle-eye view of the parameters:

Method: This is the HTTP method to be used to interact with the web service; for example, GET, POST, PUT.
URL: The web address to which the request needs to be sent.
headers: A dictionary of headers to be sent in the request.
files: This can be used while dealing with a multipart upload. It's a dictionary of files, with the key being the file name and the value the file object.
data: This is the body to be attached to the request.
json: There are two cases that come into the picture here. If json is provided, the content-type in the header is changed to application/json, and at this point json acts as the body of the request. In the second case, if both json and data are provided together, data is silently ignored.
params: A dictionary of URL parameters to append to the URL.
auth: This is used when we need to specify the authentication for the request. It's a tuple containing a username and password.
cookies: A dictionary or a cookie jar of cookies which can be added to the request.
hooks: A dictionary of callback hooks.

A Response object contains the response of the server to an HTTP request. It is generated once Requests gets a response back from the server. It contains all of the information returned by the server and also stores the Request object we created originally. Whenever we make a call to a server using Requests, two major transactions take place in this context, which are listed as follows:

We construct a Request object, which will be sent out to the server to request a resource
A Response object is generated by the Requests module

Now, let us look at an example of getting a resource from Python's official site:

>>> response = requests.get('https://python.org')

In the preceding line of code, a Request object gets constructed and is sent to 'https://python.org'. The Request object thus obtained is stored in the response.request variable. We can access the headers of the Request object which was sent off to the server in the following way:

>>> response.request.headers
CaseInsensitiveDict({'Accept-Encoding': 'gzip, deflate, compress', 'Accept': '*/*', 'User-Agent': 'python-requests/2.2.1 CPython/2.7.5+ Linux/3.13.0-43-generic'})

The headers returned by the server can be accessed with the response's headers attribute, as shown in the following example:

>>> response.headers
CaseInsensitiveDict({'content-length': '45950', 'via': '1.1 varnish', 'x-cache': 'HIT', 'accept-ranges': 'bytes', 'strict-transport-security': 'max-age=63072000; includeSubDomains', 'vary': 'Cookie', 'server': 'nginx', 'age': '557', 'content-type': 'text/html; charset=utf-8', 'public-key-pins': 'max-age=600; includeSubDomains; ..)

The Response object contains different attributes, like _content, status_code, headers, url, history, encoding, reason, cookies, elapsed, and request:

>>> response.status_code
200
>>> response.url
u'https://www.python.org/'
>>> response.elapsed
datetime.timedelta(0, 1, 904954)
>>> response.reason
'OK'
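The history attribute deserves a quick illustration: when Requests follows redirects, the intermediate responses are recorded there. A small sketch (the exact redirect chain depends on the server):

>>> response = requests.get('http://github.com')
>>> response.url
u'https://github.com/'
>>> response.history
[<Response [301]>]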
Using prepared Requests

Every request we send to the server turns into a PreparedRequest by default. The request attribute of the Response object, which is received from an API call or a session call, is actually the PreparedRequest that was used. There might be cases in which we ought to send a request which would incur an extra step of adding different parameters. Parameters can be cookies, files, auth, timeout, and so on. We can handle this extra step efficiently by using the combination of sessions and prepared requests. Let us look at an example:

>>> from requests import Request, Session
>>> header = {}
>>> request = Request('get', 'some_url', headers=header)

We are trying to send a get request with a header in the previous example. Now, take an instance where we are planning to send the request with the same method, URL, and headers, but we want to add some more parameters to it. In this condition, we can use the session method to receive the complete session-level state in order to access the parameters of the initially sent request. This can be done by using the session object:

>>> from requests import Request, Session
>>> session = Session()
>>> request1 = Request('GET', 'some_url', headers=header)

Now, let us prepare a request using the session object to get the values of the session-level state:

>>> prepare = session.prepare_request(request1)

We can send the request object request with more parameters now, as follows:

>>> response = session.send(prepare, stream=True, verify=True)
200

Voila! Huge time saving! The prepare method prepares the complete request with the supplied parameters. In the previous example, the prepare_request method was used. There are also some other methods, like prepare_auth, prepare_body, prepare_cookies, prepare_headers, prepare_hooks, prepare_method, and prepare_url, which are used to create the individual properties.

Verifying an SSL certificate with Requests

Requests provides the facility to verify an SSL certificate for HTTPS requests. We can use the verify argument to check whether the host's SSL certificate is verified or not. Let us consider a website which has no SSL certificate. We shall send a GET request with the argument verify to it. The syntax to send the request is as follows:

requests.get('no ssl certificate site', verify=True)

As the website doesn't have an SSL certificate, it will result in an error similar to the following:

requests.exceptions.ConnectionError: ('Connection aborted.', error(111, 'Connection refused'))

Let us verify the SSL certificate for a website which is certified. Consider the following example:

>>> requests.get('https://python.org', verify=True)
<Response [200]>

In the preceding example, the result was 200, as the mentioned website is an SSL-certified one. If we do not want to verify the SSL certificate with a request, then we can pass the argument verify=False. By default, the value of verify is True.
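The verify argument can also be given a path to a CA bundle, which is handy when a server uses an internal certificate authority. A sketch, in which both the URL and the bundle path are hypothetical:

>>> requests.get('https://internal.example.com', verify='/path/to/ca-bundle.pem')
<Response [200]>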
Body content workflow

Take an instance where a continuous stream of data is being downloaded when we make a request. In this situation, the client has to listen to the server continuously until it receives the complete data. Consider also the case of accessing the content from the response first and worrying about the body later. In both of these situations, we can use the parameter stream. Let us look at an example:

>>> requests.get("https://pypi.python.org/packages/source/F/Flask/Flask-0.10.1.tar.gz", stream=True)

If we make a request with the parameter stream=True, the connection remains open and only the headers of the response will be downloaded. This gives us the capability to fetch the content whenever we need, by specifying conditions like the number of bytes of data:

if int(response.headers['content-length']) < TOO_LONG:
    content = response.content

By setting the parameter stream=True and accessing the response as a file-like object, that is, response.raw, we can use the method iter_content to iterate over the response data. This avoids reading larger responses at once. The syntax is as follows:

iter_content(chunk_size=size in bytes, decode_unicode=False)

In the same way, we can iterate through the content using the iter_lines method, which iterates over the response data one line at a time. The syntax is as follows:

iter_lines(chunk_size=size in bytes, decode_unicode=None, delimiter=None)

The important thing to note while using the stream parameter is that it doesn't release the connection when it is set to True, unless all the data is consumed or response.close is executed.

Keep-alive facility

As urllib3 supports the reuse of the same socket connection for multiple requests, we can send many requests with one socket and receive the responses using the keep-alive feature in the Requests library. Within a session, it is automatic; every request made within a session automatically uses the appropriate connection by default. The connection that is being used will be released after all the data from the body has been read.

Streaming uploads

A file-like object of massive size can be streamed and uploaded using the Requests library. All we need to do is supply the contents of the stream as a value to the data attribute in the request call, as shown in the following lines:

with open('massive-body', 'rb') as file:
    requests.post('http://example.com/some/stream/url', data=file)

Using a generator for sending chunk-encoded Requests

Chunked transfer encoding is a mechanism for transferring data in an HTTP request. With this mechanism, the data is sent in a series of chunks. Requests supports chunked transfer encoding for both outgoing and incoming requests. In order to send a chunk-encoded request, we need to supply a generator for the body. The usage is shown in the following example:

>>> def generator():
...     yield "Hello "
...     yield "World!"
...
>>> requests.post('http://example.com/some/chunked/url/path', data=generator())

Getting the request method arguments with event hooks

We can alter portions of the request process with signal event handling, using hooks. For example, there is a hook named response, which contains the response generated from a request. A dictionary of callback hooks can be passed as a parameter to the request. The syntax is as follows:

hooks = {hook_name: callback_function, ... }

The callback_function parameter may or may not return a value. When it returns a value, it is assumed that it is to replace the data that was passed in. If the callback function doesn't return any value, there won't be any effect on the data. Here is an example of a callback function (note that the first argument the response hook receives is the Response object):

>>> def print_attributes(response, *args, **kwargs):
...     print(response.url)
...     print(response.status_code)
...     print(response.headers)

If there is an error in the execution of the callback function, you'll receive a warning message in the standard output. Now let us print some of the attributes of the response, using the preceding callback function:

>>> requests.get('https://www.python.org/', hooks=dict(response=print_attributes))
https://www.python.org/
200
CaseInsensitiveDict({'content-type': 'text/html; ...})
<Response [200]>

Iterating over streaming APIs

Streaming APIs tend to keep the request open, allowing us to collect the stream data in real time. While dealing with a continuous stream of data, to ensure that none of the messages are missed, we can take the help of iter_lines() in Requests. iter_lines() iterates over the response data line by line. This can be achieved by setting the parameter stream to True while sending the request. It's better to keep in mind that it's not always safe to call iter_lines(), as it may result in loss of received data. Consider the following example, taken from http://docs.python-requests.org/en/latest/user/advanced/#streaming-requests:

>>> import json
>>> import requests
>>> r = requests.get('http://httpbin.org/stream/4', stream=True)
>>> for line in r.iter_lines():
...     if line:
...         print(json.loads(line))

In the preceding example, the response contains a stream of data. With the help of iter_lines(), we tried to print the data by iterating through every line.
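A common application of this streaming machinery is saving a large download to disk without holding the whole body in memory. A small sketch, reusing the Flask tarball URL from earlier (the local filename is illustrative):

>>> import requests
>>> response = requests.get('https://pypi.python.org/packages/source/F/Flask/Flask-0.10.1.tar.gz', stream=True)
>>> with open('Flask-0.10.1.tar.gz', 'wb') as handle:
...     for chunk in response.iter_content(chunk_size=1024):
...         if chunk:  # skip keep-alive chunks
...             handle.write(chunk)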
Encodings

As specified in the HTTP protocol (RFC 7230), applications can request the server to return the HTTP responses in an encoded format. The process of encoding turns the response content into an understandable format, which makes it easy to access. When the HTTP header fails to return the type of encoding, Requests will try to assume the encoding with the help of chardet. If we access the response headers of a request, they do contain a content-type key. Let us look at a response header's content-type:

>>> re = requests.get('http://google.com')
>>> re.headers['content-type']
'text/html; charset=ISO-8859-1'

In the preceding example, the content type contains 'text/html; charset=ISO-8859-1'. This happens when Requests finds the charset value to be None and the 'content-type' value to be 'Text'. It follows the protocol RFC 7230 and changes the value of charset to ISO-8859-1 in this type of situation. In case we are dealing with different types of encodings, like 'utf-8', we can explicitly specify the encoding by setting the property Response.encoding.

HTTP verbs

Requests supports the usage of the full range of HTTP verbs, which are defined in the following list. For most of the supported verbs, 'url' is the only argument that must be passed while using them.

GET: The GET method requests a representation of the specified resource. Apart from retrieving the data, there will be no other effect of using this method. Definition: requests.get(url, **kwargs)
POST: The POST verb is used for the creation of new resources. The submitted data will be handled by the server to a specified resource. Definition: requests.post(url, data=None, json=None, **kwargs)
PUT: This method uploads a representation of the specified URI. If the URI is not pointing to any resource, the server can create a new object with the given data, or it will modify the existing resource. Definition: requests.put(url, data=None, **kwargs)
DELETE: This is pretty easy to understand; it is used to delete the specified resource. Definition: requests.delete(url, **kwargs)
HEAD: This verb is useful for retrieving meta-information written in response headers without having to fetch the response body. Definition: requests.head(url, **kwargs)
OPTIONS: OPTIONS is an HTTP method which returns the HTTP methods that the server supports for a specified URL. Definition: requests.options(url, **kwargs)
PATCH: This method is used to apply partial modifications to a resource. Definition: requests.patch(url, data=None, **kwargs)
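A few of the less common verbs in action, exercised against the httpbin.org test service (assuming it is reachable; the payloads are illustrative):

>>> requests.put('http://httpbin.org/put', data={'key': 'value'})
<Response [200]>
>>> requests.delete('http://httpbin.org/delete')
<Response [200]>
>>> requests.head('http://httpbin.org/get').headers['content-type']
'application/json'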
Self-describing the APIs with link headers

Take the case of accessing a resource in which the information is accommodated in different pages. If we need to approach the next page of the resource, we can make use of the link headers. The link headers contain the metadata of the requested resource, that is, the next-page information in our case:

>>> url = "https://api.github.com/search/code?q=addClass+user:mozilla&page=1&per_page=4"
>>> response = requests.head(url=url)
>>> response.headers['link']
'<https://api.github.com/search/code?q=addClass+user%3Amozilla&page=2&per_page=4>; rel="next", <https://api.github.com/search/code?q=addClass+user%3Amozilla&page=250&per_page=4>; rel="last"'

In the preceding example, we have specified in the URL that we want to access page number one and that it should contain four records. Requests automatically parses the link headers and updates the information about the next page. When we try to access the link header, it shows the output with the values of the page and the number of records per page.

Transport Adapter

Transport Adapters provide an interface for Requests sessions to connect with HTTP and HTTPS. They help us mimic the web service to fit our needs. With the help of Transport Adapters, we can configure the request according to the HTTP service we opt to use. Requests ships with a Transport Adapter called HTTPAdapter. Consider the following example:

>>> session = requests.Session()
>>> adapter = requests.adapters.HTTPAdapter(max_retries=6)
>>> session.mount("http://google.co.in", adapter)

In this example, we created a Requests session in which every request we make retries up to six times when the connection fails.

Summary

In this article, we learnt about creating sessions and using the session with different criteria. We also looked deeply into HTTP verbs and using proxies. We learnt about streaming requests, dealing with SSL certificate verifications, and streaming responses. We also got to know how to use prepared requests, link headers, and chunk-encoded requests.

Resources for Article:
Further resources on this subject:
Machine Learning [article]
Solving problems – closest good restaurant [article]
Installing NumPy, SciPy, matplotlib, and IPython [article]