
How-To Tutorials - Full-Stack Web Development


Working with JSON in PHP jQuery

Packt
20 Dec 2010
5 min read
PHP jQuery Cookbook: over 60 simple but highly effective recipes to create interactive web applications using PHP with jQuery. Create rich and interactive web applications with PHP and jQuery, debug and execute jQuery code on a live site, and design interactive forms and menus. Another title in the Packt Cookbook range, which will help you get to grips with PHP as well as jQuery.

In this article, by Vijay Joshi, author of PHP jQuery Cookbook, we will cover:

Creating JSON in PHP
Reading JSON in PHP
Catching JSON parsing errors
Accessing data from a JSON in jQuery

(For more resources on this subject, see here.)

Introduction

Recently, JSON (JavaScript Object Notation) has become a very popular data interchange format, with more and more developers opting for it over XML. Many web services nowadays provide JSON as the default output format. JSON is a text format that is programming-language independent and is the native data form of JavaScript. It is lighter and faster than XML because it needs less markup. Because JSON is the native data form of JavaScript, it can be used on the client side in an AJAX application more easily than XML.

A JSON object starts with { and ends with }. According to the JSON specification, the following types are allowed in JSON:

Object: An object is a collection of key-value pairs enclosed between { and } and separated by commas. The key and the value themselves are separated using a colon (:). Think of objects as associative arrays or hash tables. Keys are simple strings, and values can be an array, string, number, boolean, or null.
Array: As in other languages, an array is an ordered collection of values. To represent an array, values are comma separated and enclosed between [ and ].
String: A string must be enclosed in double quotes.
Number: The last type is a number.

A JSON object can be as simple as:

{ "name": "Superman", "address": "anywhere" }

An example using an array is as follows:

{ "name": "Superman", "phoneNumbers": ["8010367150", "9898989898", "1234567890"] }

A more complex example that demonstrates the use of objects, arrays, and values is as follows:

{ "people": [ { "name": "Vijay Joshi", "age": 28, "isAdult": true }, { "name": "Charles Simms", "age": 13, "isAdult": false } ] }

An important point to note:

{ 'name': 'Superman', 'address': 'anywhere' }

The above is a valid JavaScript object but not valid JSON. JSON requires that names and string values be enclosed in double quotes; single quotes are not allowed. Another important thing is to remember the proper charset of the data: JSON expects the data to be UTF-8, whereas PHP adheres to ISO-8859-1 encoding by default. Also note that JSON is not JavaScript; it is a specification, or subset, derived from JavaScript.

Now that we are familiar with JSON, let us proceed to the recipes, where we will learn how to use JSON along with PHP and jQuery. Create a new folder and name it Chapter4. We will put all the recipes of this article together in this folder. Also put the jquery.js file inside this folder. To be able to use PHP's built-in JSON functions, you should have PHP version 5.2 or higher installed.

Creating JSON in PHP

This recipe will explain how JSON can be created from PHP arrays and objects.

Getting ready

Create a new folder inside the Chapter4 directory and name it Recipe1.

How to do it...

Create a file and save it by the name index.php in the Recipe1 folder. Write the PHP code that creates a JSON string from an array:
<?php
$travelDetails = array(
    'origin' => 'Delhi',
    'destination' => 'London',
    'passengers' => array(
        array('name' => 'Mr. Perry Mason', 'type' => 'Adult', 'age' => 28),
        array('name' => 'Miss Irene Adler', 'type' => 'Adult', 'age' => 28)
    ),
    'travelDate' => '17-Dec-2010'
);
echo json_encode($travelDetails);
?>

Run the file in your browser. It will show a JSON string as output on screen. After indenting, the result will look like the following:

{
  "origin": "Delhi",
  "destination": "London",
  "passengers": [
    { "name": "Mr. Perry Mason", "type": "Adult", "age": 28 },
    { "name": "Miss Irene Adler", "type": "Adult", "age": 28 }
  ],
  "travelDate": "17-Dec-2010"
}

How it works...

PHP provides the function json_encode() to create JSON strings from objects and arrays. This function accepts two parameters. The first is the value to be encoded; the second parameter contains options that control how certain special characters are encoded. This second parameter is optional. In the previous code we created a somewhat complex associative array that contains the travel information of two passengers. Passing this array to json_encode() creates a JSON string.

There's more...

Predefined constants

Any of the following constants can be passed as the second parameter to json_encode():

JSON_HEX_TAG: Converts < and > to \u003C and \u003E
JSON_HEX_AMP: Converts & to \u0026
JSON_HEX_APOS: Converts ' to \u0027
JSON_HEX_QUOT: Converts " to \u0022
JSON_FORCE_OBJECT: Forces the returned JSON string to be an object instead of an array

These constants require PHP version 5.3 or higher.
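As a quick illustration of how these options change the output of json_encode(), here is a minimal sketch. It is not part of the original recipe; the variable names are made up for demonstration, and the behavior shown assumes PHP 5.3 or higher.

<?php
// Minimal sketch (not from the original recipe): comparing json_encode() options.
$tags = array('php', 'jquery', 'json');             // a plain indexed array
$snippet = array('html' => "<b>Tom & Jerry's</b>"); // contains <, >, & and '

// By default, an indexed array is encoded as a JSON array.
echo json_encode($tags);                      // ["php","jquery","json"]

// JSON_FORCE_OBJECT (PHP 5.3+) encodes it as an object keyed by index instead.
echo json_encode($tags, JSON_FORCE_OBJECT);   // {"0":"php","1":"jquery","2":"json"}

// The JSON_HEX_* constants escape special characters; flags can be combined with |.
echo json_encode($snippet, JSON_HEX_TAG | JSON_HEX_AMP | JSON_HEX_APOS);
?>

In the output of the last call, the angle brackets, ampersand, and apostrophe appear as \u003C, \u003E, \u0026, and \u0027 escape sequences, which is useful when the resulting JSON will be embedded directly inside HTML.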


Creating a RESTful API

Packt
19 Sep 2014
24 min read
In this article by Jason Krol, the author of Web Development with MongoDB and NodeJS, we will review the following topics: (For more resources related to this topic, see here.)

Introducing RESTful APIs
Installing a few basic tools
Creating a basic API server and sample JSON data
Responding to GET requests
Updating data with POST and PUT
Removing data with DELETE
Consuming external APIs from Node

What is an API?

An Application Programming Interface (API) is a set of tools that a computer system makes available so that unrelated systems or software can interact with it. Typically, a developer uses an API when writing software that will interact with a closed, external software system. The external software system provides an API as a standard set of tools that all developers can use. Many popular social networking sites provide developers access to APIs to build tools to support those sites. The most obvious examples are Facebook and Twitter. Both have a robust API that provides developers with the ability to build plugins and work with data directly, without being granted full access, as a general security precaution. As you will see in this article, providing your own API is not only fairly simple, but it also empowers you to provide your users with access to your data. You also have the added peace of mind of knowing that you are in complete control over what level of access you grant, what sets of data you make read-only, and what data can be inserted and updated.

What is a RESTful API?

Representational State Transfer (REST) is a fancy way of saying CRUD over HTTP. What this means is that when you use a REST API, you have a uniform means to create, read, update, and delete data using simple HTTP URLs with a standard set of HTTP verbs. The most basic form of a REST API will accept one of the HTTP verbs at a URL and return some kind of data as a response. Typically, a REST API GET request will always return some kind of data such as JSON, XML, HTML, or plain text. A POST or PUT request to a RESTful API URL will accept data to create or update. The URL for a RESTful API is known as an endpoint, and while working with these endpoints, it is typically said that you are consuming them. The standard HTTP verbs used while interfacing with REST APIs include:

GET: This retrieves data
POST: This submits data for a new record
PUT: This submits data to update an existing record
PATCH: This submits data to update only specific parts of an existing record
DELETE: This deletes a specific record

Typically, RESTful API endpoints are defined in a way that mimics the data models and uses semantic URLs that are somewhat representative of the data models. What this means is that to request a list of models, for example, you would access an API endpoint of /models. Likewise, to retrieve a specific model by its ID, you would include that in the endpoint URL via /models/:Id.
Some sample RESTful API endpoint URLs are as follows: GET http://myapi.com/v1/accounts: This returns a list of accounts GET http://myapi.com/v1/accounts/1: This returns a single account by Id: 1 POST http://myapi.com/v1/accounts: This creates a new account (data submitted as a part of the request) PUT http://myapi.com/v1/accounts/1: This updates an existing account by Id: 1 (data submitted as part of the request) GET http://myapi.com/v1/accounts/1/orders: This returns a list of orders for account Id: 1 GET http://myapi.com/v1/accounts/1/orders/21345: This returns the details for a single order by Order Id: 21345 for account Id: 1 It's not a requirement that the URL endpoints match this pattern; it's just common convention. Introducing Postman REST Client Before we get started, there are a few tools that will make life much easier when you're working directly with APIs. The first of these tools is called Postman REST Client, and it's a Google Chrome application that can run right in your browser or as a standalone-packaged application. Using this tool, you can easily make any kind of request to any endpoint you want. The tool provides many useful and powerful features that are very easy to use and, best of all, free! Installation instructions Postman REST Client can be installed in two different ways, but both require Google Chrome to be installed and running on your system. The easiest way to install the application is by visiting the Chrome Web Store at https://chrome.google.com/webstore/category/apps. Perform a search for Postman REST Client and multiple results will be returned. There is the regular Postman REST Client that runs as an application built into your browser, and then separate Postman REST Client (packaged app) that runs as a standalone application on your system in its own dedicated window. Go ahead and install your preference. If you install the application as the standalone packaged app, an icon to launch it will be added to your dock or taskbar. If you installed it as a regular browser app, you can launch it by opening a new tab in Google Chrome and going to Apps and finding the Postman REST Client icon. After you've installed and launched the app, you should be presented with an output similar to the following screenshot: A quick tour of Postman REST Client Using Postman REST Client, we're able to submit REST API calls to any endpoint we want as well as modify the type of request. Then, we can have complete access to the data that's returned from the API as well as any errors that might have occurred. To test an API call, enter the URL to your favorite website in the Enter request URL here field and leave the dropdown next to it as GET. This will mimic a standard GET request that your browser performs anytime you visit a website. Click on the blue Send button. The request is made and the response is displayed at the bottom half of the screen. In the following screenshot, I sent a simple GET request to http://kroltech.com and the HTML is returned as follows: If we change this URL to that of the RSS feed URL for my website, you can see the XML returned: The XML view has a few more features as it exposes the sidebar to the right that gives you a handy outline to glimpse the tree structure of the XML data. Not only that, you can now see a history of the requests we've made so far along the left sidebar. This is great when we're doing more advanced POST or PUT requests and don't want to repeat the data setup for each request while testing an endpoint. 
Here is a sample API endpoint I submitted a GET request to that returns the JSON data in its response: A really nice thing about making API calls to endpoints that return JSON using Postman Client is that it parses and displays the JSON in a very nicely formatted way, and each node in the data is expandable and collapsible. The app is very intuitive so make sure you spend some time playing around and experimenting with different types of calls to different URLs. Using the JSONView Chrome extension There is one other tool I want to let you know about (while extremely minor) that is actually a really big deal. The JSONView Chrome extension is a very small plugin that will instantly convert any JSON you view directly via the browser into a more usable JSON tree (exactly like Postman Client). Here is an example of pointing to a URL that returns JSON from Chrome before JSONView is installed: And here is that same URL after JSONView has been installed: You should install the JSONView Google Chrome extension the same way you installed Postman REST Client—access the Chrome Web Store and perform a search for JSONView. Now that you have the tools to be able to easily work with and test API endpoints, let's take a look at writing your own and handling the different request types. Creating a Basic API server Let's create a super basic Node.js server using Express that we'll use to create our own API. Then, we can send tests to the API using Postman REST Client to see how it all works. In a new project workspace, first install the npm modules that we're going to need in order to get our server up and running: $ npm init $ npm install --save express body-parser underscore Now that the package.json file for this project has been initialized and the modules installed, let's create a basic server file to bootstrap up an Express server. Create a file named server.js and insert the following block of code: var express = require('express'),    bodyParser = require('body-parser'),    _ = require('underscore'), json = require('./movies.json'),    app = express();   app.set('port', process.env.PORT || 3500);   app.use(bodyParser.urlencoded()); app.use(bodyParser.json());   var router = new express.Router(); // TO DO: Setup endpoints ... app.use('/', router);   var server = app.listen(app.get('port'), function() {    console.log('Server up: http://localhost:' + app.get('port')); }); Most of this should look familiar to you. In the server.js file, we are requiring the express, body-parser, and underscore modules. We're also requiring a file named movies.json, which we'll create next. After our modules are required, we set up the standard configuration for an Express server with the minimum amount of configuration needed to support an API server. Notice that we didn't set up Handlebars as a view-rendering engine because we aren't going to be rendering any HTML with this server, just pure JSON responses. 
Creating sample JSON data Let's create the sample movies.json file that will act as our temporary data store (even though the API we build for the purposes of demonstration won't actually persist data beyond the app's life cycle): [{    "Id": "1",    "Title": "Aliens",    "Director": "James Cameron",    "Year": "1986",    "Rating": "8.5" }, {    "Id": "2",    "Title": "Big Trouble in Little China",    "Director": "John Carpenter",    "Year": "1986",    "Rating": "7.3" }, {    "Id": "3",    "Title": "Killer Klowns from Outer Space",    "Director": "Stephen Chiodo",    "Year": "1988",    "Rating": "6.0" }, {    "Id": "4",    "Title": "Heat",    "Director": "Michael Mann",    "Year": "1995",    "Rating": "8.3" }, {    "Id": "5",    "Title": "The Raid: Redemption",    "Director": "Gareth Evans",    "Year": "2011",    "Rating": "7.6" }] This is just a really simple JSON list of a few of my favorite movies. Feel free to populate it with whatever you like. Boot up the server to make sure you aren't getting any errors (note we haven't set up any routes yet, so it won't actually do anything if you tried to load it via a browser): $ node server.js Server up: http://localhost:3500 Responding to GET requests Adding a simple GET request support is fairly simple, and you've seen this before already in the app we built. Here is some sample code that responds to a GET request and returns a simple JavaScript object as JSON. Insert the following code in the routes section where we have the // TO DO: Setup endpoints ... waiting comment: router.get('/test', function(req, res) {    var data = {        name: 'Jason Krol',        website: 'http://kroltech.com'    };      res.json(data); }); Let's tweak the function a little bit and change it so that it responds to a GET request against the root URL (that is /) route and returns the JSON data from our movies file. Add this new route after the /test route added previously: router.get('/', function(req, res) {    res.json(json); }); The res (response) object in Express has a few different methods to send data back to the browser. Each of these ultimately falls back on the base send method, which includes header information, statusCodes, and so on. res.json and res.jsonp will automatically format JavaScript objects into JSON and then send using res.send. res.render will render a template view as a string and then send it using res.send as well. With that code in place, if we launch the server.js file, the server will be listening for a GET request to the / URL route and will respond with the JSON data of our movies collection. Let's first test it out using the Postman REST Client tool: GET requests are nice because we could have just as easily pulled that same URL via our browser and received the same result: However, we're going to use Postman for the remainder of our endpoint testing as it's a little more difficult to send POST and PUT requests using a browser. Receiving data – POST and PUT requests When we want to allow our users using our API to insert or update data, we need to accept a request from a different HTTP verb. When inserting new data, the POST verb is the preferred method to accept data and know it's for an insert. Let's take a look at code that accepts a POST request and data along with the request, and inserts a record into our collection and returns the updated JSON. 
Insert the following block of code after the route you added previously for GET: router.post('/', function(req, res) {    // insert the new item into the collection (validate first)    if(req.body.Id && req.body.Title && req.body.Director && req.body.Year && req.body.Rating) {        json.push(req.body);        res.json(json);    } else {        res.json(500, { error: 'There was an error!' });    } }); You can see the first thing we do in the POST function is check to make sure the required fields were submitted along with the actual request. Assuming our data checks out and all the required fields are accounted for (in our case every field), we insert the entire req.body object into the array as is using the array's push function. If any of the required fields aren't submitted with the request, we return a 500 error message instead. Let's submit a POST request this time to the same endpoint using the Postman REST Client. (Don't forget to make sure your API server is running with node server.js.): First, we submitted a POST request with no data, so you can clearly see the 500 error response that was returned. Next, we provided the actual data using the x-www-form-urlencoded option in Postman and provided each of the name/value pairs with some new custom data. You can see from the results that the STATUS was 200, which is a success and the updated JSON data was returned as a result. Reloading the main GET endpoint in a browser yields our original movies collection with the new one added. PUT requests will work in almost exactly the same way except traditionally, the Id property of the data is handled a little differently. In our example, we are going to require the Id attribute as a part of the URL and not accept it as a parameter in the data that's submitted (since it's usually not common for an update function to change the actual Id of the object it's updating). Insert the following code for the PUT route after the existing POST route you added earlier: router.put('/:id', function(req, res) {    // update the item in the collection    if(req.params.id && req.body.Title && req.body.Director && req.body.Year && req.body.Rating) {        _.each(json, function(elem, index) {             // find and update:            if (elem.Id === req.params.id) {                elem.Title = req.body.Title;                elem.Director = req.body.Director;                elem.Year = req.body.Year;                elem.Rating = req.body.Rating;            }        });          res.json(json);    } else {        res.json(500, { error: 'There was an error!' });    } }); This code again validates that the required fields are included with the data that was submitted along with the request. Then, it performs an _.each loop (using the underscore module) to look through the collection of movies and find the one whose Id parameter matches that of the Id included in the URL parameter. Assuming there's a match, the individual fields for that matched object are updated with the new values that were sent with the request. Once the loop is complete, the updated JSON data is sent back as the response. Similarly, in the POST request, if any of the required fields are missing, a simple 500 error message is returned. The following screenshot demonstrates a successful PUT request updating an existing record. 
The response from Postman after including the value 1 in the URL as the Id parameter, which provides the individual fields to update as x-www-form-urlencoded values, and finally sending as PUT shows that the original item in our movies collection is now the original Alien (not Aliens, its sequel as we originally had). Removing data – DELETE The final stop on our whirlwind tour of the different REST API HTTP verbs is DELETE. It should be no surprise that sending a DELETE request should do exactly what it sounds like. Let's add another route that accepts DELETE requests and will delete an item from our movies collection. Here is the code that takes care of DELETE requests that should be placed after the existing block of code from the previous PUT: router.delete('/:id', function(req, res) {    var indexToDel = -1;    _.each(json, function(elem, index) {        if (elem.Id === req.params.id) {            indexToDel = index;        }    });    if (~indexToDel) {        json.splice(indexToDel, 1);    }    res.json(json); }); This code will loop through the collection of movies and find a matching item by comparing the values of Id. If a match is found, the array index for the matched item is held until the loop is finished. Using the array.splice function, we can remove an array item at a specific index. Once the data has been updated by removing the requested item, the JSON data is returned. Notice in the following screenshot that the updated JSON that's returned is in fact no longer displaying the original second item we deleted. Note that ~ in there! That's a little bit of JavaScript black magic! The tilde (~) in JavaScript will bit flip a value. In other words, take a value and return the negative of that value incremented by one, that is ~n === -(n+1). Typically, the tilde is used with functions that return -1 as a false response. By using ~ on -1, you are converting it to a 0. If you were to perform a Boolean check on -1 in JavaScript, it would return true. You will see ~ is used primarily with the indexOf function and jQuery's $.inArray()—both return -1 as a false response. All of the endpoints defined in this article are extremely rudimentary, and most of these should never ever see the light of day in a production environment! Whenever you have an API that accepts anything other than GET requests, you need to be sure to enforce extremely strict validation and authentication rules. After all, you are basically giving your users direct access to your data. Consuming external APIs from Node.js There will undoubtedly be a time when you want to consume an API directly from within your Node.js code. Perhaps, your own API endpoint needs to first fetch data from some other unrelated third-party API before sending a response. Whatever the reason, the act of sending a request to an external API endpoint and receiving a response can be done fairly easily using a popular and well-known npm module called Request. Request was written by Mikeal Rogers and is currently the third most popular and (most relied upon) npm module after async and underscore. Request is basically a super simple HTTP client, so everything you've been doing with Postman REST Client so far is basically what Request can do, only the resulting data is available to you in your node code as well as the response status codes and/or errors, if any. Consuming an API endpoint using Request Let's do a neat trick and actually consume our own endpoint as if it was some third-party external API. 
First, we need to ensure we have Request installed and can include it in our app: $ npm install --save request Next, edit server.js and make sure you include Request as a required module at the start of the file: var express = require('express'),    bodyParser = require('body-parser'),    _ = require('underscore'),    json = require('./movies.json'),    app = express(),    request = require('request'); Now let's add a new endpoint after our existing routes, which will be an endpoint accessible in our server via a GET request to /external-api. This endpoint, however, will actually consume another endpoint on another server, but for the purposes of this example, that other server is actually the same server we're currently running! The Request module accepts an options object with a number of different parameters and settings, but for this particular example, we only care about a few. We're going to pass an object that has a setting for the method (GET, POST, PUT, and so on) and the URL of the endpoint we want to consume. After the request is made and a response is received, we want an inline callback function to execute. Place the following block of code after your existing list of routes in server.js: router.get('/external-api', function(req, res) {    request({            method: 'GET',            uri: 'http://localhost:' + (process.env.PORT || 3500),        }, function(error, response, body) {             if (error) { throw error; }              var movies = [];            _.each(JSON.parse(body), function(elem, index) {                movies.push({                    Title: elem.Title,                    Rating: elem.Rating                });            });            res.json(_.sortBy(movies, 'Rating').reverse());        }); }); The callback function accepts three parameters: error, response, and body. The response object is like any other response that Express handles and has all of the various parameters as such. The third parameter, body, is what we're really interested in. That will contain the actual result of the request to the endpoint that we called. In this case, it is the JSON data from our main GET route we defined earlier that returns our own list of movies. It's important to note that the data returned from the request is returned as a string. We need to use JSON.parse to convert that string to actual usable JSON data. Using the data that came back from the request, we transform it a little bit. That is, we take that data and manipulate it a bit to suit our needs. In this example, we took the master list of movies and just returned a new collection that consists of only the title and rating of each movie and then sorts the results by the top scores. Load this new endpoint by pointing your browser to http://localhost:3500/external-api, and you can see the new transformed JSON output to the screen. Let's take a look at another example that's a little more real world. Let's say that we want to display a list of similar movies for each one in our collection, but we want to look up that data somewhere such as www.imdb.com. Here is the sample code that will send a GET request to IMDB's JSON API, specifically for the word aliens, and returns a list of related movies by the title and year. 
Go ahead and place this block of code after the previous route for external-api: router.get('/imdb', function(req, res) {    request({            method: 'GET',            uri: 'http://sg.media-imdb.com/suggests/a/aliens.json',        }, function(err, response, body) {            var data = body.substring(body.indexOf('(')+1);            data = JSON.parse(data.substring(0,data.length-1));            var related = [];            _.each(data.d, function(movie, index) {                related.push({                    Title: movie.l,                    Year: movie.y,                    Poster: movie.i ? movie.i[0] : ''                });            });              res.json(related);        }); }); If we take a look at this new endpoint in a browser, we can see the JSON data that's returned from our /imdb endpoint is actually itself retrieving and returning data from some other API endpoint: Note that the JSON endpoint I'm using for IMDB isn't actually from their API, but rather what they use on their homepage when you type in the main search box. This would not really be the most appropriate way to use their data, but it's more of a hack to show this example. In reality, to use their API (like most other APIs), you would need to register and get an API key that you would use so that they can properly track how much data you are requesting on a daily or an hourly basis. Most APIs will to require you to use a private key with them for this same reason. Summary In this article, we took a brief look at how APIs work in general, the RESTful API approach to semantic URL paths and arguments, and created a bare bones API. We used Postman REST Client to interact with the API by consuming endpoints and testing the different types of request methods (GET, POST, PUT, and so on). You also learned how to consume an external API endpoint by using the third-party node module Request. Resources for Article: Further resources on this subject: RESTful Services JAX-RS 2.0 [Article] REST – Where It Begins [Article] RESTful Web Services – Server-Sent Events (SSE) [Article]


Enhancing your Site with PHP and jQuery

Packt
29 Dec 2010
12 min read
  PHP jQuery Cookbook Over 60 simple but highly effective recipes to create interactive web applications using PHP with jQuery Create rich and interactive web applications with PHP and jQuery Debug and execute jQuery code on a live site Design interactive forms and menus Another title in the Packt Cookbook range, which will help you get to grips with PHP as well as jQuery         Read more about this book       (For more resources on this subject, see here.) Introduction In this article, we will look at some advanced techniques that can be used to enhance the functionality of web applications. We will create a few examples where we will search for images the from Flickr and videos from YouTube using their respective APIs. We will parse a RSS feed XML using jQuery and learn to create an endless scrolling page like Google reader or the new interface of Twitter. Besides this, you will also learn to create a jQuery plugin, which you can use independently in your applications. Sending cross-domain requests using server proxy Browsers do not allow scripts to send cross-domain requests due to security reasons. This means a script at domain http://www.abc.com cannot send AJAX requests to http://www.xyz.com. This recipe will show how you can overcome this limitation by using a PHP script on the server side. We will create an example that will search Flickr for images. Flickr will return a JSON, which will be parsed by jQuery and images will be displayed on the page. The following screenshot shows a JSON response from Flickr: Getting ready Create a directory for this article and name it as Article9. In this directory, create a folder named Recipe1. Also get an API key from Flickr by signing up at http://www.flickr.com/services/api/keys/. How to do it... Create a file inside the Recipe1 folder and name it as index.html. Write the HTML code to create a form with three fields: tag, number of images, and image size. Also create an ul element inside which the results will be displayed. <html> <head> <title>Flickr Image Search</title> <style type="text/css"> body { font-family:"Trebuchet MS",verdana,arial;width:900px; } fieldset { width:333px; } ul{ margin:0;padding:0;list-style:none; } li{ padding:5px; } span{ display:block;float:left;width:150px; } #results li{ float:left; } .error{ font-weight:bold; color:#ff0000; } </style> </head> <body> <form id="searchForm"> <fieldset> <legend>Search Criteria</legend> <ul> <li> <span>Tag</span> <input type="text" name="tag" id="tag"/> </li> <li> <span>Number of images</span> <select name="numImages" id="numImages"> <option value="20">20</option> <option value="30">30</option> <option value="40">40</option> <option value="50">50</option> </select> </li> <li> <span>Select a size</span> <select id="size"> <option value="s">Small</option> <option value="t">Thumbnail</option> <option value="-">Medium</option> <option value="b">Large</option> <option value="o">Original</option> </select> </li> <li> <input type="button" value="Search" id="search"/> </li> </ul> </fieldset> </form> <ul id="results"> </ul> </body> </html> The following screenshot shows the form created: Include the jquery.js file. Then, enter the jQuery code that will send the AJAX request to a PHP file search.php. Values of form elements will be posted with an AJAX request. A callback function showImages is also defined that actually reads the JSON response and displays the images on the page. 
<script type="text/javascript" src="../jquery.js"></script> <script type="text/javascript"> $(document).ready(function() { $('#search').click(function() { if($.trim($('#tag').val()) == '') { $('#results').html('<li class="error">Please provide search criteria</li>'); return; } $.post( 'search.php', $('#searchForm').serialize(), showImages, 'json' ); }); function showImages(response) { if(response['stat'] == 'ok') { var photos = response.photos.photo; var str= ''; $.each(photos, function(index,value) { var farmId = value.farm; var serverId = value.server; var id = value.id; var secret = value.secret; var size = $('#size').val(); var title = value.title; var imageUrl = 'http://farm' + farmId + '.static.flickr.com/' + serverId + '/' + id + '_' + secret + '_' + size + '.jpg'; str+= '<li>'; str+= '<img src="' + imageUrl + '" alt="' + title + '" />'; str+= '</li>'; }); $('#results').html(str); } else { $('#results').html('<li class="error">an error occured</li>'); } } }); </script> Create another file named search.php. The PHP code in this file will contact the Flickr API with specified search criteria. Flickr will return a JSON that will be sent back to the browser where jQuery will display it on the page. <?php define('API_KEY', 'your-API-key-here'); $url = 'http://api.flickr.com/services/rest/?method=flickr. photos.search'; $url.= '&api_key='.API_KEY; $url.= '&tags='.$_POST['tag']; $url.= '&per_page='.$_POST['numImages']; $url.= '&format=json'; $url.= '&nojsoncallback=1'; header('Content-Type:text/json;'); echo file_get_contents($url); ?> Now, run the index.html file in your browser, enter a tag to search in the form, and select the number of images to be retrieved and image size. Click on the Search button. A few seconds later you will see the images from Flickr displayed on the page: <html> <head> <title>Youtube Video Search</title> <style type="text/css"> body { font-family:"Trebuchet MS",verdana,arial;width:900px; } fieldset { width:333px; } ul{ margin:0;padding:0;list-style:none; } li{ padding:5px; } span{ display:block;float:left;width:150px; } #results ul li{ float:left; background-color:#483D8B; color:#fff;margin:5px; width:120px; } .error{ font-weight:bold; color:#ff0000; } img{ border:0} </style> </head> <body> <form id="searchForm"> <fieldset> <legend>Search Criteria</legend> <ul> <li> <span>Enter query</span> <input type="text" id="query"/> </li> <li> <input type="button" value="Search" id="search"/> </li> </ul> </fieldset> </form> <div id="results"> </div> </body> </html> How it works... On clicking the Search button, form values are sent to the PHP file search.php. Now, we have to contact Flickr and search for images. Flickr API provides several methods for accessing images. We will use the method flickr.photos.search to search by tag name. Along with method name we will have to send the following parameters in the URL: api_key: An API key is mandatory. You can get one from: http://www.flickr.com/services/api/keys/. tags: The tags to search for. These can be comma-separated. This value will be the value of textbox tag. per_page: Number of images in a page. This can be a maximum of 99. Its value will be the value of select box numImages. format: It can be JSON, XML, and so on. For this example, we will use JSON. nojsoncallback: Its value will be set to 1 if we don't want Flickr to wrap the JSON in a function wrapper. Once the URL is complete we can contact Flickr to get results. 
To get the results' we will use the PHP function file_get_contents, which will get the results JSON from the specified URL. This JSON will be echoed to the browser. jQuery will receive the JSON in callback function showImages. This function first checks the status of the response. If the response is OK, we get the photo elements from the response and we can iterate over them using jQuery's $.each method. To display an image, we will have to get its URL first, which will be created by combining different values of the photo object. According to Flickr API specification, an image URL can be constructed in the following manner: http://farm{farm-id}.static.flickr.com/{server-id}/{id}_{secret}_[size].jpg So we get the farmId, serverId, id, and secret from the photo element. The size can be one of the following: s (small square) t (thumbnail) - (medium) b (large) o (original image) We have already selected the image size from the select box in the form. By combining all these values, we now have the Flickr image URL. We wrap it in a li element and repeat the process for all images. Finally, we insert the constructed images into the results li. Making cross-domain requests with jQuery The previous recipe demonstrated the use of a PHP file as a proxy for querying cross-domain URLs. This recipe will show the use of JSONP to query cross-domain URLs from jQuery itself. We will create an example that will search for the videos from YouTube and will display them in a list. Clicking on a video thumbnail will open a new window that will take the user to the YouTube website to show that video. The following screenshot shows a sample JSON response from YouTube: Getting ready Create a folder named Recipe2 inside the Article9 directory. How to do it... Create a file inside the Recipe2 folder and name it as index.html. Write the HTML code to create a form with a single field query and a DIV with results ID inside which the search results will be displayed. 
<script type="text/javascript" src="../jquery.js"></script> <script type="text/javascript"> $(document).ready(function() { $('#search').click(function() { var query = $.trim($('#query').val()); if(query == '') { $('#results').html('<li class="error">Please enter a query.</li>'); return; } $.get( 'http://gdata.youtube.com/feeds/api/videos?q=' + query + '&alt=json-in-script', {}, showVideoList, 'jsonp' ); }); }); function showVideoList(response) { var totalResults = response['feed']['openSearch$totalResults']['$t']; if(parseInt(totalResults,10) > 0) { var entries = response.feed.entry; var str = '<ul>'; for(var i=1; i< entries.length; i++) { var value = entries[i]; var title = value['title']['$t']; var mediaGroup = value['media$group']; var videoURL = mediaGroup['media$player'][0]['url']; var thumbnail = mediaGroup['media$thumbnail'][0]['url']; var thumbnailWidth = mediaGroup['media$thumbnail'][0]['width']; var thumbnailHeight = mediaGroup['media$thumbnail'][0]['height']; var numComments = value['gd$comments']['gd$feedLink']['countHint']; var rating = parseFloat(value['gd$rating']['average']).toFixed(2); str+= '<li>'; str+= '<a href="' + videoURL + '" target="_blank">'; str+= '<img src="'+thumbNail+'" width="'+thumbNailWidth+'" height="'+thumbNailWidth+'" title="' + title + '" />'; str+= '</a>'; str+= '<hr>'; str+= '<p style="width: 120px; font-size: 12px;">Comments: ' + numComments + ''; str+= '<br/>'; str+= 'Rating: ' + rating; str+= '</p>'; str+= '</li>'; } str+= '</ul>'; $('#results').html(str); } else { $('#results').html('<li class="error">No results.</li>'); } } </script> Include the jquery.js file before closing the &ltbody> tag. Now, write the jQuery code that will take the search query from the textbox and will try to retrieve the results from YouTube. A callback function called showVideoList will get the response and will create a list of videos from the response. http://gdata.youtube.com/feeds/api/videos?q=' + query + '&alt=json-in-script All done, and we are now ready to search YouTube. Run the index.html file in your browser and enter a search query. Click on the Search button and you will see a list of videos with a number of comments and a rating for each video. How it works... script tags are an exception to cross-browser origin policy. We can take advantage of this by requesting the URL from the src attribute of a script tag and by wrapping the raw response in a callback function. In this way the response becomes JavaScript code instead of data. This code can now be executed on the browser. The URL for YouTube video search is as follows: http://gdata.youtube.com/feeds/api/videos?q=' + query + '&alt=json-in-script Parameter q is the query that we entered in the textbox and alt is the type of response we want. Since we are using JSONP instead of JSON, the value for alt is defined as json-in-script as per YouTube API specification. On getting the response, the callback function showVideoList executes. It checks whether any results are available or not. If none are found, an error message is displayed. Otherwise, we get all the entry elements and iterate over them using a for loop. For each video entry, we get the videoURL, thumbnail, thumbnailWidth, thumbnailHeight, numComments, and rating. Then we create the HTML from these variables with a list item for each video. For each video an anchor is created with href set to videoURL. The video thumbnail is put inside the anchor and a p tag is created where we display the number of comments and rating for a particular video. 
After the HTML has been created, it is inserted in the DIV with ID results. There's more... About JSONP You can read more about JSONP at the following websites: http://remysharp.com/2007/10/08/what-is-jsonp/ http://en.wikipedia.org/wiki/JSON#JSONP
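To make the JSONP mechanism described above a little more concrete, here is a minimal sketch. It is not part of the original recipe; the endpoint URL and callback parameter name are invented for illustration, and real APIs document their own parameter names.

// Conceptually, a JSONP request is just a dynamically injected script tag whose
// src points at the remote endpoint and names a callback function to wrap the data in.
function showVideoList(response) {   // our callback, already defined on the page
    console.log(response.feed);
}

var script = document.createElement('script');
// 'callback=showVideoList' is illustrative; the actual parameter name depends on the API
// (YouTube's GData feeds, for example, combine alt=json-in-script with a callback parameter).
script.src = 'http://api.example.com/videos?q=cats&callback=showVideoList';
document.body.appendChild(script);

// The server responds with JavaScript rather than plain JSON, for example:
//   showVideoList({"feed": {"entry": [ /* ... */ ]}});
// Because the response is loaded as a script, this call executes and hands the data to our function.

jQuery hides these steps behind the 'jsonp' data type used in the $.get() call above, creating the script tag and a temporary callback on our behalf.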


Building a WPF .NET Client

Packt
16 Sep 2015
8 min read
In this article by Einar Ingebrigtsen, author of the book SignalR: Real-time Application Development - Second Edition we will bring the full feature set of what we've built so far for the web onto the desktop through a WPF .NET client. There are quite a few ways of developing Windows client solutions, and WPF was introduced back in 2005 and has become one of the most popular ways of developing software for Windows. In WPF, we have something called XAML, which is what Windows Phone development supports and is also the latest programming model in Windows 10. In this chapter, the following topics will be covered: MVVM Brief introduction to the SOLID principles XAML WPF (For more resources related to this topic, see here.) Decoupling it all So you might be asking yourself, what is MVVM? It stands for Model View ViewModel: a pattern for client development that became very popular in the XAML stack, enabled by Microsoft based on Martin Fowlers presentation model (http://martinfowler.com/eaaDev/PresentationModel.html). Its principle is that you have a ViewModel that holds the state and exposes behavior that can be utilized from a view. The view observes any changes of the state the ViewModel exposes, making the ViewModel totally unaware that there is a View. The ViewModel is decoupled and can be put in isolation and is perfect for automated testing. As part of the state that the ViewModel typically holds is the model part, which is something it usually gets from the server, and a SignalR hub is the perfect transport to get this. It boils down to recognizing the different concerns that make up the frontend and separating it all. This gives us the following diagram: Decoupling – the next level In this chapter, one of the things we will brush up is the usage of the Dependency Inversion Principle, the D of SOLID. Let's start with the first principle: the S in SOLID of Single Responsibility Principle, which states that a method or a class should only have one reason to change and only have one responsibility. With this, we can't have our units take on more than one responsibility and need help from collaborators to do the entire job. These collaborators are things we now depend on and we should represent these dependencies clearly to our units so that anyone or anything instantiating it knows what we are depending on. We have now flipped around the way in which we get dependencies. Instead of the unit trying to instantiate everything itself, we now clearly state what we need as collaborators, opening up for the calling code to decide what implementations of these dependencies you want to pass on. Also, this is an important aspect; typically, you'd want the dependencies expressed in the form of interfaces, yielding flexibility for the calling code. Basically, what this all means is that instead of a unit or system instantiating and managing its dependencies, we decouple and let something called as the Inversion of Control container deal with this. In the sample, we will use an IoC (Inversion of Control) container called Ninject that will deal with this for us. What it basically does is manage what implementations to give to the dependency specified on the constructor. Often, you'll find that the dependencies are interfaces in C#. This means one is not coupled to a specific implementation and has the flexibility of changing things at runtime based on configuration. Another role of the IOC container is to govern the life cycle of the dependencies. 
It is responsible for knowing when to create new instances and when to reuse an instance. For instance, in a web application, there are some systems that you want to have a life cycle of per request, meaning that we will get the same instance for the lifetime of a web request. The life cycle is configurable in what is known as a binding. When you explicitly set up the relationship between a contract (interface) and its implementation, you can choose to set up the life cycle behavior as well. Building for the desktop The first thing we will need is a separate project in our solution: Let's add it by right-clicking on the solution in Solution Explorer and navigating to Add | New Project: In the Add New Project dialog box, we want to make sure the .NET Framework 4.5.1 is selected. We could have gone with 4.5, but some of the dependencies that we're going to use have switched to 4.5.1. This is the latest version of the .NET Framework at the time of writing, so if you can, use it. Make sure to select Windows Desktop and then select WPF Application. Give the project the name SignalRChat.WPF and then click on the OK button: Setting up the packages We will need some packages to get started properly. This process is described in detail in Chapter 1, The Primer. Let's start off by adding SignalR, which is our primary framework that we will be working with to move on. We will be pulling this using NuGet, as described in Chapter 1, The Primer: Right-click on the References in Solution Explorer and select Manage NuGet Packages, and type Microsoft.AspNet.SignalR.Client in the Search dialog box. Select it and click on Install. Next, we're going to pull down something called as Bifrost. Bifrost is a library that helps us build MVVM-based solutions on WPF; there are a few other solutions out there, but we'll focus on Bifrost. Add a package called Bifrost.Client. Then, we need the package that gives us the IOC container called Ninject, working together with Bifrost. Add a package called Bifrost.Ninject. Observables One of the things that is part of WPF and all other XAML-based platforms is the notion of observables; be it in properties or collections that will notify when they change. The notification is done through well-known interfaces for this, such as INotifyPropertyChanged or INotifyCollectionChanged. Implementing these interfaces quickly becomes tedious all over the place where you want to notify everything when there are changes. Luckily, there are ways to make this pretty much go away. We can generate the code for this instead, either at runtime or at build time. For our project, we will go for a build-time solution. To accomplish this, we will use something called as Fody and a plugin for it called PropertyChanged. Add another NuGet package called PropertyChanged.Fody. If you happen to get problems during compiling, it could be the result of the dependency to a package called Fody not being installed. This happens for some versions of the package in combination with the latest Roslyn compiler. To fix this, install the NuGet package called Fody explicitly. Now that we have all the packages, we will need some configuration in code: Open the App.xam.cs file and add the following statement: using Bifrost.Configuration; The next thing we will need is a constructor for the App class: public App() { Configure.DiscoverAndConfigure(); } This will tell Bifrost to discover the implementations of the well-known interfaces to do the configuration. 
Bifrost uses the IoC container internally all the time, so the next thing we will need to do is give it an implementation. Add a class called ContainerCreator at the root of the project. Make it look as follows: using Bifrost.Configuration; using Bifrost.Execution; using Bifrost.Ninject; using Ninject; namespace SignalRChat.WPF { public class ContainerCreator : ICanCreateContainer { public IContainer CreateContainer() { var kernel = new StandardKernel(); var container = new Container(kernel); return container; } } } We've chosen Ninject among others that Bifrost supports, mainly because of familiarity and habit. If you happen to have another favorite, Bifrost supports a few. It's also fairly easy to implement your own support; just go to the source at http://github.com/dolittle/bifrost to find reference implementations. In order for Bifrost to be targeting the desktop, we need to tell it through configuration. Add a class called Configurator at the root of the project. Make it look as follows: using Bifrost.Configuration; namespace SignalRChat.WPF { public class Configurator : ICanConfigure { public void Configure(IConfigure configure) { configure.Frontend.Desktop(); } } } Summary Although there are differences between creating a web solution and a desktop client, the differences have faded over time. We can apply the same principles across the different environments; it's just different programming languages. The SignalR API adds the same type of consistency in thinking, although not as matured as the JavaScript API with proxy generation and so on; still the same ideas and concepts are found in the underlying API. Resources for Article: Further resources on this subject: The Importance of Securing Web Services [article] Working with WebStart and the Browser Plugin [article] Microsoft Azure – Developing Web API for Mobile Apps [article]


Building a Chat Application

Packt
27 Jun 2013
4 min read
(For more resources related to this topic, see here.)

The following is a screenshot of our chat application:

Creating a project

To begin developing our chat application, we need to create an Opa project using the following Opa command:

opa create chat

This command will create an empty Opa project. It will also generate the required directories and files automatically, with the structure shown in the following screenshot:

Let's have a brief look at what these source code files do:

controller.opa: This file serves as the entry point of the chat application; we start the web server in controller.opa
view.opa: This file serves as the user interface
model.opa: This is the model of the chat application; it defines the message, the network, and the chat room
style.css: This is an external stylesheet file
Makefile: This file is used to build the application

As we do not need database support in the chat application, we can remove --import-package stdlib.database.mongo from the FLAG option in the Makefile. Type make and make run to run the empty application.

Launching the web server

Let's begin with controller.opa, the entry point of our chat application, where we launch the web server. We have already discussed the function Server.start in the Server module section. In our chat application, we will use a handlers group to handle user requests:

Server.start(Server.http, [
    {resources: @static_resource_directory("resources")},
    {register: [{css: ["/resources/css/style.css"]}]},
    {title: "Opa Chat", page: View.page}
])

So, what exactly are the arguments that we are passing to the Server.start function? The line {resources: @static_resource_directory("resources")} registers a resource handler and will serve resource files in the resources directory. Next, the line {register: [{css:["/resources/css/style.css"]}]} registers an external CSS file, style.css. This permits us to use the styles defined in style.css throughout the application. Finally, the line {title:"Opa Chat", page: View.page} registers a single-page handler that will dispatch all other requests to the function View.page. The server uses the default configuration Server.http and will run on port 8080.

Designing the user interface

When the application starts, all requests (except requests for resources) will be dispatched to the function View.page, which displays the chat page in the browser. Let's take a look at the view part; we define a module named View in view.opa:

import stdlib.themes.bootstrap.css

module View {
    function page(){
        user = Random.string(8)
        <div id=#title class="navbar navbar-inverse navbar-fixed-top">
            <div class=navbar-inner>
                <div id=#logo />
            </div>
        </div>
        <div id=#conversation class=container-fluid
            onready={function(_){Model.join(updatemsg)}} />
        <div id=#footer class="navbar navbar-fixed-bottom">
            <div class=input-append>
                <input type=text id=#entry class=input-xxlarge onnewline={broadcast(user)}/>
                <button class="btn btn-primary" onclick={broadcast(user)}>Post</button>
            </div>
        </div>
    }
    ...
}

The module View contains functions to display the page in the browser. In the first line, import stdlib.themes.bootstrap.css, we import the Bootstrap styles. This permits us to use Bootstrap markup in our code, such as navbar, navbar-fixed-top, and btn-primary. We also registered an external style.css file, so we can use styles from style.css such as conversation and footer. As we can see, the code in the function page follows almost the same syntax as HTML.
As discussed earlier, we can use HTML freely in Opa code; HTML values have the predefined type xhtml in Opa.

Summary

In this article, we started by creating a project and launching the web server.

Resources for Article: Further resources on this subject: MySQL 5.1 Plugin: HTML Storage Engine—Reads and Writes [Article] Using jQuery and jQueryUI Widget Factory plugins with RequireJS [Article] Oracle Web RowSet - Part1 [Article]


Why CoffeeScript?

Packt
31 Jan 2013
9 min read
(For more resources related to this topic, see here.)

CoffeeScript

CoffeeScript compiles to JavaScript and follows its idioms closely. It's quite possible to rewrite any CoffeeScript code in JavaScript and it won't look drastically different. So why would you want to use CoffeeScript?

As an experienced JavaScript programmer, you might think that learning a completely new language is simply not worth the time and effort. But ultimately, code is for programmers. The compiler doesn't care how the code looks or how clear its meaning is; either it will run or it won't. We aim to write expressive code as programmers so that we can read, reference, understand, modify, and rewrite it. If the code is too complex or filled with needless ceremony, it will be harder to understand and maintain. CoffeeScript gives us an opportunity to clarify our ideas and write more readable code.

It's a misconception to think that CoffeeScript is very different from JavaScript. There might be some drastic syntax differences here and there, but in essence, CoffeeScript was designed to polish the rough edges of JavaScript to reveal the beautiful language hidden beneath. It steers programmers towards JavaScript's so-called "good parts" and holds strong opinions of what constitutes good JavaScript.

One of the mantras of the CoffeeScript community is: "It's just JavaScript", and I have also found that the best way to truly comprehend the language is to look at how it generates its output, which is actually quite readable and understandable code. Throughout this article, we'll highlight some of the differences between the two languages, often focusing on the things in JavaScript that CoffeeScript tries to improve. In this way, I would not only like to give you an overview of the major features of the language, but also prepare you to be able to debug your CoffeeScript from its generated code once you start using it more often, as well as to convert existing JavaScript. Let's start with some of the things CoffeeScript fixes in JavaScript.

CoffeeScript syntax

One of the great things about CoffeeScript is that you tend to write much shorter and more succinct programs than you normally would in JavaScript. Some of this is because of the powerful features added to the language, but it also makes a few tweaks to the general syntax of JavaScript to transform it into something quite elegant. It does away with all the semicolons, braces, and other cruft that usually contributes to a lot of the "line noise" in JavaScript.

To illustrate this, let's look at an example. The CoffeeScript is shown first, followed by the JavaScript it generates:

CoffeeScript:

fibonacci = (n) ->
  return 0 if n == 0
  return 1 if n == 1
  (fibonacci n-1) + (fibonacci n-2)

alert fibonacci 10

JavaScript:

var fibonacci;
fibonacci = function(n) {
  if (n === 0) { return 0; }
  if (n === 1) { return 1; }
  return (fibonacci(n - 1)) + (fibonacci(n - 2));
};
alert(fibonacci(10));

To run the code examples in this article, you can use the great Try CoffeeScript online tool, at http://coffeescript.org. It allows you to type in CoffeeScript code, which will then display the equivalent JavaScript in a side pane. You can also run the code right from the browser (by clicking the Run button in the upper-left corner).

At first, the two languages might appear to be quite drastically different, but hopefully as we go through the differences, you'll see that it's all still JavaScript with some small tweaks and a lot of nice syntactical sugar.
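Before we walk through those tweaks one by one, here is one more small example you can paste into the Try CoffeeScript tool. It is a sketch of our own rather than code from the article (the grade function and its cut-off values are made up), but it previews the features discussed next: no semicolons, no braces, indentation-based blocks, optional parentheses, and implicit returns. The JavaScript it compiles to is shown in the trailing comments:

# A made-up helper: maps a numeric score to a letter grade
grade = (score) ->
  if score >= 90
    "A"
  else if score >= 75
    "B"
  else
    "C"

alert grade 82   # shows "B"

# Roughly equivalent generated JavaScript:
#   var grade;
#   grade = function(score) {
#     if (score >= 90) { return "A";
#     } else if (score >= 75) { return "B";
#     } else { return "C"; }
#   };
#   alert(grade(82));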
Semicolons and braces

As you might have noticed, CoffeeScript does away with all the trailing semicolons at the end of a line. You can still use a semicolon if you want to put two expressions on a single line. It also does away with enclosing braces (also known as curly brackets) for code blocks such as if statements, switch, and the try..catch block.

Whitespace

You might be wondering how the parser figures out where your code blocks start and end. The CoffeeScript compiler does this by using syntactical whitespace. This means that indentation is used to delimit code blocks instead of braces. This is perhaps one of the most controversial features of the language. If you think about it, in almost all languages, programmers tend to already use indentation of code blocks to improve readability, so why not make it part of the syntax? This is not a new concept, and was mostly borrowed from Python. If you have any experience with a significant-whitespace language, you will not have any trouble with CoffeeScript indentation.

If you don't, it might take some getting used to, but it makes for code that is wonderfully readable and easy to scan, while shaving off quite a few keystrokes. I'm willing to bet that if you do take the time to get over some initial reservations you might have, you might just grow to love block indentation.

Blocks can be indented with tabs or spaces, but be careful to use one or the other consistently, or CoffeeScript will not be able to parse your code correctly.

Parentheses

You'll see that the clause of the if statement does not need to be enclosed within parentheses. The same goes for the alert function; you'll see that the single string parameter follows the function call without parentheses as well. In CoffeeScript, parentheses are optional in function calls with parameters, in the clauses of if..else statements, and in while loops.

Although functions with arguments do not need parentheses, it is still a good idea to use them in cases where ambiguity might exist. The CoffeeScript community has come up with a nice idiom: wrapping the whole function call in parentheses. The use of the alert function in CoffeeScript is shown below, together with the generated JavaScript:

CoffeeScript:

alert square 2 * 2.5 + 1
alert (square 2 * 2.5) + 1

JavaScript:

alert(square(2 * 2.5 + 1));
alert((square(2 * 2.5)) + 1);

Functions are first class objects in JavaScript. This means that when you refer to a function without parentheses, it will return the function itself, as a value. Thus, in CoffeeScript you still need to add parentheses when calling a function with no arguments.

By making these few tweaks to the syntax of JavaScript, CoffeeScript arguably already improves the readability and succinctness of your code by a big factor, and also saves you quite a lot of keystrokes. But it has a few other tricks up its sleeve. Most programmers who have written a fair amount of JavaScript would probably agree that one of the phrases that gets typed most frequently would have to be the function definition, function(){}. Functions are really at the heart of JavaScript, yet they are not without their warts.

CoffeeScript has great function syntax

The fact that you can treat functions as first class objects, as well as create anonymous functions, is one of JavaScript's most powerful features. However, the syntax can be very awkward and make the code hard to read (especially if you start nesting functions). But CoffeeScript has a fix for this.
Have a look at the following snippets:

CoffeeScript:

-> alert 'hi there!'
square = (n) -> n * n

JavaScript:

var square;
(function() {
  return alert('hi there!');
});
square = function(n) {
  return n * n;
};

Here, we are creating two anonymous functions; the first just displays a dialog and the second will return the square of its argument. You've probably noticed the funny -> symbol and might have figured out what it does. Yep, that is how you define a function in CoffeeScript. I have come across a couple of different names for the symbol, but the most accepted term seems to be a thin arrow or just an arrow.

Notice that the first function definition has no arguments and thus we can drop the parentheses. The second function does have a single argument, which is enclosed in parentheses placed in front of the -> symbol. With what we now know, we can formulate a few simple substitution rules to convert JavaScript function declarations to CoffeeScript. They are as follows:

Replace the function keyword with ->
If the function has no arguments, drop the parentheses
If it has arguments, move the whole argument list with parentheses in front of the -> symbol
Make sure that the function body is properly indented and then drop the enclosing braces

Return isn't required

You might have noted that in both the functions, we left out the return keyword. By default, CoffeeScript will return the last expression in your function. It will try to do this in all the paths of execution. CoffeeScript will try turning any statement (a fragment of code that returns nothing) into an expression that returns a value. CoffeeScript programmers will often refer to this feature of the language by saying that everything is an expression. This means you don't need to type return anymore, but keep in mind that this can, in many cases, alter your code subtly, because you will always return something. If you need to return a value from a function before the last statement, you can still use return.

Function arguments

Function arguments can also take an optional default value. In the following code snippet you'll see that the optional value specified is assigned in the body of the generated JavaScript:

CoffeeScript:

square = (n=1) -> alert(n * n)

JavaScript:

var square;
square = function(n) {
  if (n == null) {
    n = 1;
  }
  return alert(n * n);
};

In JavaScript, each function has an array-like structure called arguments, with an indexed property for each argument that was passed to the function. You can use arguments to pass in a variable number of parameters to a function. Each parameter will be an element in arguments, and thus you don't have to refer to parameters by name. Although the arguments object acts somewhat like an array, it is not in fact a "real" array and lacks most of the standard array methods. Often, you'll find that arguments doesn't provide the functionality needed to inspect and manipulate its elements the way an array does.

Summary

We saw how CoffeeScript can help you write shorter, cleaner, and more elegant code than you normally would in JavaScript and avoid many of its pitfalls. We came to realize that even though CoffeeScript's syntax seems to be quite different from JavaScript, it actually maps pretty closely to its generated output.

Resources for Article:

Further resources on this subject: ASP.Net Site Performance: Improving JavaScript Loading [Article] Build iPhone, Android and iPad Applications using jQTouch [Article] An Overview of the Node Package Manager [Article]

ASP.NET Controllers and Server-Side Routes

Packt
19 Aug 2016
22 min read
In this article by Valerio De Sanctis, author of the book ASP.NET Web API and Angular 2, we will explore the client-server interaction capabilities of our frameworks: in other words, we need to understand how Angular2 will be able to fetch data from ASP.NET Core using its brand new, MVC6-based API structure. We won't be worrying about how ASP.NET Core will retrieve this data, be it from session objects, data stores, a DBMS, or any other possible data source; that will come later on. For now, we'll just put together some sample, static data in order to understand how to pass it back and forth by using a well-structured, highly configurable, and viable interface.

(For more resources related to this topic, see here.)

The data flow

A Native Web App following the single-page application approach will roughly handle the client-server communication in the following way:

In case you are wondering what these Async Data Requests actually are, the answer is simple: everything, as long as it needs to retrieve data from the server, which is something that most of the common user interactions will normally do, including (but not limited to) pressing a button to show more data or to edit/delete something, following a link to another app view, submitting a form, and so on. That is, unless the task is so trivial, or involves such a minimal amount of data, that the client can handle it entirely, meaning that it already has everything it needs. Examples of such tasks are: show/hide element toggles, in-page navigation elements (such as internal anchors), and any temporary job that only requires a confirmation or save button to be pressed before being actually processed.

The above picture shows, in a nutshell, what we're going to do: define and implement a pattern to serve the JSON-based, server-side responses our application will need to handle the upcoming requests. Since we've chosen a strongly data-driven application pattern such as a Wiki, we'll surely need to put together a bunch of common CRUD-based requests revolving around a defined object which will represent our entries. For the sake of simplicity, we'll call it Item from now on.

These requests will address some common CMS-inspired tasks such as: displaying a list of items, viewing/editing the selected item's details, handling filters and text-based search queries, and also deleting an item. Before going further, let's have a more detailed look at what happens between any of these Data Requests issued by the client and the JSON Responses sent out by the server, that is, what's usually called the Request/Response flow:

As we can see, in order to respond to any client-issued Async Data Request, we need to build a server-side MVC6 WebAPI Controller featuring the following capabilities:

Read and/or write data using the Data Access Layer.
Organize this data in a suitable, JSON-serializable ViewModel.
Serialize the ViewModel and send it to the client as a JSON Response.

Based on these points, we could easily conclude that the ViewModel is the key item here. That's not always correct: it could or couldn't be the case, depending on the project we are building. To better clarify that, before going further, it could be useful to spend a couple of words on the ViewModel object itself.

The role of the ViewModel

We all know that a ViewModel is a container-type class which represents only the data we want to display on our webpage.
In any standard MVC-based ASP.NET application, the ViewModel is instantiated by the Controller in response to a GET request using the data fetched from the Model: once built, the ViewModel is passed to the View, where it is used to populate the page contents/input fields. The main reason for building a ViewModel instead of directly passing the Model entities is that it only represents the data that we want to use, and nothing else. All the unnecessary properties that are in the model domain object will be left out, keeping the data transfer as lightweight as possible. Another advantage is the additional security it gives, since we can protect any field from being serialized and passed through the HTTP channel.

In a standard Web API context, where the data is passed using RESTful conventions via serialized formats such as JSON or XML, the ViewModel could easily be replaced by a JSON-serializable dynamic object created on the fly, such as this:

var response = new {
  Id = "1",
  Title = "The title",
  Description = "The description"
};

This approach is often viable for small or sample projects, where creating one (or many) ViewModel classes could be a waste of time. That's not our case, though: conversely, our project will greatly benefit from having a well-defined, strongly typed ViewModel structure, even if all of them will eventually be converted into JSON strings.

Our first controller

Now that we have a clear vision of the Request/Response flow and its main actors, we can start building something up. Let's start with the Welcome View, which is the first page that any user will see upon connecting to our native web app. This is something that in a standard web application would be called the Home Page, but since we are following a Single Page Application approach, that name isn't appropriate. After all, we are not going to have more than one page.

In most Wikis, the Welcome View/Home Page contains a brief text explaining the context/topic of the project and then one or more lists of items ordered and/or filtered in various ways, such as:

The last inserted ones (most recent first).
The most relevant/visited ones (most viewed first).
Some random items (in random order).

Let's try to do something like that. This will be our master plan for a suitable Welcome View:

In order to do that, we're going to need the following set of API calls:

api/items/GetLatest (to fetch the last inserted items).
api/items/GetMostViewed (to fetch the most viewed items).
api/items/GetRandom (to fetch some randomly picked items).

As we can see, all of them will be returning a list of items ordered by a well-defined logic. That's why, before working on them, we should provide ourselves with a suitable ViewModel.

The ItemViewModel

One of the biggest advantages of building a Native Web App using ASP.NET and Angular2 is that we can start writing our code without worrying too much about data sources: they will come later, and only after we're sure about what we really need. This is not a requirement either; you are also free to start with your data source for a number of good reasons, such as:

You already have a clear idea of what you'll need.
You already have your entity set(s) and/or a defined/populated data structure to work with.
You're used to starting with the data and then moving to the GUI.

All the above reasons are perfectly fine: you won't ever get fired for doing that.
Yet, the chance to start with the front-end might help you a lot if you're still unsure about how your application will look, either in terms of GUI and/or data. In building this Native Web App, we'll take advantage of that: hence we'll start by defining our ItemViewModel instead of creating its Data Source and Entity class.

From Solution Explorer, right-click on the project root node and add a new folder named ViewModels. Once created, right-click on it and add a new item: from the server-side elements, pick a standard Class, name it ItemViewModel.cs and hit the Add button, then type in the following code:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Linq;
using System.Threading.Tasks;
using Newtonsoft.Json;

namespace OpenGameListWebApp.ViewModels
{
    [JsonObject(MemberSerialization.OptOut)]
    public class ItemViewModel
    {
        #region Constructor
        public ItemViewModel()
        {
        }
        #endregion Constructor

        #region Properties
        public int Id { get; set; }
        public string Title { get; set; }
        public string Description { get; set; }
        public string Text { get; set; }
        public string Notes { get; set; }
        [DefaultValue(0)]
        public int Type { get; set; }
        [DefaultValue(0)]
        public int Flags { get; set; }
        public string UserId { get; set; }
        [JsonIgnore]
        public int ViewCount { get; set; }
        public DateTime CreatedDate { get; set; }
        public DateTime LastModifiedDate { get; set; }
        #endregion Properties
    }
}

As we can see, we're defining a rather complex class: this isn't something we could easily handle using a dynamic object created on the fly, hence we're using a ViewModel instead. We will be installing Newtonsoft's Json.NET package using NuGet. We start using it in this class by including its namespace in line 6 and decorating our newly created class with a JsonObject attribute in line 10. That attribute can be used to set a list of behaviours of the JsonSerializer/JsonDeserializer methods, overriding the default ones: notice that we used MemberSerialization.OptOut, meaning that any field will be serialized into JSON unless it is decorated with an explicit JsonIgnore or NonSerialized attribute. We are making this choice because we're going to need most of our ViewModel properties serialized, as we'll be seeing soon enough.

The ItemController

Now that we have our ItemViewModel class, let's use it to return some server-side data. From your project's root node, open the /Controllers/ folder: right-click on it, select Add > New Item, then create a Web API Controller class, name it ItemController.cs and click the Add button to create it. The controller will be created with a bunch of sample methods: they are identical to those present in the default ValuesController.cs, hence we don't need to keep them.
Delete the entire file content and replace it with the following code:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json;
using OpenGameListWebApp.ViewModels;

namespace OpenGameListWebApp.Controllers
{
    [Route("api/[controller]")]
    public class ItemsController : Controller
    {
        // GET api/items/GetLatest/5
        [HttpGet("GetLatest/{num}")]
        public JsonResult GetLatest(int num)
        {
            var arr = new List<ItemViewModel>();
            for (int i = 1; i <= num; i++)
                arr.Add(new ItemViewModel()
                {
                    Id = i,
                    Title = String.Format("Item {0} Title", i),
                    Description = String.Format("Item {0} Description", i)
                });
            var settings = new JsonSerializerSettings() { Formatting = Formatting.Indented };
            return new JsonResult(arr, settings);
        }
    }
}

This controller will be in charge of all Item-related operations within our app. As we can see, we started by defining a GetLatest method accepting a single Integer parameter value. The method accepts any GET request using the custom routing rules configured via the HttpGet attribute: this approach is called Attribute Routing and we'll be digging more into it later in this article. For now, let's stick to the code inside the method itself.

The behaviour is really simple: since we don't (yet) have a Data Source, we're basically mocking a bunch of ItemViewModel objects. Notice that, although it's just a fake response, we're doing it in a structured and credible way, respecting the number of items issued by the request and also providing different content for each one of them. It's also worth noticing that we're using a JsonResult return type, which is the best thing we can do as long as we're working with ViewModel classes featuring the JsonObject attribute provided by the Json.NET framework: that's definitely better than returning plain string or IEnumerable<string> types, as it will automatically take care of serializing the outcome and setting the appropriate response headers.

Let's try our Controller by running our app in Debug Mode: select Debug > Start Debugging from the main menu or press F5. The default browser should open, pointing to the index.html page because we did set it as the Launch URL in our project's debug properties. In order to test our brand new API Controller, we need to manually change the URL to the following: /api/items/GetLatest/5

If we did everything correctly, it will show something like the following:

Our first controller is up and running. As you can see, the ViewCount property is not present in the JSON-serialized output: that's by design, since it has been flagged with the JsonIgnore attribute, meaning that we're explicitly opting it out. Now that we've seen that it works, we can come back to the routing aspect of what we just did: since it is a major topic, it's well worth some of our time.

Understanding routes

We have to acknowledge the fact that the ASP.NET Core pipeline has been completely rewritten in order to merge the MVC and WebAPI modules into a single, lightweight framework able to handle both worlds. Although this certainly is a good thing, it comes with the usual downside that we need to learn a lot of new stuff. Handling routes is a perfect example of this, as the new approach defines some major breaking changes from the past.

Defining routing

The first thing we should do is give a proper definition of what routing actually is. To put it simply, we could say that URL routing is the server-side feature that allows a web developer to handle HTTP requests pointing to URIs that do not map to physical files.
Such a technique could be used for a number of different reasons, including:

Giving dynamic pages semantic, meaningful, and human-readable names in order to improve readability and/or search-engine optimization (SEO).
Renaming or moving one or more physical files within your project's folder tree without being forced to change their URLs.
Setting up aliases and redirects.

Routing through the ages

In earlier times, when ASP.NET was just Web Forms, URL routing was strictly bound to physical files: in order to implement viable URL convention patterns, developers were forced to install/configure a dedicated URL rewriting tool by using either an external ISAPI filter such as Helicon Tech's ISAPI Rewrite or, starting with IIS7, the IIS URL Rewrite Module.

When ASP.NET MVC was released, the routing pattern was completely rewritten: developers could set up their own convention-based routes in a dedicated file (RouteConfig.cs or Global.asax, depending on the template) using the Routes.MapRoute method. If you've played along with MVC 1 through 5 or WebAPI 1 and/or 2, snippets like this should be quite familiar to you:

routes.MapRoute(
    name: "Default",
    url: "{controller}/{action}/{id}",
    defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
);

This method of defining routes, strictly based upon pattern-matching techniques used to relate any given URL request to a specific Controller Action, went by the name of Convention-based Routing.

ASP.NET MVC5 brought something new, as it was the first version supporting the so-called Attribute-based Routing. This approach was designed as an effort to give developers a more versatile option. If you used it at least once, you'll probably agree that it was a great addition to the framework, as it allowed developers to define routes within the Controller file. Even those who chose to keep the convention-based approach could find it useful for one-time overrides like the following, without having to sort it out using some regular expressions:

[RoutePrefix("v2Products")]
public class ProductsController : Controller
{
    [Route("v2Index")]
    public ActionResult Index()
    {
        return View();
    }
}

In ASP.NET MVC6, the routing pipeline has been rewritten completely: that's why things like the Routes.MapRoute() method are not used anymore, as well as any explicit default routing configuration. You won't find anything like that in the new Startup.cs file, which contains a very small amount of code and (apparently) nothing about routes.

Handling routes in ASP.NET MVC6

We could say that the reason behind the disappearance of the Routes.MapRoute method from the application's main configuration file is the fact that there's no need to set up default routes anymore. Routing is handled by the two brand-new services.AddMvc() and app.UseMvc() methods called within the Startup.cs file, which respectively register MVC using the Dependency Injection framework built into ASP.NET Core, and add a set of default routes to our app. We can take a look at what happens under the hood by looking at the current implementation of the UseMvc() method in the framework code (relevant lines are highlighted):

public static IApplicationBuilder UseMvc(
    [NotNull] this IApplicationBuilder app,
    [NotNull] Action<IRouteBuilder> configureRoutes)
{
    // Verify if AddMvc was done before calling UseMvc
    // We use the MvcMarkerService to make sure if all the services were added.
    MvcServicesHelper.ThrowIfMvcNotRegistered(app.ApplicationServices);
    var routes = new RouteBuilder
    {
        DefaultHandler = new MvcRouteHandler(),
        ServiceProvider = app.ApplicationServices
    };
    configureRoutes(routes);
    // Adding the attribute route comes after running the user-code because
    // we want to respect any changes to the DefaultHandler.
    routes.Routes.Insert(0, AttributeRouting.CreateAttributeMegaRoute(
        routes.DefaultHandler, app.ApplicationServices));
    return app.UseRouter(routes.Build());
}

The good thing about this is the fact that the framework now handles all the hard work, iterating through all the Controller's actions and setting up their default routes, thus saving us some work. It's worth noticing that the default ruleset follows the standard RESTful conventions, meaning that it will be restricted to the following action names: Get, Post, Put, Delete. We could say here that ASP.NET MVC6 is enforcing a strict WebAPI-oriented approach, which is much to be expected, since it incorporates the whole ASP.NET Core framework.

Following the RESTful convention is generally a great thing to do, especially if we aim to create a set of pragmatic, RESTful-based public APIs to be used by other developers. Conversely, if we're developing our own app and we want to keep our API accessible to our eyes only, going for custom routing standards is just as viable: as a matter of fact, it could even be a better choice to shield our Controllers against the most trivial forms of request flooding and/or DDoS-based attacks. Luckily enough, both Convention-based Routing and Attribute-based Routing are still alive and well, allowing you to set up your own standards.

Convention-based routing

If we feel like using the most classic routing approach, we can easily resurrect our beloved MapRoute() method by enhancing the app.UseMvc() call within the Startup.cs file in the following way:

app.UseMvc(routes =>
{
    // Route Sample A
    routes.MapRoute(
        name: "RouteSampleA",
        template: "MyOwnGet",
        defaults: new { controller = "Items", action = "Get" }
    );
    // Route Sample B
    routes.MapRoute(
        name: "RouteSampleB",
        template: "MyOwnPost",
        defaults: new { controller = "Items", action = "Post" }
    );
});

Attribute-based routing

Our previously shown ItemController.cs makes good use of the Attribute-based Routing approach, featuring it at the Controller level:

[Route("api/[controller]")]
public class ItemsController : Controller

and also at the Action Method level:

[HttpGet("GetLatest")]
public JsonResult GetLatest()

Three choices to route them all

Long story short, ASP.NET MVC6 gives us three different choices for handling routes: enforcing the standard RESTful conventions, reverting back to the good old Convention-based Routing, or decorating the Controller files with Attribute-based Routing. It's also worth noticing that Attribute-based routes, if and when defined, will override any matching Convention-based pattern: both of them, if and when defined, will override the default RESTful conventions created by the built-in UseMvc() method. In this article we're going to use all of these approaches, in order to learn when, where, and how to properly make use of each of them.

Adding more routes

Let's get back to our ItemController. Now that we're aware of the routing patterns we can use, we can use that knowledge to implement the API calls we're still missing.
Open the ItemController.cs file and add the following code (new lines are highlighted):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using OpenGameListWebApp.ViewModels;
using Newtonsoft.Json;

namespace OpenGameListWebApp.Controllers
{
    [Route("api/[controller]")]
    public class ItemsController : Controller
    {
        #region Attribute-based Routing
        /// <summary>
        /// GET: api/items/GetLatest/{n}
        /// ROUTING TYPE: attribute-based
        /// </summary>
        /// <returns>An array of {n} Json-serialized objects representing the last inserted items.</returns>
        [HttpGet("GetLatest/{n}")]
        public IActionResult GetLatest(int n)
        {
            var items = GetSampleItems().OrderByDescending(i => i.CreatedDate).Take(n);
            return new JsonResult(items, DefaultJsonSettings);
        }

        /// <summary>
        /// GET: api/items/GetMostViewed/{n}
        /// ROUTING TYPE: attribute-based
        /// </summary>
        /// <returns>An array of {n} Json-serialized objects representing the items with most user views.</returns>
        [HttpGet("GetMostViewed/{n}")]
        public IActionResult GetMostViewed(int n)
        {
            if (n > MaxNumberOfItems) n = MaxNumberOfItems;
            var items = GetSampleItems().OrderByDescending(i => i.ViewCount).Take(n);
            return new JsonResult(items, DefaultJsonSettings);
        }

        /// <summary>
        /// GET: api/items/GetRandom/{n}
        /// ROUTING TYPE: attribute-based
        /// </summary>
        /// <returns>An array of {n} Json-serialized objects representing some randomly-picked items.</returns>
        [HttpGet("GetRandom/{n}")]
        public IActionResult GetRandom(int n)
        {
            if (n > MaxNumberOfItems) n = MaxNumberOfItems;
            var items = GetSampleItems().OrderBy(i => Guid.NewGuid()).Take(n);
            return new JsonResult(items, DefaultJsonSettings);
        }
        #endregion

        #region Private Members
        /// <summary>
        /// Generate a sample array of source Items to emulate a database (for testing purposes only).
        /// </summary>
        /// <param name="num">The number of items to generate: default is 999</param>
        /// <returns>a defined number of mock items (for testing purpose only)</returns>
        private List<ItemViewModel> GetSampleItems(int num = 999)
        {
            List<ItemViewModel> lst = new List<ItemViewModel>();
            DateTime date = new DateTime(2015, 12, 31).AddDays(-num);
            for (int id = 1; id <= num; id++)
            {
                lst.Add(new ItemViewModel()
                {
                    Id = id,
                    Title = String.Format("Item {0} Title", id),
                    Description = String.Format("This is a sample description for item {0}: Lorem ipsum dolor sit amet.", id),
                    CreatedDate = date.AddDays(id),
                    LastModifiedDate = date.AddDays(id),
                    ViewCount = num - id
                });
            }
            return lst;
        }

        /// <summary>
        /// Returns a suitable JsonSerializerSettings object that can be used to generate the JsonResult return value for this Controller's methods.
        /// </summary>
        private JsonSerializerSettings DefaultJsonSettings
        {
            get
            {
                return new JsonSerializerSettings() { Formatting = Formatting.Indented };
            }
        }

        /// <summary>
        /// The maximum number of items that can be requested. This member is not part of the original
        /// listing; it is added here (with an assumed value) so that the excerpt compiles.
        /// </summary>
        private int MaxNumberOfItems
        {
            get { return 100; }
        }
        #endregion
    }
}

We added a lot of things there, that's for sure. Let's see what's new:

We added the GetMostViewed(n) and GetRandom(n) methods, built upon the same mocking logic used for GetLatest(n): each one requires a single parameter of Integer type to specify the (maximum) number of items to retrieve.
We added two new private members:
The GetSampleItems() method, to generate some sample Item objects when we need them. This method is an improved version of the dummy item generator loop we had inside the previous GetLatest() method implementation, as it acts more like a Dummy Data Provider: we'll tell more about it later on.
The DefaultJsonSettings property, so we won't have to manually instantiate a JsonSerializerSettings object every time.

We also decorated each class member with a dedicated <summary> documentation tag explaining what it does and its return value. These tags will be used by IntelliSense to show real-time information about the type within the Visual Studio GUI. They will also come in handy when we want to produce auto-generated XML documentation for our project by using industry-standard documentation tools such as Sandcastle.

Finally, we added some #region / #endregion pre-processor directives to separate our code into blocks. We'll do this a lot from now on, as this will greatly increase our source code readability and usability, allowing us to expand or collapse different sections/parts when we don't need them, thus focusing more on what we're working on.

For more info regarding documentation tags, take a look at the following MSDN official documentation page: https://msdn.microsoft.com/library/2d6dt3kf.aspx

If you want to know more about C# pre-processor directives, this is the one to check out instead: https://msdn.microsoft.com/library/9a1ybwek.aspx

The dummy data provider

Our new GetSampleItems() method deserves a couple more words. As we can easily see, it emulates the role of a Data Provider, returning a list of items in a credible fashion. Notice that we built it in a way that it will always return identical items, as long as the num parameter value remains the same:

The generated items' Id values will follow a linear sequence, from 1 to num.
Any generated item will have incremental CreatedDate and LastModifiedDate values based upon its Id: the higher the Id, the more recent the two dates will be, up to 31 December 2015. This follows the assumption that the most recent items will have a higher Id, as is normally the case for DBMS records featuring numeric, auto-incremental keys.
Any generated item will have a decreasing ViewCount value based upon its Id: the higher the Id is, the lower it will be. This follows the assumption that newer items will generally get fewer views than older ones.

While it obviously lacks any insert/update/delete feature, this Dummy Data Provider is viable enough to serve our purposes until we replace it with an actual, persistence-based Data Source. Technically speaking, we could do something better than we did by using one of the many Mocking Frameworks available through NuGet: Moq, NMock3, NSubstitute, or Rhino, just to name a few.

Summary

We spent some time putting the standard application data flow under our lens: a two-way communication pattern between the server and its clients, built upon the HTTP protocol. We acknowledged the fact that we'll mostly be dealing with JSON-serializable objects such as Items, so we chose to equip ourselves with an ItemViewModel server-side class, together with an ItemController that will actively use it to expose the data to the client.

We started building our MVC6-based WebAPI interface by implementing a number of methods required to create the client-side UI we chose for our Welcome View, consisting of three item listings to show to our users: the last inserted ones, the most viewed ones, and some random picks. We routed the requests to them by using a custom set of Attribute-based routing rules, which seemed to be the best choice for our specific scenario. While we were there, we also took the chance to add a dedicated method to retrieve a single Item from its unique Id, assuming we're going to need it for sure.
Resources for Article: Further resources on this subject: Designing your very own ASP.NET MVC Application [article] ASP.Net Site Performance: Improving JavaScript Loading [article] Displaying MySQL data on an ASP.NET Web Page [article]


Working with XML Documents in PHP jQuery

Packt
23 Dec 2010
8 min read
PHP jQuery Cookbook

Over 60 simple but highly effective recipes to create interactive web applications using PHP with jQuery
Create rich and interactive web applications with PHP and jQuery
Debug and execute jQuery code on a live site
Design interactive forms and menus
Another title in the Packt Cookbook range, which will help you get to grips with PHP as well as jQuery

Introduction

Extensible Markup Language—also known as XML—is a structure for representing data in a human-readable format. Contrary to its name, it's actually not a language but a markup that focuses on data and its structure. XML is a lot like HTML in syntax, except that where HTML is used for the presentation of data, XML is used for storing data and data interchange. Moreover, all the tags in an XML document are user-defined and can be formatted according to one's will. But an XML document must follow the specification recommended by the W3C.

With a large increase in distributed applications over the internet, XML is the most widely used method of data interchange between applications. Web services use XML to carry and exchange data between applications. Since XML is platform-independent and is stored in string format, applications using different server-side technologies can communicate with each other using XML. Consider the following XML document:

From the above document, we can infer that it is a list of websites containing data about the name, URL, and some information about each website.

PHP has several classes and functions available for working with XML documents. You can read, write, modify, and query documents easily using these functions. In this article, we will discuss the SimpleXML functions and the DOMDocument class of PHP for manipulating XML documents. You will learn how to read and modify XML files using SimpleXML as well as the DOM API. We will also explore the XPath method, which makes traversing documents a lot easier.

Note that an XML document must be well-formed and valid before we can do anything with it. There are many rules that define the well-formedness of XML, out of which a few are given below:

An XML document must have a single root element.
There cannot be special characters like <, >, and so on.
Each XML tag must have a corresponding closing tag.
Tags are case sensitive.

To know more about the validity of an XML document, you can refer to this link: http://en.wikipedia.org/wiki/XML#Schemas_and_validation

For most of the recipes in this article, we will use an already created XML file. Create a new file, save it as common.xml in the Article3 directory. Put the following contents in this file.

<?xml version="1.0"?>
<books>
  <book index="1">
    <name year="1892">The Adventures of Sherlock Holmes</name>
    <story>
      <title>A Scandal in Bohemia</title>
      <quote>You see, but you do not observe. The distinction is clear.</quote>
    </story>
    <story>
      <title>The Red-headed League</title>
      <quote>It is quite a three pipe problem, and I beg that you won't speak to me for fifty minutes.</quote>
    </story>
    <story>
      <title>The Man with the Twisted Lip</title>
      <quote>It is, of course, a trifle, but there is nothing so important as trifles.</quote>
    </story>
  </book>
  <book index="2">
    <name year="1927">The Case-book of Sherlock Holmes</name>
    <story>
      <title>The Adventure of the Three Gables</title>
      <quote>I am not the law, but I represent justice so far as my feeble powers go.</quote>
    </story>
    <story>
      <title>The Problem of Thor Bridge</title>
      <quote>We must look for consistency. Where there is a want of it we must suspect deception.</quote>
    </story>
    <story>
      <title>The Adventure of Shoscombe Old Place</title>
      <quote>Dogs don't make mistakes.</quote>
    </story>
  </book>
  <book index="3">
    <name year="1893">The Memoirs of Sherlock Holmes</name>
    <story>
      <title>The Yellow Face</title>
      <quote>Any truth is better than indefinite doubt.</quote>
    </story>
    <story>
      <title>The Stockbroker's Clerk</title>
      <quote>Results without causes are much more impressive.</quote>
    </story>
    <story>
      <title>The Final Problem</title>
      <quote>If I were assured of your eventual destruction I would, in the interests of the public, cheerfully accept my own.</quote>
    </story>
  </book>
</books>

Loading XML from files and strings using SimpleXML

True to its name, SimpleXML provides an easy way to access data from XML documents. XML files or strings can be converted into objects, and data can be read from them. We will see how to load an XML document from a file or a string using the SimpleXML functions. You will also learn how to handle errors in XML documents.

Getting ready

Create a new directory named Article3. This directory will contain sub-folders for each recipe. So, create another folder named Recipe1 inside it.

How to do it...

Create a file named index.php in the Recipe1 folder. In this file, write the PHP code that will try to load the common.xml file. On loading it successfully, it will display a list of book names. We have also used the libxml functions that will detect any error and show its detailed description on the screen.

<?php
  libxml_use_internal_errors(true);
  $objXML = simplexml_load_file('../common.xml');
  if (!$objXML)
  {
    $errors = libxml_get_errors();
    foreach($errors as $error)
    {
      echo $error->message,'<br/>';
    }
  }
  else
  {
    foreach($objXML->book as $book)
    {
      echo $book->name.'<br/>';
    }
  }
?>

Open your browser and point it to the index.php file. Because we have already validated the XML file, you will see the following output on the screen:

The Adventures of Sherlock Holmes
The Case-book of Sherlock Holmes
The Memoirs of Sherlock Holmes

Let us corrupt the XML file now. For this, open the common.xml file and delete any node name. Save this file and reload index.php in your browser. You will see a detailed error description on your screen:

How it works...

In the first line, passing a true value to the libxml_use_internal_errors function will suppress any XML errors and allow us to handle errors from the code itself. The second line tries to load the specified XML using the simplexml_load_file function. If the XML is loaded successfully, it is converted into a SimpleXMLElement object; otherwise a false value is returned.

We then check the return value. If it is false, we use the libxml_get_errors() function to get all the errors in the form of an array. This array contains objects of type LibXMLError. Each of these objects has several properties. In the previous code, we iterated over the errors array and echoed the message property of each object, which contains a detailed error message. If there are no errors in the XML, we get a SimpleXMLElement object which has all the XML data loaded in it.

There's more...

Parameters for simplexml_load_file

More parameters are available for the simplexml_load_file method, which are as follows:

filename: This is the first parameter, which is mandatory. It can be a path to a local XML file or a URL.
class_name: You can extend the SimpleXMLElement class. In that case, you can specify that class name here, and it will return an object of that class.
This parameter is optional.
options: This third parameter allows you to specify libxml parameters for more control over how the XML is handled while loading. This is also optional.

simplexml_load_string

Similar to simplexml_load_file is simplexml_load_string, which also creates a SimpleXMLElement on successful execution. If a valid XML string is passed to it, we get a SimpleXMLElement object, or a false value otherwise.

$objXML = simplexml_load_string('<?xml version="1.0"?><book><name>My favourite book</name></book>');

The above code will return a SimpleXMLElement object with data loaded from the XML string. The second and third parameters of this function are the same as those of simplexml_load_file.

Using SimpleXMLElement to create an object

You can also use the constructor of the SimpleXMLElement class to create a new object.

$objXML = new SimpleXMLElement('<?xml version="1.0"?><book><name>My favourite book</name></book>');

More info about SimpleXML and libxml

You can read about SimpleXML in more detail on the PHP site at http://php.net/manual/en/book.simplexml.php and about libxml at http://php.net/manual/en/book.libxml.php.
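As a quick illustration of what you can do with the SimpleXMLElement object returned by any of these loaders, the following short sketch walks the common.xml structure created earlier. It is not part of the original recipe, but it only uses standard SimpleXML features: child elements are reached as object properties, and attributes such as index and year are read with array-style access.

<?php
  // A minimal sketch (not from the recipe): traverse the object returned
  // by simplexml_load_file using the structure of common.xml shown above.
  $objXML = simplexml_load_file('../common.xml');
  if ($objXML)
  {
    foreach($objXML->book as $book)
    {
      // Attributes are read with array-style access
      echo $book['index'].'. '.$book->name.' ('.$book->name['year'].')<br/>';
      foreach($book->story as $story)
      {
        echo '&nbsp;&nbsp;- '.$story->title.'<br/>';
      }
    }
  }
?>

You could achieve the same result with the children() and attributes() methods; the property and array syntax shown here is simply the most compact form.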


Building Your First Application

Packt
10 Jan 2013
12 min read
(For more resources related to this topic, see here.)

Improving the scaffolding application

In this recipe, we discuss how to create your own scaffolding application and add your own configuration file. The scaffolding application is the collection of files that come with any new web2py application.

How to do it...

The scaffolding app includes several files. One of them is models/db.py, which imports four classes from gluon.tools (Mail, Auth, Crud, and Service), and defines the following global objects: db, mail, auth, crud, and service. The scaffolding application also defines tables required by the auth object, such as db.auth_user. The default scaffolding application is designed to minimize the number of files, not to be modular. In particular, the model file, db.py, contains the configuration which, in a production environment, is best kept in separate files.

Here, we suggest creating a configuration file, models/0.py, that contains something like the following:

from gluon.storage import Storage
settings = Storage()
settings.production = False
if settings.production:
    settings.db_uri = 'sqlite://production.sqlite'
    settings.migrate = False
else:
    settings.db_uri = 'sqlite://development.sqlite'
    settings.migrate = True
settings.title = request.application
settings.subtitle = 'write something here'
settings.author = 'you'
settings.author_email = 'you@example.com'
settings.keywords = ''
settings.description = ''
settings.layout_theme = 'Default'
settings.security_key = 'a098c897-724b-4e05-b2d8-8ee993385ae6'
settings.email_server = 'localhost'
settings.email_sender = 'you@example.com'
settings.email_login = ''
settings.login_method = 'local'
settings.login_config = ''

We also modify models/db.py so that it uses the information from the configuration file and defines the auth_user table explicitly (this makes it easier to add custom fields):

from gluon.tools import *
db = DAL(settings.db_uri)
if settings.db_uri.startswith('gae'):
    session.connect(request, response, db=db)
mail = Mail()        # mailer
auth = Auth(db)      # authentication/authorization
crud = Crud(db)      # for CRUD helpers using auth
service = Service()  # for json, xml, jsonrpc, xmlrpc, amfrpc
plugins = PluginManager()
# enable generic views for all actions for testing purpose
response.generic_patterns = ['*']
mail.settings.server = settings.email_server
mail.settings.sender = settings.email_sender
mail.settings.login = settings.email_login
auth.settings.hmac_key = settings.security_key
# add any extra fields you may want to add to auth_user
auth.settings.extra_fields['auth_user'] = []
# use username as well as email
auth.define_tables(migrate=settings.migrate, username=True)
auth.settings.mailer = mail
auth.settings.registration_requires_verification = False
auth.settings.registration_requires_approval = False
auth.messages.verify_email = 'Click on the link http://' + request.env.http_host + URL('default', 'user', args=['verify_email']) + '/%(key)s to verify your email'
auth.settings.reset_password_requires_verification = True
auth.messages.reset_password = 'Click on the link http://' + request.env.http_host + URL('default', 'user', args=['reset_password']) + '/%(key)s to reset your password'
if settings.login_method == 'janrain':
    from gluon.contrib.login_methods.rpx_account import RPXAccount
    auth.settings.actions_disabled = ['register', 'change_password', 'request_reset_password']
    auth.settings.login_form = RPXAccount(request,
        api_key = settings.login_config.split(':')[-1],
        domain = settings.login_config.split(':')[0],
        url = "http://%s/%s/default/user/login" % (request.env.http_host, request.application))

Normally, after a web2py installation or upgrade, the welcome application is tar-gzipped into welcome.w2p and is used as the scaffolding application. You can create your own scaffolding application from an existing application using the following commands from a bash shell:

cd applications/app
tar zcvf ../../welcome.w2p *

There's more...

The web2py wizard uses a similar approach, and creates a similar 0.py configuration file. You can add more settings to the 0.py file as needed. The 0.py file may contain sensitive information, such as the security_key used to encrypt passwords, the email_login containing the password of your SMTP account, and the login_config with your Janrain password (http://www.janrain.com/). You may want to write this sensitive information in a read-only file outside the web2py tree, and read it from your 0.py instead of hardcoding it. In this way, if you choose to commit your application to a version-control system, you will not be committing the sensitive information.

The scaffolding application includes other files that you may want to customize, including views/layout.html and views/default/users.html. Some of them are the subject of upcoming recipes.

Building a simple contacts application

When you start designing a new web2py application, you go through three phases that are characterized by looking for the answers to the following three questions:

What data should the application store?
Which pages should be presented to the visitors?
How should the page content, for each page, be presented?

The answers to these three questions are implemented in the models, the controllers, and the views respectively. It is important for a good application design to try answering those questions exactly in this order, and as accurately as possible. Such answers can later be revised, and more tables, more pages, and more bells and whistles can be added in an iterative fashion. A good web2py application is designed in such a way that you can change the table definitions (add and remove fields), add pages, and change page views, without breaking the application.

A distinctive feature of web2py is that everything has a default. This means you can work on the first of those three steps without the need to write code for the second and third step. Similarly, you can work on the second step without the need to code for the third. At each step, you will be able to immediately see the result of your work, thanks to appadmin (the default database administrative interface) and generic views (every action has a view by default, until you write a custom one).

Here we consider, as a first example, an application to manage our business contacts, a CRM. We will call it Contacts. The application needs to maintain a list of companies, and a list of people who work at those companies.

How to do it...

First of all we create the model. In this step we identify which tables are needed and their fields. For each field, we determine whether it:

Must contain unique values (unique=True)
Must not contain empty values (notnull=True)
Is a reference (contains the id of a record in another table)
Is used to represent a record (format attribute)

From now on, we will assume we are working with a copy of the default scaffolding application, and we only describe the code that needs to be added or replaced. In particular, we will assume the default views/layout.html and models/db.py.
Here is a possible model representing the data we need to store, in models/db_contacts.py:

# in file: models/db_contacts.py
db.define_table('company',
    Field('name', notnull=True, unique=True),
    format='%(name)s')

db.define_table('contact',
    Field('name', notnull=True),
    Field('company', 'reference company'),
    Field('picture', 'upload'),
    Field('email', requires=IS_EMAIL()),
    Field('phone_number', requires=IS_MATCH('[\d\-() ]+')),
    Field('address'),
    format='%(name)s')

db.define_table('log',
    Field('body', 'text', notnull=True),
    Field('posted_on', 'datetime'),
    Field('contact', 'reference contact'))

Of course, a more complex data representation is possible. You may want to allow, for example, multiple users for the system, allow the same person to work for multiple companies, and keep track of changes in time. Here, we will keep it simple.

The name of this file is important. In particular, models are executed in alphabetical order, and this one must follow db.py. After this file has been created, you can try it by visiting the following URL: http://127.0.0.1:8000/contacts/appadmin, to access the web2py database administrative interface, appadmin. Without any controller or view, it provides a way to insert, select, update, and delete records.

Now we are ready to build the controller. We need to identify which pages are required by the application. This depends on the required workflow. At a minimum, we need the following pages:

An index page (the home page)
A page to list all companies
A page that lists all contacts for one selected company
A page to create a company
A page to edit/delete a company
A page to create a contact
A page to edit/delete a contact
A page that allows us to read the information about one contact and the communication logs, as well as add a new communication log

Such pages can be implemented as follows:

# in file: controllers/default.py
def index():
    return locals()

def companies():
    companies = db(db.company).select(orderby=db.company.name)
    return locals()

def contacts():
    company = db.company(request.args(0)) or redirect(URL('companies'))
    contacts = db(db.contact.company==company.id).select(orderby=db.contact.name)
    return locals()

@auth.requires_login()
def company_create():
    form = crud.create(db.company, next='companies')
    return locals()

@auth.requires_login()
def company_edit():
    company = db.company(request.args(0)) or redirect(URL('companies'))
    form = crud.update(db.company, company, next='companies')
    return locals()

@auth.requires_login()
def contact_create():
    db.contact.company.default = request.args(0)
    form = crud.create(db.contact, next='companies')
    return locals()

@auth.requires_login()
def contact_edit():
    contact = db.contact(request.args(0)) or redirect(URL('companies'))
    form = crud.update(db.contact, contact, next='companies')
    return locals()

@auth.requires_login()
def contact_logs():
    contact = db.contact(request.args(0)) or redirect(URL('companies'))
    db.log.contact.default = contact.id
    db.log.contact.readable = False
    db.log.contact.writable = False
    db.log.posted_on.default = request.now
    db.log.posted_on.readable = False
    db.log.posted_on.writable = False
    form = crud.create(db.log)
    logs = db(db.log.contact==contact.id).select(orderby=db.log.posted_on)
    return locals()

def download():
    return response.download(request, db)

def user():
    return dict(form=auth())

Make sure that you do not delete the existing user, download, and service functions in the scaffolding default.py. Notice how all pages are built using the same ingredients: select queries and crud forms.
You rarely need anything else. Also notice the following:

Some pages require a request.args(0) argument (a company ID for contacts and company_edit, a contact ID for contact_edit and contact_logs).
All selects have an orderby argument.
All crud forms have a next argument that determines the redirection after form submission.
All actions return locals(), which is a Python dictionary containing the local variables defined in the function. This is a shortcut. It is of course possible to return a dictionary with any subset of locals().
contact_create sets a default value for the new contact's company to the value passed as args(0).
contact_logs retrieves past logs after processing crud.create for a new log entry. This avoids unnecessary reloading of the page when a new log is inserted.

At this point our application is fully functional, although the look and feel and navigation can be improved:

You can create a new company at: http://127.0.0.1:8000/contacts/default/company_create
You can list all companies at: http://127.0.0.1:8000/contacts/default/companies
You can edit company #1 at: http://127.0.0.1:8000/contacts/default/company_edit/1
You can create a new contact at: http://127.0.0.1:8000/contacts/default/contact_create
You can list all contacts for company #1 at: http://127.0.0.1:8000/contacts/default/contacts/1
You can edit contact #1 at: http://127.0.0.1:8000/contacts/default/contact_edit/1
And you can access the communication log for contact #1 at: http://127.0.0.1:8000/contacts/default/contact_logs/1

You should also edit the models/menu.py file, and replace its content with the following:

response.menu = [['Companies', False, URL('default', 'companies')]]

The application now works, but we can improve it by designing a better look and feel for the actions. That's done in the views.
Create and edit file views/default/companies.html: {{extend 'layout.html'}} <h2>Companies</h2> <table> {{for company in companies:}} <tr> <td>{{=A(company.name, _href=URL('contacts', args=company.id))}}</td> <td>{{=A('edit', _href=URL('company_edit', args=company.id))}}</td> </tr> {{pass}} <tr> <td>{{=A('add company', _href=URL('company_create'))}}</td> </tr> </table> response.menu = [['Companies', False, URL('default', 'companies')]] Here is how this page looks: Create and edit file views/default/contacts.html: {{extend 'layout.html'}} <h2>Contacts at {{=company.name}}</h2> <table> {{for contact in contacts:}} <tr> <td>{{=A(contact.name, _href=URL('contact_logs', args=contact.id))}}</td> <td>{{=A('edit', _href=URL('contact_edit', args=contact.id))}}</td> </tr> {{pass}} <tr> <td>{{=A('add contact', _href=URL('contact_create', args=company.id))}}</td> </tr> </table> Here is how this page looks: Create and edit file views/default/company_create.html: {{extend 'layout.html'}} <h2>New company</h2> {{=form}} Create and edit file views/default/contact_create.html: {{extend 'layout.html'}} <h2>New contact</h2> {{=form}} Create and edit file: views/default/company_edit.html: {{extend 'layout.html'}} <h2>Edit company</h2> {{=form}} Create and edit file views/default/contact_edit.html: {{extend 'layout.html'}} <h2>Edit contact</h2> {{=form}} Create and edit file views/default/contact_logs.html: {{extend 'layout.html'}} <h2>Logs for contact {{=contact.name}}</h2> <table> {{for log in logs:}} <tr> <td>{{=log.posted_on}}</td> <td>{{=MARKMIN(log.body)}}</td> </tr> {{pass}} <tr> <td></td> <td>{{=form}}</td> </tr> </table> Here is how this page looks: Notice that in the last view, we used the function MARKMIN to render the content of the db.log.body, using the MARKMIN markup. This allows embedding links, images, anchors, font formatting information, and tables in the logs. For details about the MARKMIN syntax we refer to: http://web2py.com/examples/static/markmin.html.
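To try out the pages above with some data already in place, you can also insert a few records programmatically from the web2py shell. The following is a minimal sketch; the company and contact details are made-up sample values, and only fields defined in the model above are used:

# start an interactive shell with the Contacts application's models loaded
# (run from the web2py root directory):
#   python web2py.py -S contacts -M
import datetime

# insert one company and one contact working for it
company_id = db.company.insert(name='Acme Corp')
contact_id = db.contact.insert(
    name='Jane Doe',
    company=company_id,
    email='jane@example.com',
    phone_number='555-0100',
    address='1 Main Street')

# add a communication log entry for the contact
db.log.insert(
    body='First call: discussed the new project.',
    posted_on=datetime.datetime.now(),
    contact=contact_id)

# make the inserts permanent
db.commit()

After committing, the companies, contacts, and contact_logs pages listed earlier will show the new records.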

Packt
10 May 2010
10 min read

Introduction to Data Binding

Introduction

Data binding allows us to build data-driven applications in Silverlight in a much easier and much faster way compared to old-school methods of displaying and editing data. This article and the following one take a look at how data binding works. We'll start by looking at the general concepts of data binding in Silverlight 4 in this article.

Analyzing the term data binding immediately reveals its intentions. It is a technique that allows us to bind properties of controls to objects or collections thereof. The concept is, in fact, not new. Technologies such as ASP.NET, Windows Forms, and even older technologies such as MFC (Microsoft Foundation Classes) include data binding features. However, WPF's data binding platform has changed the way we perform data binding; it allows loosely coupled bindings. The BindingSource control in Windows Forms has to know of the type we are binding to at design time. WPF's built-in data binding mechanism does not. We simply define to which property of the source the target should bind, and at runtime the actual data—the object to which we are binding—is linked. Luckily for us, Silverlight inherits almost all data binding features from WPF and thus has a rich way of displaying data.

A binding is defined by four items:

The source or source object: This is the data we are binding to. The data that is used in data binding scenarios is in-memory data, that is, objects. Data binding itself has nothing to do with the actual data access. It works with the objects that are a result of reading from a database or communicating with a service. A typical example is a Customer object.
A property on the source object: This can, for example, be the Name property of the Customer object.
The target control: This is normally a visual control such as a TextBox or a ListBox control. In general, the target can be a DependencyObject. In Silverlight 2 and Silverlight 3, the target had to derive from FrameworkElement; this left out some important types such as transformations.
A property on the target control: This will, in some way—directly or after a conversion—display the data from the property on the source.

The data binding process can be summarized in the following image: In the previous image, we can see that the data binding engine is also capable of synchronization. This means that data binding is capable of updating the display of data automatically. If the value of the source changes, Silverlight will change the value of the target as well, without us having to write a single line of code.

Data binding isn't a complete black box either. There are hooks in the process, so we can perform custom actions on the data flowing from source to target, and vice versa. These hooks are the converters.

Our applications can still be created without data binding. However, the manual process—that is, getting data and setting all values manually on controls from code-behind—is error prone and tedious to write. Using the data-binding features in Silverlight, we will be able to write more maintainable code faster.

In this article, we'll explore how data binding works. We'll start by building a small data-driven application, which contains the most important data binding features, to get a grasp of the general concepts. We'll also see that data binding isn't tied to just binding single objects to an interface; binding an entire collection of objects is supported as well. We'll also be looking at the binding modes.
They allow us to specify how the data will flow (from source to target, target to source, or both). We'll finish this article series by looking at the support that Blend 4 provides to build applications that use data binding features. In the recipes of this article and the following one, we'll assume that we are building a simple banking application using Silverlight. Each of the recipes in this article will highlight a part of this application where the specific feature comes into play. The following screenshot shows the resulting Silverlight banking application: Displaying data in Silverlight applications When building Silverlight applications, we often need to display data to the end user. Applications such as an online store with a catalogue and a shopping cart, an online banking application and so on, need to display data of some sort. Silverlight contains a rich data binding platform that will help us to write data-driven applications faster and using less code. In this recipe, we'll build a form that displays the data of the owner of a bank account using data binding. Getting ready To follow along with this recipe, you can use the starter solution located in the Chapter02/ SilverlightBanking_Displaying_Data_Starter folder in the code bundle available on the Packt website. The finished application for this recipe can be found in the Chapter02/SilverlightBanking_Displaying_Data_Completed folder. How to do it... Let's assume that we are building a form, part of an online banking application, in which we can view the details of the owner of the account. Instead of wiring up the fields of the owner manually, we'll use data binding. To get data binding up and running, carry out the following steps: Open the starter solution, as outlined in the Getting Ready section. The form we are building will bind to data. Data in data binding is in-memory data, not the data that lives in a database (it can originate from a database though). The data to which we are binding is an instance of the Owner class. The following is the code for the class. Add this code in a new class fi le called Owner in the Silverlight project. public class Owner{public int OwnerId { get; set; }public string FirstName { get; set; }public string LastName { get; set; }public string Address { get; set; }public string ZipCode { get; set; }public string City { get; set; }public string State { get; set; }public string Country { get; set; }public DateTime BirthDate { get; set; }public DateTime CustomerSince { get; set; }public string ImageName { get; set; }public DateTime LastActivityDate { get; set; }public double CurrentBalance { get; set; }public double LastActivityAmount { get; set; }} Now that we've created the class, we are able to create an instance of it in the MainPage.xaml.cs file, the code-behind class of MainPage.xaml. In the constructor, we call the InitializeOwner method, which creates an instance of the Owner class and populates its properties. 
private Owner owner;public MainPage(){InitializeComponent();//initialize owner dataInitializeOwner();}private void InitializeOwner(){owner = new Owner();owner.OwnerId = 1234567;owner.FirstName = "John";owner.LastName = "Smith";owner.Address = "Oxford Street 24";owner.ZipCode = "W1A";owner.City = "London";owner.Country = "United Kingdom";owner.State = "NA";owner.ImageName = "man.jpg";owner.LastActivityAmount = 100;owner.LastActivityDate = DateTime.Today;owner.CurrentBalance = 1234.56;owner.BirthDate = new DateTime(1953, 6, 9);owner.CustomerSince = new DateTime(1999, 12, 20);} Let's now focus on the form itself and build its UI. For this sample, we're not making the data editable. So for every field of the Owner class, we'll use a TextBlock. To arrange the controls on the screen, we'll use a Grid called OwnerDetailsGrid. This Grid can be placed inside the LayoutRoot Grid.We will want the Text property of each TextBlock to be bound to a specifi c property of the Owner instance. This can be done by specifying this binding using the Binding "markup extension" on this property. <Grid x_Name="OwnerDetailsGrid"VerticalAlignment="Stretch"HorizontalAlignment="Left"Background="LightGray"Margin="3 5 0 0"Width="300" ><Grid.RowDefinitions><RowDefinition Height="100"></RowDefinition><RowDefinition Height="30"></RowDefinition><RowDefinition Height="30"></RowDefinition><RowDefinition Height="30"></RowDefinition><RowDefinition Height="30"></RowDefinition><RowDefinition Height="30"></RowDefinition><RowDefinition Height="30"></RowDefinition><RowDefinition Height="30"></RowDefinition><RowDefinition Height="30"></RowDefinition><RowDefinition Height="30"></RowDefinition><RowDefinition Height="30"></RowDefinition><RowDefinition Height="30"></RowDefinition><RowDefinition Height="*"></RowDefinition></Grid.RowDefinitions><Grid.ColumnDefinitions><ColumnDefinition></ColumnDefinition><ColumnDefinition></ColumnDefinition></Grid.ColumnDefinitions><Image x_Name="OwnerImage"Grid.Row="0"Width="100"Height="100"Stretch="Uniform"HorizontalAlignment="Left"Margin="3"Source="/CustomerImages/man.jpg"Grid.ColumnSpan="2"></Image><TextBlock x_Name="OwnerIdTextBlock"Grid.Row="1"FontWeight="Bold"Margin="2"Text="Owner ID:"></TextBlock><TextBlock x_Name="FirstNameTextBlock"Grid.Row="2"FontWeight="Bold"Margin="2"Text="First name:"></TextBlock><TextBlock x_Name="LastNameTextBlock"Grid.Row="3"FontWeight="Bold"Margin="2"Text="Last name:"></TextBlock><TextBlock x_Name="AddressTextBlock"Grid.Row="4"FontWeight="Bold"Margin="2"Text="Adress:"></TextBlock><TextBlock x_Name="ZipCodeTextBlock"Grid.Row="5"FontWeight="Bold"Margin="2"Text="Zip code:"></TextBlock><TextBlock x_Name="CityTextBlock"Grid.Row="6"FontWeight="Bold"Margin="2"Text="City:"></TextBlock><TextBlock x_Name="StateTextBlock"Grid.Row="7"FontWeight="Bold"Margin="2"Text="State:"></TextBlock><TextBlock x_Name="CountryTextBlock"Grid.Row="8"FontWeight="Bold"Margin="2"Text="Country:"></TextBlock><TextBlock x_Name="BirthDateTextBlock"Grid.Row="9"FontWeight="Bold"Margin="2"Text="Birthdate:"></TextBlock><TextBlock x_Name="CustomerSinceTextBlock"Grid.Row="10"FontWeight="Bold"Margin="2"Text="Customer since:"></TextBlock><TextBlock x_Name="OwnerIdValueTextBlock"Grid.Row="1"Grid.Column="1"Margin="2"Text="{Binding OwnerId}"></TextBlock><TextBlock x_Name="FirstNameValueTextBlock"Grid.Row="2"Grid.Column="1"Margin="2"Text="{Binding FirstName}"></TextBlock><TextBlock x_Name="LastNameValueTextBlock"Grid.Row="3"Grid.Column="1"Margin="2"Text="{Binding LastName}"></TextBlock><TextBlock 
x_Name="AddressValueTextBlock"Grid.Row="4"Grid.Column="1"Margin="2"Text="{Binding Address}"></TextBlock><TextBlock x_Name="ZipCodeValueTextBlock"Grid.Row="5"Grid.Column="1"Margin="2"Text="{Binding ZipCode}"></TextBlock><TextBlock x_Name="CityValueTextBlock"Grid.Row="6"Grid.Column="1"Margin="2"Text="{Binding City}"></TextBlock><TextBlock x_Name="StateValueTextBlock"Grid.Row="7"Grid.Column="1"Margin="2"Text="{Binding State}"></TextBlock><TextBlock x_Name="CountryValueTextBlock"Grid.Row="8"Grid.Column="1"Margin="2"Text="{Binding Country}"></TextBlock><TextBlock x_Name="BirthDateValueTextBlock"Grid.Row="9"Grid.Column="1"Margin="2"Text="{Binding BirthDate}"></TextBlock><TextBlock x_Name="CustomerSinceValueTextBlock"Grid.Row="10"Grid.Column="1"Margin="2"Text="{Binding CustomerSince}"></TextBlock><Button x_Name="OwnerDetailsEditButton"Grid.Row="11"Grid.ColumnSpan="2"Margin="3"Content="Edit details..."HorizontalAlignment="Right"VerticalAlignment="Top"></Button><TextBlock x_Name="CurrentBalanceValueTextBlock"Grid.Row="12"Grid.Column="1"Margin="2"Text="{Binding CurrentBalance}" ></TextBlock><TextBlock x_Name="LastActivityDateValueTextBlock"Grid.Row="13"Grid.Column="1"Margin="2"Text="{Binding LastActivityDate}" ></TextBlock><TextBlock x_Name="LastActivityAmountValueTextBlock"Grid.Row="14"Grid.Column="1"Margin="2"Text="{Binding LastActivityAmount}" ></TextBlock></Grid> At this point, all the controls know what property they need to bind to. However, we haven't specifi ed the actual link. The controls don't know about the Owner instance we want them to bind to. Therefore, we can use DataContext. We specify the DataContext of the OwnerDetailsGrid to be the Owner instance. Each control within that container can then access the object and bind to its properties . Setting the DataContext in done using the following code: public MainPage(){InitializeComponent();//initialize owner dataInitializeOwner();OwnerDetailsGrid.DataContext = owner;} The result can be seen in the following screenshot:
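The introduction mentioned converters as the hooks that let us act on the data flowing between source and target. The recipe above does not need one, but for completeness, here is a minimal sketch of what such a converter could look like in this application, for example to format the BirthDate property for display. The DateConverter class name and the date format are illustrative choices, not part of the recipe:

using System;
using System.Globalization;
using System.Windows.Data;

// Converts a DateTime coming from the source into a short date string for display.
public class DateConverter : IValueConverter
{
    public object Convert(object value, Type targetType,
        object parameter, CultureInfo culture)
    {
        DateTime date = (DateTime)value;
        return date.ToString("d", culture);
    }

    // Used by TwoWay bindings when the value travels back to the source.
    public object ConvertBack(object value, Type targetType,
        object parameter, CultureInfo culture)
    {
        return DateTime.Parse((string)value, culture);
    }
}

Once declared as a resource (for example <local:DateConverter x:Key="DateConverter" /> in the page resources), it can be referenced from a binding with Text="{Binding BirthDate, Converter={StaticResource DateConverter}}".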
Packt
18 Feb 2010
4 min read

AJAX Form Validation: Part 1

The server is the last line of defense against invalid data, so even if you implement client-side validation, server-side validation is mandatory. The JavaScript code that runs on the client can be disabled permanently from the browser's settings and/or it can be easily modified or bypassed. Implementing AJAX form validation The form validation application we will build in this article validates the form at the server side on the classic form submit, implementing AJAX validation while the user navigates through the form. The final validation is performed at the server, as shown in Figure 5-1: Doing a final server-side validation when the form is submitted should never be considered optional. If someone disables JavaScript in the browser settings, AJAX validation on the client side clearly won't work, exposing sensitive data, and thereby allowing an evil-intentioned visitor to harm important data on the server (for example, through SQL injection). Always validate user input on the server. As shown in the preceding figure, the application you are about to build validates a registration form using both AJAX validation (client side) and typical server-side validation: AJAX-style (client side): It happens when each form field loses focus (onblur). The field's value is immediately sent to and evaluated by the server, which then returns a result (0 for failure, 1 for success). If validation fails, an error message will appear and notify the user about the failed validation, as shown in Figure 5-3. PHP-style (server side): This is the usual validation you would do on the server—checking user input against certain rules after the entire form is submitted. If no errors are found and the input data is valid, the browser is redirected to a success page, as shown in Figure 5-4. If validation fails, however, the user is sent back to the form page with the invalid fields highlighted, as shown in Figure 5-3. Both AJAX validation and PHP validation check the entered data against our application's rules: Username must not already exist in the database Name field cannot be empty A gender must be selected Month of birth must be selected Birthday must be a valid date (between 1-31) Year of birth must be a valid year (between 1900-2000) The date must exist in the number of days for each month (that is, there's no February 31) E-mail address must be written in a valid email format Phone number must be written in standard US form: xxx-xxx-xxxx The I've read the Terms of Use checkbox must be selected Watch the application in action in the following screenshots: XMLHttpRequest, version 2 We do our best to combine theory and practice, before moving on to implementing the AJAX form validation script, we'll have another quick look at our favorite AJAX object—XMLHttpRequest. On this occasion, we will step up the complexity (and functionality) a bit and use everything we have learned until now. We will continue to build on what has come before as we move on; so again, it's important that you take the time to be sure you've understood what we are doing here. Time spent on digging into the materials really pays off when you begin to build your own application in the real world. Our OOP JavaScript skills will be put to work improving the existing script that used to make AJAX requests. 
In addition to the design that we've already discussed, we're creating the following features as well:

Flexible design, so that the object can be easily extended for future needs and purposes
The ability to set all the required properties via a JSON object

We'll package this improved XMLHttpRequest functionality in a class named XmlHttp that we'll be able to use in other exercises as well. You can see the class diagram in the following screenshot, along with the diagrams of two helper classes:

settings is the class we use to create the call settings; we supply an instance of this class as a parameter to the constructor of XmlHttp
complete is a callback delegate, pointing to the function we want executed when the call completes

The final purpose of this exercise is to create a class named XmlHttp that we can easily use in other projects to perform AJAX calls. With our goals in mind, let's get to it!

Time for action – the XmlHttp object

In the ajax folder, create a folder named validate, which will host the exercises in this article.
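Before building the files of this exercise, here is a quick sketch of how the finished XmlHttp class is meant to be consumed. The settings shown (url, type, async, data, complete, and showErrors) mirror the ones used later in validate.js; the URL and POST data below are only placeholders:

// describe the request in a single settings object
var settings = {
  url: "validate.php",                      // server script handling the call
  type: "POST",                             // HTTP method
  async: true,                              // perform the call asynchronously
  data: "fieldID=txtName&inputValue=John",  // placeholder POST data
  showErrors: true,                         // display detailed error messages
  // executed when the response has been received
  complete: function (xhr, response, status) {
    alert("Server response: " + xhr.responseText);
  }
};

// making the request is as simple as creating the object
var xmlHttp = new XmlHttp(settings);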

Packt
26 Apr 2010
9 min read

Data Manipulation in Silverlight 4 Data Grid

Displaying data in a customized DataGrid Displaying data is probably the most straightforward task we can ask the DataGrid to do for us. In this recipe, we'll create a collection of data and hand it over to the DataGrid for display. While the DataGrid may seem to have a rather fixed layout, there are many options available on this control that we can use to customize it. In this recipe, we'll focus on getting the data to show up in the DataGrid and customize it to our likings. Getting ready In this recipe, we'll start from an empty Silverlight application. The finished solution for this recipe can be found in the Chapter04/Datagrid_Displaying_Data_Completed folder in the code bundle that is available on the Packt website. How to do it... We'll create a collection of Book objects and display this collection in a DataGrid. However,we want to customize the DataGrid. More specifically, we want to make the DataGridfixed. In other words, we don't want the user to make any changes to the bound data or move the columns around. Also, we want to change the visual representation of the DataGrid by changing the background color of the rows. We also want the vertical column separators to be hidden and the horizontal ones to get a different color. Finally, we'll hook into the LoadingRow event, which will give us access to the values that are bound to a row and based on that value, the LoadingRow event will allow us to make changes to the visual appearance of the row. To create this DataGrid, you'll need to carry out the following steps: Start a new Silverlight solution called DatagridDisplayingData in Visual Studio. We'll start by creating the Book class. Add a new class to the Silverlight project in the solution and name this class as Book. Note that this class uses two enumerations—one for the Category and the other for the Language. These can be found in the sample code. The following is the code for the Book class: public class Book { public string Title { get; set; } public string Author { get; set; } public int PageCount { get; set; } public DateTime PurchaseDate { get; set; } public Category Category { get; set; } public string Publisher { get; set; } public Languages Language { get; set; } public string ImageName { get; set; } public bool AlreadyRead { get; set; } } In the code-behind of the generated MainPage.xaml file, we need to create a generic list of Book instances (List) and load data into this collection.This is shown in the following code: private List<Book> bookCollection; public MainPage() { InitializeComponent(); LoadBooks(); } private void LoadBooks() { bookCollection = new List<Book>(); Book b1 = new Book(); b1.Title = "Book AAA"; b1.Author = "Author AAA"; b1.Language = Languages.English; b1.PageCount = 350; b1.Publisher = "Publisher BBB"; b1.PurchaseDate = new DateTime(2009, 3, 10); b1.ImageName = "AAA.png"; b1.AlreadyRead = true; b1.Category = Category.Computing; bookCollection.Add(b1); ... } Next, we'll add a DataGrid to the MainPage.xaml file. For now, we won't add any extra properties on the DataGrid. It's advisable to add it to the page by dragging it from the toolbox, so that Visual Studio adds the correct references to the required assemblies in the project, as well as adds the namespace mapping in the XAML code. Remove the AutoGenerateColumns="False" for now so that we'll see all the properties of the Book class appear in the DataGrid. 
The following line of code shows a default DataGrid with its name set to BookDataGrid: <sdk:DataGrid x_Name="BookDataGrid"></sdk:DataGrid> Currently, no data is bound to the DataGrid. To make the DataGrid show the book collection, we set the ItemsSource property from the code-behind in the constructor. This is shown in the following code: public MainPage() { InitializeComponent(); LoadBooks(); BookDataGrid.ItemsSource = bookCollection; } Running the code now shows a default DataGrid that generates a column for each public property of the Book type. This happens because the AutoGenerateColumns property is True by default. Let's continue by making the DataGrid look the way we want it to look. By default, the DataGrid is user-editable, so we may want to change this feature. Setting the IsReadOnly property to True will make it impossible for a user to edit the data in the control. We can lock the display even further by setting both the CanUserResizeColumns and the CanUserReorderColumns properties to False. This will prohibit the user from resizing and reordering the columns inside the DataGrid, which are enabled by default. This is shown in the following code: <sdk:DataGrid x_Name="BookDataGrid" AutoGenerateColumns="True" CanUserReorderColumns="False" CanUserResizeColumns="False" IsReadOnly="True"> </sdk:DataGrid> The DataGrid also offers quite an impressive list of properties that we can use to change its appearance. By adding the following code, we specify alternating the background colors (the RowBackground and AlternatingRowBackground properties), column widths (the ColumnWidth property), and row heights (the RowHeight property). We also specify how the gridlines should be displayed (the GridLinesVisibility and HorizontalGridLinesBrushs properties). Finally, we specify that we also want a row header to be added (the HeadersVisibility property ). <sdk:DataGrid x_Name="BookDataGrid" AutoGenerateColumns="True" CanUserReorderColumns="False" CanUserResizeColumns="False" RowBackground="#999999" AlternatingRowBackground="#CCCCCC" ColumnWidth="90" RowHeight="30" GridLinesVisibility="Horizontal" HeadersVisibility="All" HorizontalGridLinesBrush="Blue"> </sdk:DataGrid> We can also get a hook into the loading of the rows. For this, the LoadingRow event has to be used. This event is triggered when each row gets loaded. Using this event, we can get access to a row and change its properties based on custom code. In the following code, we are specifying that if the book is a thriller, we want the row to have a red background: private void BookDataGrid_LoadingRow(object sender, DataGridRowEventArgs e) { Book loadedBook = e.Row.DataContext as Book; if (loadedBook.Category == Category.Thriller) { e.Row.Background = new SolidColorBrush(Colors.Red); //It's a thriller! e.Row.Height = 40; } else { e.Row.Background = null; } } After completing these steps, we have the DataGrid that we wanted. It displays the data (including headers), fixes the columns and makes it impossible for the user to edit the data. Also, the color of the rows and alternating rows is changed, the vertical grid lines are hidden, and a different color is applied to the horizontal grid lines. Using the LoadingRow event, we have checked whether the book being added is of the "Thriller" category, and if so, a red color is applied as the background color for the row. The result can be seen in the following screenshot: How it works... 
The DataGrid allows us to display the data easily, while still offering us many customization options to format the control as needed. The DataGrid is defined in the System.Windows.Controls namespace, which is located in the System.Windows.Controls.Data assembly. By default, this assembly is not referenced while creating a new Silverlight application. Therefore, the following extra references are added while dragging the control from the toolbox for the first time: System.ComponentModel.DataAnnotations System.Windows.Controls.Data System.Windows.Controls.Data.Input System.Windows.Data While compiling the application, the corresponding assemblies are added to the XAP file (as can be seen in the following screenshot, which shows the contents of the XAP file). These assemblies need to be added because while installing the Silverlight plugin, they aren't installed as a part of the CLR. This is done in order to keep the plugin size small. However, when we use them in our application, they are embedded as part of the application. This results in an increase of the download size of the XAP file. In most circumstances, this is not a problem. However, if the file size is an important requirement, then it is essential to keep an eye on this. Also, Visual Studio will include the following namespace mapping into the XAML file: From then on, we can use the control as shown in the following line of code: <sdk:DataGrid x_Name="BookDataGrid"> </sdk:DataGrid> Once the control is added on the page, we can use it in a data binding scenario. To do so, we can point the ItemsSource property to any IEnumerable implementation. Each row in the DataGrid will correspond to an object in the collection. When AutoGenerateColumns is set to True (the default), the DataGrid uses a refl ection on the type of objects bound to it. For each public property it encounters, it generates a corresponding column. Out of the box, the DataGrid includes a text column, a checkbox column, and a template column. For all the types that can't be displayed, it uses the ToString method and a text column. If we want the DataGrid to feature automatic synchronization, the collection should implement the INotifyCollectionChanged interface. If changes to the objects are to be refl ected in the DataGrid, then the objects in the collection should themselves implement the INotifyPropertyChanged interface. There's more While loading large amounts of data into the DataGrid, the performance will still be very good. This is the result of the DataGrid implementing UI virtualization, which is enabled by default. Let's assume that the DataGrid is bound to a collection of 1,000,000 items (whether or not this is useful is another question). Loading all of these items into memory would be a time-consuming task as well as a big performance hit. Due to UI virtualization, the control loads only the rows it's currently displaying. (It will actually load a few more to improve the scrolling experience.) While scrolling, a small lag appears when the control is loading the new items. Since Silverlight 3, the ListBox also features UI virtualization. Inserting, updating, and deleting data in a DataGrid The DataGrid is an outstanding control to use while working with large amounts of data at the same time. Through its Excel-like interface, not only can we easily view the data, but also add new records or update and delete existing ones. In this recipe, we'll take a look at how to build a DataGrid that supports all of the above actions on a collection of items. 
Getting ready This recipe builds on the code that was created in the previous recipe. To follow along with this recipe, you can keep using your code or use the starter solution located in the Chapter04/Datagrid_Editing_Data_Starter folder in the code bundle available on the Packt website. The finished solution for this recipe can be found in the Chapter04/Datagrid_Editing_Data_Completed folder.
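As a reminder of the synchronization requirements mentioned above: for added or removed rows to show up automatically, the bound collection should implement INotifyCollectionChanged, and for edited values to be reflected in the cells, the bound objects should implement INotifyPropertyChanged. The following is only a rough sketch of that pattern applied to the Book data, not the recipe's finished code; ObservableCollection<T> already implements INotifyCollectionChanged for us:

using System.Collections.ObjectModel;
using System.ComponentModel;

// A collection that notifies the DataGrid when books are added or removed.
public class BookCatalog
{
    public ObservableCollection<Book> Books { get; private set; }

    public BookCatalog()
    {
        Books = new ObservableCollection<Book>();
    }
}

// Each bound object raises PropertyChanged so edits made in code
// are reflected in the cells that display them (only Title is shown here).
public class Book : INotifyPropertyChanged
{
    private string title;
    public string Title
    {
        get { return title; }
        set
        {
            title = value;
            OnPropertyChanged("Title");
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}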

Packt
19 Feb 2010
11 min read

AJAX Form Validation: Part 2

How to implement AJAX form validation In this article, we redesigned the code for making AJAX requests when creating the XmlHttp class. The AJAX form validation application makes use of these techniques. The application contains three pages: One page renders the form to be validated Another page validates the input The third page is displayed if the validation is successful The application will have a standard structure, composed of these files: index.php: It is the file loaded initially by the user. It contains references to the necessary JavaScript files and makes asynchronous requests for validation to validate.php. index_top.php: It is a helper file loaded by index.php and contains several objects for rendering the HTML form. validate.css: It is the file containing the CSS styles for the application. json2.js: It is the JavaScript file used for handling JSON objects. xhr.js: It is the JavaScript file that contains our XmlHttp object used for making AJAX requests. validate.js: It is the JavaScript file loaded together with index.php on the client side. It makes asynchronous requests to a PHP script called validate.php to perform the AJAX validation. validate.php: It is a PHP script residing on the same server as index.php, and it offers the server-side functionality requested asynchronously by the JavaScript code in index.php. validate.class.php: It is a PHP script that contains a class called Validate, which contains the business logic and database operations to support the functionality of validate.php. config.php: It will be used to store global configuration options for your application, such as database connection data, and so on. error_handler.php: It contains the error-handling mechanism that changes the text of an error message into a human-readable format. allok.php: It is the page to be displayed if the validation is successful. Time for action – AJAX form validation Connect to the ajax database and create a table named users with the following code: CREATE TABLE users ( user_id INT UNSIGNED NOT NULL AUTO_INCREMENT, user_name VARCHAR(32) NOT NULL, PRIMARY KEY (user_id) ); Execute the following INSERT commands to populate your users table with some sample data: INSERT INTO users (user_name) VALUES ('bogdan'); INSERT INTO users (user_name) VALUES ('audra'); INSERT INTO users (user_name) VALUES ('cristian'); Let's start writing the code with the presentation tier. We'll define the styles for our form by creating a file named validate.css, and adding the following code to it: body { font-family: Arial, Helvetica, sans-serif; font-size: 0.8em; color: #000000; } label { float: left; width: 150px; font-weight: bold; } input, select { margin-bottom: 3px; } .button { font-size: 2em; } .left { margin-left: 150px; } .txtFormLegend { color: #777777; font-weight: bold; font-size: large; } .txtSmall { color: #999999; font-size: smaller; } .hidden { display: none; } .error { display: block; margin-left: 150px; color: #ff0000; } Now create a new file named index_top.php, and add the following code to it. This script will be loaded from the main page index.php. <?php // enable PHP session session_start(); // Build HTML <option> tags function buildOptions($options, $selectedOption) { foreach ($options as $value => $text) { if ($value == $selectedOption) { echo '<option value="' . $value . '" selected="selected">' . $text . '</option>'; } else { echo '<option value="' . $value . '">' . $text . 
'</option>'; } } } // initialize gender options array $genderOptions = array("0" => "[Select]", "1" => "Male", "2" => "Female"); // initialize month options array $monthOptions = array("0" => "[Select]", "1" => "January", "2" => "February", "3" => "March", "4" => "April", "5" => "May", "6" => "June", "7" => "July", "8" => "August", "9" => "September", "10" => "October", "11" => "November", "12" => "December"); // initialize some session variables to prevent PHP throwing // Notices if (!isset($_SESSION['values'])) { $_SESSION['values']['txtUsername'] = ''; $_SESSION['values']['txtName'] = ''; $_SESSION['values']['selGender'] = ''; $_SESSION['values']['selBthMonth'] = ''; $_SESSION['values']['txtBthDay'] = ''; $_SESSION['values']['txtBthYear'] = ''; $_SESSION['values']['txtEmail'] = ''; $_SESSION['values']['txtPhone'] = ''; $_SESSION['values']['chkReadTerms'] = ''; } if (!isset($_SESSION['errors'])) { $_SESSION['errors']['txtUsername'] = 'hidden'; $_SESSION['errors']['txtName'] = 'hidden'; $_SESSION['errors']['selGender'] = 'hidden'; $_SESSION['errors']['selBthMonth'] = 'hidden'; $_SESSION['errors']['txtBthDay'] = 'hidden'; $_SESSION['errors']['txtBthYear'] = 'hidden'; $_SESSION['errors']['txtEmail'] = 'hidden'; $_SESSION['errors']['txtPhone'] = 'hidden'; $_SESSION['errors']['chkReadTerms'] = 'hidden'; } ?> Now create index.php, and add the following code to it: <?php require_once ('index_top.php'); ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www. w3.org/TR/xhtml11/DTD/xhtml11.dtd"> <html > <head> <title>Degradable AJAX Form Validation with PHP and MySQL</title> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <link href="validate.css" rel="stylesheet" type="text/css" /> </head> <body onload="setFocus();"> <script type="text/javascript" src="json2.js"></script> <script type="text/javascript" src="xhr.js"></script> <script type="text/javascript" src="validate.js"></script> <fieldset> <legend class="txtFormLegend"> New User Registratio Form </legend> <br /> <form name="frmRegistration" method="post" action="validate.php"> <input type="hidden" name="validationType" value="php"/> <!-- Username --> <label for="txtUsername">Desired username:</label> <input id="txtUsername" name="txtUsername" type="text" onblur="validate(this.value, this.id)" value="<?php echo $_SESSION['values'] ['txtUsername'] ?>" /> <span id="txtUsernameFailed" class="<?php echo $_SESSION['errors']['txtUsername'] ?>"> This username is in use, or empty username field. </span> <br /> <!-- Name --> <label for="txtName">Your name:</label> <input id="txtName" name="txtName" type="text" onblur="validate(this.value, this.id)" value="<?php echo $_SESSION['values']['txtName'] ?>" /> <span id="txtNameFailed" class="<?php echo $_SESSION['errors']['txtName']?>"> Please enter your name. </span> <br /> <!-- Gender --> <label for="selGender">Gender:</label> <select name="selGender" id="selGender" onblur="validate(this.value, this.id)"> <?php buildOptions($genderOptions, $_SESSION['values']['selGender']); ?> </select> <span id="selGenderFailed" class="<?php echo $_SESSION['errors']['selGender'] ?>"> Please select your gender. 
</span> <br /> <!-- Birthday --> <label for="selBthMonth">Birthday:</label> <!-- Month --> <select name="selBthMonth" id="selBthMonth" onblur="validate(this.value, this.id)"> <?php buildOptions($monthOptions, $_SESSION['values']['selBthMonth']); ?> </select> &nbsp;-&nbsp; <!-- Day --> <input type="text" name="txtBthDay" id="txtBthDay" maxlength="2" size="2" onblur="validate(this.value, this.id)" value="<?php echo $_SESSION['values']['txtBthDay'] ?>" /> &nbsp;-&nbsp; <!-- Year --> <input type="text" name="txtBthYear" id="txtBthYear" maxlength="4" size="2" onblur="validate(document.getElementById ('selBthMonth').options[document.getElementById ('selBthMonth').selectedIndex].value + '#' + document.getElementById('txtBthDay').value + '#' + this.value, this.id)" value="<?php echo $_SESSION['values']['txtBthYear'] ?>" /> <!-- Month, Day, Year validation --> <span id="selBthMonthFailed" class="<?php echo $_SESSION['errors']['selBthMonth'] ?>"> Please select your birth month. </span> <span id="txtBthDayFailed" class="<?php echo $_SESSION['errors']['txtBthDay'] ?>"> Please enter your birth day. </span> <span id="txtBthYearFailed" class="<?php echo $_SESSION['errors']['txtBthYear'] ?>"> Please enter a valid date. </span> <br /> <!-- Email --> <label for="txtEmail">E-mail:</label> <input id="txtEmail" name="txtEmail" type="text" onblur="validate(this.value, this.id)" value="<?php echo $_SESSION['values']['txtEmail'] ?>" /> <span id="txtEmailFailed" class="<?php echo $_SESSION['errors']['txtEmail'] ?>"> Invalid e-mail address. </span> <br /> <!-- Phone number --> <label for="txtPhone">Phone number:</label> <input id="txtPhone" name="txtPhone" type="text" onblur="validate(this.value, this.id)" value="<?php echo $_SESSION['values']['txtPhone'] ?>" /> <span id="txtPhoneFailed" class="<?php echo $_SESSION['errors']['txtPhone'] ?>"> Please insert a valid US phone number (xxx-xxx-xxxx). </span> <br /> <!-- Read terms checkbox --> <input type="checkbox" id="chkReadTerms" name="chkReadTerms" class="left" onblur="validate(this.checked, this.id)" <?php if ($_SESSION['values']['chkReadTerms'] == 'on') echo 'checked="checked"' ?> /> I've read the Terms of Use <span id="chkReadTermsFailed" class="<?php echo$_SESSION['errors'] ['chkReadTerms'] ?>"> Please make sure you read the Terms of Use. </span> <!-- End of form --> <hr /> <span class="txtSmall">Note: All fields arerequired. </span> <br /><br /> <input type="submit" name="submitbutton" value="Register" class="left button" /> </form> </fieldset> </body> </html> Create a new file named allok.php, and add the following code to it: <?php // clear any data saved in the session session_start(); session_destroy(); ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd"> <html > <head> <title>AJAX Form Validation</title> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <link href="validate.css" rel="stylesheet" type="text/css" /> </head> <body> Registration Successful!<br /> <a href="index.php" title="Go back">&lt;&lt; Go back</a> </body> </html> Copy json2.js (which you downloaded in a previous exercise from http://json.org/json2.js) to your ajax/validate folder. Create a file named validate.js. 
This file performs the client-side functionality, including the AJAX requests: // holds the remote server address var serverAddress = "validate.php"; // when set to true, display detailed error messages var showErrors = true; // the function handles the validation for any form field function validate(inputValue, fieldID) { // the data to be sent to the server through POST var data = "validationType=ajax&inputValue=" + inputValue + "&fieldID=" + fieldID; // build the settings object for the XmlHttp object var settings = { url: serverAddress, type: "POST", async: true, complete: function (xhr, response, status) { if (xhr.responseText.indexOf("ERRNO") >= 0 || xhr.responseText.indexOf("error:") >= 0 || xhr.responseText.length == 0) { alert(xhr.responseText.length == 0 ? "Server error." : response); } result = response.result; fieldID = response.fieldid; // find the HTML element that displays the error message = document.getElementById(fieldID + "Failed"); // show the error or hide the error message.className = (result == "0") ? "error" : "hidden"; }, data: data, showErrors: showErrors }; // make a server request to validate the input data var xmlHttp = new XmlHttp(settings); } // sets focus on the first field of the form function setFocus() { document.getElementById("txtUsername").focus(); } Now it's time to add the server-side logic. Start by creating config.php, with the following code in it: <?php // defines database connection data define('DB_HOST', 'localhost'); define('DB_USER', 'ajaxuser'); define('DB_PASSWORD', 'practical'); define('DB_DATABASE', 'ajax'); ?> Now create the error handler code in a file named error_handler.php: <?php // set the user error handler method to be error_handler set_error_handler('error_handler', E_ALL); // error handler function function error_handler($errNo, $errStr, $errFile, $errLine) { // clear any output that has already been generated if(ob_get_length()) ob_clean(); // output the error message $error_message = 'ERRNO: ' . $errNo . chr(10) . 'TEXT: ' . $errStr . chr(10) . 'LOCATION: ' . $errFile . ', line ' . $errLine; echo $error_message; // prevent processing any more PHP scripts exit; } ?> The PHP script that handles the client's AJAX calls, and also handles the validation on form submit, is validate.php: <?php // start PHP session session_start(); // load error handling script and validation class require_once ('error_handler.php'); require_once ('validate.class.php'); // Create new validator object $validator = new Validate(); // read validation type (PHP or AJAX?) $validationType = ''; if (isset($_POST['validationType'])) { $validationType = $_POST['validationType']; } // AJAX validation or PHP validation? if ($validationType == 'php') { // PHP validation is performed by the ValidatePHP method, //which returns the page the visitor should be redirected to //(which is allok.php if all the data is valid, or back to //index.php if not) header('Location:' . $validator->ValidatePHP()); } else { // AJAX validation is performed by the ValidateAJAX method. //The results are used to form a JSON document that is sent //back to the client $response = array('result' => $validator->ValidateAJAX ($_POST['inputValue'],$_POST['fieldID']), 'fieldid' => $_POST['fieldID'] ); // generate the response if(ob_get_length()) ob_clean(); header('Content-Type: application/json'); echo json_encode($response); } ?>
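The article does not reach the listing of validate.class.php, but the code above already fixes its contract: a ValidateAJAX($inputValue, $fieldID) method that returns 0 or 1, and a ValidatePHP() method that returns the page to redirect to. Purely as an illustration, the AJAX part could be sketched along the following lines for two of the rules (username and name); this is not the book's actual class, which also covers the remaining fields and validates the whole form for ValidatePHP():

<?php
require_once('config.php');

// Hypothetical sketch of the Validate class used by validate.php
class Validate
{
    private $mysqli;

    public function __construct()
    {
        // open the database connection using the settings from config.php
        $this->mysqli = new mysqli(DB_HOST, DB_USER, DB_PASSWORD, DB_DATABASE);
    }

    // validates a single field; returns '1' for valid input, '0' otherwise
    public function ValidateAJAX($inputValue, $fieldID)
    {
        switch ($fieldID)
        {
            case 'txtUsername':
                // empty usernames or usernames already taken fail validation
                if (trim($inputValue) == '')
                    return '0';
                $stmt = $this->mysqli->prepare(
                    'SELECT COUNT(*) FROM users WHERE user_name = ?');
                $stmt->bind_param('s', $inputValue);
                $stmt->execute();
                $stmt->bind_result($count);
                $stmt->fetch();
                $stmt->close();
                return ($count == 0) ? '1' : '0';

            case 'txtName':
                // the name simply cannot be empty
                return (trim($inputValue) != '') ? '1' : '0';
        }
        return '0';
    }
}
?>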
Packt
28 Dec 2009
9 min read

AJAX Chat Implementation: Part 2

Create a new file named index.html, and add this code to it: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" " http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xml_lang="en" lang="en"> <head> <title>AJAX Chat</title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /> <link href="chat.css" rel="stylesheet" type="text/css" /> <script type="text/javascript" src="jQuery-1.3.2.js" ></script> <script type="text/javascript" src="chat.js" ></script> </head> <body> <noscript> Your browser does not support JavaScript!! </noscript> <table id="content"> <tr> <td> <div id="scroll"> </div> </td> <td id="colorpicker"> <img src="palette.png" id="palette" alt="Color Palette" border="1"/> <br /> <input id="color" type="hidden" readonly="true" value="#000000" /> <span id="sampleText"> (text will look like this) </span> </td> </tr> </table> <div> <input type="text" id="userName" maxlength="10" size="10"/> <input type="text" id="messageBox" maxlength="2000" size="50" /> <input type="button" value="Send" id="send" /> <input type="button" value="Delete All" id="delete" /> </div> </body> </html> Let's deal with appearances now, creating chat.css and adding the following code to it: body { font-family: Tahoma, Helvetica, sans-serif; margin: 1px; font-size: 12px; text-align: left } #content { border: DarkGreen 1px solid; margin-bottom: 10px } input { border: #999 1px solid; font-size: 10px } #scroll { position: relative; width: 340px; height: 270px; overflow: auto } .item { margin-bottom: 6px } #colorpicker { text-align:center } It's time for another test. We still don't have the color picker in place, but other than that, we have the whole client-server chat mechanism in place. Load index.html at http://localhost/ajax/chat/index.html from multiple browsers and/or computers, and ensure everything works as planned. Figure 8-7: Screenshot of index.html Copy palette.png from the code download to your ajax/chat folder. Create a file named color.php and add the following code to it. This is actually the only step left to make the color picker work as expected. <?php // the name of the image file $imgfile='palette.png'; // load the image file $img=imagecreatefrompng($imgfile); // obtain the coordinates of the point clicked by the user $offsetx=$_GET['offsetx']; $offsety=$_GET['offsety']; // get the clicked color $rgb = ImageColorAt($img, $offsetx, $offsety); $r = ($rgb >> 16) & 0xFF; $g = ($rgb >> 8) & 0xFF; $b = $rgb & 0xFF; // return the color code echo json_encode(array("color" => sprintf('#%02s%02s%02s', dechex($r), dechex($g), dechex($b)))); ?> Make another test to ensure the color picker works and that your users can finally chat in color. What just happened? First, make sure the application works well. Load http://localhost/ajax/chat/index.html with a web browser, and you should get a page that looks like the one in Figure 8-1. If you analyze the code for a bit, the details will become clear. Everything starts with index.html. The only part that is really interesting in index.html is a scrolling region implemented in DHTML. (A little piece of information regarding scrolling can be found at http://www.dyn-web.com/code/scroll/.) The scrolling area allows our users to scroll up and down the history of the chat and ensures that any new messages that might flow out of the area are still viewed. In our example, the scroll element and its inner layers do the trick. 
The scroll element is the outermost layer; it has a fixed width and height; and its most useful property, overflow, determines how any content that falls (or overflows) outside of its boundaries is displayed. Generally, the content of a block box is confined to the content edges of the box. In CSS, the overflow property has four possible values that specify what should happen when an element overflows its area: visible, hidden, scroll, and auto. (For more details, please see the specification of overflow, at http://www.w3.org/TR/REC-CSS2/visufx.html.)   As you can see, the HTML file is very clean. It contains only the declarations of the HTML elements that make up the user interface. There are no event handlers and there is no JavaScript code inside the HTML file—we have a clean separation between the user interface and the programming. In our client-side JavaScript code, in the chat.js file, the action starts with the ready event, which is defined in jQuery (reference: http://docs.jQuery.com/Events/ready) as a replacement for window.onload. In other words, your ready() function, which you can see in the following code snippet, is called automatically after the HTML page has been loaded by the browser, and the page elements can be safely used and manipulated by your JavaScript code: $(document).ready(function() { } Inside this function, we do several operations involving events related to the user interface. Let's analyze each step! We want to be sure that a username always appears, that is, it should never be left empty. To do this, we can create a function that checks for this and bind it to the blur event of the textbox. // function that ensures that the username is never empty and //if so a random name is generated $('#userName').blur( function(e) { // ensures our user has a default random name when the form loads if (this.value == "") this.value = "Guest" + Math.floor(Math.random() * 1000); } ); If the username is empty, we simply generate a random username suffixing Guest with a randomly generated number. When the page first loads, no username has been set and we trigger the blur event on userName. // populate the username field with // the default value $('#userName').triggerHandler('blur'); The success() function starts by checking if the JSON response contains an errno field, which would mean that an error has happened on the server side. If an error occurred the displayPHPError() function is called passing in the error in JSON format. // function that displays a PHP error message function displayPHPError(error) { displayError ("Error number :" + error.errno + "rn" + "Text :"+ error.text + "rn" + "Location :" + error.location + "rn" + "Line :" + error.line + + "rn"); } The displayPHPError() will retrieve the information from the error and call in turn the displayError() function. The displayError() function shows the error message or a generic alert depending on whether the debugging flag is set or not. // function that displays an error message function displayError(message) { // display error message, with more technical details if debugMode is true alert("Error accessing the server! " + (debugMode ? message : "")); } Next, in our ready event, we set the default color for the sample text to black: // set the default color to black $('#sampleText').css('color', 'black'); Moving on, we hook on to the click event of the Send button. 
The following code is very simple, as the entire logic behind sending a message is encapsulated in sendMessage(): $('#send').click( function(e) { sendMessage(); } ); Moreover, here we hook on to the click event of the Delete all button in a similar way as the Send button. $('#delete').click( function(e) { deleteMessages(); } ); For the messageBox textbox, where we input messages, we disable autocomplete and we capture the Enter key and invoke the logic for sending a message: // set autocomplete off $('#messageBox').attr('autocomplete', 'off'); // handle the enter key event $('#messageBox').keydown( function(e) { if (e.keyCode == 13) { sendMessage(); } } ); Finally, when the page loads, we want to have the messages that have already been posted and we call retrieveNewMessages() function. Now that we have seen what happens when the page loads, it's time to analyze the logic behind sending and receiving new messages. Because everything starts when the page loads and the existing messages are retrieved, we will start with retrieveNewMessages() function. The function simply makes an AJAX request to the server indicating the retrieval of the latest messages. function retrieveNewMessages() { $.ajax({ url: chatURL, type: 'POST', data: $.param({ mode: 'RetrieveNew', id: lastMessageID }), dataType: 'json', error: function(xhr, textStatus, errorThrown) { displayError(textStatus); }, success: function(data, textStatus) { if(data.errno != null) displayPHPError(data); else readMessages(data); // restart sequence setTimeout("retrieveNewMessages();", updateInterval); } }); } The request contains as parameters the mode indicating the retrieval of new messages and the ID of the last retrieved message: data: $.param({ mode: 'RetrieveNew', id: lastMessageID }), On success, we simply read the retrieved messages and we schedule a new automatic retrieval after a specific period of time: success: function(data, textStatus) { if(data.errno != null) displayPHPError(data); else readMessages(data); // restart sequence setTimeout("retrieveNewMessages();", updateInterval); } Reading messages is the most complicated function as it involves several steps. It starts by checking whether the database has been cleared of messages and, if so, it empties the list of messages and resets the ID of the last retrieved message. function readMessages(data, textStatus) { // retrieve the flag that says if the chat window has been cleared or not clearChat = data.clear; // if the flag is set to true, we need to clear the chat window if (clearChat == 'true') { // clear chat window and reset the id $('#scroll')[0].innerHTML = ""; lastMessageID = -1; } Before retrieving the new messages, we need to check and see if the received messages have not been already processed. If not, we simply store the ID of the last received message in order to know what messages to ask for during the next requests: if (data.messages.length > 0) { // check to see if the first message // has been already received and if so // ignore the rest of the messages if(lastMessageID > data.messages[0].id) return; // the ID of the last received message is stored locally lastMessageID = data.messages[data.messages.length - 1].id; }
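From the client code above, we can already infer the general shape of the JSON response that the server script is expected to return: a clear flag and a messages array in which every entry carries at least an id. The other member names shown below (name, color, and text) are assumptions based on what the chat displays; they are handled by code not included in this excerpt:

{
  "clear": "false",
  "messages": [
    { "id": 12, "name": "Guest123", "color": "#ff0000", "text": "Hello!" },
    { "id": 13, "name": "audra", "color": "#000000", "text": "Hi there." }
  ]
}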

Packt
13 May 2010
7 min read

Data binding from Expression Blend 4 in Silverlight 4

Using the different modes of data binding to allow persisting data Until now, the data has flowed from the source to the target (the UI controls). However, it can also flow in the opposite direction, that is, from the target towards the source. This way, not only can data binding help us in displaying data, but also in persisting data. The direction of the flow of data in a data binding scenario is controlled by the Mode property of the Binding. In this recipe, we'll look at an example that uses all the Mode options and in one go, we'll push the data that we enter ourselves to the source. Getting ready This recipe builds on the code that was created in the previous recipes, so if you're following along, you can keep using that codebase. You can also follow this recipe from the provided start solution. It can be found in the Chapter02/SilverlightBanking_Binding_ Modes_Starter folder in the code bundle that is available on the Packt website. The Chapter02/SilverlightBanking_Binding_Modes_Completed folder contains the finished application of this recipe. How to do it... In this recipe, we'll build the "edit details" window of the Owner class. On this window, part of the data is editable, while some isn't. The editable data will be bound using a TwoWay binding, whereas the non-editable data is bound using a OneTime binding. The Current balance of the account is also shown—which uses the automatic synchronization—based on the INotifyPropertyChanged interface implementation. This is achieved using OneWay binding. The following is a screenshot of the details screen: Let's go through the required steps to work with the different binding modes: Add a new Silverlight child window called OwnerDetailsEdit.xaml to the Silverlight project. In the code-behind of this window, change the default constructor—so that it accepts an instance of the Owner class—as shown in the following code: private Owner owner; public OwnerDetailsEdit(Owner owner) { InitializeComponent(); this.owner = owner; } In MainPage.xaml, add a Click event on the OwnerDetailsEditButton: <Button x_Name="OwnerDetailsEditButton" Click="OwnerDetailsEditButton_Click" > In the event handler, add the following code, which will create a new instance of the OwnerDetailsEdit window, passing in the created Owner instance: private void OwnerDetailsEditButton_Click(object sender, RoutedEventArgs e) { OwnerDetailsEdit ownerDetailsEdit = new OwnerDetailsEdit(owner); ownerDetailsEdit.Show(); } The XAML of the OwnerDetailsEdit is pretty simple. Take a look at the completed solution (Chapter02/SilverlightBanking_Binding_Modes_Completed)for a complete listing. Don't forget to set the passed Owner instance as the DataContext for the OwnerDetailsGrid. This is shown in the following code: OwnerDetailsGrid.DataContext = owner; For the OneWay and TwoWay bindings to work, the object to which we are binding should be an instance of a class that implements the INotifyPropertyChanged interface. In our case, we are binding an Owner instance. This instance implements the interface correctly. The following code illustrates this: public class Owner : INotifyPropertyChanged { public event PropertyChangedEventHandler PropertyChanged; ... } Some of the data may not be updated on this screen and it will never change. For this type of binding, the Mode can be set to OneTime. This is the case for the OwnerId field. The users should neither be able to change their ID nor should the value of this field change in the background, thereby requiring an update in the UI. 
How it works...

When we looked at the basics of data binding, we saw that a binding always occurs between a source and a target. The first one is normally an in-memory object, but it can also be a UI control; the second one will always be a UI control. Normally, data flows from source to target. However, using the Mode property, we have the option to control this.

A OneTime binding should be the default for data that does not change while it is displayed to the user. When using this mode, the data flows from source to target. The target receives the value initially during loading, and the data displayed in the target will never change. Quite logically, even if a OneTime binding is used for a TextBox, changes made to the data by the user will not flow back to the source. IDs are a good example of where OneTime bindings can be used. Also, when building a catalogue application, OneTime bindings can be used, as we won't change the price of the items that are displayed to the user (or should we...?).

We should use a OneWay binding for binding scenarios in which we want an up-to-date display of data. Data will flow from source to target here as well, but every change in the values of the source properties will propagate to a change of the displayed values. Think of a stock market application where updates are happening every second; we need to push those updates to the UI of the application.

The TwoWay bindings can help in persisting data. The data can now flow from source to target, and vice versa. Initially, the values of the source properties will be loaded into the properties of the controls. When we interact with these values (type in a textbox, drag a slider, and so on), these updates are pushed back to the source object. If needed, conversions can be done in both directions.
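Such conversions are done by value converters. Purely as an illustration (the converter below is not part of the sample solution, and its name and the assumption that the balance is a double are hypothetical), a Silverlight value converter is a class implementing IValueConverter: Convert is called when data flows from source to target, and ConvertBack when a TwoWay binding pushes data back. A minimal sketch that turns a balance into a brush, so that negative balances can be displayed in red, might look like this:

   using System;
   using System.Globalization;
   using System.Windows.Data;
   using System.Windows.Media;

   // Illustrative only: maps an account balance to a brush for display purposes.
   public class BalanceToBrushConverter : IValueConverter
   {
       // Called when data flows from the source to the target.
       public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
       {
           double balance = (double)value; // assumes the bound balance is a double
           return new SolidColorBrush(balance < 0 ? Colors.Red : Colors.Black);
       }

       // Called when a TwoWay binding pushes data back to the source.
       // A display-only converter such as this one does not support that direction.
       public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
       {
           throw new NotSupportedException();
       }
   }

Such a converter would typically be declared as a resource and assigned to the Converter property of a binding, for example on the Foreground property of the CurrentBalanceValueTextBlock shown earlier.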
There is one important requirement for the OneWay and TwoWay bindings: if we want to display up-to-date values, the INotifyPropertyChanged interface should be implemented on the source. Without it, a OneWay binding has the same effect as a OneTime binding. A TwoWay binding would still send the updated values from the UI to the source, but changes to the source would no longer be reflected in the UI. It can be considered good practice to implement the interface unless there is no chance that updates to the data will ever be displayed somewhere in the application; the overhead created by the implementation is minimal.

There's more...

Another option on the binding is the UpdateSourceTrigger. It allows us to specify when a TwoWay binding will push the data to the source. By default, this is determined by the control: for a TextBox, it happens on the LostFocus event, and for most other controls, on the PropertyChanged event. The value can also be set to Explicit. This means that we can manually trigger the update of the source, as shown in the following code:

   BindingExpression expression =
       this.FirstNameValueTextBlock.GetBindingExpression(TextBox.TextProperty);
   expression.UpdateSource();

See also

Changing the values that flow between source and target can be done using converters.

Data binding from Expression Blend 4

While creating data bindings is probably a task mainly reserved for the developer(s) in the team, Blend 4, the design tool for Silverlight applications, also has strong support for creating and using bindings. In this recipe, we'll build a small data-driven application that uses data binding. We won't manually create the data binding expressions; we'll use Blend 4 for this task.
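As context for that recipe, and purely as an illustration (the actual markup is produced by Blend in the recipe itself and may differ), a binding created through the Blend UI ends up as the same kind of Binding expression in XAML that we have been writing by hand:

   <!-- Hypothetical result of wiring up a data field in Blend: a regular Binding
        expression. Blend may additionally add design-time attributes (such as
        d:DataContext) so that sample data appears on the design surface. -->
   <TextBox Text="{Binding FirstName, Mode=TwoWay}" />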