
How-To Tutorials - Web Development


Understanding Backbone

Packt
02 Sep 2013
12 min read
Backbone.js is a lightweight JavaScript framework based on the Model-View-Controller (MVC) pattern that allows developers to create single-page web applications. With Backbone, it is possible to update a web page quickly using the REST approach, with a minimal amount of data transferred between client and server.

Backbone.js is becoming more popular day by day and is being used on a large scale for web applications and by IT startups, some of which are as follows:

- Groupon Now!: The team decided that their first product would be AJAX-heavy but should still be linkable and shareable. Though they were completely new to Backbone, they found that its learning curve was incredibly quick, so they were able to deliver the working product in just two weeks.
- Foursquare: This used the Backbone.js library to create model classes for the entities in foursquare (for example, venues, check-ins, and users). They found that Backbone's model classes provide a simple and lightweight mechanism to capture an object's data and state, complete with the semantics of classical inheritance.
- LinkedIn mobile: This used Backbone.js to create its next-generation HTML5 mobile web app. Backbone made it easy to keep the app modular, organized, and extensible, so it was possible to program the complexities of LinkedIn's user experience. Moreover, they are using the same code base in their mobile applications for the iOS and Android platforms.
- WordPress.com: This is a SaaS version of WordPress that uses Backbone.js models, collections, and views in its notification system, and is integrating Backbone.js into the Stats tab and into other features throughout the home page.
- Airbnb: This is a community marketplace for users to list, discover, and book unique spaces around the world. Its development team has used Backbone in many of its latest products. Recently, they rebuilt their mobile website with Backbone.js and Node.js tied together with a library named Rendr.

You can visit the following link to get acquainted with other usage examples of Backbone.js: http://backbonejs.org/#examples

Backbone.js was started by Jeremy Ashkenas of DocumentCloud in 2010 and is now being used and improved by lots of developers all over the world using Git, the distributed version control system.

In this article, we are going to provide some practical examples of how to use Backbone.js, and we will structure a design for a program named Billing Application by following the MVC and Backbone patterns. Reading this article is especially useful if you are new to developing with Backbone.js.

Designing an application with the MVC pattern

MVC is a design pattern that is widely used in user-facing software, such as web applications. It is intended for splitting data and representing it in a way that makes it convenient for user interaction.
To understand what it does, consider the following:

- Model: This contains data and provides the business logic used to run the application
- View: This presents the model to the user
- Controller: This reacts to user input by updating the model and the view

There can be some differences between MVC implementations, but in general they conform to this scheme. Worldwide practice shows that the use of the MVC pattern provides various benefits to the developer:

- Following the separation of concerns paradigm, which splits an application into independent parts, it is easier to modify or replace them
- It achieves code reusability by rendering a model in different views, without the need to implement model functionality in each view
- It requires less training and has a quicker startup time for new developers within an organization

To get a better understanding of the MVC pattern, we are going to design a Billing Application. We will refer to this design throughout the book when we are learning specific topics. Our Billing Application will allow users to generate invoices, manage them, and send them to clients. Following worldwide practice, an invoice should contain a reference number, date, information about the buyer and seller, bank account details, a list of provided products or services, and an invoice sum.

How to do it...

Let's follow the ensuing steps to design an MVC structure for the Billing Application:

First, let's write down a list of functional requirements for this application. We assume that the end user may want to be able to do the following:

- Generate an invoice
- E-mail the invoice to the buyer
- Print the invoice
- See a list of existing invoices
- Manage invoices (create, read, update, and delete)
- Update an invoice status (draft, issued, paid, and canceled)
- View a yearly income graph and other reports

To simplify the process of creating multiple invoices, the user may want to manage information about buyers and his/her personal details in a specific part of the application before creating an invoice. So, our application should provide additional functionality to the end user, such as the following:

- The ability to see a list of buyers and use it when generating an invoice
- The ability to manage buyers (create, read, update, and delete)
- The ability to see a list of bank accounts and use it when generating an invoice
- The ability to manage his/her own bank accounts (create, read, update, and delete)
- The ability to edit personal details and use them when generating an invoice

Of course, we may want more functions, but this is enough to demonstrate how to design an application using the MVC pattern.

Next, we architect the application using the MVC pattern. After we have defined the features of our application, we need to understand what is more related to the model (business logic) and what is more related to the view (presentation), and split the functionality into several parts.

Then, we learn how to define models. Models present data and provide data-specific business logic. Models can be related to each other. In our case, they are as follows:

- InvoiceModel
- InvoiceItemModel
- BuyerModel
- SellerModel
- BankAccountModel

Then, we will define collections of models. Our application allows users to operate on a number of models, so they need to be organized into a special iterable object named Collection.
We need the following collections:

- InvoiceCollection
- InvoiceItemCollection
- BuyerCollection
- BankAccountCollection

Next, we define views. Views present a model or a collection to the application user. A single model or collection can be rendered by multiple views. The views that we need in our application are as follows:

- EditInvoiceFormView
- InvoicePageView
- InvoiceListView
- PrintInvoicePageView
- EmailInvoiceFormView
- YearlyIncomeGraphView
- EditBuyerFormView
- BuyerPageView
- BuyerListView
- EditBankAccountFormView
- BankAccountPageView
- BankAccountListView
- EditSellerInfoFormView
- ViewSellerInfoPageView
- ConfirmationDialogView

Finally, we define a controller. A controller allows users to interact with an application. In MVC, each view can have a different controller that is used to do the following:

- Map a URL to a specific view
- Fetch models from a server
- Show and hide views
- Handle user input

Defining business logic with models and collections

Now, it is time to design the business logic for the Billing Application using the MVC and OOP approaches. In this recipe, we are going to define an internal structure for our application with model and collection objects. While a model represents a single object, a collection is a set of models that can be iterated, filtered, and sorted.

How to do it...

For each model, we are going to create two tables: one for properties and another for methods.

We define BuyerModel properties:

| Name        | Type    | Required | Unique |
|-------------|---------|----------|--------|
| id          | Integer | Yes      | Yes    |
| name        | Text    | Yes      |        |
| address     | Text    | Yes      |        |
| phoneNumber | Text    | No       |        |

Then, we define SellerModel properties:

| Name        | Type    | Required | Unique |
|-------------|---------|----------|--------|
| id          | Integer | Yes      | Yes    |
| name        | Text    | Yes      |        |
| address     | Text    | Yes      |        |
| phoneNumber | Text    | No       |        |
| taxDetails  | Text    | Yes      |        |

After this, we define BankAccountModel properties:

| Name                | Type    | Required | Unique |
|---------------------|---------|----------|--------|
| id                  | Integer | Yes      | Yes    |
| beneficiary         | Text    | Yes      |        |
| beneficiaryAccount  | Text    | Yes      |        |
| bank                | Text    | No       |        |
| SWIFT               | Text    | Yes      |        |
| specialInstructions | Text    | No       |        |

We define InvoiceItemModel properties:

| Name         | Type    | Required | Unique |
|--------------|---------|----------|--------|
| id           | Integer | Yes      | Yes    |
| deliveryDate | Date    | Yes      |        |
| description  | Text    | Yes      |        |
| price        | Decimal | Yes      |        |
| quantity     | Decimal | Yes      |        |

Next, we define InvoiceItemModel methods. We don't need to store the item amount in the model, because it always depends on the price and the quantity, so it can be calculated:

| Name            | Arguments | Return Type |
|-----------------|-----------|-------------|
| calculateAmount | -         | Decimal     |

Now, we define InvoiceModel properties:

| Name            | Type       | Required | Unique |
|-----------------|------------|----------|--------|
| id              | Integer    | Yes      | Yes    |
| referenceNumber | Text       | Yes      |        |
| date            | Date       | Yes      |        |
| bankAccount     | Reference  | Yes      |        |
| items           | Collection | Yes      |        |
| comments        | Text       | No       |        |
| status          | Integer    | Yes      |        |

We define InvoiceModel methods. The invoice amount can easily be calculated as the sum of the invoice item amounts:

| Name            | Arguments | Return Type |
|-----------------|-----------|-------------|
| calculateAmount | -         | Decimal     |

Finally, we define the collections. In our case, they are InvoiceCollection, InvoiceItemCollection, BuyerCollection, and BankAccountCollection. They are used to store models of the appropriate type and provide methods to add models to, and remove them from, the collections.

How it works...

Models in Backbone.js are implemented by extending Backbone.Model, and collections are made by extending Backbone.Collection. To implement relations between models and collections, we can use special Backbone extensions.
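To make this concrete, here is a minimal sketch of how two of the models and a collection above could be declared with Backbone.js. The defaults shown, and the assumption that the invoice's items attribute holds an InvoiceItemCollection, are illustrative and not the book's final implementation:

```javascript
// A minimal sketch, assuming Backbone 1.x with Underscore loaded.
var InvoiceItemModel = Backbone.Model.extend({
    defaults: {
        deliveryDate: null,
        description: '',
        price: 0,
        quantity: 0
    },
    // The amount is derived from price and quantity, so it is
    // calculated on demand instead of being stored in the model.
    calculateAmount: function() {
        return this.get('price') * this.get('quantity');
    }
});

var InvoiceItemCollection = Backbone.Collection.extend({
    model: InvoiceItemModel
});

var InvoiceModel = Backbone.Model.extend({
    // The invoice amount is the sum of its item amounts; here we
    // assume 'items' is an InvoiceItemCollection.
    calculateAmount: function() {
        return this.get('items').reduce(function(sum, item) {
            return sum + item.calculateAmount();
        }, 0);
    }
});
```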
To learn more about object properties, methods, and OOP programming in JavaScript, you can refer to the following resource: https://developer.mozilla.org/en-US/docs/JavaScript/Introduction_to_Object-Oriented_JavaScript

Modeling an application's behavior with views and a router

Unlike traditional MVC frameworks, Backbone does not provide a distinct object that implements controller functionality. Instead, the controller is diffused between Backbone.Router and Backbone.View, and the following is done:

- A router handles URL changes and delegates application flow to a view. Typically, the router fetches a model from storage asynchronously. When the model is fetched, it triggers a view update.
- A view listens to DOM events and either updates a model or navigates the application through a router.

How to do it...

Let's follow the ensuing steps to understand how to define basic views and a router in our application:

First, we need to create wireframes for the application. Let's draw a couple of wireframes in this recipe:

- The Edit Invoice page allows users to select a buyer and the seller's bank account from lists, to enter the invoice's date and a reference number, and to build a table of shipped products and services.
- The Preview Invoice page shows how the final invoice will be seen by a buyer. This display should render all the information we have entered in the Edit Invoice form. Buyer and seller information can be looked up in the application storage. The user has the option to either go back to the Edit display or save the invoice.

Then, we will define the view objects. According to the previous wireframes, we need two main views: EditInvoiceFormView and PreviewInvoicePageView. These views will operate with InvoiceModel, which refers to other objects, such as BankAccountModel and InvoiceItemCollection.

Now, we will split the views into subviews. For each item in the Products or Services table, we may want to recalculate the Amount field depending on what the user enters in the Price and Quantity fields. One way to do this is to re-render the entire view whenever the user changes a value in the table; however, this is not efficient, and it takes a significant amount of computing power. We don't need to re-render the entire view if we only want to update a small part of it. It is better to split the big view into independent pieces, subviews, that are able to render only a specific part of the big view. In our case, EditInvoiceItemTableView and PreviewInvoiceItemTableView render InvoiceItemCollection with the help of the additional views EditInvoiceItemView and PreviewInvoiceItemView, which render InvoiceItemModel. Such separation allows us to re-render an item inside a collection when it is changed.

Finally, we will define URL paths that will be associated with a corresponding view. In our case, we can have several URLs to show different views, for example:

- /invoice/add
- /invoice/:id/edit
- /invoice/:id/preview

Here, we assume that the Edit Invoice view can be used either for creating a new invoice or for editing an existing one. In the router implementation, we can load this view and show it on these specific URLs.

How it works...

The Backbone.View object can be extended to create our own view that will render model data. In a view, we can define handlers for user actions, such as data input and keyboard or mouse events.
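As an illustration of the subview idea described above, a single item row could look something like the following sketch; it builds on the InvoiceItemModel sketch from the previous recipe, and the markup and CSS class names are assumptions for the example:

```javascript
// Each row listens to its own model, so editing Price or Quantity
// re-renders only that row, not the whole table.
var EditInvoiceItemView = Backbone.View.extend({
    tagName: 'tr',

    events: {
        // Push user edits from the inputs back into the model.
        'change .price, .quantity': 'updateModel'
    },

    initialize: function() {
        // Re-render only this row when its model changes.
        this.listenTo(this.model, 'change', this.render);
    },

    updateModel: function() {
        this.model.set({
            price: parseFloat(this.$('.price').val()),
            quantity: parseFloat(this.$('.quantity').val())
        });
    },

    render: function() {
        this.$el.html(
            '<td><input class="price" value="' + this.model.get('price') + '"/></td>' +
            '<td><input class="quantity" value="' + this.model.get('quantity') + '"/></td>' +
            '<td class="amount">' + this.model.calculateAmount() + '</td>'
        );
        return this;
    }
});
```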
In the application, we can have a single Backbone.Router object that allows users to navigate through the application by changing the URL in the address bar of the browser. The router object contains a list of available URLs and callbacks. In a callback function, we can trigger the rendering of the specific view associated with a URL. If we want a user to be able to jump from one view to another, we may want him/her to either click on regular HTML links associated with a view or navigate the application programmatically.
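A hedged sketch of such a router, wiring the URL paths listed earlier to the two main views, could look like the following; the fetching and rendering details are simplified assumptions (with the default hash-based history, the paths appear after a # in the URL):

```javascript
var BillingRouter = Backbone.Router.extend({
    routes: {
        'invoice/add': 'addInvoice',
        'invoice/:id/edit': 'editInvoice',
        'invoice/:id/preview': 'previewInvoice'
    },

    addInvoice: function() {
        // The Edit Invoice view doubles as the "create" form.
        new EditInvoiceFormView({ model: new InvoiceModel() }).render();
    },

    editInvoice: function(id) {
        // Fetch the model asynchronously; render the view once loaded.
        var invoice = new InvoiceModel({ id: id });
        invoice.fetch({
            success: function(model) {
                new EditInvoiceFormView({ model: model }).render();
            }
        });
    },

    previewInvoice: function(id) {
        var invoice = new InvoiceModel({ id: id });
        invoice.fetch({
            success: function(model) {
                new PreviewInvoicePageView({ model: model }).render();
            }
        });
    }
});

// Start listening for URL changes.
var router = new BillingRouter();
Backbone.history.start();
```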

Communicating with Servers

Packt
02 Sep 2013
24 min read
Creating an HTTP GET request to fetch JSON

One of the basic means of retrieving information from the server is HTTP GET. In a RESTful manner, this type of method should be used only for reading data, so GET calls should never change server state. Now, this may not be true for every possible case; for example, if we have a view counter on a certain resource, is that a real change? Well, if we follow the definition literally then yes, it is a change, but it is far from significant enough to be taken into account.

Opening a web page in a browser does a GET request, but often we want a scripted way of retrieving data. This is usually done to achieve Asynchronous JavaScript and XML (AJAX), allowing the reloading of data without doing a complete page reload. Despite the name, the use of XML is not required, and these days, JSON is the format of choice. A combination of JavaScript and the XMLHttpRequest object provides a method for exchanging data asynchronously, and in this recipe, we are going to see how to read JSON from the server using plain JavaScript and jQuery.

Why use plain JavaScript rather than using jQuery directly? We strongly believe that jQuery simplifies the DOM API, but it is not always available to us, and additionally, we need to know the underlying code behind asynchronous data transfer in order to fully grasp how applications work.

Getting ready

The server will be implemented using Node.js. In this example, for simplicity, we will use restify (http://mcavage.github.io/node-restify/), a Node.js module for the creation of correct REST web services.

How to do it...

Let's perform the following steps.

In order to include restify in our project, run the following command in the root directory of our server-side scripts:

```
npm install restify
```

After adding the dependency, we can proceed to creating the server code. We create a server.js file that will be run by Node.js, and at the beginning of it we require restify:

```javascript
var restify = require('restify');
```

With this restify object, we can now create a server object and add handlers for GET methods:

```javascript
var server = restify.createServer();
server.get('hi', respond);
server.get('hi/:index', respond);
```

The GET handlers do a callback to a function called respond, so we can now define this function that will return the JSON data. We create a sample JavaScript object called hello; in case the function was called with an index parameter as part of the request, it was called from the 'hi/:index' handler:

```javascript
function respond(req, res, next) {
  console.log("Got HTTP " + req.method + " on " + req.url + " responding");
  // Add the CORS headers before any response is sent.
  addHeaders(req, res);
  var hello = [{
    'id': '0',
    'hello': 'world'
  }, {
    'id': '1',
    'say': 'what'
  }];
  if (req.params.index) {
    var found = hello[req.params.index];
    if (found) {
      res.send(found);
    } else {
      res.status(404);
      res.send();
    }
    // Return early so we don't also send the full list.
    return next();
  }
  res.send(hello);
  return next();
}
```

The addHeaders function that we call at the beginning adds headers to enable access to the resources served from a different domain or a different server port:

```javascript
function addHeaders(req, res) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "X-Requested-With");
}
```

The definition of these headers and what they mean will be discussed later on in the article. For now, let's just say they enable access to the resources from a browser using AJAX.
At the end, we add a block of code that will set the server to listen on port 8080:

```javascript
server.listen(8080, function() {
  console.log('%s listening at %s', server.name, server.url);
});
```

To start the server from the command line, we type the following command:

```
node server.js
```

If everything went as it should, we will get a message in the log:

restify listening at http://0.0.0.0:8080

We can then test it by accessing the URL we defined, http://localhost:8080/hi, directly from the browser.

Now we can proceed with the client-side HTML and JavaScript. We will implement two ways of reading data from the server, one using the standard XMLHttpRequest and the other using jQuery.get(). Note that not all features are fully compatible with all browsers.

We create a simple page with two div elements, one with the ID data and another with the ID say. These elements will be used as placeholders to load data from the server into them:

```html
Hello <div id="data">loading</div>
<hr/>
Say <div id="say">No</div>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
<script src="example.js"></script>
<script src="exampleJQuery.js"></script>
```

In the example.js file, we define a function called getData that will make an AJAX call to a given url and do a callback if the request succeeds:

```javascript
function getData(url, onSuccess) {
  var request = new XMLHttpRequest();
  request.open("GET", url);
  request.onload = function() {
    if (request.status === 200) {
      console.log(request);
      onSuccess(request.response);
    }
  };
  request.send(null);
}
```

After that, we can call the function directly, but in order to demonstrate that the call happens after the page is loaded, we will call it after a timeout of three seconds:

```javascript
setTimeout(
  function() {
    getData(
      'http://localhost:8080/hi',
      function(response) {
        console.log('finished getting data');
        var div = document.getElementById('data');
        var data = JSON.parse(response);
        div.innerHTML = data[0].hello;
      });
  }, 3000);
```

The jQuery version is a lot cleaner, as the complexity that comes with the standard DOM API and event handling is reduced substantially:

```javascript
(function() {
  $.getJSON('http://localhost:8080/hi/1', function(data) {
    $('#say').text(data.say);
  });
}())
```

How it works...

At the beginning, we installed the dependency using npm install restify; this is sufficient to have it working, but in order to define dependencies in a more expressive way, npm has a way of specifying them. We can add a file called package.json, a packaging format that is mainly used for publishing details of Node.js applications. In our case, we can define package.json with the following code:

```json
{
  "name": "ch8-tip1-http-get-example",
  "description": "example on http get",
  "dependencies": { "restify": "*" },
  "author": "Mite Mitreski",
  "main": "html5dasc",
  "version": "0.0.1"
}
```

If we have a file like this, npm will automatically handle the installation of dependencies after calling npm install from the command line in the directory where the package.json file is placed.

restify has simple routing where functions are mapped to the appropriate methods for a given URL. The HTTP GET request for '/hi' is mapped with server.get('hi', theCallback), where theCallback is executed and a response should be returned. When we have a parameterized resource, for example in 'hi/:index', the value associated with :index will be available under req.params. For example, in a request to '/hi/john', to access the john value we simply use req.params.index.
Additionally, the value for index will automatically get URL-decoded before it is passed to our handler. One other notable part of the request handlers in restify is the next() function that we called at the end. In our case, it mostly does not make much sense, but in general, we are responsible for calling it if we want the next handler function in the chain to be called. For exceptional circumstances, there is also an option to call next() with an error object, triggering custom responses.

When it comes to the client-side code, XMLHttpRequest is the mechanism behind the async calls, and on calling request.open("GET", url, true) with the last parameter value as true, we get truly asynchronous execution. Now, you might be wondering why this parameter is here; isn't the call already done after loading the page? That is true, the call is done after loading the page, but if, for example, the parameter were set to false, the execution of the request would be a blocking method, or to put it in layman's terms, the script would pause until we got a response. This might look like a small detail, but it can have a huge impact on performance.

The jQuery part is pretty straightforward; there is a function that accepts a URL value for the resource, a data parameter, and a success function that gets called after successfully getting a response:

```javascript
jQuery.getJSON( url [, data ] [, success(data, textStatus, jqXHR) ] )
```

When we open index.htm, the server should log something like the following:

Got HTTP GET on /hi/1 responding
Got HTTP GET on /hi responding

Here, one is from the jQuery request and the other from the plain JavaScript one.

There's more...

XMLHttpRequest Level 2 is one of the newer improvements being added to browsers; although not part of HTML5, it is still a significant change. There are several features in the Level 2 changes, mostly to enable working with files and data streams, but there is one simplification we already used. Earlier, we would have had to use onreadystatechange and go through all of the states, and if the readyState was 4, which is equal to DONE, we could read the data:

```javascript
var xhr = new XMLHttpRequest();
xhr.open('GET', 'someurl', true);
xhr.onreadystatechange = function(e) {
  if (this.readyState == 4 && this.status == 200) {
    // response is loaded
  }
}
```

In a Level 2 request, however, we can use request.onload = function() {} directly, without checking states. The possible states are as follows:

| Value | State            | Description                                       |
|-------|------------------|---------------------------------------------------|
| 0     | UNSENT           | open() has not been called yet                    |
| 1     | OPENED           | open() has been called                            |
| 2     | HEADERS_RECEIVED | send() has been called and headers are available  |
| 3     | LOADING          | the response body is being received               |
| 4     | DONE             | the operation is complete                         |

One other thing to note is that XMLHttpRequest Level 2 is supported in all major browsers and IE 10; the older XMLHttpRequest has a different way of instantiation in older versions of IE (older than IE 7), where we can access it through an ActiveX object via new ActiveXObject("Msxml2.XMLHTTP.6.0");.

Creating a request with custom headers

HTTP headers are a part of the request object being sent to the server. Many of them give information about the client's user agent setup and configuration, as that is sometimes the basis for decisions about the resources being fetched from the server. Several of them, such as Etag, Expires, and If-Modified-Since, are closely related to caching, while others, such as DNT, which stands for "Do Not Track" (http://www.w3.org/2011/tracking-protection/drafts/tracking-dnt.html), can be quite controversial. In this recipe, we will take a look at a way of using a custom X-Myapp header in our server and client-side code.

Getting ready

The server will be implemented using Node.js.
In this example, again for simplicity, we will use restify (http://mcavage.github.io/node-restify/). Also, monitoring the console in your browser and server is crucial in order to understand what happens in the background.

How to do it...

We can start by defining the dependencies for the server side in the package.json file:

```json
{
  "name": "ch8-tip2-custom-headers",
  "dependencies": { "restify": "*" },
  "main": "html5dasc",
  "version": "0.0.1"
}
```

After that, we can call npm install from the command line, which will automatically retrieve restify and place it in a node_modules folder created in the root directory of the project. After this part, we can proceed to creating the server-side code in a server.js file, where we set the server to listen on port 8080 and add a route handler for 'hi' and for every other path when the request method is HTTP OPTIONS:

```javascript
var restify = require('restify');
var server = restify.createServer();

server.get('hi', addHeaders, respond);
server.opts(/.*/, addHeaders, function (req, res, next) {
  console.log("Got HTTP " + req.method + " on " + req.url + " with headers\n");
  res.send(200);
  return next();
});

server.listen(8080, function() {
  console.log('%s listening at %s', server.name, server.url);
});
```

In most cases, the documentation should be enough when we write applications built on restify, but sometimes it is a good idea to take a look at the source code as well. It can be found at https://github.com/mcavage/node-restify/.

One thing to notice is that we can have multiple chained handlers; in this case, we have addHeaders before the others. In order for every handler to be propagated, next() should be called:

```javascript
function addHeaders(req, res, next) {
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader('Access-Control-Allow-Headers', 'X-Requested-With, X-Myapp');
  res.setHeader('Access-Control-Allow-Methods', 'GET, OPTIONS');
  res.setHeader('Access-Control-Expose-Headers', 'X-Myapp, X-Requested-With');
  return next();
}
```

The addHeaders function adds access control options in order to enable cross-origin resource sharing. Cross-origin resource sharing (CORS) defines a way in which the browser and server can interact to determine whether a request should be allowed. It is more secure than simply allowing all cross-origin requests, but more powerful than only permitting same-origin ones.

After this, we can create the handler function that will return a JSON response with the headers the server received and a "hello world" kind of object:

```javascript
function respond(req, res, next) {
  console.log("Got HTTP " + req.method + " on " + req.url + " with headers\n");
  console.log("Request: ", req.headers);
  var hello = [{
    'id': '0',
    'hello': 'world',
    'headers': req.headers
  }];
  res.send(hello);
  console.log('Response:\n', res.headers());
  return next();
}
```

We additionally log the request and response headers to the server console in order to see what happens in the background.
For the client-side code, we need a plain "vanilla" JavaScript approach and a jQuery method, so we include example.js and exampleJQuery.js, as well as a few div elements that we will use for displaying data retrieved from the server:

```html
Hi <div id="data">loading</div>
<hr/>
Headers list from the request:
<div id="headers"></div>
<hr/>
Data from jQuery:
<div id="dataRecieved">loading</div>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
<script src="example.js"></script>
<script src="exampleJQuery.js"></script>
```

A simple way to add the headers is to call setRequestHeader on the XMLHttpRequest object after the call to open():

```javascript
function getData(url, onSuccess) {
  var request = new XMLHttpRequest();
  request.open("GET", url, true);
  request.setRequestHeader("X-Myapp", "super");
  request.setRequestHeader("X-Myapp", "awesome");
  request.onload = function() {
    if (request.status === 200) {
      onSuccess(request.response);
    }
  };
  request.send(null);
}
```

The XMLHttpRequest automatically sets headers such as "Content-Length", "Referer", and "User-Agent", and does not allow you to change them using JavaScript. A more complete list of these headers and the reasoning behind this can be found in the W3C documentation at http://www.w3.org/TR/XMLHttpRequest/#the-setrequestheader%28%29-method.

To print out the results, we add a function that will add each of the header keys and values to an unordered list:

```javascript
getData(
  'http://localhost:8080/hi',
  function(response) {
    console.log('finished getting data');
    var data = JSON.parse(response);
    document.getElementById('data').innerHTML = data[0].hello;
    var headers = data[0].headers,
        headersList = "<ul>";
    for (var key in headers) {
      headersList += '<li><b>' + key + '</b>: ' + headers[key] + '</li>';
    }
    headersList += "</ul>";
    document.getElementById('headers').innerHTML = headersList;
  });
```

When this gets executed, a list of all the request headers should be displayed on the page, and our custom x-myapp should be shown:

host: localhost:8080
connection: keep-alive
origin: http://localhost:8000
x-myapp: super, awesome
user-agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.27 (KHTML, like Gecko) Chrome/26.0.1386.0 Safari/537.27

The jQuery approach is far simpler; we can use the beforeSend hook to call a function that will set the 'x-myapp' header. When we receive the response, we write it to the element with the ID dataRecieved (the URL matches the one used throughout this recipe):

```javascript
$.ajax({
  url: 'http://localhost:8080/hi',
  beforeSend: function (xhr) {
    xhr.setRequestHeader('x-myapp', 'this was easy');
  },
  success: function (data) {
    $('#dataRecieved').text(data[0].headers['x-myapp']);
  }
});
```

The output from the jQuery example will be the data contained in the x-myapp header:

Data from jQuery: this was easy

How it works...

You may have noticed that on the server side we added a route with a handler for the HTTP OPTIONS method, but we never explicitly made a call there. If we take a look at the server log, there should be something like the following output:

Got HTTP OPTIONS on /hi with headers
Got HTTP GET on /hi with headers

This happens because the browser first issues a preflight request, which in a way is the browser's question whether or not there is permission to make the "real" request. Once the permission has been received, the original GET request happens. If the OPTIONS response is cached, the browser will not issue any extra preflight calls for subsequent requests.

The setRequestHeader function of XMLHttpRequest actually appends each value to a comma-separated list of values.
As we called the function two times, the value for the header is as follows:

'x-myapp': 'super, awesome'

There's more...

For most use cases, we do not need custom headers to be part of our logic, but there are plenty of APIs that make good use of them. For example, many server-side technologies add the X-Powered-By header, which contains some meta information, such as JBoss 6 or PHP/5.3.0. Another example is Google Cloud Storage, where, among other headers, there are x-goog-meta-prefixed headers such as x-goog-meta-project-name and x-goog-meta-project-manager.

Versioning your API

We do not always have the best solution while doing the first implementation. An API can be extended up to a certain point, but afterwards it needs to undergo some structural changes. But we might already have users who depend on the current version, so we need a way to have different representation versions of the same resource. Once a module has users, the API cannot be changed at our own will.

One way to resolve this issue is to use so-called URL versioning, where we simply add a prefix. For example, if the old URL was http://example.com/rest/employees, the new one could be http://example.com/rest/v1/employees, or under a subdomain it could be http://v1.example.com/rest/employees. This approach only works if you have direct control over all the servers and clients. Otherwise, you need to have a way of handling fallback to older versions.

In this recipe, we are going to implement so-called "Semantic versioning" (http://semver.org/), using HTTP headers to specify accepted versions.

Getting ready

The server will be implemented using Node.js. In this example, we will use restify (http://mcavage.github.io/node-restify/) for the server-side logic, and we will monitor the requests to understand what is sent.

How to do it...

Let's perform the following steps.

We need to define the dependencies first, and after installing restify, we can proceed to the creation of the server code. The main difference from the previous examples is the use of the "Accept-Version" header; restify has built-in handling for this header using versioned routes. After creating the server object, we can set which methods will get called for which version:

```javascript
server.get({ path: "hi", version: '2.1.1' }, addHeaders, helloV2, logReqRes);
server.get({ path: "hi", version: '1.1.1' }, addHeaders, helloV1, logReqRes);
```

We also need the handler for HTTP OPTIONS, as we are using cross-origin resource sharing and the browser needs to do the additional request in order to get permissions:

```javascript
server.opts(/.*/, addHeaders, logReqRes, function (req, res, next) {
  res.send(200);
  return next();
});
```

The handlers for Version 1 and Version 2 will return different objects in order for us to easily notice the difference between the API calls. In the general case, the resource should be the same, but it can have different structural changes.
For Version 1, we can have the following:

```javascript
function helloV1(req, res, next) {
  var hello = [{
    'id': '0',
    'hello': 'grumpy old data',
    'headers': req.headers
  }];
  res.send(hello);
  return next();
}
```

As for Version 2, we have the following:

```javascript
function helloV2(req, res, next) {
  var hello = [{
    'id': '0',
    'awesome-new-feature': {
      'hello': 'awesomeness'
    },
    'headers': req.headers
  }];
  res.send(hello);
  return next();
}
```

One other thing we must do is add the CORS headers in order to enable the accept-version header, so in the route we included addHeaders, which should be something like the following:

```javascript
function addHeaders(req, res, next) {
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader('Access-Control-Allow-Headers', 'X-Requested-With, accept-version');
  res.setHeader('Access-Control-Allow-Methods', 'GET, OPTIONS');
  res.setHeader('Access-Control-Expose-Headers', 'X-Requested-With, accept-version');
  return next();
}
```

Note that you should not forget the call to next() in order to call the next function in the route chain.

For simplicity, we will only implement the client side in jQuery, so we create a simple HTML document where we include the necessary JavaScript dependencies:

```html
Old api: <div id="data">loading</div>
<hr/>
New one: <div id="dataNew"> </div>
<hr/>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
<script src="exampleJQuery.js"></script>
```

In the exampleJQuery.js file, we make two AJAX calls to our REST API, one set to use Version 1 and the other to use Version 2:

```javascript
$.ajax({
  url: 'http://localhost:8080/hi',
  type: 'GET',
  dataType: 'json',
  success: function (data) {
    $('#data').text(data[0].hello);
  },
  beforeSend: function (xhr) {
    xhr.setRequestHeader('accept-version', '~1');
  }
});

$.ajax({
  url: 'http://localhost:8080/hi',
  type: 'GET',
  dataType: 'json',
  success: function (data) {
    $('#dataNew').text(data[0]['awesome-new-feature'].hello);
  },
  beforeSend: function (xhr) {
    xhr.setRequestHeader('accept-version', '~2');
  }
});
```

Notice that the accept-version header contains the values ~1 and ~2. These designate that all the semantic versions such as 1.1.0, 1.1.1, and 1.2.1 will get matched by ~1, and similarly for ~2.

At the end, we should get an output like the following text:

Old api: grumpy old data
New one: awesomeness

How it works...

Versioned routes are a built-in feature of restify that works through the use of the accept-version header. In our example, we used Versions ~1 and ~2, but what happens if we don't specify a version? restify will make the choice for us, as the request will be treated in the same manner as if the client had sent a * version. The first defined matching route in our code will be used.

There is also an option to set up routes to match multiple versions by adding a list of versions for a certain handler:

```javascript
server.get({ path: 'hi', version: ['1.1.0', '1.1.1', '1.2.1'] }, sendOld);
```

The reason this type of versioning is very suitable for constantly growing applications is that, as the API changes, clients can stick with their version of the API without any additional effort or changes needed in client-side development, meaning that we don't have to update the application. On the other hand, if a client is sure that their application will work on newer API versions, they can simply change the request headers.

There's more...

Versioning can also be implemented by using custom content types prefixed with vnd, for example, application/vnd.mycompany.user-v1.
An example of this is Google Earth's KML content type, which is defined as application/vnd.google-earth.kml+xml. Notice that the content type can be in two parts; we could have application/vnd.mycompany-v1+json, where the second part is the format of the response.

Fetching JSON data with JSONP

JSONP, or JSON with padding, is a mechanism for making cross-domain requests by taking advantage of the <script> tag. AJAX transport is done by simply setting the src attribute on a script element, or adding the element itself if not present. The browser will do an HTTP request to download the URL specified, and that is not subject to the same-origin policy, meaning that we can use it to get data from servers that are not under our control. In this recipe, we will create a simple JSONP request, and a simple server to back it up.

Getting ready

We will make a simplified implementation of the server we used in previous examples, so we need Node.js and restify (http://mcavage.github.io/node-restify/) installed, either via a package.json definition or a simple npm install.

How to do it...

First, we will create a simple route handler that will return a JSON object:

```javascript
function respond(req, res, next) {
  console.log("Got HTTP " + req.method + " on " + req.url + " responding");
  var hello = [{
    'id': '0',
    'what': 'hi there stranger'
  }];
  res.send(hello);
  return next();
}
```

We could roll our own version that wraps the response into a JavaScript function with a given name, but in order to enable JSONP when using restify, we can simply enable the bundled plugin. This is done by specifying which plugin is to be used:

```javascript
var server = restify.createServer();
server.use(restify.jsonp());
server.get('hi', respond);
```

After this, we just set the server to listen on port 8080:

```javascript
server.listen(8080, function() {
  console.log('%s listening at %s', server.name, server.url);
});
```

The built-in plugin checks the request string for parameters called callback or jsonp, and if one of those is found, the result will be JSONP with the function name of the value passed in that parameter. For example, in our case, if we open the browser at http://localhost:8080/hi, we get the following:

[{"id":"0","what":"hi there stranger"}]

If we access the same URL with the callback parameter or jsonp set, such as http://localhost:8080/hi?callback=great, we should receive the same data wrapped with that function name:

great([{"id":"0","what":"hi there stranger"}]);

This is where the P in JSONP, which stands for padding, comes into the picture.
So, what we need to do next is create an HTML file where we will show the data from the server, and include two scripts, one for the pure JavaScript approach and another for the jQuery way:

```html
<b>Hello far away server: </b>
<div id="data">loading</div>
<hr/>
<div id="oneMoreTime">...</div>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
<script src="example.js"></script>
<script src="exampleJQuery.js"></script>
```

We can proceed with the creation of example.js, where we create two functions; one will create a script element and set the value of src to http://localhost:8080/hi?callback=cool.run, and the other will serve as a callback upon receiving the data:

```javascript
var cool = (function() {
  var module = {};

  module.run = function(data) {
    document.getElementById('data').innerHTML = data[0].what;
  };

  module.addElement = function() {
    var script = document.createElement('script');
    script.src = 'http://localhost:8080/hi?callback=cool.run';
    document.getElementById('data').appendChild(script);
    return true;
  };

  return module;
}());
```

Afterwards, we only need to call the function that adds the element:

```javascript
cool.addElement();
```

This should read the data from the server and show a result similar to the following:

Hello far away server: hi there stranger

From the cool object, we can run the addElement function directly, as we defined the object as self-executing.

The jQuery example is a lot simpler; we can set the dataType to "jsonp" and everything else is the same as any other AJAX call, at least from the API point of view:

```javascript
$.ajax({
  type: "GET",
  dataType: "jsonp",
  url: 'http://localhost:8080/hi',
  success: function(obj) {
    $('#oneMoreTime').text(obj[0].what);
  }
});
```

We can now use the standard success callback to handle the data received from the server, and we don't have to specify the callback parameter in the request; jQuery will automatically append it to the URL and delegate the call to the success callback.

How it works...

The first large leap we are taking here is trusting the source of the data: the result from the server is evaluated as JavaScript after it is downloaded. There have been some efforts to define a safer JSONP at http://json-p.org/, but they are far from widespread. The download itself is an HTTP GET method, adding another major limitation to usability. Hypermedia as the Engine of Application State (HATEOAS), among other things, defines the use of HTTP methods for the create, update, and delete operations, making JSONP very unsuitable for those use cases.

Another interesting point is how jQuery delegates the call to the success callback. In order to achieve this, a unique function name is created and sent as the callback parameter, for example:

/hi?callback=jQuery182031846177391707897_1359599143721&_=1359599143727

This function later does a callback to the appropriate handler of jQuery.ajax.

There's more...

With jQuery, we can also use a custom function if the server parameter that handles JSONP is not called callback. This is done using the following config:

```javascript
jsonp: false,
jsonpCallback: "myCallback"
```

With JSONP, we are not doing an XMLHttpRequest, so we should not expect the functions and parameters that are used with a regular AJAX call to be executed or filled in as with such a call; it is a very common mistake to expect just that. More on this can be found in the jQuery documentation at http://api.jquery.com/category/ajax/.
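For completeness, rolling our own padding, as mentioned at the start of the How to do it... section, could look roughly like the following sketch. It assumes restify's queryParser plugin is enabled (server.use(restify.queryParser());) so that the callback query parameter shows up in req.params; treat it as an illustration rather than a replacement for the bundled plugin:

```javascript
// A hand-rolled alternative to restify.jsonp(): wrap the JSON body
// with the function name passed in the callback query parameter.
function respondJsonp(req, res, next) {
  var hello = [{ 'id': '0', 'what': 'hi there stranger' }];
  var body = JSON.stringify(hello);
  if (req.params.callback) {
    // Pad the JSON with the requested function name.
    res.setHeader('Content-Type', 'application/javascript');
    res.end(req.params.callback + '(' + body + ');');
  } else {
    // No callback requested; fall back to plain JSON.
    res.setHeader('Content-Type', 'application/json');
    res.end(body);
  }
  return next();
}
```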

So, what is Markdown?

Packt
02 Sep 2013
3 min read
Markdown is a lightweight markup language that simplifies the workflow of web writers. It was created in 2004 by John Gruber, with contributions and feedback from Aaron Swartz. Markdown was described by John Gruber as:

"A text-to-HTML conversion tool for web writers. Markdown allows you to write using an easy-to-read, easy-to-write plain text format, then convert it to structurally valid XHTML (or HTML)."

Markdown is two different things:

- A simple syntax to create documents in plain text
- A software tool written in Perl that converts the plain text formatting to HTML

Markdown's formatting syntax was designed with simplicity and readability as design goals. We add rich formatting to plain text without the feeling that we are writing in a markup language.

The main features of Markdown

Markdown is:

- Easy to use: Markdown has an extremely simple syntax that you can learn quickly
- Fast: Writing is much faster than with HTML; we can dramatically reduce the time we spend crafting HTML tags
- Clean: We can clearly read and write documents that are always translated into HTML without mistakes or errors
- Flexible: It is suitable for many things, such as writing on the Internet, e-mails, and creating presentations
- Portable: Documents are just plain text; we can edit Markdown with any basic text editor on any operating system
- Made for writers: Writers can focus on distraction-free writing

Here, we can see a quick comparison of the same document in HTML and in Markdown; both achieve the same final result.

The following code is written in HTML:

```html
<h1>Markdown</h1>
<p>This is a <strong>simple</strong> example of Markdown.</p>
<h2>Features:</h2>
<ul>
  <li>Simple</li>
  <li>Fast</li>
  <li>Portable</li>
</ul>
<p>Check the <a href="http://daringfireball.net/projects/markdown/">official website</a>.</p>
```

The following code is the equivalent document written in Markdown:

```
# Markdown

This is a **simple** example of Markdown.

## Features:

- Simple
- Fast
- Portable

Check the [official website].

[official website]: http://daringfireball.net/projects/markdown/
```

Summary

In this article, we learned the basics of Markdown and got to know its features. We also saw how convenient Markdown is, thus proving the fact that it's made for writers.

Resources for Article:

Further resources on this subject:
- Generating Reports in Notebooks in RStudio [Article]
- Database, Active Record, and Model Tricks [Article]
- Formatting and Enhancing Your Moodle Materials: Part 1 [Article]

jQuery refresher

Packt
30 Aug 2013
6 min read
If you haven't used jQuery in a while, that's okay, we'll get you up to speed very quickly. The first thing to realize is that the Document.Ready function is extremely important when using UI. Although page loading happens incredibly fast, we always want our DOM (the HTML content) to be loaded before our UI code gets applied. Otherwise, we have nothing to apply it to! We want to place our code inside the Document.Ready function, and we will be writing it the shorthand way, as we did previously. Please remove the UI checking code that you previously placed in your header:

```javascript
$(function() {
  // Your code here is called only once the DOM is completely loaded
});
```

Easy enough. Let's refresh on some jQuery selectors. We'll be using these a lot in our examples so we can manipulate our page. I'll write out a few DOM elements next and how you can select them. I will apply hide() to them so we know what's been selected and hidden. Feel free to place the JavaScript portion in your header script tags and the HTML elements within your <body> tags as follows:

Selecting elements (plain HTML tags) as follows:

```javascript
$('p').hide();
```
```html
<p>This is a paragraph</p>
<p>And here is another</p>
<p>All paragraphs will go hidden!</p>
```

Selecting classes as follows:

```javascript
$('.edit').hide();
```
```html
<p>This is an intro paragraph</p>
<p class="edit">But this will go hidden!</p>
<p>Another paragraph</p>
<p class="edit">This will also go hidden!</p>
```

Selecting IDs as follows:

```javascript
$('#box').hide();
```
```html
<div id="box">Hide the Box </div>
<div id="house">Just a random divider</div>
```

Those are the three basic selectors. We can get more advanced and use the CSS3 selectors as follows:

```javascript
$("input[type=submit]").hide();
```
```html
<form>
  <input type="text" name="name" />
  <input type="submit" />
</form>
```

Lastly, you can chain your DOM tree to hide elements more specifically:

```javascript
$("table tr td.hidden").hide();
```
```html
<table>
  <tbody>
    <tr>
      <td>Data</td>
      <td class="hidden">Hide Me</td>
    </tr>
  </tbody>
</table>
```

Step 3 – console.log is your best friend

I brought up that developing with the console open is very helpful. When you need to know details about a JavaScript item you have, whether it be the typeof type or the value, a friend of yours is the console.log() method. Notice that it is always in lowercase. This allows you to print things to the console rather than somewhere on your page. For example, if I were having trouble figuring out what a value was returning to me, I would simply do the following:

```javascript
function add(a, b) {
  return a + b;
}
var total = add(5, 20);
console.log(total);
```

This will give me the result I wanted to know quickly and easily. Older versions of Internet Explorer do not support console logging and will prevent your JavaScript from running once it hits a console.log method. Make sure to comment out or remove all the console logs before releasing a live project, or else all the IE users will have a serious problem.

Step 4 – creating the slider widget

Let's get busy! Open your template file and let's create a DOM element to attach a slider widget to. And to make it more interesting, we are also going to add an additional DIV to show a text value. Here is what I placed in my <body> tag:

```html
<div id="slider"></div>
<div id="text"></div>
```

It doesn't have to be a <div> tag, but it's a good generic block-level element to use. Next, to attach a slider element, we place the following in our (empty) <script> tags:

```javascript
$(function() {
  var my_slider = $("#slider").slider();
});
```

Refresh your page, and you will have a widget that can slide along a bar.
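Because we saved the reference in my_slider, we can already drive the widget from outside its initial setup. This short sketch uses the documented jQuery UI slider methods API, which falls under the Methods category discussed next:

```javascript
// Run inside the same ready handler, so my_slider is in scope.
var current = my_slider.slider('value'); // get the current handle position
my_slider.slider('value', 25);           // move the handle to 25
my_slider.slider('option', 'max', 100);  // change an option after setup
// my_slider.slider('destroy');          // remove the widget entirely
```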
If you don't see a slider, first check your browser's developer tools console to see if there are any JavaScript errors. If you still don't see any, make sure you don't have a JavaScript blocker on!

The reason we assign a variable to the slider is that, later on, we may want to reference its options, which you'll see next. You are not required to do this, but if you want to access the slider outside of its initial setup, you must give it a variable name. Our widget doesn't do much right now, but it feels cool to finally make something, whatever it is!

Let's break down a few things we can customize. There are three categories:

- Options: These are defined in a JavaScript object ({}) and determine how you want your widget to behave when it's loaded; for example, you could set your slider to have minimum and maximum values
- Events: These are always functions, and they are triggered when a user does something to your item
- Methods: You can use methods to destroy a widget, get and set values from outside of the widget, and even set different options from what you started with

To play with these categories, the easiest start is to adjust the options. Let's do it by creating an empty object inside our slider:

```javascript
var my_slider = $("#slider").slider({});
```

Then we'll create a minimum and maximum value for our slider using the following code:

```javascript
var my_slider = $("#slider").slider({
  min: 1,
  max: 50
});
```

Now our slider will accept and move along a bar with 50 values. There are many more options in the UI API, located at api.jquery.com under slider. You'll find many other options we won't have time to cover, such as a step option to make the slider count every two digits:

```javascript
var my_slider = $("#slider").slider({
  min: 1,
  max: 50,
  step: 2
});
```

If we want to attach this to the text field we created in the DOM, a good way to start is by assigning the minimum value to the DIV; this way we only have to change it in one place:

```javascript
var min = my_slider.slider('option', 'min');
$("#text").html(min);
```

Next, we want to update the text value every time the slider is moved. Easy enough; this will introduce us to our first event. Let's add it:

```javascript
var my_slider = $("#slider").slider({
  min: 1,
  max: 50,
  step: 2,
  change: function(event, ui) {
    $("#text").html(ui.value);
  }
});
```

Summary

This article described the basis for all widgets: creating them and setting the options, events, and methods. That is the very simple pattern that handles everything for us.

Resources for Article:

Further resources on this subject:
- Using jQuery and jQueryUI Widget Factory plugins with RequireJS [Article]
- jQuery Animation: Tips and Tricks [Article]
- New Effects Added by jQuery UI [Article]

Handling sessions and users

Packt
30 Aug 2013
4 min read
Getting ready

We will work from the app.py file in the sched directory, and from the models.py file.

How to do it...

Flask provides a session object, which behaves like a Python dictionary and persists automatically across requests. In your Flask application code, you can do the following:

```python
from flask import session
# ... in a request ...
session['spam'] = 'eggs'
# ... in another request ...
spam = session.get('spam')  # 'eggs'
```

Flask-Login provides a simple means to track a user in Flask's session. Update requirements.txt:

```
Flask
Flask-Login
Flask-Script
Flask-SQLAlchemy
WTForms
```

Then:

```
$ pip install -r requirements.txt
```

We can then load Flask-Login into sched's request handling, in app.py:

```python
from flask.ext.login import LoginManager, current_user
from flask.ext.login import login_user, logout_user
from sched.models import User

# Use Flask-Login to track the current user in Flask's session.
login_manager = LoginManager()
login_manager.setup_app(app)
login_manager.login_view = 'login'

@login_manager.user_loader
def load_user(user_id):
    """Flask-Login hook to load a User instance from ID."""
    return db.session.query(User).get(user_id)
```

Flask-Login requires four methods on the User object, inside class User in models.py:

```python
def get_id(self):
    return str(self.id)

def is_active(self):
    return True

def is_anonymous(self):
    return False

def is_authenticated(self):
    return True
```

Flask-Login provides a UserMixin (flask.ext.login.UserMixin) if you prefer to use its default implementation.

We then provide routes to log the user in when authenticated and to log out. In app.py:

```python
@app.route('/login/', methods=['GET', 'POST'])
def login():
    if current_user.is_authenticated():
        return redirect(url_for('appointment_list'))
    form = LoginForm(request.form)
    error = None
    if request.method == 'POST' and form.validate():
        email = form.username.data.lower().strip()
        password = form.password.data.lower().strip()
        user, authenticated = User.authenticate(db.session.query, email, password)
        if authenticated:
            login_user(user)
            return redirect(url_for('appointment_list'))
        else:
            error = 'Incorrect username or password.'
    return render_template('user/login.html', form=form, error=error)

@app.route('/logout/')
def logout():
    logout_user()
    return redirect(url_for('login'))
```

We then decorate every view function that requires a valid user, in app.py:

```python
from flask.ext.login import login_required

@app.route('/appointments/')
@login_required
def appointment_list():
    # ...
```

How it works...

On login_user, Flask-Login gets the user object's ID from User.get_id and stores it in Flask's session. Flask-Login then sets a before_request handler to load the user instance into the current_user object, using the load_user hook we provide. The logout_user function then removes the relevant bits from the session.

If no user is logged in, then current_user will provide an anonymous user object, which results in current_user.is_anonymous() returning True and current_user.is_authenticated() returning False; this allows application and template code to base logic on whether the user is valid. (Flask-Login puts current_user into all template contexts.) You can use User.is_active to make user accounts invalid without actually deleting them, by returning False as appropriate.

View functions decorated with login_required will redirect the user to the login view if the current user is not authenticated, without calling the decorated function.

There's more...

Flask's session supports the display of messages and protection against request forgery.
Flashing messages

When you want to quickly display a simple message to indicate a successful operation or a failure, you can use Flask's flash messaging, which loads the message into the session until it is retrieved. In application code, inside request handling code:

```python
from flask import flash

flash('Successfully did that thing.', 'success')
```

In template code, where you can use the 'success' category for conditional display:

```
{% for cat, m in get_flashed_messages(with_categories=true) %}
<div class="alert">{{ m }}</div>
{% endfor %}
```

Cross-site request forgery protection

Malicious web code will attempt to forge data-altering requests for other web services. To protect against forgery, you can load a randomized token into the session and into the HTML form, and reject the request when the two do not match. This is provided by the Flask-SeaSurf extension (pythonhosted.org/Flask-SeaSurf/) or the Flask-WTF extension, which integrates WTForms (pythonhosted.org/Flask-WTF/).

Summary

This article explained how to keep users logged in for ongoing requests after authentication. It shed light on how Flask provides a session object, which behaves like a Python dictionary and persists automatically across requests. It also spoke about coding a Flask application. We got acquainted with flashing messages and cross-site request forgery protection.

Resources for Article:

Further resources on this subject:
- Python Testing: Installing the Robot Framework [Article]
- Getting Started with Spring Python [Article]
- Creating Skeleton Apps with Coily in Spring Python [Article]
Using third-party plugins (non-native plugins)

Packt
30 Aug 2013
4 min read
(For more resources related to this topic, see here.) We want to focus on a particular case here, because we have already seen how to add a new property, and for some components we can easily add the plugins or features property and then add the plugin configuration. But the components that have native plugins supported by the API do not allow us to do so, for instance, the grid panel from Ext JS: we can only use the plugins and features that are available within Sencha Architect. What if we want to use a third-party plugin or feature such as the Filter Plugin? It is possible, but we need to use an advanced feature of Sencha Architect, which is creating overrides. A disclaimer about overrides: they should be avoided. Whenever you can use a set method to change a property, use it. Overrides should be your last resort and should be used very carefully; used carelessly, they can change the behavior of a component and something may stop working. But we will demonstrate how to do it in a safe way! We will use the BooksGrid as an example in this topic. Let's say we need to use the Filter Plugin on it, so we need to create an override first. To do so, select the BooksGrid from the project inspector, open the code editor, and click on the Create Override button (Step 1). Sencha Architect will display a warning (Step 2). We can click on Yes to continue. The code editor will open (Step 3) the override class so we can enter our code. In this case, we have complete freedom to do whatever we need to in this file. So let's add the features declaration for the plugin and also the initComponent() function, as shown in the following screenshot (Step 4); a consolidated sketch of this override follows these steps. One thing that is very important is that we must call the callParent() function (callOverridden() is already deprecated in Ext JS 4.1 and later versions) to make sure we continue to have all the original behavior of the component (in this case the BooksGrid class). The only thing we want to do is add a new feature to it. And we are done with the override! To go back to the original class we can use the navigator, as shown in the following screenshot. Notice that requires was added to the class Packt.view.override.BooksGrid, which is the class we just wrote. The next step is to add the plugin to the class requires. To do so, we need to select the BooksGrid, go to the config panel, and add the requires with the name of the plugin (Ext.ux.grid.FiltersFeature). Some developers like to add the plugin file directly as a JavaScript file in app.html/index.html. Sencha provides the dynamic loading feature, so let's take advantage of it and use it! First, we cannot forget to add the ux folder with the plugin to the project root folder, as shown in the following screenshot. Next, we need to set the application loader. Select the Application from the project inspector (Step 5), then go to the config panel, locate the Loader Config property, click on the + icon (Step 6), then click on the arrow icon (Step 7). The details of the loader will be available on the config panel. Locate the paths property and click on it (Step 8). The code editor will open with the loader path's default value, which is {"Ext": "."} (Step 9). Do not remove it; simply add the path of the Ext.ux namespace, which is the ux folder (Step 10). And we are almost done!
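Before moving on to the column configuration, here is a consolidated sketch of what the override class described above might look like. The class names follow those shown in this recipe, but the exact body Sencha Architect generates may differ:

Ext.define('Packt.view.override.BooksGrid', {
    override: 'Packt.view.BooksGrid',

    requires: [
        'Ext.ux.grid.FiltersFeature'
    ],

    // Declare the third-party feature (its ftype is 'filters')
    features: [{
        ftype: 'filters',
        local: true
    }],

    initComponent: function() {
        // Keep all of the original BooksGrid behavior
        this.callParent(arguments);
    }
});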
We need to add the filterable option to each column where we want to allow the user to filter values (Step 11): we can use the config panel to add a new property (we need to select the desired column from the project inspector first; always remember to do this). Then we can choose what type of property we want to add (Steps 12 and 14). For example, we can add filterable: true (Step 13) for the Id column and filterable: {type: 'string'} (Steps 15 and 16) for the Name column, as shown in the following screenshot. And the plugin is ready to be used! Summary In this article we learned some useful tricks that can help in our everyday tasks while working on Sencha projects with Sencha Architect. We also covered advanced topics such as creating overrides to use third-party plugins and features, and implementing multilingual apps.
Customization

Packt
29 Aug 2013
18 min read
(For more resources related to this topic, see here.) Now that you've got a working multisite installation, we can start to add some customizations. Customizations can come in a few different forms. You're probably aware of the customizations that can be made via WordPress plugins and custom WordPress themes. Another way we can customize a multisite installation is by creating a landing page that displays information about each blog in the multisite network, as well as displaying information about the author for that individual blog. I wrote a blog post shortly after WordPress 3.0 came out detailing how to set this landing page up. At the time, I was working for a local newspaper and we were setting up a blog network for some of our reporters to blog about politics (being in Iowa, politics are a pretty big deal here, especially around Caucus time). You can find the post at http://www.longren.org/how-to-wordpress-3-0-multi-site-blog-directory/ if you'd like to read it. There's also a blog-directory.zip file attached to the post that you can download and use as a starting point. Before we get into creating the landing page, let's get the really simple stuff out of the way and briefly go over how themes and plugins are managed in WordPress multisite installations. We'll start with themes. Themes can be activated network-wide, which is really nice if you have a theme that you want every site in your blog network to use. You can also activate a theme for an individual blog, instead of activating the theme for the entire network. This is helpful if one or two individual blogs need to have a totally unique theme that you don't want to be available to the other blogs. Theme management You can install themes on a multisite installation the same way you would with a regular WordPress install. Just upload the theme folder to your wp-content/themes folder to install the theme. Installing a theme is only part of the process for individual blogs to use the themes; you'll need to activate them for the entire blog network or for specific blogs. To activate a theme for an entire network, click on Themes and then click on Installed Themes in the Network Admin dashboard. Check the themes that you want to enable, select Network Enable in the Bulk Actions drop-down menu, and then click on the Apply button. That's all there is to activating a theme (or multiple themes) for an entire multisite network. The individual blog owners can apply the theme just as you would in a regular, nonmultisite WordPress installation. To activate a theme for just one specific blog and not the entire network, locate the target blog using the Sites menu option in the Network Admin dashboard. After you've found it, put your mouse cursor over the blog URL or domain. You should see the action menu appear immediately under the blog URL or domain. The action menu includes options such as Edit, Dashboard, and Deactivate. Click on the Edit action menu item and then navigate to the Themes tab. To activate an individual theme, just click on Enable below the theme that you want to activate. Or, if you want to activate multiple themes for the blog, check all the themes you want through the checkboxes on the left-hand side of each theme from the list, select Enable in the Bulk Actions drop-down menu, and then click on the Apply button. An important thing to keep in mind is that themes that have been activated for the entire network won't be shown here. Now the blog administrator can apply the theme to their blog just as they normally would. 
Plugin management To install a plugin for network use, upload the plugin folder to wp-content/plugins/ as you normally would. Unlike themes, plugins cannot be activated on a per-site basis. As network administrator, you can add a plugin to the Plugins page for all sites, but you can't make a plugin available to one specific site; it's all or nothing. You'll also want to make sure that you've enabled the Plugins page for the sites that need it. You can enable the Plugins page by visiting the Network Admin dashboard and then navigating to the Network Settings page. At the bottom of that page you should see a Menu Settings section where you can check a box next to Plugins to enable the Plugins page. Make sure to click on the Save Changes button at the bottom or nothing will change. You can see the Menu Settings section in the following screenshot; that's where you'll want to enable the Plugins page. Enabling the Plugins page After you've ensured that the Plugins page is enabled, individual site administrators will be able to enable or disable plugins as they normally would. To enable a plugin for the entire network, go to the Network Admin dashboard, mouse over the Plugins menu item, and then click on Installed Plugins. This will look pretty familiar to you; it looks much like the Installed Plugins page on a typical single-site WordPress installation. The following screenshot shows the installed Plugins page: Enable plugins for the entire network You'll notice below each plugin there's some text that reads Network Activate. I bet you can guess what clicking that will do: clicking on the Network Activate link will activate that plugin for the entire network. That's all there is to basic plugin setup in WordPress multisite. There's another plugin feature that is often overlooked in WordPress multisite, and that's must-use plugins. These are plugins that are required for every blog or site on the network. Must-use plugins are installed in the wp-content/mu-plugins/ folder, but they must be single-file plugins; files within folders won't be read. You can't deactivate or activate must-use plugins: if they exist in the mu-plugins folder, they're used. They're entirely hidden from the Plugins pages, so individual site administrators won't even see them or know they're there. Must-use plugins aren't commonly needed, but it's good information to have just in case; some plugins, especially domain mapping plugins, need to be installed in mu-plugins and need to be activated before the normal plugins. A minimal sketch of a must-use plugin follows below.
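To make the must-use idea concrete, here is a minimal, hypothetical example. Saved as a single file at wp-content/mu-plugins/network-notice.php, it would run on every site in the network with no activation step (the plugin name and notice text are invented for illustration):

<?php
/*
Plugin Name: Network Notice (example)
Description: Displays a short admin notice on every site in the network.
*/

// Print a notice at the top of every admin screen, network-wide.
function example_network_notice() {
    echo '<div class="updated"><p>Scheduled maintenance this Sunday.</p></div>';
}
add_action( 'admin_notices', 'example_network_notice' );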
Third-party plugins and plugins for plugin management We should also discuss some of the plugins that make managing plugins and themes on WordPress multisite installations a bit easier. One of the most popular is called Multisite Plugin Manager, developed by Aaron Edwards of UglyRobot.com; it was previously known as WPMU Plugin Manager. The plugin can be obtained from the WordPress Plugin Directory at http://wordpress.org/plugins/multisite-plugin-manager/. Here's a quick rundown of some of its features:

Select which plugins specific sites have access to
Set certain plugins to autoactivate for new blogs or sites
Activate/deactivate a plugin on all network sites
Assign special plugin access permissions to specific network sites

Another plugin that you may find useful is called WordPress MU Domain Mapping, which allows you to easily map any blog or site to an external domain. You can find this plugin in the WordPress Plugin Directory at http://wordpress.org/plugins/wordpress-mu-domain-mapping/. There's one other plugin I want to mention; the only drawback is that it's not a free plugin. It's called WP Multisite Replicator, and you can probably guess what it does. This plugin allows you to set up a "template" blog or site and then replicate that site when adding new sites or blogs. The idea is that you create a blog or site that has all the features other sites in your network will need, and then easily replicate it when creating a new site or blog. It copies widgets, themes, and plugin settings to the new site or blog, which makes deploying new, identical sites extremely easy. It's not an expensive plugin, costing about $36 at the time of writing, which is well worth it in my opinion if you're going to be creating lots of sites that have the same basic feature set. WP Multisite Replicator can be found at http://wpebooks.com/replicator/. Creating a blog directory / landing page Now that we've got the basic theme and plugin management taken care of, it's time to move on to creating a blog directory or landing page, whichever you prefer to call it. From this point on I'll refer to it as a blog directory. You can see a basic version of what we're going to make in the following screenshot. The users on my example multisite installation, at http://multisite.longren.org/, are Kayla and Sydney, my wife and daughter. Blog directory example As I mentioned earlier in this article, I wrote a post about creating this blog directory back when WordPress 3.0 was first released in 2010. I'll be using that post as the basis for most of what we'll do to create the blog directory, with some things changed around so this will integrate more nicely into whatever theme you're using on the main network site. The first thing we need to do is create a basic WordPress page template that we can apply to a newly created WordPress page. This template will contain the HTML structure for the blog directory and will dictate where the blog names are shown and where the recent posts and blog description are displayed. There's no reason to stick with the following blog directory template exactly; you can take the code and add or remove elements, such as the recent posts if you don't want to show them. You'll want to implement this blog directory template as a child theme in WordPress. To do that, just make a new folder in wp-content/themes/. I typically name my child theme folders after their parent themes, so the child theme folder I made was wp-content/themes/twentythirteen-tyler/.
Once you've got the child theme folder created, make a new file called style.css and make sure it has the following code at the top:

/*
Theme Name: Twenty Thirteen Child Theme
Theme URI: http://yourdomain.com
Description: Child theme for the Twenty Thirteen theme
Author: Your name here
Author URI: http://example.com/about/
Template: twentythirteen
Version: 0.1.0
*/

/* ================ */
/* = The 1Kb Grid = */ /* 12 columns, 60 pixels each, with 20 pixel gutter */
/* ================ */

.grid_1 { width: 60px; }
.grid_2 { width: 140px; }
.grid_3 { width: 220px; }
.grid_4 { width: 300px; }
.grid_5 { width: 380px; }
.grid_6 { width: 460px; }
.grid_7 { width: 540px; }
.grid_8 { width: 620px; }
.grid_9 { width: 700px; }
.grid_10 { width: 780px; }
.grid_11 { width: 860px; }
.grid_12 { width: 940px; }

.column {
    margin: 0 10px;
    overflow: hidden;
    float: left;
    display: inline;
}
.row {
    width: 960px;
    margin: 0 auto;
    overflow: hidden;
}
.row .row {
    margin: 0 -10px;
    width: auto;
    display: inline-block;
}
.author_bio {
    border: 1px solid #e7e7e7;
    margin-top: 10px;
    padding-top: 10px;
    background: #ffffff url('images/sign.png') no-repeat right bottom;
    z-index: -99999;
}
small { font-size: 12px; }
.post_count {
    text-align: center;
    font-size: 10px;
    font-weight: bold;
    line-height: 15px;
    text-transform: uppercase;
    float: right;
    margin-top: -65px;
    margin-right: 20px;
}
.post_count a {
    color: #000;
}
#content a {
    text-decoration: none;
    -webkit-transition: text-shadow .1s linear;
    outline: none;
}
#content a:hover {
    color: #2DADDA;
    text-shadow: 0 0 6px #278EB3;
}

The preceding code adds the styling to your child theme and also tells WordPress the name of your child theme. You can set a custom theme name by changing the Theme Name line to whatever you like. The only required fields in that big comment block are Theme Name and Template; Template should be set to the parent theme's folder name. Now create another file in your child theme folder and name it blog-directory.php.
The remaining blocks of code need to go into that blog-directory.php file:

<?php
/**
 * Template Name: Blog Directory
 *
 * A custom page template with a sidebar.
 * Selectable from a dropdown menu on the add/edit page screen.
 *
 * @package WordPress
 * @subpackage Twenty Thirteen
 */
?>
<?php get_header(); ?>
<div id="container" class="onecolumn">
<div id="content" role="main">
<?php the_post(); ?>
<div id="post-<?php the_ID(); ?>" <?php post_class(); ?>>
<?php if ( is_front_page() ) { ?>
  <h2 class="entry-title"><?php the_title(); ?></h2>
<?php } else { ?>
  <h1 class="entry-title"><?php the_title(); ?></h1>
<?php } ?>
<div class="entry-content">
<!-- start blog directory -->
<?php
// Get the authors from the database ordered randomly
global $wpdb;
$query = "SELECT ID, user_nicename from $wpdb->users WHERE ID != '1' ORDER BY 1 LIMIT 50";
$author_ids = $wpdb->get_results($query);
// Loop through each author
foreach($author_ids as $author) {
    // Get user data
    $curauth = get_userdata($author->ID);
    // Get link to author page
    $user_link = get_author_posts_url($curauth->ID);
    // Get blog details for the author's primary blog ID
    $blog_details = get_blog_details($curauth->primary_blog);
    $postText = "posts";
    if ($blog_details->post_count == "1") {
        $postText = "post";
    }
    $updatedOn = strftime("%m/%d/%Y at %l:%M %p", strtotime($blog_details->last_updated));
    if ($blog_details->post_count == "") {
        $blog_details->post_count = "0";
    }
    $posts = $wpdb->get_col("SELECT ID FROM wp_".$curauth->primary_blog."_posts WHERE post_status='publish' AND post_type='post' AND post_author='$author->ID' ORDER BY ID DESC LIMIT 5");
    $postHTML = "";
    $i = 0;
    foreach($posts as $p) {
        $postdetail = get_blog_post($curauth->primary_blog, $p);
        if ($i == 0) {
            $updatedOn = strftime("%m/%d/%Y at %l:%M %p", strtotime($postdetail->post_date));
        }
        $postHTML .= "&#149; <a href=\"$postdetail->guid\">$postdetail->post_title</a><br />";
        $i++;
    }
?>

The preceding code sets up the template and queries the WordPress database for authors. In WordPress multisite, users who have the Author permission type have a blog on the network.
There's also code for grabbing posts from each of the network sites and displaying the recent posts from them:

<div class="author_bio">
<div class="row">
<div class="column grid_2">
<a href="<?php echo $blog_details->siteurl; ?>">
<?php echo get_avatar($curauth->user_email, '96', 'http://www.gravatar.com/avatar/ad516503a11cd5ca435acc9bb6523536'); ?>
</a>
</div>
<div class="column grid_6">
<a href="<?php echo $blog_details->siteurl; ?>" title="<?php echo $curauth->display_name; ?> - <?=$blog_details->blogname?>">
<?php //echo $curauth->display_name; ?> <?=$curauth->display_name;?>
</a><br />
<small><strong>Updated <?=$updatedOn?></strong></small><br />
<?php echo $curauth->description; ?>
</div>
<div class="column grid_3">
<h3>Recent Posts</h3>
<?=$postHTML?>
</div>
</div>
<span class="post_count">
<a href="<?php echo $blog_details->siteurl; ?>" title="<?php echo $curauth->display_name; ?>">
<?=$blog_details->post_count?><br /><?=$postText?>
</a>
</span>
</div>
<?php } ?>
<!-- end blog directory -->
<?php wp_link_pages( array( 'before' => '<div class="page-link">' . __( 'Pages:', 'twentythirteen' ), 'after' => '</div>' ) ); ?>
<?php edit_post_link( __( 'Edit', 'twentythirteen' ), '<span class="edit-link">', '</span>' ); ?>
</div><!-- .entry-content -->
</div><!-- #post-<?php the_ID(); ?> -->
<?php comments_template( '', true ); ?>
</div><!-- #content -->
</div><!-- #container -->
<?php //get_sidebar(); ?>
<?php get_footer(); ?>

Once you've got your blog-directory.php template file created, we can get started setting up the page that will serve as our blog directory. You'll need to set the root site's theme to your child theme; do it just as you would on a non-multisite WordPress installation. Before we go further, let's create a couple of network sites so we have something to see on our blog directory. Go to the Network Admin dashboard, mouse over the Sites menu option in the left-hand side menu, and then click on Add New. If you're using a directory network type, as I am, the value you enter for the Site Address field will be the path to the directory that site sits in. So, if you enter tyler as the Site Address value, the site can be reached at http://multisite.longren.org/tyler/. The settings I used to set up multisite.longren.org/tyler/ can be seen in the following screenshot. You'll probably want to add a couple of sites just so you get a good idea of what your blog directory page will look like. Example individual site setup Now we can set up the actual blog directory page. On the main dashboard (that is, /wp-admin/index.php), mouse over the Pages menu item on the left-hand side of the page and then click on Add New to create a new page. I usually name this page Home, as I use the blog directory as the first page visitors see when visiting the site. From there, visitors can choose which blog they want to visit and are also shown a list of the most recent posts from each blog. There's no need to enter any content on the page unless you want to. The important part is selecting the Blog Directory template: before you publish your new Home / blog directory page, make sure that you select Blog Directory as the Template value in the Page Attributes section. An example Home / blog directory page can be seen in the following screenshot: Example Home / blog directory page setup Once you've got your page looking like the example shown in the previous screenshot, you can go ahead and publish it. The Update button in the previous screenshot will say Publish if you've not yet published the page.
Next you'll want to set the newly created Home / blog directory page as the front page for the site. To do this, mouse over the Settings menu option on the left-hand side of the page and then click on Reading. For the Front page displays value, check A static page (select below); previously, Your latest posts was checked. Then, in the Front Page drop-down menu, select the Home page we just created and click on the Save Changes button at the bottom of the page. I usually don't set anything for the Posts page drop-down menu because I never post to the "parent" site. If you do intend to make posts on the parent site, I'd suggest that you create a new blank page titled Posts and then select that page as your Posts page. The reading settings I use at multisite.longren.org can be seen in the following screenshot: Reading settings setup After you've saved your reading settings, open up your parent site in your browser and you should see something similar to the Blog directory example screenshot shown earlier. Again, there's no need to keep the exact setup I've used in the example blog-directory.php file; you can give it any style or design you want and rearrange the various pieces on the page as you prefer. You should have a decent working knowledge of HTML and CSS to accomplish this, however. You should have a basic blog directory at this point, and if you have experience with PHP, HTML, and CSS, you can extend this basic code and do a whole lot more with it. The number of WordPress plugins available is astounding and they are generally of very good quality; no other CMS can claim anything like it, and I think Automattic has done great things for WordPress in general. Summary You should be able to effectively manage themes and plugins in a multisite installation now. If you set up the code, you've also got a directory showcasing network member content and, more importantly, you now know how to set up and customize a WordPress child theme.

Getting started with your first jQuery plugin

Packt
29 Aug 2013
9 min read
(For more resources related to this topic, see here.) Getting ready Before we dive into development, we need a good idea of how our plugin is going to work. For this, we will write some simple HTML to declare a shape and a button. Each shape will be declared in the CSS, and we will then use JavaScript to toggle which shape is shown by toggling the CSS class appended to it. The aim of this recipe is to help you familiarize yourself both with jQuery plugin development and with the jQuery Boilerplate template. How to do it Our first step is to set up our HTML. For this we need to open up our index.html file. We will need to add two elements: a shape and a wrapper to contain it. The button for changing the shape element will be added dynamically by our JavaScript, and we will then add an event listener to it so that we can change the shape. The HTML code for this is as follows:

<div class="shape_wrapper">
  <div class="shape">
  </div>
</div>

This should be placed in the div tag with class="container" in our index.html file. We then need to define each of the shapes we intend to use in CSS. In this example, we will draw a square, a circle, a triangle, and an oval, all of which can be defined using CSS. The shape we will be manipulating will be 100px * 100px. The following CSS should be placed in your main.css file:

.shape{
  width: 100px;
  height: 100px;
  background: #ff0000;
  margin: 10px 0px;
}
.shape.circle{
  border-radius: 50px;
}
.shape.triangle{
  width: 0;
  height: 0;
  background: transparent;
  border-left: 50px solid transparent;
  border-right: 50px solid transparent;
  border-bottom: 100px solid #ff0000;
}
.shape.oval{
  width: 100px;
  height: 50px;
  margin: 35px 0;
  border-radius: 50px / 25px;
}

Now it's time to get onto the JavaScript. The first step in creating the plugin is to name it; in this case we will call it shapeShift. In the jQuery Boilerplate code, we need to set the value of the pluginName variable to equal shapeShift:

var pluginName = "shapeShift";

Once we have named the plugin, we can edit our main.js file to call it. We call the plugin by selecting the element using jQuery and creating an instance of our plugin by running .shapeShift(), as follows:

(function(){
  $('.shape_wrapper').shapeShift();
}());

For now this will do nothing, but it will enable us to test our plugin once we have written the code. To ensure the flexibility of our plugin, we will store our shapes as part of the defaults object literal, meaning that, in the future, the shapes used by the plugin can be changed without the plugin code itself changing. We will also set the class name of the shape in the defaults object literal so that this too can be chosen by the plugin user. After doing this, your defaults object should look like the following:

defaults = {
  shapes: ["square", "circle", "triangle", "oval"],
  shapeClass: ".shape"
};

When the .shapeShift() function is triggered, it will create an instance of our plugin and then fire the init function. For this instance of our plugin, we will store the current shape location in the array; this is done by adding it to this using this.shapeRef = 0. The reason we store the shape reference on this is that it ties it to this instance of the plugin, so it will not be shared with other instances of the same plugin on the same page. Once we have stored the shape reference, we need to apply the first shape class to the div element.
The simplest way to do this is to use jQuery to get the shape element and then use addClass to add the shape class, as follows:

$(this.element).find(this.options.shapeClass).addClass(this.options.shapes[this.shapeRef]);

The final step in our init function is to add the button that enables the user to change the shape. To do this, we simply append a button element to the shape container as follows:

$(this.element).append('<button>Change Shape</button>');

Once we have our button element, we need to wire it up to change the shape. To do this we will create a separate function called changeShape. While we are still in our init function, we can add an event handler to call the changeShape function on the button. For reasons that will become apparent shortly, we use the event delegation form of the jQuery .on() function to do this:

$(this.element).on('click', 'button', this.changeShape);

We now need to create our changeShape function; the first thing we do is change the function name to changeShape. We then change the function declaration to accept a parameter, in this case e. The first thing to note is that this function is called from an event listener on a DOM element, so this is actually the element that was clicked. Because the function was wired up using event delegation, we can find out which instance of the plugin the clicked button belongs to. We do this using the e parameter passed to the function: e is the jQuery event object for the click event, and inside it we find a reference to the element the click handler was originally attached to, which in this case is the element the plugin instance is tied to. To retrieve the plugin instance, we can simply use the jQuery .data() function. The instance of the plugin is stored on the element as data under the key plugin_pluginName, so we retrieve it as follows:

var plugin = $(e.delegateTarget).data("plugin_" + pluginName);

Now that we have the plugin instance, we are able to access everything it contains. The first thing we need to do is remove the current shape class from the shape element in the DOM. To do this, we find the shape element, look up the currently displayed shape in the shapes array, and use the jQuery removeClass function to remove that class. The code starts with a simple jQuery selector that lets us work with the plugin element: $(plugin.element). We then look inside the plugin element to find the actual shape; as the name of the shape class is configurable, we read it from the plugin options using .find(plugin.options.shapeClass). Finally we remove the class of the shape indicated by plugin.shapeRef in the shapes array stored in the plugin options. The full command looks as follows:

$(plugin.element).find(plugin.options.shapeClass).removeClass(plugin.options.shapes[plugin.shapeRef]);

We then need to work out which shape to show next; we know the current shape reference is in plugin.shapeRef, so we just need to work out if we have any more shapes left in the shapes array or if we should start from the beginning.
To do this, we compare the value of plugin.shapeRef to the length of the shapes array minus 1 (we subtract 1 because array indexes start at 0); if the shape reference equals the length of the shapes array minus 1, we know we have reached the last shape, so we reset plugin.shapeRef to 0. Otherwise, we simply increment it by 1, as shown in the snippet:

if((plugin.shapeRef) === (plugin.options.shapes.length - 1)){
  plugin.shapeRef = 0;
}else{
  plugin.shapeRef = plugin.shapeRef + 1;
}

Our final step is to add the new shape class to the shape element; this is achieved by finding the shape element and using the jQuery addClass function to add the shape from the shapes array. This is very similar to the removeClass command we used earlier, with addClass replacing removeClass:

$(plugin.element).find(plugin.options.shapeClass).addClass(plugin.options.shapes[plugin.shapeRef]);

At this point we should have a working plugin; if we fire up the browser and navigate to the index.html file, we should get a square with a button beneath it. Clicking on the button should show the next shape. If your code is working correctly, the shapes should be shown in the order square, circle, triangle, oval, and then loop back to square. As a final test to show that each plugin instance is tied to one element, we will add a second element to the page. This is as simple as duplicating the original shape_wrapper to create a second one:

<div class="shape_wrapper">
  <div class="shape">
  </div>
</div>

If everything is working correctly, when loading the index.html page we will have two squares, each with a button underneath, and clicking a button will change only the shape above it. Summary This article explained how to create your first jQuery plugin, one that manipulates the shape of a div element. We achieved this by writing some HTML to declare a shape and a button, declaring each shape in the CSS, and then using JavaScript to toggle which shape is shown by toggling the CSS class appended to it.
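For reference, here is the complete changeShape function assembled from the fragments above. This is a sketch; it assumes the jQuery Boilerplate structure this recipe builds on, where pluginName is in scope:

function changeShape(e) {
  // Recover the plugin instance from the element the handler was delegated to
  var plugin = $(e.delegateTarget).data("plugin_" + pluginName);

  // Remove the class for the currently displayed shape
  $(plugin.element).find(plugin.options.shapeClass)
    .removeClass(plugin.options.shapes[plugin.shapeRef]);

  // Advance to the next shape, wrapping back to the start
  if (plugin.shapeRef === (plugin.options.shapes.length - 1)) {
    plugin.shapeRef = 0;
  } else {
    plugin.shapeRef = plugin.shapeRef + 1;
  }

  // Apply the class for the new shape
  $(plugin.element).find(plugin.options.shapeClass)
    .addClass(plugin.options.shapes[plugin.shapeRef]);
}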

Nokogiri

Packt
27 Aug 2013
8 min read
(For more resources related to this topic, see here.) Spoofing browser agents When you request a web page, you send meta-information along with your request in the form of headers. One of these headers, User-Agent, informs the web server which web browser you are using. By default open-uri, the library we are using to scrape, will report your browser as Ruby. There are two issues with this. First, it makes it very easy for an administrator to look through their server logs and see if someone has been scraping the server; Ruby is not a standard web browser. Second, some web servers will deny requests that are made by a nonstandard browsing agent. We are going to spoof our browser agent so that the server thinks we are just another Mac using Safari. An example is as follows:

# import nokogiri to parse and open-uri to scrape
require 'nokogiri'
require 'open-uri'

# this string is the browser agent for Safari running on a Mac
browser = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/536.30.1 (KHTML, like Gecko) Version/6.0.5 Safari/536.30.1'

# create a new Nokogiri HTML document from the scraped URL and pass in
# the browser agent as the User-Agent header
doc = Nokogiri::HTML(open('http://nytimes.com', 'User-Agent' => browser))

# you can now go along with your request as normal
# you will show up as just another Safari user in the logs
puts doc.at_css('h2 a').to_s

Caching It's important to remember that every time we scrape content, we are using someone else's server resources. While it is true that we are not using any more resources than a standard web browser request, the automated nature of our requests leaves the potential for abuse. In the previous examples we searched for the top headline on The New York Times website. What if we took this code and put it in a loop because we always want to know the latest top headline? The code would work, but we would be launching a mini denial of service (DoS) attack on the server by hitting their page potentially thousands of times every minute. Many servers, Google being one example, have automatic blocking set up to prevent these rapid requests: they ban IP addresses that access their resources too quickly. This is known as rate limiting. To avoid being rate limited, and in general to be a good netizen, we need to implement a caching layer. Traditionally in a large app this would be implemented with a database. That's a little out of scope for this article, so we're going to build our own caching layer with a simple TXT file. We will store the headline in the file and then check the file modification date to see if enough time has passed before checking for new headlines. Start by creating the cache.txt file in the same directory as your code:

$ touch cache.txt

We're now ready to craft our caching solution:

# import nokogiri to parse and open-uri to scrape
require 'nokogiri'
require 'open-uri'

# set how long in minutes until our data is expired
# multiplied by 60 to convert to seconds
expiration = 1 * 60

# file to store our cache in
cache = "cache.txt"

# Calculate how old our cache is by subtracting its modification time
# from the current time.
# Time.new gets the current time
# The mtime method gets the modification time of a file
cache_age = Time.new - File.new(cache).mtime

# if the cache age is greater than our expiration time
if cache_age > expiration
  # our cache has expired
  puts "cache has expired. fetching new headline"
  # we will now use our code from the quick start to
  # snag a new headline
  # scrape the web page
  data = open('http://nytimes.com')
  # create a Nokogiri HTML Document from our data
  doc = Nokogiri::HTML(data)
  # parse the top headline and clean it up
  headline = doc.at_css('h2 a').content.gsub(/\n/, " ").strip
  # we now need to save our new headline
  # the second File.open parameter "w" tells Ruby to overwrite
  # the old file
  File.open(cache, "w") do |file|
    # we then simply puts our text into the file
    file.puts headline
  end
  puts "cache updated"
else
  # we should use our cached copy
  puts "using cached copy"
  # read cache into a string using the read method
  headline = IO.read("cache.txt")
end

puts "The top headline on The New York Times is ..."
puts headline
Our cache is set to expire in one minute, so assuming it has been one minute since you created your cache.txt file, let's fire up our Ruby script:

$ ruby cache.rb
cache has expired. fetching new headline
cache updated
The top headline on The New York Times is ...
Supreme Court Invalidates Key Part of Voting Rights Act

If we run our script again before another minute passes, it should use the cached copy:

$ ruby cache.rb
using cached copy
The top headline on The New York Times is ...
Supreme Court Invalidates Key Part of Voting Rights Act

SSL By default, open-uri does not support scraping a page with SSL. This means any URL that starts with https will give you an error. We can get around this by adding one line below our require statements:

# import nokogiri to parse and open-uri to scrape
require 'nokogiri'
require 'open-uri'

# disable SSL checking to allow scraping
OpenSSL::SSL::VERIFY_PEER = OpenSSL::SSL::VERIFY_NONE

Mechanize Sometimes you need to interact with a page before you can scrape it. The most common examples are logging in or submitting a form. Nokogiri is not set up to interact with pages; Nokogiri doesn't even scrape or download the page. That duty falls on open-uri. If you need to interact with a page, there is another gem you will have to use: Mechanize. Mechanize is created by the same team as Nokogiri and is used for automating interactions with websites. Mechanize includes a functioning copy of Nokogiri. To get started, install the mechanize gem:

$ gem install mechanize
Successfully installed mechanize-2.7.1

We're going to recreate the code sample from the installation where we parsed the top Google results for "packt", except this time we are going to start by going to the Google home page and submitting the search form:

# mechanize takes the place of Nokogiri and open-uri
require 'mechanize'

# create a new mechanize agent
# think of this as launching your web browser
agent = Mechanize.new

# open a URL in your agent / web browser
page = agent.get('http://google.com/')

# the google homepage has one big search box
# if you inspect the HTML, you will find a form with the name 'f'
# inside of the form you will find a text input with the name 'q'
google_form = page.form('f')

# tell the page to set the q input inside the f form to 'packt'
google_form.q = 'packt'

# submit the form
page = agent.submit(google_form)

# loop through an array of objects matching a CSS
# selector. mechanize uses the search method instead of
# xpath or css. search supports both xpath and css
# you can use the search method in Nokogiri too if you like it
page.search('h3.r').each do |link|
  # print the link text
  puts link.content
end
Now execute the Ruby script and you should see the titles of the top results:

$ ruby mechanize.rb
Packt Publishing: Home
Books
Latest Books
Login/register
PacktLib
Support
Contact
Packt - Wikipedia, the free encyclopedia
Packt Open Source (PacktOpenSource) on Twitter
Packt Publishing (packtpub) on Twitter
Packt Publishing | LinkedIn
Packt Publishing | Facebook

For more information, refer to http://mechanize.rubyforge.org/. People and places you should get to know If you need help with Nokogiri, here are some people and places that will prove invaluable. Official sites The following are the sites you can refer to:

Homepage and documentation: http://nokogiri.org
Source code: https://github.com/sparklemotion/nokogiri/

Articles and tutorials The top five Nokogiri resources are as follows:

Nokogiri History, Present, and Future, presentation slides from Nokogiri co-author Mike Dalessio: http://bit.ly/nokogiri-goruco-2013
In-depth tutorial covering Ruby, Nokogiri, Sinatra, and Heroku, complete with a 90-minute behind-the-scenes screencast, written by me: http://hunterpowers.com/data-scraping-and-more-with-ruby-nokogiri-sinatra-and-heroku
RailsCasts episode 190: Screen Scraping with Nokogiri, an excellent Nokogiri quick start video: http://railscasts.com/episodes/190-screen-scraping-with-nokogiri
RailsCasts episode 191: Mechanize, an excellent Mechanize quick start video: http://railscasts.com/episodes/191-mechanize
Nokogiri co-author Mike Dalessio's blog: http://blog.flavorjon.es

Community The community sites are as follows:

Listserve: http://groups.google.com/group/nokogiri-talk
GitHub: https://github.com/sparklemotion/nokogiri/
Wiki: http://github.com/sparklemotion/nokogiri/wikis
Known issues: http://github.com/sparklemotion/nokogiri/issues
Stackoverflow: http://stackoverflow.com/search?q=nokogiri

Twitter Nokogiri leaders on Twitter are:

Nokogiri co-author Mike Dalessio: @flavorjones
Nokogiri co-author Aaron Patterson: @tenderlove
Me: @TheHunter
For more information on open source, follow Packt Publishing: @PacktOpenSource

Summary Thus, we learnt about the Nokogiri open source library in this article.

Creating a Camel project (Simple)

Packt
27 Aug 2013
8 min read
(For more resources related to this topic, see here.) Getting ready For the examples in this article, we are going to use Apache Camel version 2.11 (http://camel.apache.org/) and Apache Maven version 2.2.1 or newer (http://maven.apache.org/) as a build tool. Both of these projects can be downloaded for free from their websites. The complete source code for all the examples in this article is available in the github repository at https://github.com/bibryam/camel-message-routing-examples. It contains Camel routes in Spring XML and Java DSL with accompanying unit tests. The source code for this tutorial is located under the project camel-message-routing-examples/creating-camel-project. How to do it... In a new Maven project, add the following Camel dependency to the pom.xml:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-core</artifactId>
    <version>${camel-version}</version>
</dependency>

With this dependency in place, creating our first route requires only a couple of lines of Java code:

public class MoveFileRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("file://source")
            .to("log://org.apache.camel.howto?showAll=true")
            .to("file://target");
    }
}

Once the route is defined, the next step is to add it to CamelContext, which is the actual routing engine, and run it as a standalone Java application:

public class Main {
    public static void main(String[] args) throws Exception {
        CamelContext camelContext = new DefaultCamelContext();
        camelContext.addRoutes(new MoveFileRoute());
        camelContext.start();
        Thread.sleep(10000);
        camelContext.stop();
    }
}

That's all it takes to create our first Camel application. Now we can run it using a Java IDE or from the command line with Maven: mvn exec:java. How it works... Camel has a modular architecture; its core (the camel-core dependency) contains all the functionality needed to run a Camel application: DSLs for various languages, the routing engine, implementations of EIPs, a number of data converters, and the core components. This is the only dependency needed to run this application. Then there are optional technology-specific connector dependencies (called components) such as JMS, SOAP, JDBC, and Twitter, which are not needed for this example, as the file and log components we used are part of camel-core. Camel routes are created using a Domain Specific Language (DSL) specifically tailored for application integration. Camel DSLs are high-level languages that allow us to easily create routes, combining various processing steps and EIPs without going into low-level implementation details. In the Java DSL, we create a route by extending RouteBuilder and overriding the configure method. A route represents a chain of processing steps applied to a message based on some rules. The route has a beginning, defined by the from endpoint, and one or more processing steps commonly called "Processors" (which implement the Processor interface). Most of these ideas and concepts originate from the Pipes and Filters pattern described in the Enterprise Integration Patterns book by Gregor Hohpe and Bobby Woolf. The book provides an extensive list of patterns, also available at http://www.enterpriseintegrationpatterns.com, the majority of which are implemented by Camel. With the Pipes and Filters pattern, a large processing task is divided into a sequence of smaller independent processing steps (Filters) that are connected by channels (Pipes).
Each filter processes messages received from the inbound channel and publishes the result to the outbound channel. In our route, the processing steps are reading the file using a polling consumer, logging it, and writing the file to the target folder, all piped together by Camel in the sequence specified in the DSL. We can visualize the individual steps in the application with the following diagram: A route has exactly one input, called a consumer and identified by the keyword from. A consumer receives messages from producers or external systems, wraps them in a Camel-specific format called an Exchange, and starts routing them. There are two types of consumers: a polling consumer, which fetches messages periodically (for example, reading files from a folder), and an event-driven consumer, which listens for events and gets activated when a message arrives (for example, an HTTP server). All the other processor nodes in the route are either a type of integration pattern or producers used for sending messages to various endpoints. Producers are identified by the keyword to, and they are capable of converting exchanges and delivering them to other channels using the underlying transport mechanism. In our example, the log producer logs the files using the Log4j API, whereas the file producer writes them to the target folder. The route alone is not enough to have a running application; it is only a template that defines the processing steps. The engine that runs and manages the routes is called the Camel context. A high-level view of CamelContext looks like the following diagram: CamelContext is a dynamic multithreaded route container, responsible for managing all aspects of the routing: route lifecycle, message conversions, configurations, error handling, monitoring, and so on. When CamelContext is started, it starts the components and endpoints and activates the routes. The routes are kept running until CamelContext is stopped, at which point it performs a graceful shutdown, giving time for all in-flight messages to complete processing. CamelContext is dynamic: it allows us to start and stop routes, add new routes, or remove running routes at runtime. In our example, after adding the MoveFileRoute, we start CamelContext and let it copy files for 10 seconds, and then the application terminates. If we check the target folder, we should see files copied from the source folder.
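As a quick illustration of that dynamic behavior, the following sketch adds a second route to a context that is already running (the endpoint URIs here are made up for illustration):

// A sketch: registering an additional route on a running CamelContext
CamelContext context = new DefaultCamelContext();
context.start();

// Later, while the context is still running, add another route
context.addRoutes(new RouteBuilder() {
    @Override
    public void configure() throws Exception {
        from("file://inbox").to("file://archive");
    }
});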
There's more... Camel applications can run as standalone applications or can be embedded in other containers such as Spring or Apache Karaf. To make development and deployment to various environments easy, Camel provides a number of DSLs, including Spring XML, Blueprint XML, Groovy, and Scala. Next, we will have a look at the Spring XML DSL. Using Spring XML DSL Java and Spring XML are the two most popular DSLs in Camel. Both provide access to all Camel features, and the choice is mostly a matter of taste. The Java DSL is more flexible and requires fewer lines of code, but can easily become complicated and harder to understand with the use of anonymous inner classes and other Java constructs. The Spring XML DSL, on the other hand, is easier to read and maintain, but it is more verbose and testing it requires a little more effort. My rule of thumb is to use the Spring XML DSL only when Camel is going to be part of a Spring application (to benefit from other Spring features available in Camel), or when the routing logic has to be easily understood by many people. For the routing examples in this article, we are going to show a mixture of Java and Spring XML DSLs, but the source code accompanying this article has all the examples in both DSLs. In order to use Spring, we also need the following dependency in our projects:

<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-spring</artifactId>
    <version>${camel-version}</version>
</dependency>

The same application for copying files, written in Spring XML DSL, looks like the following:

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="
         http://www.springframework.org/schema/beans
         http://www.springframework.org/schema/beans/spring-beans.xsd
         http://camel.apache.org/schema/spring
         http://camel.apache.org/schema/spring/camel-spring.xsd">

  <camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
      <from uri="file://source"/>
      <to uri="log://org.apache.camel.howto?showAll=true"/>
      <to uri="file://target"/>
    </route>
  </camelContext>

</beans>

Notice that this is a standard Spring XML file with an additional camelContext element containing the route. We can launch the Spring application as part of a web application, an OSGi bundle, or as a standalone application:

public static void main(String[] args) throws Exception {
    AbstractApplicationContext springContext = new ClassPathXmlApplicationContext(
        "META-INF/spring/move-file-context.xml");
    springContext.start();
    Thread.sleep(10000);
    springContext.stop();
}

When the Spring container starts, it will instantiate a CamelContext, start it, and add the routes without any other code required. That is the complete application written in Spring XML DSL. More information about Spring support in Apache Camel can be found at http://camel.apache.org/spring.html. Summary This article provided a high-level overview of the Camel architecture and demonstrated how to create a simple message-driven application.
IRC-style chat with TCP server and event bus

Packt
27 Aug 2013
6 min read
(For more resources related to this topic, see here.) Step 1 – fresh start In a new folder called, for example, 1_PubSub_Chat, let's open our editor of choice and create a file called pubsub_chat.js. Also, make sure that you have a terminal window open and have moved into the newly created project directory. Step 2 – creating the TCP server TCP servers are called net servers in Vert.x. Creating and using a net server is really similar to HTTP servers:

var vertx = require('vertx');             /* 1 */
var netServer = vertx.createNetServer();  /* 2 */
netServer.listen(1234);                   /* 3 */

1. Obtain the vertx bridge object to access the framework features.
2. Ask Vert.x to create a TCP server (called NetServer in Vert.x).
3. Actually start the server by telling it to listen on TCP port 1234.

Let's test whether this works. This time we need another terminal to run the telnet command:

$ telnet localhost 1234

The terminal should now be connected and waiting to send/receive characters. If you get "connection refused" errors, make sure the server is running. Step 3 – adding a connect handler Now, we need to place a block of code to be executed as soon as a client connects:

var vertx = require('vertx');
var server = vertx.createNetServer().connectHandler(function(socket) {
  // Composing a client address string
  addr = socket.remoteAddress();
  addr = addr.ipaddress + addr.port;
  socket.write('Welcome to the chat ' + addr + '!');
}).listen(1234);

A NetServer connect handler accepts the socket object as a parameter; this object is our gateway to reading, writing, or closing the connection to a client. We use the socket object to write a greeting to newly connected clients. If we test this as in Step 2, we see that the server now welcomes us with a message identifying the client by its origin host and origin port. Step 4 – adding a data handler We just learned how to execute a block of code at the moment a client connects. Now we are interested in doing something when we receive new data from a client connection. The socket object we used in the previous step for writing data back to the client accepts a handler function too: the data handler. Let's add one:

var vertx = require('vertx');
var server = vertx.createNetServer().connectHandler(function(socket) {
  // Composing a client address string
  addr = socket.remoteAddress();
  addr = addr.ipaddress + addr.port;
  socket.write('Welcome to the chat ' + addr + '!');
  socket.dataHandler(function(data) {
    var now = new Date();
    now = now.getHours() + ':' + now.getMinutes() + ':' + now.getSeconds();
    var msg = now + ' <' + addr + '> ' + data;
    socket.write(msg);
  });
}).listen(1234);

The data handler function is called every time the client sends a new string of data; we react to the new data event by writing the same data back to the socket (plus a prefix). What we have now is a sort of echo server, which returns to the sender the same message with a prefix string. Step 5 – adding the event bus magic The base requirement of a chat server is that every time a client sends a message, the rest of the connected clients should receive it. We will use the event bus, the messaging service provided by the framework, to send (publish) received messages to a broadcast address.
Each client will subscribe to the address upon connection and receive other clients' messages from there:

var vertx = require('vertx');
var server = vertx.createNetServer().connectHandler(function(socket) {
  // Composing a client address string
  addr = socket.remoteAddress();
  addr = addr.ipaddress + addr.port;
  socket.write('Welcome to the chat ' + addr + '!');
  vertx.eventBus.registerHandler('broadcast_address', function(event) {
    socket.write(event);
  });
  socket.dataHandler(function(data) {
    var now = new Date();
    now = now.getHours() + ':' + now.getMinutes() + ':' + now.getSeconds();
    var msg = now + ' <' + addr + '> ' + data;
    vertx.eventBus.publish('broadcast_address', msg);
  });
}).listen(1234);

As soon as a client connects, it listens on the event bus for new data published to the address broadcast_address. When a client sends a string of characters to the server, this data is published to the broadcast address, triggering a handler function that writes the string to every client's socket. The chat server is now complete! To try it out, just open three terminals:

Terminal 1: $ vertx run pubsub_chat.js
Terminal 2: $ telnet localhost 1234
Terminal 3: $ telnet localhost 1234

Now we have a server and two clients running and connected. Type something in terminal 2 or 3 and see the message being broadcast to both the other windows:

$ telnet localhost 1234
Trying ::1...
Connected to localhost.
Escape character is '^]'.
Hello from terminal two!
13:6:56 <0:0:0:0:0:0:0:155991> Hello from terminal two!
13:7:24 <0:0:0:0:0:0:0:155992> Hi there, here's terminal three!
13:7:56 <0:0:0:0:0:0:0:155992> Great weather today!

Step 6 – organizing a more complex project Since Vert.x is a polyglot platform, we can choose to write an application (or a part of it) in any of the many supported languages; the granularity of the language choice is at the verticle level. It's important to give a non-trivial project a good architecture from the beginning. Follow this list of general principles to avoid performance bottlenecks or the need for massive refactoring in the future (a minimal startup-verticle sketch follows the summary):

Wrap synchronous libraries or legacy code inside a worker verticle (or a module). This will keep blocking code away from the event loop threads.
Divide the problem into isolated domains and write a verticle to handle each of them (for example, a database persistor verticle, web server verticle, authenticator verticle, and cache manager verticle).
Use a startup verticle. This will be the single entry point to the application. Its responsibilities will be to: validate the configuration file; programmatically deploy other verticles in the correct order; decide how many instances of a verticle to create (the decision might depend on the environment, for example, the number of available processors); and register periodic tasks.

Summary In this article, we learned in a step-wise procedure how to create an IRC-style chat using a TCP server, interconnect the server with the clients using the event bus, and enable different types of communication between them.
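Picking up the startup-verticle idea from Step 6 above, a minimal sketch might look like the following. The verticle file names and the config object are hypothetical; the container module shown is Vert.x 2's JavaScript API:

var container = require('vertx/container');

// Deploy the worker that wraps blocking persistence code first...
container.deployWorkerVerticle('persistor.js');

// ...then the chat server itself, with a config object and an
// instance count that could be tied to the available processors.
container.deployVerticle('pubsub_chat.js', { port: 1234 }, 2);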
Publishing the project for mobile

Packt
26 Aug 2013
5 min read
(For more resources related to this topic, see here.)

Standard HTML5 publishing

You will first publish your project using the standard HTML5 publishing options:

1. Open the HDSB_publish.cptx file.
2. Click on the publish icon situated right next to the preview icon in the main toolbar. Alternatively, you can also go to the File | Publish menu item. The Publish dialog contains all of the publishing options of Captivate 7, as shown in the following screenshot. The six icons in the left column of the dialog, marked as 1, represent the main publishing formats supported by Captivate. The area in the center, marked as 2, displays the options pertaining to the selected format.
3. Take some time to click on each of the six icons of the left column one by one. While doing so, take a close look at the right area of the dialog to see how the set of available options changes based on the selected format.
4. When done, return to the SWF/HTML5 format, which is the first icon at the top of the left column.
5. Type hdStreet_standard in the Project Title field.
6. Click on the Browse button associated with the Folder field and choose the /published folder of your exercises as the publish location.
7. In the Output Format Options section, make sure that the HTML5 checkbox is the only one selected.
8. If necessary, adjust the other options so that the Publish dialog looks like the previous screenshot. When ready, click on the Publish button at the bottom-right corner of the dialog box to trigger the actual publishing process. This process can take some time depending on the size of the project to publish and on the overall performance of your computer system.
9. When done, Captivate displays a message acknowledging the successful completion of the publishing process and asking you if you want to view the output. Click on No to close both the message and the Publish dialog.
10. Make sure you save the file before proceeding.

Publishing your project to HTML5 is that easy! We will also use Windows Explorer (Windows) or Finder (Mac) to take a closer look at the generated files.

Examining the HTML5 output

By publishing the project to HTML5, Captivate has generated a whole bunch of HTML, CSS, and JavaScript files. Use Windows Explorer (Windows) or Finder (Mac) to go to the /published/hdStreet_standard folder of your exercises. Note that Captivate has created a subfolder in the /published folder we specified as the publish destination. Also notice that the name of that subfolder is what we typed in the Project Title field of the Publish dialog. The /published/hdstreet_standard folder should contain the index.html file and five subfolders, as illustrated by the following screenshot:

The index.html file is the main HTML file. It is the file to open in a modern web browser in order to view the e-learning content. The /ar folder contains the audio assets of the project. These assets include the voice-over narrations and the mouse-click sound in .mp3 format. Every single HTML5 Captivate project includes the same /assets folder. It contains the standard images, CSS, and JavaScript files used to power the objects and features that can be included in a project. The web developers reading these lines will probably recognize some of these files. For example, the jQuery library is included in the /assets/js folder. The /dr folder contains the images that are specific to this project. These images include the slide backgrounds in .png format, the mouse pointers, and the various states of the buttons used in this project.
Finally, the /vr folder contains the video assets. These include the video we inserted on slide 2, as well as all of the full motion recording slides of the project. All of these files and folders are necessary for your HTML5 project to work as expected. In other words, you need to upload all of these files and folders to the web server (or to the LMS) to make the project available to your students. Never try to delete, rename, or move any of these files!

Double-click on the index.html file to open the project in the default web browser. Make sure everything works as expected. When done, close the web browser and return to Captivate. This concludes our overview of the standard HTML5 publishing feature of Captivate 7.

Testing the HTML5 content

Producing content for mobile devices raises the issue of testing the content in a situation as close as possible to reality. Most of the time, you'll test the HTML5 output of Captivate only on the mobile device you own or, even worse, in the desktop version of an HTML5 web browser. If you are a Mac user, I've written a blog post on how to test Captivate HTML5 content on iOS devices without even owning such a device, at http://www.dbr-training.eu/index.cfm/blog/test-your-html5-elearning-on-an-ios-device-without-an-ios-device/.

Summary

You learned about the publishing step of the typical Captivate production workflow, and how to publish your project using the standard HTML5 publishing options. We also used Windows Explorer (Windows) or Finder (Mac) to take a closer look at the generated files: by publishing the project to HTML5, Captivate has generated a whole bunch of HTML, CSS, and JavaScript files.

Resources for Article:

Further resources on this subject:
- Top features you'll want to know about [Article]
- Remotely Preview and test mobile web pages on actual devices with Adobe Edge Inspect [Article]
- An Introduction to Flash Builder 4-Network Monitor [Article]
Scalability, Limitations, and Effects

Packt
23 Aug 2013
26 min read
(For more resources related to this topic, see here.)

HTML5 limitations

If you haven't noticed by now, many of the HTML5 features you will use either have failsafes, multiple versions, or special syntax to enable your code to cover the entire spectrum of browsers and the supported HTML5 feature sets within them. As time passes and standards become solidified, one can assume that many of these failsafes and other content display measures will mature into a single standard that all browsers will share. However, in reality this process may take a while, and even at its best, developers may still have to utilize many of these failsafe features indefinitely. Therefore, a solid understanding of when, where, and why to use these failsafe measures will enable you to develop your HTML5 web pages in a way that can be viewed as intended on all modern browsers.

To aid developers in overcoming these previously stated issues, many frameworks and external scripts have been created and open sourced, allowing for a more universal development environment that saves developers countless hours when starting each new project. Modernizr (http://modernizr.com) has quickly become a must-have addition for many HTML5 developers, as it contains many of the conditions and verifications needed to allow developers to write less code and cover more browsers. Modernizr does all this by checking for a large majority (more than 40) of the new features available in HTML5 in the client's browser and reporting back whether they are available or not in a matter of milliseconds. This will allow you, as the developer, to determine whether you should display an alternate version of your content or a warning to the user.

Getting your web content to display properly in all browsers is, and always has been, the biggest challenge for any web developer, and when it comes to creating cutting-edge, interesting content, the challenge usually becomes harder. To allow you to better understand how these features look without the use of third-party integration, we will avoid using external libraries for the time being. It is worth noting how each of these features and others look in all browsers, so make sure to test the examples, as well as your own work, in not just your favorite browser but many of the other popular choices as well.

Object manipulation with CSS3

Prior to the advent of CSS3, web developers used a laundry list of content manipulation, asset preparation, and asset presentation techniques in order to get their web page layout the way they wanted in every browser. Most of these techniques would be considered "hacks", as they would pretty much be a workaround to enable the browser to do something it normally wouldn't. Features such as rounded corners, drop shadows, and transforms were all absent from a web developer's arsenal, and the process of getting things the way you want could get mind-numbing. Understandably, the excitement level surrounding CSS3 is very high for all web developers, as it enables developers to perform more content manipulation techniques than ever before without the need for prior preparation or special browser hacks. Although the list of available properties in CSS3 is massive, let's cover some of the newest and most exciting of the lot.

box-shadow

It's true that some designers and developers say drop shadows are a part of the past, but the usage of shadowing HTML elements is still a popular design choice for many.
In the past, web developers needed to perform tricks such as stretching small gradient images or baking the shadow directly into their background images to achieve this effect in their HTML documents. CSS3 has solved this issue by creating the box-shadow property to allow for drop-shadow-like effects on your HTML elements. To remind us how this effect was accomplished in ActionScript 3, let's review this code snippet:

var dropShadow:DropShadowFilter = new DropShadowFilter();
dropShadow.distance = 0;
dropShadow.angle = 45;
dropShadow.color = 0x333333;
dropShadow.alpha = 1;
dropShadow.blurX = 10;
dropShadow.blurY = 10;
dropShadow.strength = 1;
dropShadow.quality = 15;
dropShadow.inner = false;
var mySprite:Sprite = new Sprite();
mySprite.filters = new Array(dropShadow);

As mentioned before, the new box-shadow property in CSS3 allows you to append these shadowing effects with relative ease and many of the same configuration properties:

.box-shadow-example {
  box-shadow: 3px 3px 5px 6px #000000;
}

Despite the lack of property names on each of the values applied to this style, you can see that many of the value types coincide with what was appended to the drop shadow we created in ActionScript 3. This box-shadow property is assigned to the .box-shadow-example class and therefore will be applied to any element that has that class name appended to it. By creating a div element with the box-shadow-example class, we can alter our content to look something like the following:

<div class="box-shadow-example">CSS3 box-shadow Property</div>

As straightforward as this CSS property is to add to your project, it declares a lot of values in a single line. Let's review each of these values so that we can understand them better for future usage. To simplify the identification of each of the variables in the property, each of them has been updated to be different:

box-shadow: 1px 2px 3px 4px #000000;

These variables are explained as follows:

- The initial value (1px) is the shadow's horizontal offset, that is, whether the shadow is going to the left or to the right. A positive value places the shadow on the right of the element; a negative offset puts the shadow on the left.
- The second value (2px) is the vertical offset; just like the horizontal offset value, a negative number generates a shadow going up, and a positive value generates a shadow going down.
- The third value (3px) is the blur radius, which controls how much blur effect is added to the shadow. Declaring a value of, for example, 0 would create no blur and display a very sharp-looking shadow. Negative values placed into the blur radius are ignored and render no differently than 0.
- The fourth value (4px), and the last of the numerical properties, is the spread radius. The spread radius controls how far the drop shadow blur spreads past the initial shadow size declaration. If a value of 0 is used, the shadow displays with the default blur radius set and applies no changes. Positive numerical values yield a shadow that blurs further, and a negative value makes the shadow blur smaller.
- The final value is the hexadecimal color value, which states the color that the shadow will be in.

Alternatively, you could use box-shadow to apply the shadow effect to the interior of your element rather than the exterior. With ActionScript 3, this was accomplished by appending dropShadow.inner = true; to the list of parameters in your DropShadowFilter object.
The CSS3 syntax to apply box-shadow properties in this manner is very similar; all that is required is the addition of the inset keyword. Consider the following code snippet, for example:

.box-shadow-example {
  box-shadow: 3px 3px 5px 6px #666666 inset;
}

This would produce a shadow that would look like the following screenshot:

text-shadow

Just like the box-shadow property, text-shadow lives up to its name by creating the same drop-shadowing effect, specifically for text:

text-shadow: 2px 2px 6px #ff0000;

Like box-shadow, the initial two values for text-shadow are the horizontal and vertical offsets for the shadow placement. The third value, which is optional, is the blur size, and the fourth value is the hexadecimal color.

border-radius

Just like element or text shadowing, adding rounded corners to your elements prior to CSS3 was a chore. Developers would usually append separate images or use other object manipulation techniques to achieve this effect on the typically square or rectangular elements. With the addition of the border-radius setting in CSS3, developers can easily and dynamically set element corner roundness with only a couple of lines of CSS, all without the use of 9-slice scaling as in Flash. Since HTML elements have four corners, when appending the border-radius styling, we can either target each corner individually or all the corners at once. In order to easily append a border radius setting to all the corners at once, we would create our CSS properties as follows:

#example {
  background-color: #ff0000; /* Red background */
  width: 200px;
  height: 200px;
  border-radius: 10px;
}

The preceding CSS appends a 10px border radius to all of the corners of the #example element, and by using the properties that modern browsers support, we can be assured that the effect will be visible to all users attempting to view this content:

As mentioned above, each of the individual corners of the element can be targeted to append the radius to only a specific part of the element:

#example {
  border-top-left-radius: 0px; /* This is doing nothing */
  border-top-right-radius: 5px;
  border-bottom-right-radius: 20px;
  border-bottom-left-radius: 100px;
}

The preceding CSS now removes our #example element's top-left border radius by setting it to 0px, and sets a specific radius for each of the other corners. It's worth noting here that setting a border radius equal to 0 is no different than leaving that property completely out of the CSS styles:

Fonts

Dealing with customized fonts in Flash has had its ups and downs over the years. Any Flash developer who has needed to incorporate and use customized fonts in their Flash applications probably knows the pain that comes with choosing a font embedding method, as well as making sure it works properly for users who don't have the font installed on the computer viewing the Flash application. CSS3 font embedding has implemented a "no fuss" way to include custom fonts in your HTML5 documents with the addition of the @font-face declaration:

@font-face {
  font-family: ClickerScript;
  src: url('ClickerScript-Regular.ttf'),
       url('ClickerScript-Regular.otf'),
       url('ClickerScript-Regular.eot');
}

CSS can now directly reference your TTF, OTF, or EOT font, which can be placed on your web server for accessibility.
With the font source declared in our CSS document and a unique font-family identification applied to it, we can start using it on specific elements by using the font-family property:

#example {
  font-family: ClickerScript;
}

Since we declared a specific font family name in the @font-face property, we can use that custom name on pretty much any element henceforth. Custom fonts can be applied to almost anything that contains text in your HTML document. Form elements such as button labels and text inputs can also be styled to use your custom fonts. You can even remake assets such as website logos in pure HTML and CSS with the same custom fonts used in the original asset creation.

Acceptable font formats

Like many of the other embedding methods for assets online, fonts need to be converted into multiple formats to enable all common modern browsers to display them properly. Almost all of the available browsers will be able to handle the common TrueType fonts (.ttf file type) or OpenType fonts (.otf file type), so embedding one of those two formats will be all that is needed. Unfortunately, Internet Explorer 9 does not have built-in support for either of those two popular formats and requires fonts to be saved in the EOT file format.

External font libraries

Many great services have appeared online in the last couple of years allowing web developers to painlessly prepare and embed fonts into their websites. Google's Web Fonts archive, available at http://www.google.com/webfonts, hosts a large set of open source fonts which can be added to your project without the need to worry about licensing or payment issues. Simply add a couple of extra lines of code into your HTML document and you are ready to go. Another great site that is worth checking out is Font Squirrel, which can be found at http://www.fontsquirrel.com. Like Google Web Fonts, Font Squirrel hosts a large archive of web-ready fonts with copy-and-paste-ready code snippets to add them to your document. Another great feature on this site is the @font-face generator, which gives you the ability to convert your preexisting fonts into all the web-compatible formats.

Before getting carried away and converting all your favorite fonts into web-ready formats and integrating them into your work, it is worth noting the End User License Agreement, or EULA, that came with the font to begin with. Converting many available fonts for use on the web will break license agreements and could cause legal issues for you down the road.

Opacity

More commonly known as "alpha" to the Flash developer, setting the opacity of an element not only allows you to change the look and feel of your designs, but also allows you to add features like content that fades in and out. As simple as this concept seems, it is relatively new to the list of CSS properties available to web developers. Setting the opacity of an element is extremely easy and looks something like the following:

#example {
  opacity: 0.5;
}

As you can see from the preceding example, like ActionScript 3, the opacity value is a numerical value between 0 and 1. The preceding example would display an element at 50 percent transparency. The opacity property in CSS3 is now supported in all the major browsers, so there is no need to worry about using alternative property syntax when declaring it.

RGB and RGBA coloring

When dealing with color values in CSS, many developers typically use hexadecimal values, which resemble something like #000000 to declare the usage of the color black.
Colors can also be implemented in their RGB representation in CSS by utilizing the rgb() or rgba() calls in place of the hexadecimal value. As you can see by the method name, the rgba color lookup in CSS also requires a fourth parameter, which declares the color's alpha transparency, or opacity, amount. Using RGBA in CSS3 rather than hexadecimal colors can be beneficial for a couple of reasons. Consider that you have just created a div element which will be displayed on top of existing content within your web page layout. If you ever wanted to set the background of the div to a specific color but wished for only that background to be semi-transparent, and not the interior content, the RGBA color declaration now allows you to do this easily, as you can set the color's transparency:

#example {
  /* Background opacity */
  background: rgba(0, 0, 0, 0.5); /* Black, 50% opacity */

  /* Box-shadow */
  box-shadow: 1px 2px 3px 4px rgba(255, 255, 255, 0.8); /* White, 80% opacity */

  /* Text opacity */
  color: rgba(255, 255, 255, 1); /* White, no transparency */
  color: rgb(255, 255, 255);     /* This would accomplish the same styling */

  /* Text drop shadows (with opacity) */
  text-shadow: 5px 5px 3px rgba(135, 100, 240, 0.5);
}

As you can see in the preceding example, you can freely use RGB and RGBA values rather than hexadecimal anywhere color values are required in CSS syntax.

Element transforms

Personally, I find CSS3 transforms to be one of the most exciting and fun new features in CSS. Transforming assets in the Flash IDE, as well as with ActionScript, has always been easily accessible and easy to implement. Transforming HTML elements is a relatively new feature in CSS and is still gaining full support in all the modern browsers. Transforming an element allows you to manipulate its shape and size, opening up a ton of possibilities for animations and visual effects without the need to prepare the source beforehand. When we refer to "transforming an element", we are actually describing a number of properties that can be applied to the transformation to give it different characteristics. If you have transformed objects in Flash, or possibly in Photoshop, before, these properties may be familiar to you.

Translate

As a Flash developer used to primarily dealing with X and Y coordinates when positioning elements, the CSS3 translate transform property is a very handy way of placing elements, and it works on the same principle. The translate property takes two parameters, which are the X and Y values to translate, or effectively move, the element:

transform: translate(-25px, -25px);

Unfortunately, to get your transforms to work in all browsers, you will need to target each of them when you append transform styles. Therefore, the standard transform style and property would now look something like this:

transform: translate(-25px, -25px);
-ms-transform: translate(-25px, -25px);      /* IE 9 */
-moz-transform: translate(-25px, -25px);     /* Firefox */
-webkit-transform: translate(-25px, -25px);  /* Safari and Chrome */
-o-transform: translate(-25px, -25px);       /* Opera */

Rotate

Rotation is pretty self-explanatory and extremely easy to implement.
The rotate properties take a single parameter to specify the amount of rotation, in degrees, to apply to the specific element:

transform: rotate(45deg);
-ms-transform: rotate(45deg);      /* IE 9 */
-moz-transform: rotate(45deg);     /* Firefox */
-webkit-transform: rotate(45deg);  /* Safari and Chrome */
-o-transform: rotate(45deg);       /* Opera */

It is worth noting that even though the supplied value is always intended to be in degrees, the value must always have deg appended for it to be properly recognized.

Scale

Just like rotate transforms, scaling is pretty straightforward. The scale property requires two parameters, which declare the scale amount for both X and Y:

transform: scale(0.5, 2);
-ms-transform: scale(0.5, 2);      /* IE 9 */
-moz-transform: scale(0.5, 2);     /* Firefox */
-webkit-transform: scale(0.5, 2);  /* Safari and Chrome */
-o-transform: scale(0.5, 2);       /* Opera */

Skew

Skewing an element will result in the angling of the X and Y axes:

transform: skew(10deg, 20deg);
-ms-transform: skew(10deg, 20deg);      /* IE 9 */
-moz-transform: skew(10deg, 20deg);     /* Firefox */
-webkit-transform: skew(10deg, 20deg);  /* Safari and Chrome */
-o-transform: skew(10deg, 20deg);       /* Opera */

The following illustration is a representation of skewing an image with the preceding properties:

Matrix

The matrix properties combine all of the preceding transforms into a single property and can easily eliminate many extra lines of CSS in your source:

transform: matrix(0.586, 0.8, -0.8, 0.586, 40, 20);
-ms-transform: matrix(0.586, 0.8, -0.8, 0.586, 40, 20);      /* IE 9 */
-moz-transform: matrix(0.586, 0.8, -0.8, 0.586, 40, 20);     /* Firefox */
-webkit-transform: matrix(0.586, 0.8, -0.8, 0.586, 40, 20);  /* Safari and Chrome */
-o-transform: matrix(0.586, 0.8, -0.8, 0.586, 40, 20);       /* Opera */

The preceding example utilizes the CSS transform matrix property to apply multiple transform styles in a single call. The matrix property requires six parameters to rotate, scale, move, and skew the element. Using the matrix property is only really useful when you actually need to implement all of the transform properties at once. If you only need to utilize one aspect of element transforms, you will be better off using just that CSS style property.

3D transforms

Up until now, all of the transform properties we have reviewed have been two-dimensional transformations. CSS3 now also supports 3D as well as 2D transforms. One of the best parts of CSS3 3D transforms is the fact that many devices and browsers support hardware acceleration, allowing this complex graphical processing to be done on your video card's GPU. At the time of writing this, only Chrome, Safari, and Firefox have support for CSS 3D transforms.

Interested in what browsers will support all these great HTML5 features before you start developing? Check out http://caniuse.com to see what popular browsers support, in a simple, easy-to-use website.

When dealing with elements in a 3D world, we make use of the Z coordinate, which allows the use of some new transform properties:

transform: rotateX(angle)
transform: rotateY(angle)
transform: rotateZ(angle)
transform: translateZ(px)
transform: scaleZ(px)

Let's create a 3D cube from HTML elements to put all of these properties into a working example.
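Before we assemble the full cube, here is a minimal, self-contained sketch of the two ingredients every 3D scene needs: a perspective value on a parent element and a 3D rotation on a child. The class names and values here are illustrative only, not part of the cube we are about to build:

/* The parent establishes the viewing distance; smaller values
   produce a more dramatic 3D effect */
.scene {
  -webkit-perspective: 800px; /* Safari and Chrome */
  perspective: 800px;
}

/* The child is rotated in 3D space; without the parent's
   perspective, the rotation would look flat */
.scene .panel {
  -webkit-transform: rotateY(45deg); /* Safari and Chrome */
  -moz-transform: rotateY(45deg);    /* Firefox */
  transform: rotateY(45deg);
}

With that mental model in place, the cube is simply six such panels rotated and translated into position inside one perspective container.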
To start creating our 3D cube, we will begin by writing the HTML elements that will contain the cube, as well as the elements making up the cube itself:

<body>
  <div class="container">
    <div id="cube">
      <div class="front"></div>
      <div class="back"></div>
      <div class="right"></div>
      <div class="left"></div>
      <div class="top"></div>
      <div class="bottom"></div>
    </div>
  </div>
</body>

This HTML creates a simple layout for our cube by creating not only each of the six sides that make up a cube, each with a specific class name, but also the container for the entire cube, as well as the main container to display all of our page content. Of course, since there is no internal content in these containers and no styling yet, opening this HTML file in your browser would yield an empty page. So let's start writing our CSS to make all of these elements visible and position each to form our three-dimensional cube. We will start by setting up our main containers, which will position our content and contain our cube's sides:

.container {
  width: 640px;
  height: 360px;
  position: relative;
  margin: 200px auto;
  /* Currently only supported by WebKit browsers. */
  -webkit-perspective: 1000px;
  perspective: 1000px;
}

#cube {
  width: 640px;
  height: 320px;
  position: absolute;
  /* Let the transformed child elements preserve the 3D transformations: */
  transform-style: preserve-3d;
  -webkit-transform-style: preserve-3d;
  -moz-transform-style: preserve-3d;
}

The container class is our main element, which contains all of the other elements within this example. After appending a width and height, we set the top margin to 200px to push the display down the page a bit for better viewing, and the left and right margins to auto, which will align this element in the center of the page:

#cube div {
  display: block;
  position: absolute;
  border: 1px solid #000000;
  width: 640px;
  height: 320px;
  opacity: 0.8;
}

By defining properties on #cube div, we set the styles for every div element within the #cube element. We are also kind of cheating the cube by setting the width and height to rectangular proportions, as the intention is to add videos to each of the cube's sides once we structure and position it. With the basic cube-side styles appended, it's time to start transforming each of the sides to form the three-dimensional cube. We will start with the front of the cube by translating it on the Z axis, bringing it closer to the perspective:

#cube .front {
  -webkit-transform: translateZ(320px);
  -moz-transform: translateZ(320px);
  transform: translateZ(320px);
}

In order to append this style to our element in all modern browsers, we need to specify the property in multiple syntaxes for each browser that doesn't support the default transform property.

The preceding screenshot shows what has happened to the .front div after appending a Z translation of 320px. The larger rectangle is the .front div, which is now 320px closer to our perspective. For simplicity's sake, let's do the same to the .back div and push it 320px away from the perspective:

#cube .back {
  -webkit-transform: rotateX(-180deg) rotate(-180deg) translateZ(320px);
  -moz-transform: rotateX(-180deg) rotate(-180deg) translateZ(320px);
  transform: rotateX(-180deg) rotate(-180deg) translateZ(320px);
}

As you can see from the preceding code, to properly move the .back element into place without placing it upside down, we flip the element by 180 degrees on the X axis and then translate Z by 320px, just like we did for .front.
Note that we didn't set a negative value on the translate Z because the element was flipped. With the .back CSS styles in place, our cube should look like the following:

Now the smallest rectangle visible is the element with the class name .back, the largest is our .front element, and the middle rectangle is the remaining elements still to be transformed. To position the sides of our cube, we need to rotate the side elements on the Y axis to get them to face the proper direction. Once they are rotated into place, we can translate the position on the Z axis to push them out from the center, as we did with the front and back faces:

#cube .right {
  -webkit-transform: rotateY(90deg) translateZ( 320px );
  -moz-transform: rotateY(90deg) translateZ( 320px );
  transform: rotateY(90deg) translateZ( 320px );
}

With the right side in place, we can do the same to the left side, but rotate it in the opposite direction to get it facing the other way:

#cube .left {
  -webkit-transform: rotateY(-90deg) translateZ( 320px );
  -moz-transform: rotateY(-90deg) translateZ( 320px );
  transform: rotateY(-90deg) translateZ( 320px );
}

Now that we have all four sides of our cube aligned properly, we can finalize the cube positioning by aligning the top and bottom sides. To properly size the top and bottom, we set their own width and height to override the initial values set in the #cube div styles:

#cube .top {
  width: 640px;
  height: 640px;
  -webkit-transform: rotateX(90deg) translateZ( 320px );
  -moz-transform: rotateX(90deg) translateZ( 320px );
  transform: rotateX(90deg) translateZ( 320px );
}

#cube .bottom {
  width: 640px;
  height: 640px;
  -webkit-transform: rotateX(-90deg) translateZ( 0px );
  -moz-transform: rotateX(-90deg) translateZ( 0px );
  transform: rotateX(-90deg) translateZ( 0px );
}

To properly position the top and bottom sides, we rotate the .top and .bottom elements by +/-90 degrees on the X axis to get them to face up and down, and we only need to translate the top on the Z axis to raise it to the proper height to connect with all of the other sides. With all of those transforms appended to our layout, the resulting cube should look like the following:

Although it looks 3D, since there is nothing in the containers, the perspective isn't really showing off our cube very well. So let's add some content, such as a video, to each of the sides of the cube to get a better visualization of our work. Within each of the sides, let's add the same HTML5 video element code:

<video width="640" height="320" autoplay="true" loop="true">
  <source src="cube-video.mp4" type="video/mp4">
  <source src="cube-video.webm" type="video/webm">
  Your browser does not support the video tag.
</video>

Since we have not added the element playback controls (in order to display more visible area of the cube), our video element is set to autoplay the video as well as loop the playback on completion. Now we get a result that properly demonstrates what 3D transforms can do and is a little more visually appealing:

Since we set the opacity of each of the cube sides, we can now see all four videos playing on each side. Pretty cool! Since we are already here, why not kick it up one more notch and add user interaction to this cube, so we can spin it around and see the video on each side. To perform this user interaction, we need to use JavaScript to translate the mouse coordinates on the page document into the X and Y 3D rotation of our cube.
So let's start by creating the JavaScript to listen for mouse events:

window.addEventListener("load", init, false);

function init() {
  // Listen for mouse movement
  window.addEventListener('mousemove', onMouseMove, false);
}

function onMouseMove(e) {
  var mouseX = 0;
  var mouseY = 0;
  // Get the mouse position
  if (e.pageX || e.pageY) {
    mouseX = e.pageX;
    mouseY = e.pageY;
  } else if (e.clientX || e.clientY) {
    mouseX = e.clientX + document.body.scrollLeft + document.documentElement.scrollLeft;
    mouseY = e.clientY + document.body.scrollTop + document.documentElement.scrollTop;
  }
  console.log("Mouse Position: x:" + mouseX + " y:" + mouseY);
}

As you can see from the preceding code example, when the mousemove event fires and calls the onMouseMove function, we need to run some conditionals to parse the proper mouse position. Since, like so many other parts of web development, retrieving the mouse coordinates differs from browser to browser, we have added a simple condition to attempt to gather the mouse X and Y in a couple of different ways.

With the mouse position ready to be translated into the transform rotation of our cube, there is one final bit of preparation we need to complete prior to setting the CSS style updates. Since different browsers support the application of CSS transforms in different syntaxes, we need to figure out, in JavaScript, which syntax to use during runtime to allow our script to run on all browsers. The following code example does just that. By setting a predefined array of the possible property values and attempting to check the type of each as an element style property, we can find which one is not undefined and know that it can be used for CSS transform styles:

// Get the supported transform property
var availableProperties = [
  'transform',
  'MozTransform',
  'WebkitTransform',
  'msTransform',
  'OTransform'
];
// Loop over each of the properties
for (var i = 0; i < availableProperties.length; i++) {
  // Check if the type of the property style is a string (ie. valid)
  if (typeof document.documentElement.style[availableProperties[i]] == 'string') {
    // If we found the supported property, assign it to a variable
    // for later use.
    var supportedTranformProperty = availableProperties[i];
  }
}

Now that we have the user's mouse position and the proper syntax for CSS transform updates for our cube, we can put it all together and finally have 3D rotational control of our video cube:

<script>
  var supportedTranformProperty;

  window.addEventListener("load", init, false);

  function init() {
    // Get the supported transform property
    var availableProperties = ['transform', 'MozTransform', 'WebkitTransform', 'msTransform', 'OTransform'];
    for (var i = 0; i < availableProperties.length; i++) {
      if (typeof document.documentElement.style[availableProperties[i]] == 'string') {
        supportedTranformProperty = availableProperties[i];
      }
    }
    // Listen for mouse movement
    window.addEventListener('mousemove', onMouseMove, false);
  }

  function onMouseMove(e) {
    // Get the mouse position
    if (e.pageX || e.pageY) {
      mouseX = e.pageX;
      mouseY = e.pageY;
    } else if (e.clientX || e.clientY) {
      mouseX = e.clientX + document.body.scrollLeft + document.documentElement.scrollLeft;
      mouseY = e.clientY + document.body.scrollTop + document.documentElement.scrollTop;
    }
    // Update the cube rotation
    rotateCube(mouseX, mouseY);
  }

  function rotateCube(posX, posY) {
    // Update the CSS transform styles
    document.getElementById("cube").style[supportedTranformProperty] =
      'rotateY(' + posX + 'deg) rotateX(' + posY * -1 + 'deg)';
  }
</script>

Regardless of the fact that we have attempted to allow for multi-browser use of this example, it is worth opening it up in each browser to see how something like 3D transforms with heavy internal content runs. At the time of writing this, the WebKit browsers were the easy choice for viewing content like this, as browsers such as Firefox and Internet Explorer render this example with much slower and lower-quality output:

Transitions

With CSS3, we can add an effect when changing from one style to another, without using Flash animations or JavaScript:

div {
  transition: width 2s;
  -moz-transition: width 2s;     /* Firefox 4 */
  -webkit-transition: width 2s;  /* Safari and Chrome */
  -o-transition: width 2s;       /* Opera */
}

If the duration is not specified, the transition will have no effect, because the default value is 0. Several properties can also be transitioned at once:

div {
  transition: width 2s, height 2s, transform 2s;
  -moz-transition: width 2s, height 2s, -moz-transform 2s;
  -webkit-transition: width 2s, height 2s, -webkit-transform 2s;
  -o-transition: width 2s, height 2s, -o-transform 2s;
}

It is worth noting that Internet Explorer currently does not have support for CSS3 transitions.

Browser compatibility

If you haven't noticed yet, the battle of browser compatibility is one of the biggest aspects of a web developer's job. Over time, many great services and applications have been created to help developers overcome these hurdles in a much simpler manner than trial-and-error techniques. Websites such as http://css3test.com, http://caniuse.com, and http://html5readiness.com are all great resources for keeping on top of HTML5 specification development and browser support for all the features within.
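As a practical illustration of that kind of feature checking, here is a hedged sketch of how Modernizr (introduced at the start of this article) might be used to branch between the 3D cube and a simpler fallback. It assumes the Modernizr 2.x script is already loaded on the page; the element IDs are hypothetical, and Modernizr.csstransforms3d is one of the boolean feature flags the library exposes:

window.addEventListener('load', function () {
  // Modernizr sets a boolean property per detected feature
  if (Modernizr.csstransforms3d) {
    // The browser can render the 3D cube
    document.getElementById('cube-container').style.display = 'block';
  } else {
    // Fall back to a static screenshot or a 2D layout
    document.getElementById('cube-fallback').style.display = 'block';
  }
}, false);

The same pattern works for any of the other features covered here, such as box-shadow or transitions, by swapping in the corresponding Modernizr property.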
The Need for Directives

Packt
22 Aug 2013
7 min read
(For more resources related to this topic, see here.)

What makes a directive a directive

Angular directives have several distinguishing features, but for the sake of simplicity we'll focus on just three in this article. In contrast to most plugins or other forms of drop-in functionality, directives are declarative, data driven, and conversational.

Directives are declarative

If you've done any JavaScript development before, you've almost certainly used jQuery (or perhaps Prototype), and likely one of the thousands of plugins available for it. Perhaps you've even written your own such plugin. In either case, you probably have a decent understanding of the flow required to integrate it. They all look something like the following code:

$(document).ready(function() {
  $('#myElement').myPlugin({pluginOpts});
});

In short, we're finding the DOM element matching #myElement, then applying our jQuery plugin to it. These frameworks are built from the ground up on the principle of DOM manipulation. In contrast, Angular directives are declarative, meaning we write them into the HTML elements themselves. Declarative programming means that instead of telling an object how to behave (imperative programming), we describe what an object is. So, where in jQuery we might grab an element and apply certain properties or behaviors to it, with Angular we label that element as a type of directive and, elsewhere, maintain code that defines what properties and behaviors make up that type of object:

<html>
  <body>
    <div id="myElement" my-awesome-directive></div>
  </body>
</html>

At first glance, this may seem rather pedantic, merely a difference in styles, but as we begin to make our applications more complex, this approach serves to streamline many of the usual development headaches.

In a more fully developed application, our messages would likely be interactive, and in addition to growing or shrinking during the course of the user's visit, we'd want the user to be able to reply to some of them or retweet them. If we were to implement this with a DOM manipulation library (such as jQuery or Prototype), that would require rebuilding the HTML with each change (assuming you want it sorted, just using .append() won't be enough), and then rebinding to each of the appropriate elements to allow the various interactions.

In contrast, if we use Angular directives, this all becomes much simpler. As before, we use the ng-repeat directive to watch our list and handle the iterated display of tweets, so any changes to our scoped array will automatically be reflected within the DOM. Additionally, we can create a simple tweet directive to handle the messaging interactions, starting with the following basic definition. Don't worry right now about the specific syntax of creating a directive; for now just take a look at the overall flow in the following code:

angular.module('myApp', [])
  .directive('tweet', ['api', function (api) {
    return function ($scope, $element, $attributes) {
      $scope.retweet = function () {
        // Each scope inherits from its parent, so we still have access to
        // the full tweet object of { author : '…', text : '…' }
        api.retweet($scope.tweet);
      };
      $scope.reply = function () {
        api.replyTo($scope.tweet);
      };
    };
  }]);

For now, just know that we're getting an instance of our Twitter API connection and passing it into the directive in the variable api, then using that to handle the replies and retweets; a sketch of what such a service might look like follows.
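The api service itself is not defined in the article's snippet. As a purely hypothetical sketch (the module name matches the code above, but the endpoints and method bodies are invented for illustration), registering such a service might look like the following:

angular.module('myApp')
  .factory('api', ['$http', function ($http) {
    // Hypothetical HTTP endpoints standing in for a real
    // Twitter-style backend
    return {
      retweet: function (tweet) {
        return $http.post('/api/retweet', tweet);
      },
      replyTo: function (tweet) {
        return $http.post('/api/reply', tweet);
      }
    };
  }]);

Because the directive asks for api by name in its dependency list, Angular's injector hands it this shared instance wherever it is requested.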
Our HTML for each message now looks like the following code:

<p ng-repeat="tweet in tweets" tweet>
  <!-- ng-click allows us to bind a click event to a function on the $scope object -->
  @{{tweet.author}}: {{tweet.text}}
  <span ng-click="retweet()">RT</span> |
  <span ng-click="reply()">Reply</span>
</p>

By adding the tweet attribute to the paragraph tag, we tell Angular that this element should use the tweet directive, which gives us access to the published methods, as well as anything else we later decide to attach to the $scope object.

Directives in Angular can be declared in multiple ways, including classes and comments, though attributes are the most common.

Scoping within directives is simultaneously one of the most powerful and most complicated features within Angular, but for now it's enough to know that every property and function we attach to the scope is accessible to us within the HTML declarations.

Directives are data driven

Angular directives are built from the ground up with this philosophy. The scope and attribute objects accessible to each directive form the skeleton around which the rest of a directive is built, and they can be monitored for changes both within the DOM as well as within the rest of your JavaScript code. What this means for developers is that we no longer have to constantly poll for changes, or ensure that every data change that might have an impact elsewhere within our application is properly broadcast. Instead, the scope object handles all data changes for us, and because directives are declarative as well, that data is already connected to the elements of the view that need to update when the data changes.

There's a proposal for ECMAScript 6 to support this kind of data watching natively with Object.observe(), but until that is implemented and fully supported, Angular's scope provides the much needed intermediary.

Directives are conversational

Modular coding emphasizes the use of messages to communicate between separate building blocks within an application. You're likely familiar with DOM events, used by many plugins to broadcast internal changes (for example, save, initialized, and so on) and to subscribe to external events (for example, click, focus, and so on). Angular directives have access to all those events as well (the $element variable you saw earlier is actually a jQuery-wrapped DOM element), but $scope also provides an additional messaging system that functions only along the scope tree. The $emit and $broadcast methods serve to send messages up and down the scope tree respectively, and like DOM events, they allow directives to subscribe to changes or events within other parts of the application, while still remaining modular and uncoupled from the specific logic used to implement those changes.

If you don't have jQuery included in your application, Angular wraps the element in jqLite, which is a lightweight wrapper that provides the same basic methods.

Additionally, when you add in the use of Angular services, directives gain an even greater vocabulary. Services, among many other things, allow you to share specific pieces of data between the different pieces of your application, such as a collection of user preferences or a utility mapping item codes to their names. Between this shared data and the messaging methods, separate directives are able to communicate fully with each other without requiring a retooling of their internal architecture.
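To make that scope-based messaging concrete, here is a hedged sketch of two scopes conversing along the scope tree; the directive name, controller name, and event names are all invented for the example:

angular.module('myApp')
  .directive('sender', function () {
    return function (scope, element) {
      // jqLite's bind() attaches a plain DOM event handler
      element.bind('click', function () {
        // Send a message up the scope tree to any interested ancestor
        scope.$emit('item:selected', { id: 42 });
      });
    };
  })
  .controller('MainCtrl', ['$scope', function ($scope) {
    // An ancestor scope hears the $emit...
    $scope.$on('item:selected', function (event, data) {
      // ...and rebroadcasts it down to sibling directives below it
      $scope.$broadcast('item:refresh', data);
    });
  }]);

Any directive whose scope sits under MainCtrl can then listen for 'item:refresh' with its own $scope.$on handler, without knowing anything about the sender directive's internals.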
Directives are everything you've dreamed about

Ok, that might be a bit of hyperbole, but you've probably noticed by now that the benefits outlined so far here are exactly in line with the best practices. One of the most common criticisms of Angular is that it's relatively new (especially compared to frameworks such as Backbone and Ember). In contrast, however, I consider that to be one of its greatest assets. Older frameworks all defined themselves largely before there was a consensus on how frontend web applications should be developed. Angular, on the other hand, has had the advantage of being defined after many of the existing best practices had been established, and in my opinion provides the cleanest interface between an application's data and its display. As we've seen already, directives are essentially data driven modules. They allow developers to easily create a packageable feature that declaratively attaches to an element, molds to fit the data at its disposal, and communicates with the other directives around it to ensure coordinated functionality without disruption of existing features.

Summary

In this article, we learned about what attributes define directives and why they're best suited for frontend development, as well as what makes them different from the JavaScript techniques and packages you've likely used before. I realize that's a bold statement, and likely one that you don't fully believe yet.

Resources for Article:

Further resources on this subject:
- Using jQuery and jQueryUI Widget Factory plugins with RequireJS [Article]
- So, what is EaselJS? [Article]
- So, what is KineticJS? [Article]
Packing Everything Together

Packt
22 Aug 2013
13 min read
(For more resources related to this topic, see here.)

Creating a package

When you are distributing your extensions, often the problem you are helping your customer solve cannot be solved with a single extension; it actually requires multiple components, modules, and plugins that work together. Rather than making the user install all of these extensions manually one by one, you can package them all together to create a single install package. Our click-to-call plugin and folio component go together nicely, so let's package them together.

Create a folder named pkg_folio_v1.0.0 on your desktop, and within it, create a folder named packages. Copy into the packages folder the latest version of com_folio and plg_content_clicktocall, for example, com_folio_v2.7.0.zip and plg_content_clicktocall_v1.2.0.zip. Now create a file named pkg_folio.xml in the root of the pkg_folio_v1.0.0 folder, and add the following code to it:

<?xml version="1.0" encoding="UTF-8" ?>
<extension type="package" version="3.0">
  <name>Folio Package</name>
  <author>Tim Plummer</author>
  <creationDate>May 2013</creationDate>
  <packagename>folio</packagename>
  <license>GNU GPL</license>
  <version>1.0.0</version>
  <url>www.packtpub.com</url>
  <packager>Tim Plummer</packager>
  <packagerurl>www.packtpub.com</packagerurl>
  <description>Single Install Package combining Click To Call plugin with Folio component</description>
  <files folder="packages">
    <file type="component" id="folio">com_folio_v2.7.0.zip</file>
    <file type="plugin" id="clicktocall" group="content">plg_content_clicktocall_v1.2.0.zip</file>
  </files>
</extension>

This looks pretty similar to the installation XML file that we created for each component; however, there are a few differences. Firstly, the extension type is package:

<extension type="package" version="3.0">

We have some new tags that help us describe what this package is and who made it. The person creating the package may be different from the original author of the extensions:

<packagename>folio</packagename>
<packager>Tim Plummer</packager>
<packagerurl>www.packtpub.com</packagerurl>

You will notice that we are looking for our extensions in the packages folder; however, this folder could potentially have any name you like:

<files folder="packages">

For each extension, we need to say what type of extension it is, what its name is, and the file containing it:

<file type="component" id="folio">com_folio_v2.7.0.zip</file>

You can package together as many components, modules, and plugins as you like, but be aware that some servers have a maximum size for uploaded files that is quite low, so if you try to package too much together, you may run into problems. Also, you might get timeout issues if the file is too big. You'll avoid most of these problems if you keep the package file under a couple of megabytes.

You can install packages via the Extension Manager in the same way you install any other Joomla! extension:

However, you will notice that the package is listed in addition to all of the individual extensions within it:

Setting up an update server

Joomla! has built-in update software that allows you to easily update your core Joomla! version, often referred to as one-click updates (even though it usually takes a few clicks to launch them). This update mechanism is also available to third-party Joomla! extensions; however, it involves you setting up an update server. You can try this out on your local development environment. To do so, you will need two Joomla!
sites: http://localhost/joomla3, which will be our update server, and http://localhost/joomlatest, which will be the site on which we are going to try to update the extensions. Note that the update server does not need to be a Joomla! site; it could be any folder on a web server.

Install our click-to-call plugin on the http://localhost/joomlatest site, and make sure it's enabled and working. To enable the update manager to check for updates, we need to add some code to the clicktocall.xml installation XML file under /plugins/content/clicktocall/:

<?xml version="1.0" encoding="UTF-8"?>
<extension version="3.0" type="plugin" group="content" method="upgrade">
  <name>Content - Click To Call</name>
  <author>Tim Plummer</author>
  <creationDate>April 2013</creationDate>
  <copyright>Copyright (C) 2013 Packt Publishing. All rights reserved.</copyright>
  <license>http://www.gnu.org/licenses/gpl-3.0.html</license>
  <authorEmail>example@packtpub.com</authorEmail>
  <authorUrl>http://packtpub.com</authorUrl>
  <version>1.2.0</version>
  <description>This plugin will replace phone numbers with click to call links. Requires Joomla 3.0 or greater. Don't forget to publish this plugin!</description>
  <files>
    <filename plugin="clicktocall">clicktocall.php</filename>
    <filename plugin="clicktocall">index.html</filename>
  </files>
  <languages>
    <language tag="en-GB">language/en-GB/en-GB.plg_content_clicktocall.ini</language>
  </languages>
  <config>
    <fields name="params">
      <fieldset name="basic">
        <field name="phoneDigits1" type="text" default="4"
          label="PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS1_LABEL"
          description="PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS1_DESC" />
        <field name="phoneDigits2" type="text" default="4"
          label="PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS2_LABEL"
          description="PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS2_DESC" />
      </fieldset>
    </fields>
  </config>
  <updateservers>
    <server type="extension" priority="1" name="Click To Call Plugin Updates">http://localhost/joomla3/updates/clicktocall.xml</server>
  </updateservers>
</extension>

The type can either be extension or collection; in most cases you'll be using extension, which allows you to update a single extension, as opposed to collection, which allows you to update multiple extensions via a single file:

type="extension"

When you have multiple update servers, you can set a different priority for each, so you can control the order in which the update servers are checked. If the first one is available, it won't bother checking the rest:

priority="1"

The name attribute describes the update server; you can put whatever value you like in here:

name="Click To Call Plugin Updates"

We have told the extension where it is going to check for updates, in this case http://localhost/joomla3/updates/clicktocall.xml. Generally, this should be a publicly accessible site so that users of your extension can check for updates. Note that you can specify multiple update servers for redundancy.

Now, on your http://localhost/joomla3 site, create a folder named updates and put the usual index.html file in it. Copy in the latest version of your plugin, for example, plg_content_clicktocall_v1.2.1.zip. You may wish to make a minor visual change so you can see whether the update actually worked. For example, you could edit the en-GB.plg_content_clicktocall.ini language file under /language/en-GB/, then zip it all back up again.
PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS1_LABEL="Digits first part"
PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS1_DESC="How many digits in the first part of the phone number?"
PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS2_LABEL="Digits last part"
PLG_CONTENT_CLICKTOCALL_FIELD_PHONEDIGITS2_DESC="How many digits in the second part of the phone number?"

Now create the clicktocall.xml file with the following code in your updates folder:

<?xml version="1.0" encoding="utf-8"?>
<updates>
  <update>
    <name>Content - Click To Call</name>
    <description>This plugin will replace phone numbers with click to call links. Requires Joomla 3.0 or greater. Don't forget to publish this plugin!</description>
    <element>clicktocall</element>
    <type>plugin</type>
    <folder>content</folder>
    <client>0</client>
    <version>1.2.1</version>
    <infourl title="Click To Call Plugin 1.2.1">http://packtpub.com</infourl>
    <downloads>
      <downloadurl type="full" format="zip">http://localhost/joomla3/updates/plg_content_clicktocall_v1.2.1.zip</downloadurl>
    </downloads>
    <targetplatform name="joomla" version="3.1" />
  </update>
</updates>

This file could be called anything you like; it does not need to be extensionname.xml, as long as it matches the name you set in your installation XML for the extension. The updates tag surrounds all the update elements. Each time you release a new version, you will need to create another update section. Also, if your extension supports both Joomla! 2.5 and Joomla! 3, you will need separate <update> definitions for each version. And if you want to support updates for both Joomla! 3.0 and Joomla! 3.1, you will need separate tags for each of them.

The value of the name tag is shown in the Extension Manager Update view, so using the same name as your extension should avoid confusion:

<name>Content - Click To Call</name>

The value of the description tag is shown when you hover over the name in the update view.

The value of the element tag is the installed name of the extension. This should match the value in the element column in the jos_extensions table in your database:

<element>clicktocall</element>

The value of the type tag describes whether this is a component, module, or plugin:

<type>plugin</type>

The value of the folder tag is only required for plugins, and describes the type of plugin this is, in our case a content plugin. Depending on your plugin type, this may be system, search, editor, user, and so on:

<folder>content</folder>

The value of the client tag describes the client_id in the jos_extensions table, which tells Joomla! whether this is a site (0) or an administrator (1) extension type. Plugins will always be 0, and components will always be 1; however, modules could vary depending on whether it's a frontend or a backend module:

<client>0</client>

Plugins must have <folder> and <client> elements, otherwise the update check won't work.

The value of the version tag is the version number for this release. This version number needs to be higher than the currently installed version of the extension for available updates to be shown:

<version>1.2.1</version>

The infourl tag is optional, and allows you to show a link to information about the update, such as release notes:

<infourl title="Click To Call Plugin 1.2.1">http://packtpub.com</infourl>

The downloads tag shows all of the available download locations for the update. The value of the downloadurl tag is the URL to download the extension from. This file could be located anywhere you like; it does not need to be in the updates folder on the same site.
The value of the name tag is shown in the Extension Manager Update view, so using the same name as your extension avoids confusion:

<name>Content - Click To Call</name>

The value of the description tag is shown when you hover over the name in the update view.

The value of the element tag is the installed name of the extension. It should match the value in the element column of the jos_extensions table in your database:

<element>clicktocall</element>

The value of the type tag describes whether this is a component, module, or plugin:

<type>plugin</type>

The value of the folder tag is only required for plugins, and describes the type of plugin this is, in our case a content plugin. Depending on your plugin type, this may be system, search, editor, user, and so on:

<folder>content</folder>

The value of the client tag corresponds to the client_id column in the jos_extensions table, which tells Joomla! whether this is a site (0) or an administrator (1) extension. Plugins will always be 0 and components will always be 1; however, modules vary depending on whether it's a frontend or a backend module:

<client>0</client>

Plugins must have <folder> and <client> elements, otherwise the update check won't work.

The value of the version tag is the version number for this release. This version number needs to be higher than the currently installed version of the extension for the update to be shown:

<version>1.2.1</version>

The infourl tag is optional, and allows you to show a link to information about the update, such as release notes:

<infourl title="Click To Call Plugin 1.2.1">http://packtpub.com</infourl>

The downloads tag lists all of the available download locations for the update. The value of each downloadurl tag is the URL to download the extension from. This file can be located anywhere you like; it does not need to be in the updates folder on the same site. The type attribute describes whether this is a full package or an update, and the format attribute defines the package type, such as zip or tar:

<downloadurl type="full" format="zip">http://localhost/joomla3/updates/plg_content_clicktocall_v1.2.1.zip</downloadurl>

The targetplatform tag describes the Joomla! version this update is meant for. The value of the name attribute should always be set to joomla. If you want to target your update at a specific Joomla! version, you can use min_dev_level and max_dev_level here, but in most cases you'd want your update to be available for all Joomla! versions in that release. Note that min_dev_level and max_dev_level are only available in Joomla! 3.1 or higher:

<targetplatform name="joomla" version="3.1" />

So, you should now have the following files in your http://localhost/joomla3/updates folder:

clicktocall.xml
index.html
plg_content_clicktocall_v1.2.1.zip

You can make sure the XML file works by typing the full URL, http://localhost/joomla3/updates/clicktocall.xml, into your browser.

As the update server was not defined in our extension when we installed it, we need to manually add an entry to the jos_update_sites table in our database before the updates will work.

Now go to your http://localhost/joomlatest site and log in to the backend. From the menu, navigate to Extensions | Extension Manager, and then click on the Update menu item on the left-hand side. Click on the Find Updates button, and you should now see the update, which you can install.

Select the Content - Click To Call update and press the Update button, and you should see the successful update message. If all went well, you should now see the visual changes that you made to your plugin.

These built-in updates are pretty good, so why doesn't every extension developer use them? They work great for free extensions, but there is a flaw that prevents many extension developers from using them: there is no way to authenticate the user when they are updating. Essentially, this means that anyone who gets hold of your extension or knows the details of your update server can get ongoing free updates forever, regardless of whether they have purchased your extension or are an active subscriber. Many commercial developers have therefore either implemented their own update solutions or don't bother using the update manager, as their customers can install new versions over the top of previous versions via the Extension Manager. Although this approach is slightly inconvenient for the end user, it makes it easier for the developer to control the distribution.

One such developer who has come up with his own solution to this is Nicholas K. Dionysopoulos from Akeeba, and he has kindly shared his solution, the Akeeba Release System, which you can get for free from his website and easily integrate into your own extensions. As usual, Nicholas has excellent documentation that you can read if you are interested, but it is beyond the scope of this book to go into detail about this alternative solution (https://www.akeebabackup.com/products/akeeba-release-system.html).

Summary

Now you know how to package up your extensions and get them ready for distribution. You also learnt how to set up an update server, so you can easily provide your users with the latest version of your extensions.

Resources for Article:

Further resources on this subject:
Tips and Tricks for Joomla! Multimedia [Article]
Adding a Random Background Image to your Joomla! Template [Article]
Showing your Google calendar on your Joomla! site using GCalendar [Article]