
How-To Tutorials - Web Development

1802 Articles

Search Using Beautiful Soup

Packt
20 Jan 2014
6 min read
(For more resources related to this topic, see here.)

Searching with find_all()

The find() method was used to find the first result matching a particular search criterion that we applied on a BeautifulSoup object. As the name implies, find_all() will give us all the items matching the search criteria we defined. The different filters that we see in find() can be used in find_all(). In fact, these filters can be used in any searching method, such as find_parents() and find_siblings(). Let us consider an example of using find_all().

Finding all tertiary consumers

We saw how to find the first and second primary consumer. If we need to find all the tertiary consumers, we can't use find(). In this case, find_all() becomes handy.

all_tertiaryconsumers = soup.find_all(class_="tertiaryconsumerslist")

The preceding code line finds all the tags with the "tertiaryconsumerslist" class. If we run a type check on this variable, we can see that it is nothing but a list of tag objects:

print(type(all_tertiaryconsumers))
#output
<class 'list'>

We can iterate through this list to display all tertiary consumer names by using the following code:

for tertiaryconsumer in all_tertiaryconsumers:
    print(tertiaryconsumer.div.string)
#output
lion
tiger

Understanding parameters used with find_all()

Like find(), the find_all() method also has a similar set of parameters, with an extra parameter, limit, as shown in the following code line:

find_all(name,attrs,recursive,text,limit,**kwargs)

The limit parameter is used to specify a limit to the number of results that we get. For example, from the e-mail ID sample we saw, we can use find_all() to get all the e-mail IDs. Refer to the following code:

email_ids = soup.find_all(text=emailid_regexp)
print(email_ids)
#output
[u'abc@example.com',u'xyz@example.com',u'foo@example.com']

Here, if we pass limit, it will restrict the result set to the limit we impose, as shown in the following example:

email_ids_limited = soup.find_all(text=emailid_regexp,limit=2)
print(email_ids_limited)
#output
[u'abc@example.com',u'xyz@example.com']

From the output, we can see that the result is limited to two. The find() method is find_all() with limit=1.

We can pass True or False values to the find methods. If we pass True to find_all(), it will return all tags in the soup object. In the case of find(), it will be the first tag within the object. The print(soup.find_all(True)) line of code will print out all the tags associated with the soup object. In the case of searching for text, passing True will return all text within the document as follows:

all_texts = soup.find_all(text=True)
print(all_texts)
#output
[u'\n', u'\n', u'\n', u'\n', u'\n', u'plants', u'\n', u'100000', u'\n', u'\n', u'\n', u'algae', u'\n', u'100000', u'\n', u'\n', u'\n', u'\n', u'\n', u'deer', u'\n', u'1000', u'\n', u'\n', u'\n', u'rabbit', u'\n', u'2000', u'\n', u'\n', u'\n', u'\n', u'\n', u'fox', u'\n', u'100', u'\n', u'\n', u'\n', u'bear', u'\n', u'100', u'\n', u'\n', u'\n', u'\n', u'\n', u'lion', u'\n', u'80', u'\n', u'\n', u'\n', u'tiger', u'\n', u'50', u'\n', u'\n', u'\n', u'\n', u'\n']

The preceding output prints every text content within the soup object, including the new-line characters.

Also, in the case of text, we can pass a list of strings and find_all() will find every string defined in the list:

all_texts_in_list = soup.find_all(text=["plants","algae"])
print(all_texts_in_list)
#output
[u'plants', u'algae']

The same applies when searching for tags, attribute values of tags, custom attributes, and CSS classes.
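The e-mail ID example above refers to an emailid_regexp pattern that is not defined in this excerpt. The following is a minimal, self-contained sketch of the same find_all() calls; the HTML snippet and the regular expression are illustrative assumptions, not the original sample document used in the article.

import re
from bs4 import BeautifulSoup

# A small, made-up document containing three e-mail addresses
html = """
<div>
  <span>abc@example.com</span>
  <span>xyz@example.com</span>
  <span>foo@example.com</span>
</div>
"""
soup = BeautifulSoup(html, "html.parser")

# An illustrative pattern standing in for the emailid_regexp used in the article
emailid_regexp = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

print(soup.find_all(text=emailid_regexp))            # all three addresses
print(soup.find_all(text=emailid_regexp, limit=2))   # only the first two
print(soup.find(text=emailid_regexp))                # find() behaves like find_all() with limit=1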
For finding all the div and li tags, we can use the following code line:

div_li_tags = soup.find_all(["div","li"])

Similarly, for finding tags with the producerlist and primaryconsumerlist classes, we can use the following code line:

all_css_class = soup.find_all(class_=["producerlist","primaryconsumerlist"])

Both find() and find_all() search an object's descendants, that is, its children, their children, and so on. We can control this behavior by using the recursive parameter. If recursive=False, the search happens only on an object's direct children. For example, in the following code, the search for div and li tags happens only at the direct children. Since the direct child of the soup object is html, the following code will give an empty list:

div_li_tags = soup.find_all(["div","li"],recursive=False)
print(div_li_tags)
#output
[]

If find_all() can't find results, it will return an empty list, whereas find() returns None.

Navigation using Beautiful Soup

Navigation in Beautiful Soup is almost the same as searching. Instead of methods, there are certain attributes that facilitate navigation. Each Tag or NavigableString object is a member of the resulting tree, with the BeautifulSoup object placed at the top and the other objects as the nodes of the tree. The following code snippet is an example of an HTML tree:

html_markup = """<div class="ecopyramid">
  <ul id="producers">
    <li class="producerlist">
      <div class="name">plants</div>
      <div class="number">100000</div>
    </li>
    <li class="producerlist">
      <div class="name">algae</div>
      <div class="number">100000</div>
    </li>
  </ul>
</div>"""

For the previous code snippet, Beautiful Soup forms a tree in which the BeautifulSoup object is the root, the Tag objects make up the different nodes, and the NavigableString objects make up the leaves. Navigation in Beautiful Soup is intended to help us visit the nodes of this HTML/XML tree. From a particular node, it is possible to:

Navigate down to the children
Navigate up to the parent
Navigate sideways to the siblings
Navigate to the next and previous objects parsed

We will be using the previous html_markup as an example to discuss the different navigations using Beautiful Soup.

Summary

In this article, we discussed in detail the different search methods in Beautiful Soup, namely find(), find_all(), find_next(), and find_parents(); code examples for a scraper using search methods to get information from a website; and the application of search methods in combination. We also discussed in detail the different navigation methods provided by Beautiful Soup: methods specific to navigating downwards, upwards, sideways, and to the previous and next elements of the HTML tree.

Resources for Article:

Further resources on this subject:

Web Services Testing and soapUI [article]
Web Scraping with Python [article]
Plotting data using Matplotlib: Part 1 [article]
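The navigation attributes themselves are not shown in this excerpt. The following is a minimal sketch of how the four navigation directions look for the html_markup snippet from the navigation section above; it uses standard Beautiful Soup navigation attributes and methods (parent, find_next_sibling(), find_next()), and the printed values are simply what this particular markup yields.

from bs4 import BeautifulSoup

html_markup = """<div class="ecopyramid">
<ul id="producers">
<li class="producerlist"><div class="name">plants</div><div class="number">100000</div></li>
<li class="producerlist"><div class="name">algae</div><div class="number">100000</div></li>
</ul>
</div>"""

soup = BeautifulSoup(html_markup, "html.parser")

# Navigate down: the first <ul> and its first <li> child
producers = soup.ul
first_producer = producers.li
print(first_producer.div.string)                 # plants

# Navigate up: the parent of the <li> is the <ul id="producers">
print(first_producer.parent["id"])               # producers

# Navigate sideways: the next <li> sibling
second_producer = first_producer.find_next_sibling("li")
print(second_producer.div.string)                # algae

# Navigate through the parse order: the next <div> parsed after the first <li>
print(first_producer.find_next("div").string)    # plants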

Your First Application

Packt
15 Jan 2014
7 min read
(For more resources related to this topic, see here.)

Sketching out the application

We are going to build a browsable database of cat profiles. Visitors will be able to create pages for their cats and fill in basic information such as the name, date of birth, and breed of each cat. This application will support the default Create-Retrieve-Update-Delete (CRUD) operations. We will also create an overview page with the option to filter cats by breed. All of the security, authentication, and permission features are intentionally left out since they will be covered later.

Entities, relationships, and attributes

Firstly, we need to define the entities of our application. In broad terms, an entity is a thing (person, place, or object) about which the application should store data. From the requirements, we can extract the following entities and attributes:

Cats, which have a numeric identifier, a name, a date of birth, and a breed
Breeds, which only have an identifier and a name

This information will help us when defining the database schema that will store the entities, relationships, and attributes, as well as the models, which are the PHP classes that represent the objects in our database.

The map of our application

We now need to think about the URL structure of our application. Having clean and expressive URLs has many benefits. On a usability level, the application will be easier to navigate and look less intimidating to the user. For frequent users, individual pages will be easier to remember or bookmark and, if they contain relevant keywords, they will often rank higher in search engine results. To fulfill the initial set of requirements, we are going to need the following routes in our application:

Method  Route               Description
GET     /                   Index
GET     /cats               Overview page
GET     /cats/breeds/:name  Overview page for a specific breed
GET     /cats/:id           Individual cat page
GET     /cats/create        Form to create a new cat page
POST    /cats               Handle creation of a new cat page
GET     /cats/:id/edit      Form to edit an existing cat page
PUT     /cats/:id           Handle updates to a cat page
GET     /cats/:id/delete    Form to confirm deletion of a page
DELETE  /cats/:id           Handle deletion of a cat page

We will shortly learn how Laravel helps us turn this routing sketch into actual code. If you have written PHP applications without a framework, you can briefly reflect on how you would have implemented such a routing structure. To add some perspective, this is what the second-to-last URL could have looked like with a traditional PHP script (without URL rewriting): /index.php?p=cats&id=1&_action=delete&confirm=true.

The preceding table can be prepared using a pen and paper, in a spreadsheet editor, or even in your favorite code editor using ASCII characters. In the initial development phases, this table of routes is an important prototyping tool that forces you to think about URLs first and helps you define and refine the structure of your application iteratively.

If you have worked with REST APIs, this kind of routing structure will look familiar to you. In RESTful terms, we have a cats resource that responds to the different HTTP verbs and provides an additional set of routes to display the necessary forms. If, on the other hand, you have not worked with RESTful sites, the use of the PUT and DELETE HTTP methods might be new to you. Even though web browsers do not support these methods for standard HTTP requests, Laravel uses a technique that other frameworks such as Rails also use, and emulates those methods by adding a _method input field to the forms.
This way, they can be sent over a standard POST request and are then delegated to the correct route or controller method in the application. Note also that none of the form submission endpoints are handled with a GET method. This is primarily because they have side effects; a user could trigger the same action multiple times accidentally when using the browser history. Therefore, when they are called, these routes never display anything to the users. Instead, they redirect them after completing the action (for instance, DELETE /cats/:id will redirect the user to GET /cats).

Starting the application

Now that we have the blueprints for the application, let's roll up our sleeves and start writing some code. Start by opening a new terminal window and create a new project with Composer, as follows:

$ composer create-project laravel/laravel cats --prefer-dist
$ cd cats

Once Composer finishes downloading Laravel and resolving its dependencies, you will have a directory structure identical to the one presented previously.

Using the built-in development server

To start the application, unless you are running an older version of PHP (5.3.*), you will not need a local server such as WAMP on Windows or MAMP on Mac OS, since Laravel can use the development server that is bundled with PHP 5.4 or later. To start the development server, we use the following artisan command:

$ php artisan serve

Artisan is the command-line utility that ships with Laravel, and its features will be covered in more detail later. Next, open your web browser and visit http://localhost:8000; you will be greeted with Laravel's welcome message.

If you get an error telling you that the php command does not exist or cannot be found, make sure that it is present in your PATH variable. If the command fails because you are running PHP 5.3 and you have no upgrade possibility, simply use your local development server (MAMP/WAMP) and set Apache's DocumentRoot to point to cats-app/public/.

Writing the first routes

Let's start by writing the first two routes of our application inside app/routes.php. This file already contains some comments as well as a sample route. You can keep the comments, but you must remove the existing route before adding the following routes:

Route::get('/', function(){
  return "All cats";
});

Route::get('cats/{id}', function($id){
  return "Cat #$id";
});

The first parameter of the get method is the URI pattern. When a pattern is matched, the closure function in the second parameter is executed with any parameters that were extracted from the pattern. Note that the slash prefix in the pattern is optional; however, you should not have any trailing slashes. You can make sure that your routes work by opening your web browser and visiting http://localhost:8000/cats/123. If you are not using PHP's built-in development server and are getting a 404 error at this stage, make sure that Apache's mod_rewrite configuration is enabled and works correctly.

Restricting the route parameters

In the pattern of the second route, {id} currently matches any string or number. To restrict it so that it only matches numbers, we can chain a where method to our route as follows:

Route::get('cats/{id}', function($id){
  return "Cat #$id";
})->where('id', '[0-9]+');

The where method takes two arguments: the first one is the name of the parameter and the second one is the regular expression pattern that it needs to match.
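The routing table above includes PUT and DELETE routes, and the article notes that Laravel emulates these verbs with a hidden _method input field, but the markup for that is not shown. The following is a minimal sketch of a DELETE route and the form that targets it, based on the Laravel 4 conventions used in this article; the URL, the redirect target, and the deletion comment are illustrative assumptions rather than the book's final implementation.

// app/routes.php: the DELETE endpoint listed in the routing table above
Route::delete('cats/{id}', function($id){
    // Delete the cat record identified by $id here (omitted in this sketch),
    // then redirect back to the overview page, as described in the text.
    return Redirect::to('cats');
})->where('id', '[0-9]+');

The corresponding form is sent as a regular POST request; the hidden _method field tells Laravel to dispatch it to the DELETE route:

<!-- Browsers only support GET and POST in plain forms, so the verb is spoofed -->
<form method="POST" action="/cats/1">
    <input type="hidden" name="_method" value="DELETE">
    <button type="submit">Delete this cat</button>
</form>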
Summary

We have covered a lot in this article. We learned how to define routes, prepare the models of the application, and interact with them. Moreover, we have had a glimpse of the many powerful features of Eloquent and Blade, as well as the other convenient helpers in Laravel for creating forms and input fields: all of this in under 200 lines of code!

Resources for Article:

Further resources on this subject:

Laravel 4 - Creating a Simple CRUD Application in Hours [Article]
Creating and Using Composer Packages [Article]
Building a To-do List with Ajax [Article]

Preparing and Configuring Your Magento Website

Packt
10 Jan 2014
8 min read
(For more resources related to this topic, see here.) Focusing on your keywords We'll focus on three major considerations when choosing where to place our keywords within a Magento store: Purpose : What is the purpose of optimizing this keyword? Relevance : Is the keyword relevant to the page we have chosen to optimize it for? Structure : Does the structure of the website re-enforce the nature of our keyword? The purpose for choosing keywords to optimize on our Magento store must always be to increase our sales. It is true that (generically speaking) optimizing keywords means driving visitors to our website, but in the case of an e-commerce website, the end goal—the true justification of any SEO campaign—must be increasing the number of sales. We must then make sure that our visitors not just visit our website, but visit with the intention of buying something. The keywords we have chosen to optimize must be relevant to the page we are optimizing them on. The page, therefore, must contain elements specifically related to our keyword, and any unrelated material must be kept to a minimum. Driving potential customers to a page where their search term is unrelated to the content not only frustrates the visitor, but also lessens their desire to purchase from our website. The structure of our website must complement our chosen keyword. Competitive phrases, usually broader phrases with the highest search volume, are naturally the hardest to optimize. These types of keywords require a strong page to effectively optimize them. In most cases, the strength of a page is related to its level or tier within the URL. For example, the home page is normally seen as being the strongest page suitable for high search volume broad phrases followed by a tiered structure of categories, subcategories, and finally, product pages, as this diagram illustrates: With that said, we must be mindful of all three considerations when matching our keywords to our pages. As the following diagram shows, the relationship between these three elements is vital for ensuring not only that our keyword resides on a page with enough strength to enable it to perform, but also that it has enough relevance to retain our user intent at the same time as adhering to our overall purpose: The role of the home page You may be forgiven for thinking that optimizing our most competitive keyword on the home page would lead to the best results. However, when we take into account the relevance of our home page, does it really match our keyword? The answer is usually that it doesn't. In most cases, the home page should be used exclusively as a platform for building our brand identity . Our brand identity is the face of our business and is how customers will remember us long after they've purchased our goods and exited our website. In rare cases, we could optimize keywords on our home page that directly match our brand; for example, if our company name is "Wooden Furniture Co.", it might be acceptable to optimize for "Wooden Furniture" on our home page. It would also be acceptable if we were selling a single item on a single-page e-commerce website. In a typical Magento store, we would hope to see the following keyword distribution pattern: The buying intention of our visitors will almost certainly differ between each of these types of pages. Typically, a user entering our website via a broad phrase will have less of an intention to buy our products than a visitor entering our website through a more specific, product-related search term. 
Structuring our categories for better optimization Normally, our most competitive keywords will be classified as broad keywords, meaning that their relevance could be attributed to a variety of similar terms. This is why it makes sense to use top-level or parent categories as a basis for our broad phrases. To use our example, Wooden Furniture would be an ideal top-level category to contain subcategories such as 'Wooden Tables', 'Wooden Chairs', and 'Wooden Wardrobes', with content on our top-level category page to highlight these subcategories. On the Magento administration panel, go to Catalog | Manage Categories . Here, we can arrange our category structure to match our keyword relevance and broadness. In an ideal world, we would plan out our category structure before implementing it; sadly, that is not always the case. If we need to change our category structure to better match our SEO strategy, Magento provides a simple way to alter our category hierarchy. For example, say we currently have a top-level category called Furniture , and within this category, we have Wooden Furniture , and we decide that we're only optimizing for Wooden Furniture ; we can use Magento's drag-and-drop functionality to move Wooden Furniture to become a top-level category. To do this, we would have to perform the following steps: Navigate to Catalog | Manage Categories . Drag our Wooden Furniture category to the same level as Furniture . We will see that our URL has now changed from http://www.mydomain.com/furniture/wooden-furniture.html to http://www.mydomain.com/wooden-furniture.html. We will also notice that our old URL now redirects to our new URL; this is due to Magento's inbuilt URL Rewrite System. When moving our categories within the hierarchy, Magento will remember the old URL path that was specified and automatically create a redirect to the new location. This is fantastic for our SEO strategy as 301 redirects are vital for passing on authority from the old page to the new. If we wanted to have a look at these rewrites ourselves, we could perform the following steps: Navigate to Catalog | URL Rewrite Management . From the table, we could find our old request path and see the new target path that has been assigned. Not only does Magento keep track of our last URL, but any previous URLs also become rewritten. It is therefore not surprising that a large Magento store with numerous products and categories could have thousands upon thousands of rows within this table, especially when each URL is rewritten on a per-store basis. There are many configuration options within Magento that allow us to decide how and what Magento rewrites for us automatically. Another important point to note is that your category URL key may change depending on whether an existing category with the same URL key at the same level had existed previously in the system. If this situation occurs, an automatic incremental integer is appended to the URL key, for example, wooden-furniture-2.html. Magento Enterprise Edition has been enhanced to only allow unique URL keys. To know more, go to goo.gl/CKprNB. Optimizing our CMS pages CMS pages within Magento are primarily used as information pages. Terms and conditions, privacy policy, and returns policy are all examples of CMS pages that are created and configured within the Magento administration panel under CMS | Pages . By default, the home page of a Magento store is a CMS page with the title Home Page . 
The page that is served as the home page can be configured within the Magento Configuration under System | Configuration | Web | Default Pages . The most important part of a CMS page setup is that its URL key is always relative to the website's base URL. This means that when creating CMS pages, you can manually choose how deep you wish the page to exist on the site. This gives us the ability to create as many nested CMS pages as we like. Another important point to note is that, by default, CMS pages have no file extension (URL suffix) as opposed to the category and product URLs where we can specify which extension to use (if any). For CMS pages, the default optimization methods that are available to us are found within the Page Information tabs after selecting a CMS page: Under the Page Information subtab, we can choose our Page Title and URL key Under the Content subtab, we can enter our Content Heading (by default, this gets inserted into an <h1> tag) and enter our body content Under the Meta Data subtab, we can specify our keywords and description As mentioned previously, we would focus optimization on these pages purely for the intent of our users. If we were not using custom blocks or other methods to display product information, we would not optimize these information pages for keywords relating to purchasing a product. Summary In this article, we have learned the basic concepts of keyword placement and the roles of the different types of pages to prepare and configure your Magento website. Resources for Article : Further resources on this subject: Magento: Exploring Themes [Article] Magento : Payment and shipping method [Article] Integrating Twitter with Magento [Article]

Marionette View Types and Their Use

Packt
07 Jan 2014
15 min read
(For more resources related to this topic, see here.)

Marionette.View and Marionette.ItemView

The Marionette.View extends the Backbone.View, and it's important to remember this because all the knowledge that we already have on creating a view will be useful while working with this new set of Marionette views. Each of them aims to provide a specific piece of out-of-the-box functionality so that you spend less time focusing on the glue code needed to make things work, and more time on things that are related to the needs of your application. This allows you to focus all your attention on the specific logic of your application.

We will start by describing the Marionette.View part of Marionette, as all of the other views extend from it; the reason we do this is that this view provides very useful functionality. But it's important to notice that this view is not intended to be used directly. As it is the base view from which all the other views inherit, it is an excellent place to contain some of the glue code that we just talked about.

A good example of that functionality is the close method, which is responsible for removing .el from the DOM. This method also takes care of unbinding all your events, thus avoiding the problem called zombie views. This is an issue that you can have if you don't close a regular Backbone view carefully: the events of a previously closed view remain bound to the HTML elements used in the view, those elements are present again in the DOM once the view has been rerendered, and during the recreation of the view new event listeners are attached to the same HTML elements.

From the documentation of the Marionette.View, we know exactly what the close method does:

It calls an onBeforeClose event on the view, if one is provided
It calls an onClose event on the view, if one is provided
It unbinds all custom view events
It unbinds all DOM events
It removes this.el from the DOM
It unbinds all listenTo events

The link to the official documentation of the Marionette.View object is https://github.com/marionettejs/backbone.marionette/blob/master/docs/marionette.view.md.

It's important to mention that the third point, unbind all custom view events, will unbind events created using the modelEvents hash, those created on the events hash, and events created via this.listenTo. As the close method is already provided and implemented, you don't need to perform the unbind and remove tasks listed previously. While most of the time this will be enough, at times one of your views will need to perform extra work in order to close properly; in this case, two events are fired while closing a view.

The onBeforeClose event, as the name indicates, will be fired just before the close method. It will call a function of the same name, onBeforeClose, where we can add the code that needs to be executed at this point.

onBeforeClose : function () {
  // code to be run before closing the view
}

The second event will be onClose, which will be fired after the close method, so that the .el of the view won't be present anymore and all the unbind tasks will have been performed.

onClose : function () {
  // code to be run after closing the view
}

One of the core ideas behind Marionette is to reduce the boilerplate code that you have to write when building apps with Backbone.
A perfect example of this is the render method that you have to implement in every Backbone view, where the code is pretty much the same in each of your views: load the template with the underscore _.template function and then pass the model, converted to JSON, to the template. The following is an example of the repetitive code needed to render a view in Backbone:

render : function () {
  var template = $( '#mytemplate' ).html();
  var templateFunction = _.template( template );
  var modelToJSON = this.model.toJSON();
  var result = templateFunction(modelToJSON);
  var myElement = $( '#MyElement' );
  myElement.html( result );
}

With Marionette, defining a render function is no longer required; just like the close method, the preceding code will be called for you behind the scenes. In order to render a view, we just need to declare it with a template property set:

var SampleView = Backbone.Marionette.ItemView.extend({
  template : '#sample-template'
});

Next, we just create a Backbone model, and we pass it to the ItemView constructor:

var SampleModel = Backbone.Model.extend({
  defaults : {
    value1 : "A random Value",
    value2 : "Another Random Value"
  }
});

var sampleModel = new SampleModel();
var sampleView = new SampleView({model:sampleModel});

And then the only thing left is to call the render function:

sampleView.render();

If you want to see it running, please go through this JSFiddle that illustrates the previous code: http://jsfiddle.net/rayweb_on/VS9hA/

One thing to note is that we just needed one line to specify the template, and Marionette did the rest by rendering our view with the specified template. Notice that in this example we used the ItemView constructor; we should not use Marionette.View directly, as it does not have much functionality of its own. It just serves as the base for other views. So some of the following examples of the functionality provided by Marionette.View will be demonstrated using ItemView, as this view inherits all of this functionality through extension.

As we saw in the previous example, ItemView works perfectly for rendering a single model using a template, but what about rendering a collection of models? If you just need to render, for example, a list of books or categories, you can still use ItemView. To accomplish this, the template that you assign to ItemView must know how to handle the creation of the DOM to properly display that list of items.

Let's render a list of books. The Backbone model will have two properties: the book name and the book ID. We just want to create a list of links using the book name as the value to be displayed; the ID of the book will be used to create a link to see the specific book.
First, let's create the book Backbone model for this example and its collection:

var BookModel = Backbone.Model.extend({
  defaults : {
    id : "1",
    name : "First"
  }
});

var BookCollection = Backbone.Collection.extend({
  model : BookModel
});

Now let's instantiate the collection and add three models to it:

var bookModel = new BookModel();
var bookModel2 = new BookModel({id:"2",name:"second"});
var bookModel3 = new BookModel({id:"3",name:"third"});

var bookCollection = new BookCollection();
bookCollection.add(bookModel);
bookCollection.add(bookModel2);
bookCollection.add(bookModel3);

In our HTML, let's create the template to be used in this view; the template should look like the following:

<script id="books-template" type="text/html">
  <ul>
    <% _.each(items, function(item){ %>
      <li><a href="book/<%= item.id %>"><%= item.name %></a></li>
    <% }); %>
  </ul>
</script>

Now we can render the book list using the following code snippet:

var BookListView = Marionette.ItemView.extend({
  template: "#books-template"
});

var view = new BookListView ({
  collection: bookCollection
});

view.render();

If you want to see it in action, go to the working code in JSFiddle at http://jsfiddle.net/rayweb_on/8QAgQ/.

The previous code produces an unordered list of books with links to each specific book. Again, we gained the benefit of writing very little code, as we didn't need to specify the render function. The name could be misleading, because ItemView is perfectly capable of rendering a model or a collection. Whether to use CollectionView or ItemView will depend on what we are trying to accomplish. If we need a set of individual views with their own functionality, CollectionView is the right choice, as we will see when we get to the point of reviewing it. But if we just need to render the values of a collection, ItemView is the perfect choice.

Handling events in the views

To keep track of model events or collection events, we must write the following code snippet in a regular Backbone view:

this.listenTo(this.model, "change:title", this.titleChanged);
this.listenTo(this.collection, "add", this.collectionChanged);

To handle these events, we use the following handler functions:

titleChanged: function(model, value){alert("changed");},
collectionChanged: function(model, value){alert("added");},

This still works fine in Marionette, but we can accomplish the same thing by declaring these events using the following configuration hashes:

modelEvents: {
  "change:title": "titleChanged"
},
collectionEvents: {
  "add": "collectionChanged"
},

This will give us exactly the same result, but the configuration hash is very convenient, as we can keep adding events to our model or collection, and the code is cleaner and very easy to follow.

The modelEvents and collectionEvents hashes are not the only configuration hashes that we have available in each one of the Marionette views; the ui configuration hash is also available. It may be the case that one of the DOM elements on your view will be used many times to read its value, and doing this with jQuery each time is not optimal in terms of performance. Also, we would have the jQuery reference in several places, repeating ourselves and making our code less DRY.

Inside a Backbone view, we can define a set of events that will be fired once an action is taken in the DOM; for instance, we pass the function that we want to handle the click of a button:

events : {
  "click #button2" : "updateValue"
},

This will invoke the updateValue function once we click on button2.
This works fine, but what about calling a method that is not inside the view? To accomplish this, Marionette provides the triggers functionality, which fires events that can be listened to outside of your view. To declare a trigger, we can use the same syntax used in the events object, as follows:

triggers : {
  "click #button1": "trigger:alert"
},

And then, we can listen to that event somewhere else using the following code:

sampleView.on("trigger:alert", function(args){
  alert(args.model.get("value2"));
});

In the previous code, we used the model to alert and display the value of the property value2. The args parameter received by the function will contain objects that you can use:

The view that fired the trigger
The Backbone model or collection of that view

UI and templates

While working with a view, you will need a reference to a particular HTML element, through jQuery, in more than one place in your view. This means you will make a reference to a button during initialization and in a few other methods of the view. To avoid having the jQuery selector duplicated in each of these methods, you can map that UI element in a hash so that the selector is preserved. If you need to change it, the change will be done in a single place. To create this mapping of UI elements, we need to add the following declaration:

ui: {
  quantity: "#quantity",
  saveButton : "#Save"
},

And to make use of these mapped UI elements, we just need to refer to them inside any function by the name given in the configuration:

validateQuantity: function() {
  if (this.ui.quantity.val() > 0) {
    this.ui.saveButton.addClass('active');
  }
}

There will be times when you need to pass a different template to your view. To do this in Marionette, we remove the template declaration and instead add a function called getTemplate. The following code snippet illustrates the use of this function:

getTemplate: function(){
  if (this.model.get("foo")){
    return "#sample-template";
  } else {
    return "#a-different-template";
  }
},

In this case, we check the existence of the property foo; if it's not present, we use a different template, and that will be it. You don't need to specify the render function because it will work the same way as declaring a template variable, as seen in one of the previous examples. If you want to learn more about all the concepts that we have discussed so far, please refer to the JSFiddle link: http://jsfiddle.net/rayweb_on/NaHQS/.

If you find yourself needing to make calculations involving a complicated process while rendering a value, you can make use of template helpers, which are functions contained in an object called templateHelpers. Let's look at an example that will illustrate its use better. Suppose we need to show the value of a book but are offering a discount that we need to calculate; we can use the following code:

var PriceView = Backbone.Marionette.ItemView.extend({
  template: "#price-template",
  templateHelpers: {
    calculatePrice: function(){
      // logic to calculate the price goes here
      return price;
    }
  }
});

As you can see in the previous code, we declared an object literal that will contain functions that can be called from the templates:

<script id="price-template" type="text/html">
  Take this book with you for just : <%= calculatePrice () %>
</script>

Marionette.CollectionView

Rendering a list of things like books inside one view is possible, but we want to be able to interact with each item. One solution would be to create each item's view one by one with the help of a loop.
But Marionette solves this in a very elegant way by introducing the concept of the CollectionView, which renders a child view for each of the elements in the collection we want to display. A good example to put into practice would be to list the books by category and create a collection view. This is incredibly easy.

First, you need to define how each of your items should be displayed; this means defining how each item will be transformed into a view. For our categories example, we want each item to be a list element (<li>) and part of our collection; the <ul> list will contain each category view. We first declare the ItemView as follows:

var CategoryView = Backbone.Marionette.ItemView.extend({
  tagName : 'li',
  template: "#categoryTemplate"
});

Then we declare the CollectionView, which specifies the item view to use:

var CategoriesView = Backbone.Marionette.CollectionView.extend({
  tagName : 'ul',
  className : 'unstyled',
  itemView: CategoryView
});

A good thing to notice is that even when we are using Marionette views, we are still able to use the standard properties that Backbone views offer, such as tagName and className. Finally, we create a collection and we instantiate the CollectionView, passing the collection as a parameter:

var categoriesView = new CategoriesView({collection:categories});
categoriesView.render();

And that's it. Simple, huh? The advantage of using this view is that it will render a view for each item, and each of those views can have a lot of functionality; we can control all those views from the CollectionView that serves as their container. You can see it in action at http://jsfiddle.net/rayweb_on/7usdJ/.

Marionette.CompositeView

The Marionette.CompositeView offers the possibility of rendering not only a model or a collection of models, but both a model and a collection together. That's why this view fits perfectly in our BookStore website. We will be adding single items to the shopping cart, books in this case, and we will be storing these books in a collection. But we also need to calculate the subtotal of the order, show the calculated tax, and an order total; all of these properties will be part of a totals model that we will display along with the ordered books.

But there is a problem. What should we display in the order region when there are no items added? Well, in the CompositeView and the CollectionView, we can set an emptyView property, which will be a view to show in case there are no models in the collection. Once we add a model, we can then render the item and the totals model.

Perhaps at this point you may think that you have lost control over your render functionality, and there will be cases where you need to modify your HTML. Well, in that scenario you should use the onRender() function, a very helpful method that allows you to manipulate the DOM just after the render method has been called.

Finally, we would like to set a template with some headers. These headers are not part of an ItemView, so how can we display them? Let's have a look at part of the code snippet that explains how each part solves our needs:

var OrderListView = Backbone.Marionette.CompositeView.extend({
  tagName: "table",
  template: "#orderGrid",
  itemView: CartApp.OrderItemView,
  emptyView: CartApp.EmptyOrderView,
  className: "table table-hover table-condensed",
  appendHtml: function (collectionView, itemView) {
    collectionView.$("tbody").append(itemView.el);
  },

So far we have defined the view and set the template; the itemView and emptyView properties will be used to render our view.
The onBeforeRender is a function that will be called, as the name indicates, before the render method; this function will allow us to calculate the totals that will be displayed in the total model. onBeforeRender: function () { var subtotal = this.collection.getTotal(); var tax = subtotal * .08; var total = subtotal + tax; this.model.set({ subtotal: subtotal }); this.model.set({ tax: tax }); this.model.set({ total: total }); }, The onRender method is used here to check whether there are no models in the collection (that is, the user hasn't added a book to the shopping cart). If not, we should not display the header and footer regions of the view. onRender: function () { if (this.collection.length > 0) { this.$('thead').removeClass('hide'); this.$('tfoot').removeClass('hide'); } }, As we can see, Marionette does a great job offering functions that can remove a lot of boilerplate code and also give us full control over what is being rendered. Summary This article covered the introduction and usage of view types that Marionette has. Now you must be quite familiar with the Marionette.View and Marionette.ItemView view types of Marionette. Resources for Article: Further resources on this subject: Mobile Devices [Article] Puppet: Integrating External Tools [Article] Understanding Backbone [Article]

Creating Identity and Resource Pools in Cisco Unified Computing System

Packt
24 Dec 2013
7 min read
Computers and their various peripherals have some unique identities such as Universally Unique Identifiers (UUIDs), Media Access Control (MAC) addresses of Network Interface Cards (NICs), World Wide Node Numbers (WWNNs) for Host Bus Adapters (HBAs), and others. These identities are used to uniquely identify a computer system in a network. For traditional computers and peripherals, these identities were burned into the hardware and, hence, couldn't be altered easily. Operating systems and some applications rely on these identities and may fail if these identities are changed. In case of a full computer system failure or failure of a computer peripheral with unique identity, administrators have to follow cumbersome firmware upgrade procedures to replicate the identities of the failed components on the replacement components. The Unified Computing System (UCS) platform introduced the idea of creating identity and resource pools to abstract the compute node identities from the UCS Manager (UCSM) instead of using the hardware burned-in identities. In this article, we'll discuss the different pools you can create during UCS deployments and server provisioning. We'll start by looking at what pools are and then discuss the different types of pools and show how to configure each of them. Understanding identity and resource pools The salient feature of the Cisco UCS platform is stateless computing . In the Cisco UCS platform, none of the computer peripherals consume the hardware burned-in identities. Rather, all the unique characteristics are extracted from identity and resource pools, which reside on the Fabric Interconnects (FIs) and are managed using UCSM. These resource and identity pools are defined in an XML format, which makes them extremely portable and easily modifiable. UCS computers and peripherals extract these identities from UCSM in the form of a service profile. A service profile has all the server identities including UUIDs, MACs, WWNNs, firmware versions, BIOS settings, and other server settings. A service profile is associated with the physical server using customized Linux OS that assigns all the settings in a service profile to the physical server. In case of server failure, the failed server needs to be removed and the replacement server has to be associated with the existing service profile of the failed server. In this service profile association process, the new server will automatically pick up all the identities of the failed server, and the operating system or applications dependent upon these identities will not observe any change in the hardware. In case of peripheral failure, the replacement peripheral will automatically acquire the identities of the failed component. This greatly improves the time required to recover a system in case of a failure. Using service profiles with the identity and resource pools also greatly improves the server provisioning effort. A service profile with all the settings can be prepared in advance while an administrator is waiting for the delivery of the physical server. The administrator can create service profile templates that can be used to create hundreds of service profiles; these profiles can be associated with the physical servers with the same hardware specifications. Creating a server template is highly recommended as this greatly reduces the time for server provisioning. This is because a template can be created once and used for any number of physical servers with the same hardware. 
Server identity and resource pools are created using the UCSM. In order to better organize, it is possible to define as many pools as are needed in each category. Keep in mind that each defined resource will consume space in the UCSM database. It is, therefore, a best practice to create identity and resource pool ranges based on the current and near-future assessments. For larger deployments, it is best practice to define a hierarchy of resources in the UCSM based on geographical, departmental, or other criteria; for example, a hierarchy can be defined based on different departments. This hierarchy is defined as an organization, and the resource pools can be created for each organizational unit. In the UCSM, the main organization unit is root, and further suborganizations can be defined under this organization. The only consideration to be kept in mind is that pools defined under one organizational unit can't be migrated to other organizational units unless they are deleted first and then created again where required. The following diagram shows how identity and resource pools provide unique features to a stateless blade server and components such as the mezzanine card: Learning to create a UUID pool UUID is a 128-bit number assigned to every compute node on a network to identify the compute node globally. UUID is denoted as 32 hexadecimal numbers. In the Cisco UCSM, a server UUID can be generated using the UUID suffix pool. The UCSM software generates a unique prefix to ensure that the generated compute node UUID is unique. Operating systems including hypervisors and some applications may leverage UUID number binding. The UUIDs generated with a resource pool are portable. In case of a catastrophic failure of the compute node, the pooled UUID assigned through a service profile can be easily transferred to a replacement compute node without going through complex firmware upgrades. Following are the steps to create UUIDs for the blade servers: Log in to the UCSM screen. Click on the Servers tab in the navigation pane. Click on the Pools tab and expand root. Right-click on UUID Suffix Pools and click on Create UUID Suffix Pool as shown in the following screenshot: In the pop-up window, assign the Name and Description values to the UUID pool. Leave the Prefix value as Derived to make sure that UCSM makes the prefix unique. The selection of Assignment Order as Default is random. Select Sequential to assign the UUID sequentially. Click on Next as shown in the following screenshot: Click on Add in the next screen. In the pop-up window, change the value for Size to create a desired number of UUIDs. Click on OK and then on Finish in the previous screen as shown in the following screenshot: In order to verify the UUID suffix pool, click on the UUID Suffix Pools tab in the navigation pane and then on the UUID Suffixes tab in the work pane as shown in the following screenshot: Learning to create a MAC pool MAC is a 48-bit address assigned to the network interface for communication in the physical network. MAC address pools make server provisioning easier by providing scalable NIC configurations before the actual deployment. Following are the steps to create MAC pools: Log in to the UCSM screen. Click on the LAN tab in the navigation pane. Click on the Pools tab and expand root. Right-click on MAC Pools and click on Create MAC Pool as shown in the following screenshot: In the pop-up window, assign the Name and Description values to the MAC pool. 
The selection of Default as the Assignment Order value is random. Select Sequential to assign the MAC addresses sequentially. Click on Next as shown in the following screenshot: Click on Add in the next screen. In the pop-up window, change Size to create the desired number of MAC addresses. Click on OK and then on Finish in the previous screen as shown in the following screenshot: In order to verify the MAC pool, click on the MAC Pools tab in the navigation pane and then on the MAC Addresses tab in the work pane as shown in the following screenshot:

Working with Tooltips

Packt
23 Dec 2013
6 min read
(For more resources related to this topic, see here.) The jQuery team introduced their version of the tooltip as part of changes to Version 1.9 of the library; it was designed to act as a direct replacement for the standard tooltip used in all browsers. The difference here, though, was that whilst you can't style the standard tooltip, jQuery UI's replacement is intended to be accessible, themeable, and completely customizable. It has been set to display not only when a control receives focus, but also when you hover over that control, which makes it easier to use for keyboard users. Implementing a default tooltip Tooltips were built to act as direct replacements for the browser's native tooltips. They will recognize the default markup of the title attribute in a tag, and use it to automatically add the additional markup required for the widget. The target selector can be customized though using tooltip's items and content options. Let's first have a look at the basic structure required for implementing tooltips. In a new file in your text editor, create the following page: <!DOCTYPE HTML> <html> <head> <meta charset="utf-8"> <title>Tooltip</title> <link rel="stylesheet" href="development- bundle/themes/redmond/jquery.ui.all.css"> <style> p { font-family: Verdana, sans-serif; } </style> <script src = "js/jquery-2.0.3.js"></script> <script src = "development- bundle/ui/jquery.ui.core.js"></script> <script src = "development-bundle/ui/jquery.ui.widget.js"> </script> <script src = "development-bundle/ui/jquery.ui.position.js"> </script> <script src = "development-bundle/ui/jquery.ui.tooltip.js"> </script> <script> $(document).ready(function($){ $(document).tooltip(); }); </script> </head> <body> <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Nulla blandit mi quis imperdiet semper. Fusce vulputate venenatis fringilla. Donec vitae facilisis tortor. Mauris dignissim nibh ac justo ultricies, nec vehicula ipsum ultricies. Mauris molestie felis ligula, id tincidunt urna consectetur at. Praesent <a href="http://www. ipsum.com" title="This was generated from www.ipsum.com">blandit</a> faucibus ante ut semper. Pellentesque non tristique nisi. Ut hendrerit tempus nulla, sit amet venenatis felis lobortis feugiat. Nam ac facilisis magna. Praesent consequat, risus in semper imperdiet, nulla lorem aliquet nisi, a laoreet nisl leo rutrum mauris.</p> </body> </html> Save the code as tooltip1.html in your jqueryui working folder. Let's review what was used. The following script and CSS resources are needed for the default tooltip widget configuration: jquery.ui.all.css jquery-2.0.3.js jquery.ui.core.js jquery.ui.widget.js jquery.ui.tooltip.js The script required to create a tooltip, when using the title element in the underlying HTML can be as simple as this, which should be added after the last <script> element in your code, as shown in the previous example: <script> $(document).ready(function($){ $(document).tooltip(); }); </script> In this example, when hovering over the link, the library adds in the requisite aria described by the code for screen readers into the HTML link. The widget then dynamically generates the markup for the tooltip, and appends it to the document, just before the closing </body> tag. This is automatically removed as soon as the target element loses focus. ARIA, or Accessible Rich Internet Applications, provides a way to make content more accessible to people with disabilities. 
You can learn more about this initiative at https://developer.mozilla.org/en-US/docs/Accessibility/ARIA. It is not necessary to only use the $(document) element when adding tooltips. Tooltips will work equally well with classes or selector IDs; using a selector ID, will give a finer degree of control. Overriding the default styles When styling the Tooltip widget, we are not limited to merely using the prebuilt themes on offer, we can always elect to override existing styles with our own. In our next example, we’ll see how easy this is to accomplish, by making some minor changes to the example from tooltip1.html. In a new document, add the following styles, and save it as tooltipOverride.css, within the css folder: p { font-family: Verdana, sans-serif; } .ui-tooltip { background: #637887; color: #fff; } Don't forget to link to the new style sheet from the <head> of your document: <link rel="stylesheet" href="css/tooltipOverride.css"> Before we continue, it is worth explaining a great trick for styling tooltips before committing the results to code. If you are using Firefox, you can download and install the Toggle JS add-on for Firefox, which is available from https://addons.mozilla.org/en-US/firefox/addon/toggle-js/. This allows us to switch off JavaScript on a per-page basis; we can then hover over the link to create the tooltip, before expanding the markup in Firebug and styling it at our leisure. Save your HTML document as tooltip2.html. When we run the page in a browser, you should see the modified tooltip appear when hovering over the link in the text: Using prebuilt themes If creating completely new styles by hand is overkill for your needs, you can always elect to use one of the prebuilt themes that are available for download from the jQuery UI site. This is a really easy change to make. We first need to download a copy of the replacement theme; in our example, we’re going to use one called Excite Bike. Let’s start by browsing to http://jqueryui.com/download/, then deselecting the Toggle All option. We don’t need to download the whole library, just the theme at the bottom, change the theme option to display Excite Bike then select Download. Next, open a copy of tooltip2.html then look for this line: <link rel="stylesheet" href="development-bundle/themes/redmond /jquery.ui.all.css"> You will notice the highlighted word in the above line. This is the name of the existing theme. Change this to excite-bike then save the document as tooltip3.html, then remove the tooltipOverride.css link, and you’re all set. The following is our replacement theme in action: With a single change of word, we can switch between any of the prebuilt themes available for use with jQuery UI (or indeed even any of the custom ones that others have made available online), as long as you have downloaded and copied the theme into the appropriate folder. There may be occasions though, were we need to tweak the settings. This gives us the best of both worlds, where we only need to concentrate on making the required changes. Let’s take a look at how we can alter an existing theme, using ThemeRoller.
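One option mentioned near the start of this article but not demonstrated is customizing which elements receive tooltips through the widget's items and content options. The following is a minimal sketch of those two options; the selectors and the data-tip attribute name are illustrative assumptions rather than part of the example pages built above.

$(document).ready(function($){
  $(document).tooltip({
    // Only elements with a title attribute or a custom data-tip attribute get tooltips
    items: "[title], [data-tip]",
    content: function() {
      var element = $(this);
      // Prefer the custom data-tip text when present, otherwise fall back to the title attribute
      return element.attr("data-tip") || element.attr("title");
    }
  });
});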

Creating a Direct2D game window class

Packt
23 Dec 2013
12 min read
(For more resources related to this topic, see here.) To put some graphics on the screen, the first step for us is to create a new game window class that will use Direct2D. This new game window class will derive from our original game window class, while adding the Direct2D functionality. Open Visual Studio. Add a new class to the project called GameWindow2D. We need to change its declaration to:

public class GameWindow2D : GameWindow, IDisposable

As you can see, it inherits from the GameWindow class, meaning that it has all of the public and protected members of the GameWindow class, as though we had implemented them again in this class. It also implements the IDisposable interface, just as the GameWindow class does. Also, don't forget to add a reference to SlimDX to this project if you haven't already. We need to add some using statements to the top of this class file as well. They are all the same using statements that the GameWindow class has, plus one more. The new one is SlimDX.Direct2D. They are as follows:

using System.Windows.Forms;
using System.Diagnostics;
using System.Drawing;
using System;
using SlimDX;
using SlimDX.Direct2D;
using SlimDX.Windows;

Next, we need to create a handful of member variables:

WindowRenderTarget m_RenderTarget;
Factory m_Factory;
PathGeometry m_Geometry;
SolidColorBrush m_BrushRed;
SolidColorBrush m_BrushGreen;
SolidColorBrush m_BrushBlue;

The first variable is a WindowRenderTarget object. The term render target is used to refer to the surface we are going to draw on. In this case, it is our game window. However, this is not always the case. Games can render to other places as well. For example, rendering into a texture object is used to create various effects. One example would be a simple security camera effect. Say, we have a security camera in one room and a monitor in another room. We want the monitor to display what our security camera sees. To do this, we can render the camera's view into a texture, which can then be used to texture the screen of the monitor. Of course, this has to be redone in every frame so that the monitor screen shows what the camera is currently seeing. This idea is useful in 2D too. Back to our member variables, the second one is a Factory object that we will be using to set up our Direct2D resources; it is used to create Direct2D objects such as render targets. The third variable is a PathGeometry object that will hold the geometry for the first thing we will draw, which will be a rectangle. The last three variables are all SolidColorBrush objects. We use these to specify the color we want to draw something with. There is a little more to them than that, but that's all we need right now.

The constructor

Let's turn our attention now to the constructor of our Direct2D game window class. It will do two things. Firstly, it will call the base class constructor (remember, the base class is the original GameWindow class), and it will then get our Direct2D resources initialized.
The following is the initial code for our constructor: public GameWindow2D(string title, int width, int height,   bool fullscreen)     : base(title, width, height, fullscreen) {     m_Factory = new Factory();     WindowRenderTargetProperties properties = new       WindowRenderTargetProperties();     properties.Handle = FormObject.Handle;     properties.PixelSize = new Size(width, height);     m_RenderTarget = new WindowRenderTarget(m_Factory,       properties); } In the preceding code, the line starting with a colon is calling the constructor of the base class for us. This ensures that everything inherited from the base class is initialized. In the body of the constructor, the first line creates a new Factory object and stores it in our m_Factory member variable. Next, we create a WindowRenderTargetProperties object and store the handle of our RenderForm object in it. Note that FormObject is one of the properties defined in our GameWindow base class. Remember that the RenderForm object is a SlimDX object that represents a window for us to draw on. The next line saves the size of our game window in the PixelSize property. The WindowRenderTargetProperties object is basically how we specify the initial configuration for a WindowRenderTarget object when we create it. The last line in our constructor creates our WindowRenderTarget object, storing it in our m_RenderTarget member variable. The two parameters we pass in are our Factory object and the WindowRenderTargetProperties object we just created. A WindowRenderTarget object is a render target that refers to the client area of a window. We use the WindowRenderTarget object to draw in a window. Creating our rectangle Now that our render target is set up, we are ready to draw stuff, but first we need to create something to draw! So, we will add a bit more code at the bottom of our constructor. First, we need to initialize our three SolidColorBrush objects. Add these three lines of code at the bottom of the constructor: m_BrushRed = new SolidColorBrush(m_RenderTarget, new Color4(1.0f,   1.0f, 0.0f, 0.0f)); m_BrushGreen = new SolidColorBrush(m_RenderTarget, new   Color4(1.0f, 0.0f, 1.0f, 0.0f)); m_BrushBlue = new SolidColorBrush(m_RenderTarget, new Color4(1.0f,   0.0f, 0.0f, 1.0f)); This code is fairly simple. For each brush, we pass in two parameters. The first parameter is the render target we will use this brush on. The second parameter is the color of the brush, which is an ARGB (Alpha Red Green Blue) value. The first parameter we give for the color is 1.0f. The f character on the end indicates that this number is of the float data type. We set alpha to 1.0 because we want the brush to be completely opaque. A value of 0.0 will make it completely transparent, and a value of 0.5 will be 50 percent transparent. Next, we have the red, green, and blue parameters. These are all float values in the range 0.0 to 1.0 as well. As you can see for the red brush, we set the red channel to 1.0f and the green and blue channels are both set to 0.0f. This means we have maximum red, but no green or blue in our color. With our SolidColorBrush objects set up, we now have three brushes we can draw with, but we still lack something to draw! So, let's fix that by adding some code to make our rectangle. 
Add this code to the end of the constructor: m_Geometry = new PathGeometry(m_RenderTarget.Factory); using (GeometrySink sink = m_Geometry.Open()) {     int top = (int) (0.25f * FormObject.Height);     int left = (int) (0.25f * FormObject.Width);     int right = (int) (0.75f * FormObject.Width);     int bottom = (int) (0.75f * FormObject.Height);     PointF p0 = new Point(left, top);     PointF p1 = new Point(right, top);     PointF p2 = new Point(right, bottom);     PointF p3 = new Point(left, bottom);     sink.BeginFigure(p0, FigureBegin.Filled);     sink.AddLine(p1);     sink.AddLine(p2);     sink.AddLine(p3);     sink.EndFigure(FigureEnd.Closed);     sink.Close(); } This code is a bit longer, but it's still fairly simple. The first line creates a new PathGeometry object and stores it in our m_Geometry member variable. The next line starts the using block and creates a new GeometrySink object that we will use to build the geometry of our rectangle. The using block will automatically dispose of the GeometrySink object for us when program execution reaches the end of the using block. The using blocks only work with objects that implement the IDisposable interface. The next four lines calculate where each edge of our rectangle will be. For example, the first line calculates the vertical position of the top edge of the rectangle. In this case, we are making the rectangle's top edge be 25 percent of the way down from the top of the screen. Then, we do the same thing for the other three sides of our rectangle. The second group of four lines of code creates four Point objects and initializes them using the values we just calculated. These four Point objects represent the corners of our rectangle. A point is also often referred to as a vertex. When we have more than one vertex, we call them vertices (pronounced as vert-is-ces). The final group of code has six lines. They use the GeometrySink and the Point objects we just created to set up the geometry of our rectangle inside the PathGeometry object. The first line uses the BeginFigure() method to begin the creation of a new geometric figure. The next three lines each add one more line segment to the figure by adding another point or vertex to it. With all four vertices added, we then call the EndFigure() method to specify that we are done adding vertices. The last line calls the Close() method to specify that we are finished adding geometric figures, since we can have more than one if we want. In this case, we are only adding one geometric figure, our rectangle. Drawing our rectangle Since our rectangle never changes, we don't need to add any code to our UpdateScene() method. We will override the base class's UpdateScene() method anyway, in case we need to add some code in here later, which is given as follows: public override void UpdateScene(double frameTime) {     base.UpdateScene(frameTime); } As you can see, we only have one line of code in this override modifier of the base class's UpdateScene() method. It simply calls the base class's version of this method. This is important because the base class's UpdateScene() method contains our code that gets the latest user input data each frame. Now, we are finally ready to write the code that will draw our rectangle on the screen! We will override the RenderScene() method so we can add our custom code. 
The following is the code: public override void RenderScene() {     if ((!this.IsInitialized) || this.IsDisposed)     {         return;     }     m_RenderTarget.BeginDraw();     m_RenderTarget.Clear(ClearColor);     m_RenderTarget.FillGeometry(m_Geometry, m_BrushBlue);     m_RenderTarget.DrawGeometry(m_Geometry, m_BrushRed, 1.0f);     m_RenderTarget.EndDraw(); } First, we have an if statement, which happens to be identical to the one we put in the base class's RenderScene() method. This is because we are not calling the base class's RenderScene() method, since the only code in it is this if statement. Not calling the base class version of this method will give us a slight performance boost, since we don't have the overhead of that function call. We could do the same thing with the UpdateScene() method as well. In this case we didn't though, because the base class version of that method has a lot more code in it. In your own projects you may want to copy and paste that code into your override of the UpdateScene() method. The next line of code calls the render target's BeginDraw() method to tell it that we are ready to begin drawing. Then, we clear the screen on the next line by filling it with the color stored in the ClearColor property that is defined by our GameWindow base class. The last three lines draw our geometry twice. First, we draw it using the FillGeometry() method of our render target. This will draw our rectangle filled in with the specified brush (in this case, solid blue). Then, we draw the rectangle a second time, but this time with the DrawGeometry() method. This draws only the lines of our shape but doesn't fill it in, so this draws a border on our rectangle. The extra parameter on the DrawGeometry() method is optional and specifies the width of the lines we are drawing. We set it to 1.0f, which means the lines will be one-pixel wide. And the last line calls the EndDraw() method to tell the render target that we are finished drawing. Cleanup As usual, we need to clean things up after ourselves when the program closes. So, we need to add override of the base class's Dispose(bool) method. We've already done this a few times, so it should be somewhat familiar and is not shown here. Our blue rectangle with a red border As you might guess, there is a lot more you can do with drawing geometry. You can draw curved line segments and draw shapes with gradient brushes too for example. You can also draw text on the screen using the render target's DrawText() method. But since we have limited space on these pages, we're going to look at how to draw bitmap images on the screen. These images are something that make up the graphics of most 2D games. Summary In this article, we first made a simple demo application that drew a rectangle on the screen. Then, we got a bit more ambitious and built a 2D tile-based game world. Resources for Article: Further resources on this subject: HTML5 Games Development: Using Local Storage to Store Game Data [Article] Flash Game Development: Creation of a Complete Tetris Game [Article] Interface Designing for Games in iOS [Article]
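Returning to the Cleanup section above for a moment: the Dispose(bool) override is not reproduced there, so the following is a minimal sketch of what it could look like. It assumes the GameWindow base class exposes the IsDisposed flag used in RenderScene() and its own Dispose(bool) method; the exact structure of your base class may differ.

protected override void Dispose(bool disposing)
{
    if (!this.IsDisposed)
    {
        if (disposing)
        {
            // Release the Direct2D resources we created in the constructor.
            m_BrushRed.Dispose();
            m_BrushGreen.Dispose();
            m_BrushBlue.Dispose();
            m_Geometry.Dispose();
            m_RenderTarget.Dispose();
            m_Factory.Dispose();
        }
    }

    // Let the base class finish its own cleanup.
    base.Dispose(disposing);
}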

Reporting

Packt
19 Dec 2013
4 min read
(For more resources related to this topic, see here.) Creating a pie chart First, we made the component test CT for display purposes, but now let's create the CT to make it run. We will use the Direct function, so let's prepare that as well. In reality we've done this already. Duplicate a different app.html and change the JavaScript file like we have done before. Please see the source file for the code: 03_making_a_pie_chart/ct/dashboard/pie_app.html. Implementing the Direct function Next, prepare the Direct function to read the data. First, it's the config.php file that defines the API. Let's gather them together and implement the four graphs (source file: 04_implement_direct_function/php/config.php). .... 'MyAppDashBoard'=>array( 'methods'=>array( 'getPieData'=>array( 'len'=>0 ), 'getBarData'=>array( 'len'=>0 ), 'getLineData'=>array( 'len'=>0 ), 'getRadarData'=>array( 'len'=>0 ) ) .... Next, let's create the following methods to acquire data for the various charts: getPieData getBarData getLineData getRadarData First, implement the getPieData method for the pie chart. We'll implement the Direct method to get the data for the pie chart. Please see the actual content for the source code (source file: 04_implement_direct_function/php/classes/ MyAppDashBoard.php ). This is acquiring valid quotation and bill data items. With the data to be sent back to the client, set the array in items and set up the various names and data in a key array. You will now combine the definitions in the next model. Preparing the store for the pie chart Charts need a store, so let's define the store and model (source file: 05_prepare_the_store_for_the_pie_chart/app/model/ Pie.js). We'll create the MyApp.model.Pie class that has the name and data fields. Connect this with the data you set with the return value of the Direct function. If you increased the number of fields inside the model you just defined, make sure to amend the return field values, otherwise it won't be applied to the chart, so be careful. We'll use the model we made in the previous step and implement the store (source file: 05_prepare_the_store_for_the_pie_chart/app/model/ Pie.js). Ext.define('MyApp.store.Pie', { extend: 'Ext.data.Store', storeId: 'DashboardPie', model: 'MyApp.model.Pie', proxy: { type: 'direct', directFn: 'MyAppDashboard.getPieData', reader: { type: 'json', root: 'items' } } }) Then, define the store using the model we made and set up the Direct function we made earlier in the proxy. Creating the View We have now prepared the presentation data. Now, let's quickly create the view to display it (source file: 06_making_the_view/app/view/dashboard/Pie.js). Ext.define('MyApp.view.dashboard.Pie', { extend: 'Ext.panel.Panel', alias : 'widget.myapp-dashboard-pie', title: 'Pie Chart', layout: 'fit', requires: [ 'Ext.chart.Chart', 'MyApp.store.Pie' ], initComponent: function() { var me = this, store; store = Ext.create('MyApp.store.Pie'); Ext.apply(me, { items: [{ xtype: 'chart', store: store, series: [{ type: 'pie', field: 'data', showInLegend: true, label: { field: 'name', display: 'rotate', contrast: true, font: '18px Arial' } }] }] }); me.callParent(arguments); } }); Implementing the controller With the previous code, data is not being read by the store and nothing is being displayed. 
In the same way that reading was performed with onShow, let's implement the controller (source file: 06_making_the_view/app/controller/DashBoard.js): Ext.define('MyApp.controller.dashboard.DashBoard', { extend: 'MyApp.controller.Abstract', screenName: 'dashboard', init: function() { var me = this; me.control({ 'myapp-dashboard': { 'myapp-show': me.onShow, 'myapp-hide': me.onHide } }); }, onShow: function(p) { p.down('myapp-dashboard-pie chart').store.load(); }, onHide: function() { } }); With the charts we create from now on, as we create them it would be good to add the reading process to onShow. Let's take a look at our pie chart which appears as follows: Summary You must agree this is starting to look like an application! The dashboard is the first screen you see right after logging in. Charts are extremely effective in order to visually check a large and complicated amount of data. If you keep adding panels as and when you feel it's needed, you'll increase its practicability. This sample will become a customizable base for you to use in future projects. Resources for Article: Further resources on this subject: So, what is Ext JS? [Article] Buttons, Menus, and Toolbars in Ext JS [Article] Displaying Data with Grids in Ext JS [Article]
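Looking back at the Implementing the Direct function step, the server-side method itself is not reproduced above, so here is a minimal sketch of what a getPieData Direct method could return. It assumes the items/name/data convention used by the Pie model and the store's JSON reader; the class name follows the config.php mapping, but the category names and figures are purely illustrative:

<?php
class MyAppDashBoard
{
    public function getPieData()
    {
        // Return data in the shape the store's reader expects:
        // a root 'items' array of records with 'name' and 'data' fields.
        return array(
            'items' => array(
                array('name' => 'Quotation', 'data' => 30),
                array('name' => 'Bill',      'data' => 70)
            )
        );
    }
}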

Background Animation

Packt
19 Dec 2013
4 min read
(For more resources related to this topic, see here.) Background-color animation Animating the background color of an element is a great way to draw our user's eyes to the object we want them to see. Another use for animating the background color of an element is to show that something has happened to the element. It's typically used in this way if the state of the object changes (added, moved, deleted, and so on), or if it requires attention to fix a problem. Due to the lack of support in jQuery 2.0 for animating background-color, we'll be using jQuery UI to give us the functionality we need to create this effect. Introducing the animate method The animate() method is one of the most useful methods jQuery has to offer in its bag of tricks in the animation realm. With it, we’re able to do things like, move an element across the page or alter and animating the properties of colors, backgrounds, text, fonts, the box model, position, display, lists, tables, generated content, and so on. Time for action – animating the body background-color Following the steps below, we're going to start by creating an example that changes the body background color. Start by creating a new file (using our template) called background-color.html and save it in our jquery-animation folder. Next, we'll need to include the jQuery UI library by adding this line directly under our jQuery library by adding this line: <script src = "js/jquery-ui.min.js"></script> A custom or stable build of jQuery UI can be downloaded from http://jqueryui.com, or you can link to the library using one of the three Content Delivery Networks (CDN) below. For fastest access to the library, go to http://jqueryui.com, scroll to the very bottom and look for the Quick Access section. Using the jQuery UI library JS file there will work just fine for our needs for the examples in this article. Media Template: http://code.jquery.com Google: http://developers.google.com/speed/libraries/devguide#jquery-ui Microsoft: http://asp.net/ajaxlibrary/cdn.ashx#jQuery_Releases_on_the_CDN_0 CDNJS: http://cdnjs.com/libraries/jquery Then, we'll add the following jQuery code to the anonymous function: var speed = 1500; $( "body").animate({ backgroundColor: "#D68A85" },speed); $( "body").animate({ backgroundColor: "#E7912D" },speed); $( "body").animate({ backgroundColor: "#CECC33" },speed); $( "body").animate({ backgroundColor: "#6FCD94" },speed); $( "body").animate({ backgroundColor: "#3AB6F1" },speed); $( "body").animate({ backgroundColor: "#8684D8" },speed); $( "body").animate({ backgroundColor: "#DD67AE" },speed); What just happened? First we added in the jQuery UI library to our page. This was needed because of the lack of support for animating the background color in the current version of jQuery. Next, we added in the code that will animate our background. We then set the speed variable to 1500 (milliseconds) so that we can control the duration of our animation. Lastly, using the animate() method, we set the background color of the body element and set duration to the variable we set above named speed. We duplicated the same line several times, changing only the hexadecimal value of the background color. The following screenshot is an illustration of colors the entire body background color animates through: Chaining together jQuery methods It's important to note that jQuery methods (animate() in this case) can be chained together. 
Our code mentioned previously would look like the following if we chained the animate() methods together: $("body")   .animate({ backgroundColor: "#D68A85"}, speed)  //red   .animate({ backgroundColor: "#E7912D"}, speed)  //orange   .animate({ backgroundColor: "#CECC33"}, speed)  //yellow   .animate({ backgroundColor: "#6FCD94"}, speed)  //green   .animate({ backgroundColor: "#3AB6F1"}, speed)  //blue   .animate({ backgroundColor: "#8684D8"}, speed)  //purple   .animate({ backgroundColor: "#DD67AE"}, speed); //pink Here's another example of chaining methods together: (selector).animate(properties).animate(properties).animate(properties) Have a go hero – extending our script with a loop In this example we used the animate() method and with some help from jQuery UI, we were able to animate the body background color of our page. Have a go at extending the script to use a loop, so that the colors continually animate without stopping once the script gets to the end of the function. Pop quiz – chaining with the animate() method Q1. Which code will properly animate our body background color from red to blue using chaining? $("body")   .animate({ background: "red"}, "fast")   .animate({ background: "blue"}, "fast"); $("body")   .animate({ background-color: "red"}, "slow")   .animate({ background-color: "blue"}, "slow"); $("body")   .animate({ backgroundColor:"red" })   .animate({ backgroundColor:"blue" }); $("body")   .animate({ backgroundColor,"red" }, "slow")   .animate({ backgroundColor,"blue" }, "slow");
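For the Have a go hero exercise above, one possible approach (a sketch, not the book's own solution) is to wrap the chained animations in a named function and restart it from the final animation's complete callback:

<script>
  $(document).ready(function($){
    var speed = 1500;
    function cycle() {
      $("body")
        .animate({ backgroundColor: "#D68A85" }, speed)
        .animate({ backgroundColor: "#E7912D" }, speed)
        .animate({ backgroundColor: "#CECC33" }, speed)
        .animate({ backgroundColor: "#6FCD94" }, speed)
        .animate({ backgroundColor: "#3AB6F1" }, speed)
        .animate({ backgroundColor: "#8684D8" }, speed)
        // When the last color finishes animating, call cycle() again to loop.
        .animate({ backgroundColor: "#DD67AE" }, speed, cycle);
    }
    cycle();
  });
</script>

Note that this still relies on jQuery UI being loaded, since jQuery 2.0 on its own cannot animate backgroundColor.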

CRUD Applications using Laravel 4

Packt
19 Dec 2013
18 min read
(for more resources related to this topic, see here.) Getting familiar with Laravel 4 Let's Begin the Journey, and install Laravel 4. Now if everything is installed correctly you will be greeted by this beautiful screen, as shown in the following screenshot, when you hit your browser with http://localhost/laravel/public or http://localhost/<installeddirectory>/public: Now that you can see we have installed Laravel correctly, you would be thinking how can I use Laravel? How do I create apps with Laravel? Or you might be wondering why and how this screen is shown to us? What's behind the scenes? How Laravel 4 sets this screen for us? So let's review that. When you visit the http://localhost/laravel/public, Laravel 4 detects that you are requesting for the default route which is "/". You would be wondering what route is this if you are not familiar with the MVC world. Let me explain that. In traditional web applications we use a URL with page name, say for example: http://www.shop.com/products.php The preceding URL will be bound to the page products.php in the web server hosting shop.com. We can assume that it displays all the products from the database. Now say for example, we want to display a category of books from all the products. You will say, "Hey, it's easy!" Just add the category ID into the URL as follows: http://www.shop.com/products.php?cat=1 Then put the filter in the page products.php that will check whether the category ID is passed. This sounds perfect, but what about pagination and other categories? Soon clients will ask you to change one of your category page layouts to change and you will hack your code more. And your application URLs will look like the following: http://www.shop.com/products.php?cat=2 http://www.shop.com/products.php?cat=3&page=1&total=20 http://www.shop.com/products.php?cat=3&page=1&total=20&layout=1 If you look at your code after six months, you would be looking at one huge products.php page with all of your business and view code mixed in one large file. You wouldn't remember those easy hacks you did in order to manage client requests. On top of that, a client or client's SEO executive might ask you why are all the URLs so badly formatted? Why are they are not human friendly? In a way they are right. Your URLs are not as pretty as the following: http://www.shop.com/products http://www.shop.com/products/books http://www.shop.com/products/cloths The preceding URLs are human friendly. Users can easily change categories themselves. In addition to that, your client's SEO executives will love you for those URLs just as a search engine likes those URLs. You might be puzzled now; how do you do that? Here my friend MVC (Model View Controller) comes into the picture. MVC frameworks are meant specifically for doing this. It's one of the core goals of using the MVC framework in web development. So let's go back to our topic "routing"; routing means decoupling your URL request and assigning it to some specific action via your controller/route. In the Laravel MVC world, you register all your routes in a route file and assign an action to them. All your routes are generally found at /app/routes.php. If you open your newly downloaded Laravel installation's routes.php file, you will notice the following code: Route::get('/', function() { return View::make('hello'); }); The preceding code registers a route with / means default URL with view /app/views/hello.php. Here view is just an .html file. Generally view files are used for managing your presentation logic. 
So check /app/views/hello.php, or better let's create an about page for our application ourselves. Let's register a route about by adding the following code to app/routes.php: Route::get('about', function() { return View::make('about'); }); We would need to create a view at app/views/about.php. So create the file and insert the following code in to it: <!doctype html> <html lang="en"> <head> <meta charset="UTF-8"> <title>About my little app</title> </head> <body> <h1>Hello Laravel 4!</h1> <p> Welcome to the Awesomeness! </p> </body> </html> Now head over to your browser and run http://localhost/laravel/public/about. You will be greeted with the following output: Hello Laravel 4! Welcome to the Awesomeness! Isn't it easy? You can define your route and separate the view for each type of request. Now you might be thinking what about Controllers as the term MVC has C for Controllers? And isn't it difficult to create routes and views for each action? What advantage will we have if we use the preceding pattern? Well we found that mapping URLs to a particular action in comparison to the traditional one-file-based method. Well first you are organizing your code way better as you will have actions responding to specific URLs mapped in the route file. Any developer can recognize routes and see what's going on with your code. Developers do not have to check many files to see which files are using which code. Your presentation logic is separated, so if a designer wants to change something, he will know he needs to look at the view folder of your application. Now about Controllers; they allow us to group related actions into a single class. So in a typical MVC project, there will be one user Controller that will be responsible for all user-related actions, such as registering, logging in, editing a profile, and changing the password. Generally routes are used for small applications or creating static pages quickly. Controllers provide more in-depth options to create a group of methods that belong to a specific class related to the application. Here is how we can create Controllers in Laravel 4. Open your app/routes.php file and add following code: Route::get('contact', 'Pages@contact'); The preceding code will register the http://yourapp.com/contact URL in the Pages Controller's contact method. So let's write a page's Controller. Create a file PagesController.php at /app/controllers/ in your Laravel 4 installation directory. The following are the contents of the PagesController.php file: <?php class PagesController extends BaseController { public function contact() { return View::make('hello'); } } Here BaseController is a class provided by Laravel so we can place our Controller shared logic in a common class. And it extends the framework's Controller class and provides the Controller functionality. You can check Basecontroller.php in the Controller's directory to add shared logic. Controllers versus routes So you are wondering now, "What's the difference between Controllers and routes?" Which one to use? Controllers or routes? Here are the differences between Controllers and routes: A disadvantage of routes is that you can't share code between routes, as routes work via Closure functions. And the scope of a function is bound within function. Controllers give a structure to your code. You can define your system in well-grouped classes, which are divided in such a way that it makes sense, for example, users, dashboard, products, and so on. 
Compared to routes, Controllers have only one disadvantage and it's that you have to create a file for each Controller; however, if you think in terms of organizing the code in a large application, it makes more sense to use Controllers.   Creating a simple CRUD application with Laravel 4 Now as we have a basic understanding of how we can create pages, let's create a simple CRUD application with Laravel 4. The application we want to create will manage the users of our application. We will create the following list of features for our application: List users (read users from the database) Create new users Edit user information Delete user information Adding pagination to the list of users Now to start off with things, we would need to set up a database. So if you have phpMyAdmin installed with your local web server setup, head over to http://localhost/phpmyadmin; if you don't have phpMyAdmin installed, use the MySQL admin tool workbench to connect with your database and create a new database. Now we need to configure Laravel 4 to connect with our database. So head over to your Laravel 4 application folder, open /app/config/database.php, change the MySQL array, and match your current database settings. Here is the MySQL database array from database.php file: 'mysql' => array( 'driver' => 'mysql', 'host' => 'localhost', 'database' => '<yourdbname>', 'username' => 'root', 'password' => '<yourmysqlpassord>', 'charset' => 'utf8', 'collation' => 'utf8_unicode_ci', 'prefix' => '', ), Now we are ready to work with the database in our application. Let's first create the database table Users via the following SQL queries from phpMyAdmin or any MySQL database admin tool; CREATE TABLE IF NOT EXISTS 'users' ( 'id' int(10) unsigned NOT NULL AUTO_INCREMENT, 'username' varchar(255) COLLATE utf8_unicode_ci NOT NULL, 'password' varchar(255) COLLATE utf8_unicode_ci NOT NULL, 'email' varchar(255) COLLATE utf8_unicode_ci NOT NULL, 'phone' varchar(255) COLLATE utf8_unicode_ci NOT NULL, 'name' varchar(255) COLLATE utf8_unicode_ci NOT NULL, 'created_at' timestamp NOT NULL DEFAULT '0000-00-00 00:00:00', 'updated_at' timestamp NOT NULL DEFAULT '0000-00-00 00:00:00', PRIMARY KEY ('id') ) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci AUTO_INCREMENT=3 ; Now let's seed some data into the Users table so when we fetch the users we won't get empty results. Run the following queries into your database admin tool: INSERT INTO 'users' ('id', 'username', 'password', 'email', 'phone', 'name', 'created_at', 'updated_at') VALUES (1, 'john', 'johndoe', 'johndoe@gmail.com', '123456', 'John', '2013-06-07 08:13:28', '2013-06-07 08:13:28'), (2, 'amy', 'amy.deg', 'amy@outlook.com', '1234567', 'amy', '2013-06-07 08:14:49', '2013-06-07 08:14:49');   Listing the users – read users from database Let's read users from the database. We would need to follow the steps described to read users from database: A route that will lead to our page A controller that will handle our method The Eloquent Model that will connect to the database A view that will display our records in the template So let's create our route at /app/routes.php. Add the following line to the routes.php file: Route::resource('users', 'UserController'); If you have noticed previously, we had Route::get for displaying our page Controller. But now we are using resource. So what's the difference? In general we face two types of requests during web projects: GET and POST. 
We generally use these HTTP request types to manipulate our pages, that is, you will check whether the page has any POST variables set; if not, you will display the user form to enter data. As a user submits the form, it will send a POST request as we generally define the <form method="post"> tag in our pages. Now based on page's request type, we set the code to perform actions such as inserting user data into our database or filtering records. What Laravel provides us is that we can simply tap into either a GET or POST request via routes and send it to the appropriate method. Here is an example for that: Route::get('/register', 'UserController@showUserRegistration'); Route::post('/register', 'UserController@saveUser'); See the difference here is we are registering the same URL, /register, but we are defining its GET method so Laravel can call UserController class' showUserRegistration method. If it's the POST method, Laravel should call the saveUser method of the UserController class. You might be wondering what's the benefit of it? Well six months later if you want to know how something's happening in your app, you can just check out the routes.php file and guess which Controller and which method of Controller handles the part you are interested in, developing it further or solving some bug. Even some other developer who is not used to your project will be able to understand how things work and can easily help move your project. This is because he would be able to somewhat understand the structure of your application by checking routes.php. Now imagine the routes you will need for editing, deleting, or displaying a user. Resource Controller will save you from this trouble. A single line of route will map multiple restful actions with our resource Controller. It will automatically map the following actions with HTTP verbs: HTTP VERB ACTION GET READ POST CREATE PUT UPDATE DELETE DELETE On top of that you can actually generate your Controller via a simple command-line artisan using the following command: $ php artisan Usercontroller:make users This will generate UsersController.php with all the RESTful empty methods, so you will have an empty structure to play with. Here is what we will have after the preceding command: class UserController extends BaseController { /** * Display a listing of the resource. * * @return Response */ public function index() { // } /** * Show the form for creating a new resource. * * @return Response */ public function create() { // } /** * Store a newly created resource in storage. * * @return Response */ public function store() { // } /** * Display the specified resource. * * @param int $id * @return Response */ public function show($id) { // } /** * Show the form for editing the specified resource. * * @param int $id * @return Response */ public function edit($id) { // } /** * Update the specified resource in storage. * * @param int $id * @return Response */ public function update($id) { // } /** * Remove the specified resource from storage. * * @param int $id * @return Response */ public function destroy($id) { // } } Now let's try to understand what our single line route declaration created relationship with our generated Controller. HTTP VERB Path Controller Action/method GET /Users Index GET /Users/create Create POST /Users Store GET /Users/{id} Show (individual record) GET /Users/{id}/edit Edit PUT /Users/{id} Update DELETE /Users/{id} Destroy As you can see, resource Controller really makes your work easy. You don't have to create lots of routes. 
Also Laravel 4's artisan-command-line generator can generate resourceful Controllers, so you will write very less boilerplate code. And you can also use the following command to view the list of all the routes in your project from the root of your project, launching command line: $ php artisan routes Now let's get back to our basic task, that is, reading users. Well now we know that we have UserController.php at /app/controller with the index method, which will be executed when somebody launches http://localhost/laravel/public/users. So let's edit the Controller file to fetch data from the database. Well as you might remember, we will need a Model to do that. But how do we define one and what's the use of Models? You might be wondering, can't we just run the queries? Well Laravel does support queries through the DB class, but Laravel also has Eloquent that gives us our table as a database object, and what's great about object is that we can play around with its methods. So let's create a Model. If you check your path /app/models/User.php, you will already have a user Model defined. It's there because Laravel provides us with some basic user authentication. Generally you can create your Model using the following code: class User extends Eloquent {} Now in your controller you can fetch the user object using the following code: $users = User::all(); $users->toarray(); Yeah! It's that simple. No database connection! No queries! Isn't it magic? It's the simplicity of Eloquent objects that many people like in Laravel. But you have the following questions, right? How does Model know which table to fetch? How does Controller know what is a user? How does the fetching of user records work? We don't have all the methods in the User class, so how did it work? Well models in Laravel use a lowercase, plural name of the class as the table name unless another name is explicitly specified. So in our case, User was converted to a lowercase user and used as a table to bind with the User class. Models are automatically loaded by Laravel, so you don't have to include the reference of the Model file. Each Model inherits an Eloquent instance that resolves methods defined in the model.php file at vendor/Laravel/framework/src/Illumininate/Database/Eloquent/ like all, insert, update, delete and our user class inherit those methods and as a result of this, we can fetch records via User::all(). So now let's try to fetch users from our database via the Eloquent object. I am updating the index method in our app/controllers/UsersController.php as it's the method responsible as per the REST convention we are using via resource Controller. public function index() { $users = User::all(); return View::make('users.index', compact('users')); } Now let's look at the View part. Before that, we need to know about Blade. Blade is a templating engine provided by Laravel. Blade has a very simple syntax, and you can determine most of the Blade expressions within your view files as they begin with @. To print anything with Blade, you can use the {{ $var }} syntax. Its PHP-equivalent syntax would be: <?php echo $var; ?> Now back to our view; first of all, we need to create a view file at /app/views/users/index.blade.php, as our statement would return the view file from users.index. We are passing a compact users array to this view. 
So here is our index.blade.php file: @section('main') <h1>All Users</h1> <p>{{ link_to_route('users.create', 'Add new user') }}</p> @if ($users->count()) <table class="table table-striped table-bordered"> <thead> <tr> <th>Username</th> <th>Password</th> <th>Email</th> <th>Phone</th> <th>Name</th> </tr> </thead> <tbody> @foreach ($users as $user) <tr> <td>{{ $user->username }}</td> <td>{{ $user->password }}</td> <td>{{ $user->email }}</td> <td>{{ $user->phone }}</td> <td>{{ $user->name }}</td> <td>{{ link_to_route('users.edit', 'Edit', array($user->id), array('class' => 'btn btn-info')) }}</td> <td> {{ Form::open(array('method' => 'DELETE', 'route' => array('users.destroy', $user->id))) }} {{ Form::submit('Delete', array('class' => 'btn btn-danger')) }} {{ Form::close() }} </td> </tr> @endforeach </tbody> </table> @else There are no users @endif @stop Let's see the code line by line. In the first line we are extending the user layouts via the Blade template syntax @extends. What actually happens here is that Laravel will load the layout file at /app/views/layouts/user.blade.php first. Here is our user.blade.php file's code: <!doctype html> <html> <head> <meta charset="utf-8"> <link href="//netdna.bootstrapcdn.com/twitter-bootstrap/2.3.1/css/bootstrap-combined.min.css" rel="stylesheet"> <style> table form { margin-bottom: 0; } form ul { margin-left: 0; list-style: none; } .error { color: red; font-style: italic; } body { padding-top: 20px; } </style> </head> <body> <div class="container"> @if (Session::has('message')) <div class="flash alert"> <p>{{ Session::get('message') }}</p> </div> @endif @yield('main') </div> </body> </html> Now in this file we are loading the Twitter bootstrap framework for styling our page, and via yield('main') we can load the main section from the view that is loaded. So here when we load http://localhost/laravel/public/users, Laravel will first load the users.blade.php layout view and then the main section will be loaded from index.blade.php. Now when we get back to our index.blade.php, we have the main section defined as @section('main'), which will be used by Laravel to load it into our layout file. This section will be merged into the layout file where we have put the @yield ('main') section. We are using Laravel's link_to_route method to link to our route, that is, /users/create. This helper will generate an HTML link with the correct URL. In the next step, we are looping through all the user records and displaying it simply in a tabular format. Now if you have followed everything, you will be greeted by the following screen:
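Before moving on, here is a minimal sketch of how the resource controller's store() method could persist a new user with Eloquent. The field names follow the users table created earlier, but the validation-free flow and the redirect target are illustrative assumptions rather than the book's exact code:

public function store()
{
    $input = Input::all();

    $user = new User;
    $user->username = $input['username'];
    $user->password = $input['password'];
    $user->email    = $input['email'];
    $user->phone    = $input['phone'];
    $user->name     = $input['name'];
    $user->save();

    // Send the user back to the listing with a flash message, which the
    // layout's Session::has('message') block will display.
    return Redirect::route('users.index')
        ->with('message', 'User created successfully.');
}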
Enabling your new theme in Magento

Packt
18 Dec 2013
3 min read
(For more resources related to this topic, see here.) After your new theme is in place, you can enable it in Magento. Log in to your Magento store's administration panel. Once you have logged in, navigate to System | Configuration, as shown in the following screenshot: From there, select the global configuration scope (labeled Default Config in the following screenshot) you want to apply your new theme to, from the Current Configuration Scope dropdown in the top left of your screen: Once this has loaded, navigate to the Design tab under GENERAL in the left-hand column and expand the Themes block in the right-hand column, as shown in the following screenshot: From here, you can tell Magento to use your new theme. The values given here correspond to the name you gave to the directories when creating your theme. The example uses responsive as the value here, as shown in the following screenshot: Click on the Save Config button at the top right of your screen to save the changes. Next, check that your new theme has been activated. Remember the styles.css file you added in the skin/frontend/default/responsive/css directory? The presence of that file is telling Magento to load your new theme's CSS file instead of the default styles.css file for Magento from the default package, so your store now has none of the original CSS styling it. As such, you should see the following screenshot when you attempt to view the frontend of your Magento store: Overwriting the default Magento templates Noticed the name of your Magento theme appearing next to the logo in the header of your store? You can overwrite the default header.phtml that's causing it by copying the contents of app/design/frontend/base/default/template/page/html/header.phtml into app/design/frontend/default/responsive/template/ page/html/header.phtml. Open the file and find the following lines: <?php if ($this->getIsHomePage()):?> <h1 class="logo"><strong><?php echo $this->getLogoAlt() ?></strong><a href="<?php echo $this->getUrl('') ?>" title= "<?php echo $this->getLogoAlt() ?>" class="logo"><img src = "<?php echo $this->getLogoSrc() ?>" alt="<?php echo $this->getLogoAlt() ?>" /></a></h1> <?php else:?> <a href="<?php echo $this->getUrl('') ?>" title="<?php echo $this->getLogoAlt() ?>" class="logo"><strong><?php echo $this->getLogoAlt() ?></strong><img src = "<?php echo $this->getLogoSrc() ?>" alt="<?php echo $this->getLogoAlt() ?>" /></a> <?php endif?> Replace them with these lines: <a href="<?php echo $this->getUrl('') ?>" title="<?php echo $this- >getLogoAlt() ?>" class="logo"><img src = "<?php echo $this-> getLogoSrc() ?>" alt="<?php echo $this->getLogoAlt() ?>" /></a> Now if you save that file (and upload it to your server, if needed), you can see that the logo now looks tidier, as shown in the following screenshot: That's it! Your basic responsive Magento theme is up and running. Summary Hopefully after reading this article you will get a better understanding of how to enable your new theme in Magento. Resources for Article: Further resources on this subject: Magento : Payment and shipping method [Article] Categories and Attributes in Magento: Part 2 [Article] Magento: Exploring Themes [Article]

Code Editing

Packt
18 Dec 2013
9 min read
(For more resources related to this topic, see here.) Discovering Search and Replace Search and Replace is one of the common actions we use in every editor, sublime text has two main search features: Single file Multiple files Before covering these topics, let's talk about the best tool available for searching text and especially, patterns, namely, Regular Expressions. Regular Expressions Regular Expressions can find complex patterns in text. To take full advantage of the Search and Replace features of Sublime, you should at least know the basics of Regular Expressions, also known as regex or regexp. Regular Expressions can be really annoying, painful, and joyful at the same time! We won't cover Regular Expressions in this article because it's an endless topic. We will only note that Sublime Text uses the Boost's Perl Syntax for Regular Expressions; this can be found at http://www.boost.org/doc/libs/1_47_0/libs/regex/doc/html/boost_regex/syntax/perl_syntax.html I recommend going to http://www.regular-expressions.info/quickstart.html if you are not familiar with Regular Expressions. Search and Replace – a single file Let's open the Search panel by pressing Ctrl + F on Windows and Linux or command + F on OS X. The search panel options can be controlled using keyboard shortcuts: Search panel Options Windows/Linux OS X Toggle Regular Expressions Alt + R command + Option + R Toggle Case Sensitivity Alt + C command + Option + C Toggle Exact Match Alt + W command + Option + W Find Next Enter Enter Find Previous Shift + Enter Shift + Enter Find All Alt + Enter Option + Enter As we can see in the following screenshot, we have the Regular Expression option turned on: Let's try Search and Replace now by pressing Ctrl + H on Windows and Linux or Option + command + F on OS X and examining the following screenshot: We can see that this time, both, the Regular Expression option and the Case Sensitivity option are turned on. Because of the Case Sensitivity option being on, line 8 isn't selected, the pattern messages/(d) doesn't match line 2 because d only matches numbers, and the 1 on the Replace with field will replace match group number 1, indicated by the parentheses around d. We can also refer to the group by using $1 instead of 1. Let's see what happens after we press Ctrl + Alt + Enter for Replace All: We can see that lines 2 and 8 still say messages and not message; that's exactly what we expected! The incremental search Incremental search is another cool feature that is here to save us keyboard clicks. We can bring up the incremental search panel by pressing Ctrl + I on Windows and Linux or command + I on OS X. The only difference between the incremental search and a regular search is the behavior of the Enter key; in incremental searches, the Enter key will select the next match and dismiss the search panel. This saves us from pressing Esc to dismiss the regular search panel. Search and Replace – multiple files Sublime Text also allows a multiple file search by pressing Ctrl + Shift + F or command + Shift + F on OS X. The same shortcuts from the single file search also apply here; the difference is that we have the Where field and a … button near it. 
The Where field determines where the files can be searched for; we can define the scope of the search in several ways: Adding individual directories (Unix-style paths, even on Windows( Adding/excluding files based on the wildcard pattern Adding Sublime-symbolic locations such as <open folders>, <open files> We can also combine all the filters by separating them with commas. We can do it in the following manner: /C/Users/Dan/Cool Project,*.rb,<open files> This will look in all files in C:UsersDanCool Project that ends with .rb and are currently open by Sublime. Results will be opened in a new tab called Find Results containing all found results separated by file paths, double clicking on a result will get you to the exact location of the result in the original file. Mastering Column and Multiple Selection Multiple Selections is one of Sublime's coolest features; TextMate users might be familiar with it. So how can we select multiple lines? We select one line like we usually do and selecting the second line while holding Ctrl or command on OS X. We can also subtract a line by holding the Alt key or command + Shift keys on OS X. This feature is really useful so it is recommended to play with it, the following are some shortcuts that can help us feel more comfortable with multiple selections: Multiple Selection action Windows/Linux OS X Return to Single Selection Mode Esc Esc Undo last selection motion Ctrl + U command + U Add next occurrence of selected text to selection Ctrl + D command + D Add all occurrences of selected text to selection Alt + F3 Control + command + G Turn Single Linear Selection into Block Selection Ctrl + Shift + L Shift + command + L Column Selection The Column Selection feature is one of my favorites! We can select multiple lines by pressing Shift and dragging the right mouse button on Windows or Linux and pressing Option and dragging the left mouse button on OS X. Here we want to remove the letter s from messages, as shown in the following screenshot: We have selected all s using Column selection; now we just need to hit backspace to delete them. Navigating through everything Sublime is known for its ability to quickly move between and around files and lines. Here, we are going to master how to navigate our code quickly and easily. Going To Anything We already learned how to use the Go To Anything feature, but it can do more than just searching for filenames. We can conduct a fuzzy search inside a "fuzzily found" file. Really? Yeah, we can. For example, we can type the following inside the Go To Anything window: isl#wld This will make Sublime perform a fuzzy search for wld inside the file that we found by fuzzy searching isl; it can thus find the word world inside a file named island. We can also perform a fuzzy search in the current file by pressing Ctrl + ; in Windows or Linux and command + P, # in OS X. It is very common to use fuzzy search inside HTML files because it will immediately show all the elements and classes in order to accelerate navigation. Symbol search Sometimes we want to search for a specific function or specific class inside the current file. With Sublime we can do it simply by pressing Ctrl + R on Windows or Linux and command + R on OS X. Projects Project is a group of files and folders. To save a project we just need to add folders and files to the sidebar, and then from the menu, we navigate to Project | Save Project As… The saved file is our projects data, and it is stored in a JSON formatted file with a .sublime-project extension. 
The following is a sample project file: {     "folders":     [         {             "path":"src",             "follow_symlinks":true         },         {             "path":"docs",             "name":"Documentation",       "file_exclude_patterns":["*.xml"]         }     ],     "settings":     {         "tab_size":6     },     "build_systems":     [         {             "name":"List",             "shell_cmd":"ls -l"         }     ] } As we can see in the preceding code, there are three elements written as JSON arrays. Folders Each folder must have a valid folder path that can be absolute or relative to the project directory, which is where the project file is. A folder can also include the following keys: name: This is the name that will be shown on the sidebar file_execlude_pattern: This is the folder that will exclude all the files matching the given Regular Expression file_include_pattern: This is the folder that will include only files matching the given Regular Expression folder_execlude_pattern: This is the folder that will exclude all subfolders matching the given Regular Expression folder_include_pattern: This is the folder that will include only subfolders matching the given Regular Expression follow_symlinks: This will include symlinks if set to true Settings The project-specific settings array will contain all the settings that we want to apply only on this project. These settings will override our user settings. Build systems In an array of build system definitions, we must specify a name for each definition; these build systems will then be specified in Tools | Build Systems. For more information about build systems, please visit http://sublimetext.info/docs/en/reference/build_systems.html. Navigating between projects To switch between projects quickly, we can press Ctrl + Alt + P in Windows or Linux and Control + command + P in OS X. Summary By now, we have learned some of Sublime's basic features to the most advanced features and techniques that need to be used while editing code. Resources for Article: Further resources on this subject: Top features you need to know about [Article] Setting up environment for Cucumber BDD Rails [Article] Implementation of SASS [Article]

Bootstrap 3.0 is Mobile First

Packt
16 Dec 2013
10 min read
(For more resources related to this topic, see here.) But why Mobile First? Why did Bootstrap completely change its course from Desktop First to Mobile First to get into this new way to develop more suitable websites and web applications? Why did the most popular frontend framework embrace this change at a time when responsive web design is continuously growing with better suited and standard techniques such as media-queries, fluid layout, and JavaScript on demand? Mobile browsers are increasing support for the brand new HTML5 and CSS3, with the philosophy to offer, for older browsers, a less stylized but fully functional component, and for capable browsers a rich and full experience that comes from mobiles to larger screens such as TVs. For older browsers (such as IE 8 and IE 9), Bootstrap has functional support, but enhanced features such as rounded corners and a placeholder attribute for tips in input fields are not supported for these browsers. To see the full details on browser support, check the Bootstrap documentation from the Getting started section (http://getbootstrap.com/getting-started/#browsers). We are living at a time when mobile use is increasing at a pace that will soon surpass desktop usage (http://www.businessinsider.com/mobile-will-eclipsedesktop-by-2014-2012-6). Apart from the statistics, one thing we can presume is that the web scenario is changing so fast that we have to embrace the certainty of devices getting better and smarter. In this article, we will explore the main changes in Bootstrap 3. If you are already familiar with Bootstrap 2, check the migration guide (http://getbootstrap.com/getting-started/#migration) to have a practical overview about what has changed. If you're not familiar with Bootstrap, there's nothing that's too difficult for you to understand directly from this article about this new version. The only thing you need to have in mind is the Mobile First approach, which is covered well in this article. You will be guided to design with Mobile First, discover why Mobile First is so important, and how to make Bootstrap a powerful frontend platform to make your site friendly for a wider range of devices. We can take a step further and add to your previous Bootstrap knowledge by thinking of a concrete way to design processes as a continuous layer of capabilities and embrace the constraints and not fight with them. Mobile First with Bootstrap is an elegant solution for frontend development. Combined with server-side techniques, we get a full bag of solutions to get your product better suited to different users and needs in different platforms. This article will cover the following topics: Bootstrap reviewed Desktop to responsive   Bootstrap reviewed In the third era of Bootstrap that is coming, the developers have redesigned the whole framework with a different approach. Let's get started building interface components of small and simple screens, instead of adapting the existent UI components to fit in a constrained environment. From mobile, we will then go to desktop. However, we will not adapt the experience as we usually do with responsive design going from desktop to mobile. Now with Mobile First we will enhance accordingly as we increase the device screens. Why should I do this if my target audience will be using desktops? Going to mobile indirectly benefits desktop users. But how? To better understand this, let's recap Bootstrap history for a while. 
In 2011, Bootstrap was launched to serve as a live and agnostic style guide that Twitter used to create its products, and it became an open source framework at that time. It was a time when we worked on pixel-perfect layouts and explored CSS3 animations, and in Bootstrap we found a well-documented and standardized set of features. Bootstrap hands the browser a ready-made design: you don't need to define basic interface elements, such as buttons, from scratch, and at the same time you get utility elements like badges to cover the most common interface needs.

Bootstrap does what a framework is supposed to do: bootstrapping! The term means the act of getting a new project off the ground; it's like saying, "give me the tools that I will need to start developing my application for different needs". Bootstrap is a tool belt of standard conventions, from well-defined classes with clean and practical documentation to live code that is ready to use and to be customized for your needs. It's not a magic solution to the interface element reuse issue, but it's a kick-start. It fits so many scenarios that developers increasingly combine it with their own tools.

"CSS moved beyond type, forms and grids. People get tired to create the same stuffs" — Mark Otto, one of Bootstrap's creators, in the Desktop First to Mobile First Bootstrap presentation (https://speakerdeck.com/mdo/desktop-firstto-with-bootstrap)

A must-have from this breeding ground of possibilities is the Bootstrap extension font-awesome (http://fortawesome.github.io/Font-Awesome/). It uses @font-face, which is widely supported and flexible, instead of sprites for icons. With a single CSS file and the font resources used to render the custom glyphs, you have a tool that can handle all your icons. This shows the flexibility of the Bootstrap ecosystem; font-awesome, for example, is independent, works as a standalone project, and is still a great fit with Bootstrap.

There are a lot of ways to use Bootstrap. You can customize and extend components, either by editing the LESS variables in the source code or through the customizer on the Bootstrap download page (http://getbootstrap.com/customize/). At the time of this writing, Bootstrap is the most popular project on GitHub, which is just one more reason to take it seriously. There is now an official Bootstrap Expo (http://expo.getbootstrap.com/), one of the additions in this new version; Bootstrap Expo is the official directory of websites and web applications built with the framework.

A lot of developers get their first taste of the capabilities of HTML5 and CSS3 through this framework. Bootstrap offers amazing capabilities such as a responsive grid, dozens of JavaScript components, and a customizer, either in a web interface or through the LESS variables if you're an experienced developer. It suits developers and designers of any level, because it has solutions for both scenarios. This is the second of Bootstrap's main philosophies: it's made for everyone.

Desktop to responsive
With the rise of smartphones, there is a growing demand for responsive content. It's possible to add an optional file with media queries and a bunch of CSS code and have a site adapt to mobile needs. Media queries, a CSS3 module that reached W3C Recommendation status in June 2012, are essentially a structure that gives you a namespace with a bunch of CSS rules and declarations that apply according to the user's resolution, density, and screen capabilities. So, with CSS alone, it is possible to manage the ongoing rise of smartphones.
With just one well-supported stylesheet it was possible to adapt to the device and make a website mobile friendly. In Bootstrap Version 2, there was an optional file (responsive.less) containing all the media queries necessary for Bootstrap to work well on mobiles. As a bonus, we could adapt to tablets as well: there are breakpoints for the most common mobile resolutions, which means a width range (768 px to 979 px) that can represent tablet devices. A breakpoint is the extreme point (minimum and/or maximum) at which you can define CSS rules specific to that range and change your layout. This can be achieved with a simple media query declaration in your CSS:

@media (min-width: 768px) and (max-width: 979px) { ... }

But sometimes it's indispensable to rethink some elements, particularly those developed only for desktops in a pixel-perfect scenario. There is no flexibility in a fixed pixel width: no matter how different the screen is, the website behaves as if it were on a desktop when we work with fixed units. This is where a set of media queries can make things more flexible. Even so, redefining dimensions and CSS rules per device with media queries solves screen flexibility issues, but it does not solve performance issues on mobiles.

Performance is one of the main concerns when we go mobile. We have to consider scenarios where the Internet connection is slow, and that is a recurrent issue. You will have to do some reverse engineering to optimize how your JavaScript loads, and combine it with server-side solutions. A poorer solution would be to simply hide content that you consider painful for your page load; images, for example, have a deep impact on the final performance. A slower page response translates into money lost, as we can see in this article about page loading versus user patience (http://blog.kissmetrics.com/loading-time/). One of the curious things this research points out is that mobile Internet users expect their browsing experience on phones to be comparable to what they get on their desktops.

We are living at a time when the Web is filled with rich content and we have faster Internet connections, so we have to be prepared to offer the closest thing to fast, optimized loading, at least for our most important content. This involves more than using CSS to hide and show content depending on the device, as we can do with media queries. It's all about keeping the concepts simple and focused and developing each interface component thoughtfully from scratch—the primary use first, with its constraints, and then its enhanced capabilities. It's not just about adapting; it's about exploring the device's capabilities and delivering the best user experience across platforms.

Sounds familiar? Yes, for sure—the same concept as progressive enhancement, you might think. You're not wrong. Progressive enhancement was a term widely used back when we talked about HTML pages depending on JavaScript to be functional. Progressive enhancement is a strategy for web design that relies on semantic markup and technologies such as JavaScript. Nowadays, progressive enhancement goes further and underpins Mobile First, because it is no longer just about JavaScript being disabled, as was so widely discussed before; hundreds of articles have tried to show its benefits in no-JavaScript scenarios.
Now progressive enhancement is about being faster (http://coding.smashingmagazine.com/2013/09/03/progressive-enhancement-is-faster/). Progressive enhancement is one of the three keys of Mobile First, together with responsive design and giving priority to content over navigation. These three rationales sit behind every detail of Bootstrap 3, from its CSS components to its grid structure.

Summary
In this article, we looked at Mobile First, the approach behind Twitter Bootstrap's latest version, and at how the developers shaped the framework around it. The growing world of smartphones has forced the need for Mobile First.

Resources for Article:
Further resources on this subject:
Downloading and setting up Bootstrap [Article]
Introduction to RWD frameworks [Article]
Getting started with using Chef [Article]
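To make the Mobile First idea discussed above a little more concrete, here is a minimal hand-written CSS sketch: small-screen styles are the default, and enhancements are layered on with min-width media queries as more room becomes available. The breakpoint values echo Bootstrap 3's documented defaults (768 px, 992 px, and 1200 px), but the selector and rules are illustrative assumptions, not code taken from the framework.

/* Small screens are the baseline: no media query needed */
.sidebar {
  display: none; /* keep the mobile view focused on the content */
}

/* Roughly Bootstrap 3's "small" breakpoint */
@media (min-width: 768px) {
  .sidebar {
    display: block;
    float: left;
    width: 25%;
  }
}

/* Roughly the "medium" and "large" breakpoints */
@media (min-width: 992px) {
  .sidebar { width: 20%; }
}
@media (min-width: 1200px) {
  .sidebar { width: 16%; }
}

Because each rule only adds to what smaller screens already receive, nothing has to be undone for mobile—that is the practical payoff of starting small and enhancing upwards.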
Read more
article-image-magic
Packt
16 Dec 2013
7 min read
Save for later

The Magic

Packt
16 Dec 2013
7 min read
(For more resources related to this topic, see here.)

Application flow
In the following diagram, taken from the Angular manual, you will find a comprehensive schematic depiction of the program flow inside Angular:

After the browser loads the HTML and parses it into a DOM, the angular.js script file is loaded. It can be added before or at the bottom of the <body> tag, although adding it at the bottom is preferred. Angular then waits for the browser to fire the DOMContentLoaded event. This is similar to the way jQuery is bootstrapped, as illustrated in the following code:

$(document).ready(function(){
  // do jQuery
})

In the angular.js file, towards the end, after the entire code has been parsed by the browser, you will find the following code:

jqLite(document).ready(function() {
  angularInit(document, bootstrap);
});

The preceding code calls the function that looks for the various flavors of the ng-app directive that you can use to bootstrap your Angular application:

['ng:app', 'ng-app', 'x-ng-app', 'data-ng-app']

Typically, the ng-app directive will be placed on the <html> tag, but in theory it could be on any tag, as long as there is only one of them. The module specification is optional and tells the $injector service which of the defined modules to load.

//index.html
<!doctype html>
<html lang="en" ng-app="tempApp">
<head>
…...

// app.js
…..
angular.module('tempApp', ['serviceModule'])
…..

In turn, the $injector service will create $rootScope, the parent scope of all Angular scopes, as the name suggests. This $rootScope is linked to the DOM itself, as a parent to all other Angular scopes. The $injector service will also create the $compile service, which traverses the DOM and looks for directives. These directives are looked up in the complete list of declared Angular internal directives and the custom directives at hand; this way, directives declared as an element, as an attribute, inside a class definition, or as a comment can all be recognized.

Now that Angular is properly bootstrapped, we can actually start executing some application code. This can be done in a variety of ways: in the initial examples, we started creating some Angular code with curly braces using built-in Angular functions; it is also possible to define a controller to control a specific part of the HTML page, as we have shown in the first tempCtrl code snippet; and we have also shown you how to use Angular's built-in router to manage your application with client-side routing. As you can see, Angular extends the capabilities of HTML by providing a clever way to add new directives. The key ingredient here is the $injector service, which provides a way to look up dependencies and creates $rootScope.

Different ways of injecting
Let's look a bit more at how $injector does its work. Throughout all the examples in this book, we have used the array-style notation to define our controllers, modules, services, and directives:

// app/controllers.js
tempApp.controller('CurrentCtrl', ['$scope', 'reading', function ($scope, reading) {
  $scope.temp = 17;
  ...

This style is commonly referred to as annotation. Each injected value is annotated in the same order inside an array. You may have looked through the AngularJS website and seen different ways of defining functions:

// angularJs home page JavaScript Projects example
function ListCtrl($scope, Project) {
  $scope.projects = Project.query();
}

So, what is the difference, and why are we using another way of defining functions?
The first difference you may notice is that all the functions are defined in the global scope. For reference, let's call this the simple injection method. The documentation states that this concise notation is really only suited for demo applications, because it is nothing but a potential clash waiting to happen: any other JS library or framework you may have included could have a function with the same name and cause your software to malfunction by executing that function instead of yours.

After assigning the Angular module to a variable such as tempApp, we chain the methods to that variable, as we have done in this book so far; you could also just chain them directly, as follows:

angular.module('tempApp').controller('CurrentCtrl', function($scope) {})

These are essentially the same definitions and don't pollute the global scope.

The second difference you may have noticed is in the way the dependencies are injected into the function. At the time of writing this book, most, if not all, of the examples on the AngularJS website use the simple injection method: the dependencies are just parameters in the function definitions. Magically, Angular is able to figure out which parameter is what by its name, because the order does not matter. So the preceding example could be rewritten as follows and would still function correctly:

// reversed angularJs home page JavaScript Projects example
function ListCtrl(Project, $scope) {
  $scope.projects = Project.query();
}

This is not a feature of the JavaScript language, so it must have been added by those smart Angular engineers. The magic behind it can be found in the injector: the parameters of the function are scanned, and Angular extracts their names in order to resolve them. The problem with this approach is that when you deploy your wonderful new application to production, it will probably be minified and even obfuscated. This will rename $scope and Project to something like a and b, and even Angular will then be unable to resolve the dependencies.

There are two ways to solve this problem in Angular. You have seen one of them already, but we will explain it further. You can wrap the function in an array and list the names of the dependencies as strings, before the function definition, in the order in which you supplied them as arguments to the function:

// app/controllers.js
tempApp.controller('CurrentCtrl', ['$scope', 'reading', function ($scope, reading) {
  $scope.temp = 17;
  .......

The corresponding order of the strings and the function arguments is significant here, and the strings should come before the function itself in the array. If you prefer the definition without the array notation, there is still some hope: Angular provides a way to inform the injector service of the dependencies you are trying to inject.

var CurrentCtrl = function($scope, reading) {
  $scope.temp = 17;
  $scope.save = function() {
    reading.save($scope.temp);
  }
};
CurrentCtrl.$inject = ['$scope', 'reading'];
tempApp.controller('CurrentCtrl', CurrentCtrl);

As you can see, the definition is a bit more sizable, but essentially the same thing is happening: the injector is informed by filling the $inject property of the function with an array of the injected dependencies, and this is where Angular will pick them up from. To understand how Angular accomplishes all of this, you should read the excellent blog post by Alex Rothenberg, in which he explains how it all works internally.
The link to his blog is as follows: http://www.alexrothenberg.com/2013/02/11/the-magic-behind-angularjs-dependency-injection.html.

Angular cleverly uses the toString() function of the injected function to examine in which order the arguments were specified and what their names are. There is actually a third way to specify dependencies, a tool called ngmin, which is not native to Angular; it lets you use the simple injection method and parses and translates your code to avoid minification problems (https://github.com/btford/ngmin). Consider the following code:

angular.module('whatever').controller('MyCtrl', function ($scope, $http) { ... });

ngmin will turn the preceding code into the following:

angular.module('whatever').controller('MyCtrl', ['$scope', '$http', function ($scope, $http) { ... }]);

Summary
In this article, we started by looking at how AngularJS is bootstrapped. Then, we looked at how the injector works and why minification might ruin your plans there. We also saw that there are ways to avoid these problems by specifying dependencies differently.

Resources for Article:
Further resources on this subject:
The Need for Directives [Article]
Understanding Backbone [Article]
Quick start – creating your first template [Article]
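As a small addendum to the bootstrapping flow covered at the start of this article: when no ng-app attribute is present, AngularJS can also be started by hand. The sketch below is an illustration under assumptions (it reuses the tempApp module name from the earlier examples and is not code from the article); it relies on the standard angular.bootstrap API.

// index.html is loaded without an ng-app attribute on the <html> tag
angular.element(document).ready(function () {
  // start the application manually, naming the module(s) to load
  angular.bootstrap(document, ['tempApp']);
});

Manual bootstrapping like this is handy when the application code is loaded asynchronously or the module name is only known at runtime; in every other respect, the $injector and $rootScope are created exactly as described above.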
Read more

article-image-handling-authentication
Packt
13 Dec 2013
9 min read
Save for later

Handling Authentication

Packt
13 Dec 2013
9 min read
(For more resources related to this topic, see here.)

Understanding Authentication methods
In a world where security on the Internet is such a big issue, the need for good authentication methods cannot be missed. Therefore, Zend Framework 2 provides a range of authentication methods to suit everyone's needs.

Getting ready
To make full use of this, I recommend setting up a working Zend Framework 2 skeleton application.

How to do it…
The following is a list of authentication methods—or adapters, as they are called—that are readily available in Zend Framework 2. We will provide a small overview of each adapter and instructions on how you can use it.

The DbTable adapter
Constructing a DbTable adapter is pretty easy, as we can see in the following constructor:

public function __construct(
    // The Zend\Db\Adapter\Adapter instance
    DbAdapter $zendDb,
    // The table name to query on
    $tableName = null,
    // The column that serves as the 'username'
    $identityColumn = null,
    // The column that serves as the 'password'
    $credentialColumn = null,
    // Any optional treatment of the password before
    // checking, such as MD5(?), SHA1(?), etcetera
    $credentialTreatment = null
);

The HTTP adapter
After constructing the object, we need to define the FileResolver to make sure the user details are actually parsed in. Depending on what we configured in the accept_schemes option, the FileResolver can be set as a BasicResolver, a DigestResolver, or both. Let's take a quick look at how to set a FileResolver as a DigestResolver or BasicResolver (we do this in the /module/Application/src/Application/Controller/IndexController.php file):

<?php
namespace Application;

// Use the FileResolver, and also the Http
// authentication adapter.
use Zend\Authentication\Adapter\Http\FileResolver;
use Zend\Authentication\Adapter\Http;
use Zend\Mvc\Controller\AbstractActionController;

class IndexController extends AbstractActionController
{
    public function indexAction()
    {
        // Create a new FileResolver and read in our file to use
        // in the Basic authentication
        $basicResolver = new FileResolver();
        $basicResolver->setFile(
            '/some/file/with/credentials.txt'
        );

        // Now create a FileResolver to read in our Digest file
        $digestResolver = new FileResolver();
        $digestResolver->setFile(
            '/some/other/file/with/credentials.txt'
        );

        // Options don't really matter at this point, we can
        // fill them in to anything we like
        $adapter = new Http($options);

        // Now set our DigestResolver/BasicResolver, depending
        // on our $options set
        $adapter->setBasicResolver($basicResolver);
        $adapter->setDigestResolver($digestResolver);
    }
}

How it works…
After these two short examples, let's take a look at the other adapters available.

The DbTable adapter
Let's begin with probably the most used adapter of them all, the DbTable adapter. This adapter connects to a database, pulls the requested username/password combination from a table and, if all goes well, returns an identity, which is nothing more than the record that matched the username. To instantiate the adapter, it requires a Zend\Db\Adapter\Adapter in its constructor to connect with the database that holds the user details; there are also a couple of other options that can be set.
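Before walking through those options one by one, here is a rough sketch of how a constructed DbTable adapter might be wired into Zend Framework 2's AuthenticationService. The table name, column names, and credentials below are hypothetical placeholders, and the snippet assumes $dbAdapter is an already configured Zend\Db\Adapter\Adapter:

use Zend\Authentication\AuthenticationService;
use Zend\Authentication\Adapter\DbTable;

// Hypothetical table and column names, purely for illustration
$authAdapter = new DbTable($dbAdapter, 'users', 'username', 'password', 'MD5(?)');
$authAdapter->setIdentity('some_user');
$authAdapter->setCredential('some_password');

$authService = new AuthenticationService();
$result = $authService->authenticate($authAdapter);

if ($result->isValid()) {
    // The identity is the matching database record
    $identity = $result->getIdentity();
}

Going through the AuthenticationService rather than calling the adapter directly also persists the identity in its default session storage on success, which is usually what a login flow wants.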
Let's go back to the definition of the constructor and walk through its options. The second option, tableName, speaks for itself: it is just the name of the table we need to use to get our users. The third and fourth options, identityColumn and credentialColumn, are equally logical; they represent the username and password (or whatever we use) columns in our table. The last option, credentialTreatment, however, might not make a lot of sense. The credentialTreatment tells the adapter to treat the credentialColumn with a function before querying it. Examples of this are the MD5(?), PASSWORD(?), or SHA1(?) functions if it is a MySQL database, but obviously this can differ per database. To give a small example of what the SQL can look like (the actual adapter builds the query up differently) with and without a credential treatment, take a look at the following:

With credential treatment:

SELECT * FROM `users` WHERE `username` = 'some_user' AND `password` = MD5('some_password');

Without credential treatment:

SELECT * FROM `users` WHERE `username` = 'some_user' AND `password` = 'some_password';

When defining the treatment, we should always include a question mark where the password needs to go; for example, MD5(?) would create MD5('some_password'), but without the question mark the password would not be inserted. Lastly, instead of passing the options through the constructor, we can also use the setter methods for the properties: setTableName(), setIdentityColumn(), setCredentialColumn(), and setCredentialTreatment().

The HTTP adapter
The HTTP authentication adapter is one we have probably all come across at least once in our Internet lives. We recognize it when we go to a website and a pop up appears where we can fill in our username and password to continue. This form of authentication is very basic, but still very effective in certain implementations, and therefore part of Zend Framework 2. There is only one big, massive "but" to this authentication: when using Basic authentication, it can send the username and password in clear text through the browser (ouch!). There is, however, a solution to this problem, and that is to use Digest authentication, which is also supported by this adapter. If we take a look at the constructor of this adapter, we see the following code line:

public function __construct(array $config);

The constructor accepts a number of keys in its config parameter, which are as follows:

accept_schemes: This refers to which authentication schemes we want to accept; it can be basic, digest, or basic digest.
realm: This is a description of the realm we are in, for example, Member's area. It is only shown to the user to describe what they are logging in for.
digest_domains: These are the URLs for which this authentication works. So if a user logs in with their details on any of the URLs defined, the login will work. The URLs should be defined in a space-separated (weird, right?) list, for example /members/area /members/login.
nonce_timeout: This sets the number of seconds the nonce (the hash users log in with when we are using Digest authentication) is valid. Note, however, that nonce tracking and stale support are not implemented in Version 2.2 yet, which means it will re-authenticate every time the nonce times out.
use_opaque: This is either true or false (true by default) and tells our adapter to send the opaque header to the client.
The opaque header is a string sent by the server that needs to be returned on authentication. This sometimes does not work on Microsoft Internet Explorer browsers, as they seem to ignore that header. Ideally the opaque header is an ever-changing string, to reduce predictability, but ZF2 doesn't randomize the string and always returns the same hash.

algorithm: This is the algorithm to use for the authentication; it needs to be a supported algorithm defined in the supportedAlgos property. At the moment there is only MD5, though.
proxy_auth: This boolean (false by default) tells us whether the authentication used is a proxy authentication or not.

It should be noted that there is a slight difference in the credential files when using either Digest or Basic. Although both files have the same layout, they cannot be used interchangeably, as Digest requires the credentials to be MD5 hashed, while Basic requires the credentials in plain text. There should also always be a new line after every credential, meaning that the last line in the credential file should be empty. The layout of a credential file is as follows:

username:realm:credentials

For example:

some_user:My Awesome Realm:clear text password

Instead of a FileResolver, one can also use the ApacheResolver, which can read htpasswd-generated files; this comes in handy when such a file is already in place.

The Digest adapter
The Digest adapter is basically the Http adapter without any Basic authentication. As the idea behind it is the same as the Http adapter, we will go straight to the constructor, as it is implemented a bit differently:

public function __construct($filename = null, $realm = null, $identity = null, $credential = null);

As we can see, the following options can be set when constructing the object:

filename: This is the direct filename of the file with the Digest credentials, so there is no need to use a FileResolver with this one.
realm: This identifies to the user what he/she is logging on to, for example My Awesome Realm or The Dragonborn's lair. As we immediately try to log on when constructing this adapter, it does need to correspond with the credential file.
identity: This is the username we are trying to log on with; again, it needs to resemble a user defined in the credential file to work.
credential: This is the Digest password we are trying to log on with, and it needs to match the password in the credential file exactly.

We can then, for example, simply run $digestAdapter->getIdentity() to find out whether we are successfully authenticated, which results in NULL if we are not and in the identity column value if we are.

The LDAP adapter
Explaining LDAP authentication is obviously a little more involved, so we will not cover it in full, as that would take quite a while. What we will do is show the constructor of the LDAP adapter and explain its various options. If you want to know more about setting up an LDAP connection, take a look at the ZF2 documentation, where it is explained very well:

public function __construct(array $options = array(), $identity = null, $credential = null);

The options parameter in the constructor refers to an array of configuration options compatible with the Zend\Ldap\Ldap configuration. There are literally dozens of options that can be set here, so we advise you to look at the LDAP documentation of ZF2 to learn more about them.
The next two parameters, identity and credential, are the username and password respectively, so they really explain themselves. Once you have set up the connection with the LDAP server, there isn't much left to do but get the identity and see whether we were successfully validated.

About Authentication
Authentication in Zend Framework 2 works through specific adapters, which are always implementations of Zend\Authentication\Adapter\AdapterInterface and thus always provide the methods defined there. However, the authentication methods are all different, and a strong knowledge of the methods described previously is always a requirement. Some work through the browser, like the Http and Digest adapters, and others just require us to create a whole implementation, like the LDAP and the DbTable adapters.
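To make the Http adapter's configuration a little more concrete, here is a minimal sketch of an options array assembled from the keys described above; the values (including the one-hour nonce timeout) are illustrative assumptions rather than recommended settings:

use Zend\Authentication\Adapter\Http;

// Illustrative values only; adjust the realm and URLs to your application
$options = array(
    'accept_schemes' => 'basic digest',
    'realm'          => "Member's area",
    'digest_domains' => '/members/area /members/login',
    'nonce_timeout'  => 3600,
);

$adapter = new Http($options);
// From here, attach the Basic/Digest FileResolvers as shown in the
// IndexController example earlier in this article.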
Read more