
What is REST?

Packt
17 Sep 2014
12 min read
This article by Bhakti Mehta, the author of RESTful Java Patterns and Best Practices, starts with the basic concepts of REST, how to design RESTful services, and best practices around designing REST resources. It also covers the architectural aspects of REST.

(For more resources related to this topic, see here.)

Where REST has come from

The confluence of social networking, cloud computing, and the era of mobile applications has created a generation of emerging technologies that allow different networked devices to communicate with each other over the Internet. In the past, there were traditional and proprietary approaches for building solutions encompassing different devices and components communicating with each other over an unreliable network such as the Internet. Some of these approaches, such as RPC, CORBA, and SOAP-based web services, which evolved as different implementations of Service Oriented Architecture (SOA), required tighter coupling between components along with greater complexity in integration.

As the technology landscape evolves, today's applications are built on the notion of producing and consuming APIs instead of using web frameworks that invoke services and produce web pages. This requirement enforces the need for easier exchange of information between distributed services, along with predictable, robust, well-defined interfaces. API-based architecture enables agile development, easier adoption and prevalence, and scale and integration with applications within and outside the enterprise.

HTTP 1.1 is defined in RFC 2616, and is ubiquitously used as the standard protocol for distributed, collaborative, hypermedia information systems. Representational State Transfer (REST) is inspired by HTTP and can be used wherever HTTP is used. The widespread adoption of REST and JSON opens up the possibility of applications incorporating and leveraging functionality from other applications as needed.
The popularity of REST is mainly because it enables building lightweight, simple, cost-effective modular interfaces that can be consumed by a variety of clients. This article covers the following topics:

- Introduction to REST
- Safety and idempotence
- HTTP verbs and REST
- Best practices when designing RESTful services
- REST architectural components

Introduction to REST

REST is an architectural style that conforms to web standards such as HTTP verbs and URIs. It is bound by the following principles:

- All resources are identified by URIs.
- All resources can have multiple representations.
- All resources can be accessed, modified, created, or deleted by standard HTTP methods.
- The server does not store client state between requests.

REST is extensible due to the use of URIs for identifying resources. For example, a URI to represent a collection of book resources could look like this:

http://foo.api.com/v1/library/books

A URI to represent a single book identified by its ISBN could be as follows:

http://foo.api.com/v1/library/books/isbn/12345678

A URI to represent a coffee order resource could be as follows:

http://bar.api.com/v1/coffees/orders/1234

A user in a system can be represented like this:

http://some.api.com/v1/user

A URI to represent all the book orders for a user could be:

http://bar.api.com/v1/user/5034/book/orders

All the preceding samples show a clear, readable pattern that can be interpreted by the client. All these resources could have multiple representations; the examples shown here can be represented by JSON or XML and can be manipulated by the HTTP methods GET, PUT, POST, and DELETE. The following table summarizes the HTTP methods and the actions they take on a resource, using the simple example of a collection of books in a library.
HTTP method   Resource URI                     Description
GET           /library/books                   Gets a list of books
GET           /library/books/isbn/12345678     Gets a book identified by ISBN "12345678"
POST          /library/books                   Creates a new book order
DELETE        /library/books/isbn/12345678     Deletes a book identified by ISBN "12345678"
PUT           /library/books/isbn/12345678     Updates a specific book identified by ISBN "12345678"
PATCH         /library/books/isbn/12345678     Partially updates a book identified by ISBN "12345678"

REST and statelessness

REST is bound by the principle of statelessness. Each request from the client to the server must contain all the details needed to understand the request. This improves visibility, reliability, and scalability of requests:

- Visibility is improved, as a system monitoring the requests does not have to look beyond one request to get details.
- Reliability is improved, as there is no checkpointing or resuming to be done in case of partial failures.
- Scalability is improved, as the number of requests that can be processed increases because the server is not responsible for storing any state.

Roy Fielding's dissertation on the REST architectural style provides details on the statelessness of REST; see http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm

With this initial introduction to the basics of REST, we shall cover the different maturity levels and where REST falls in them in the following section.

Richardson Maturity Model

The Richardson Maturity Model, developed by Leonard Richardson, describes the basics of REST in terms of resources, verbs, and hypermedia controls. The starting point for the maturity model is to use the HTTP layer as the transport.

Level 0 – Remote Procedure Invocation

This level uses SOAP or XML-RPC, sending data as Plain Old XML (POX). Only POST methods are used. This is the most primitive way of building SOA applications, with a single POST method and XML used to communicate between services.
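The HTTP method semantics in the preceding table can be made concrete with a small in-memory sketch. This is illustrative plain JavaScript only, not a real HTTP server, and the function names are hypothetical: repeated PUT and DELETE calls converge to the same final state, while each POST creates a new resource.

```javascript
// A tiny in-memory book store illustrating HTTP method semantics.
const store = new Map();
let nextId = 1;

// GET: safe and idempotent -- reads state, never changes it.
function get(id) {
  return store.get(id);
}

// PUT: idempotent but not safe -- repeating it leaves the same final state.
function put(id, body) {
  store.set(id, body);
  return body;
}

// POST: neither safe nor idempotent -- each call creates a new resource.
function post(body) {
  const id = String(nextId++);
  store.set(id, body);
  return id;
}

// DELETE: idempotent -- deleting twice leaves the same final state.
function del(id) {
  return store.delete(id);
}
```

Calling `put('42', book)` any number of times leaves exactly one resource at `/books/42`, whereas calling `post(book)` twice creates two distinct resources — which is why the table maps resource creation without a known identifier to POST and updates of a known URI to PUT.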
Level 1 – REST resources

This level still uses only POST, but instead of calling a function and passing arguments, it uses REST URIs. So it still uses only one HTTP method. It is better than Level 0 in that it breaks a complex functionality into multiple resources with one method.

Level 2 – more HTTP verbs

This level uses other HTTP verbs such as GET, HEAD, DELETE, and PUT along with POST. Level 2 is the real use case of REST, which advocates using different verbs based on the HTTP request methods, and the system can have multiple resources.

Level 3 – HATEOAS

Hypermedia as the Engine of Application State (HATEOAS) is the most mature level of Richardson's model. The responses to client requests contain hypermedia controls, which help the client decide what action to take next. Level 3 encourages easy discoverability and makes responses self-explanatory.

Safety and idempotence

This section discusses safe and idempotent methods in detail.

Safe methods

Safe methods are methods that do not change state on the server. GET and HEAD are safe methods. For example, GET /v1/coffees/orders/1234 is a safe request. Responses to safe methods can be cached. The PUT method is not safe, as it will create or modify a resource on the server. The POST method is not safe for the same reason. The DELETE method is not safe, as it deletes a resource on the server.

Idempotent methods

An idempotent method is a method that produces the same result irrespective of how many times it is called. For example, the GET method is idempotent, as multiple calls to a GET resource will always return the same response. The PUT method is idempotent, as calling it multiple times will update the same resource and not change the outcome. POST is not idempotent: calling the POST method multiple times can have different results and will result in creating new resources.
DELETE is idempotent because once the resource is deleted, it is gone, and calling the method multiple times will not change the outcome.

HTTP verbs and REST

HTTP verbs inform the server what to do with the data sent as part of the URL.

GET

GET is the simplest HTTP verb; it retrieves a resource. Whenever the client clicks a URL in the browser, it sends a GET request to the address specified by the URL. GET is safe and idempotent, and GET responses can be cached. Query parameters can be used in GET requests. For example, a simple GET request is as follows:

curl http://api.foo.com/v1/user/12345

POST

POST is used to create a resource. POST requests are neither idempotent nor safe. Multiple invocations of a POST request can create multiple resources. A POST request should invalidate a cache entry if one exists. Query parameters with POST requests are not encouraged. For example, a POST request to create a user could be:

curl -X POST -d '{"name":"John Doe","username":"jdoe","phone":"412-344-5644"}' http://api.foo.com/v1/user

PUT

PUT is used to update a resource. PUT is idempotent but not safe. Multiple invocations of a PUT request should produce the same result by updating the resource. A PUT request should invalidate the cache entry if one exists. For example, a PUT request to update a user could be:

curl -X PUT -d '{"phone":"413-344-5644"}' http://api.foo.com/v1/user

DELETE

DELETE is used to delete a resource. DELETE is idempotent but not safe. DELETE is idempotent because, based on RFC 2616, "the side effects of N > 0 requests is the same as for a single request". This means that once the resource is deleted, calling DELETE multiple times will get the same response. For example, a request to delete a user is as follows:

curl -X DELETE http://foo.api.com/v1/user/1234

HEAD

HEAD is similar to a GET request. The difference is that only the HTTP headers are returned, with no content. HEAD is idempotent and safe.
For example, a HEAD request with curl is as follows:

curl -X HEAD http://foo.api.com/v1/user

It can be useful to send a HEAD request to see whether a resource has changed before trying to fetch a large representation using a GET request.

PUT vs POST

According to RFC 2616, the difference between PUT and POST is in the Request-URI. The URI in a POST request identifies the entity that will handle the request, whereas the URI in a PUT request identifies the entity itself. So POST /v1/coffees/orders means to create a new resource and return an identifier to describe it. In contrast, PUT /v1/coffees/orders/1234 means to update the resource identified by "1234" if it exists; otherwise, create a new order and use the URI orders/1234 to identify it.

Best practices when designing resources

This section highlights some of the best practices when designing RESTful resources:

- Use nouns to identify and navigate through resources, and the HTTP methods as the verbs. For example, the URI /user/1234/books is better than /user/1234/getBook.
- Use associations in the URIs to identify subresources. For example, to get the authors for book 5678 for user 1234, use the URI /user/1234/books/5678/authors.
- For specific variations, use query parameters. For example, to get all the books with 10 reviews, use /user/1234/books?reviews_counts=10.
- Allow partial responses as part of query parameters if possible. For example, to get only the name and age of a user, the client can specify a ?fields query parameter listing the fields the server should send in the response: /users/1234?fields=name,age.
- Have a default output format for the response in case the client does not specify which format it is interested in. Most API developers choose JSON as the default response MIME type.
- Use camelCase or underscores consistently for attribute names.
- Support a standard API for counts on collections, for example users/1234/books/count, so the client can get an idea of how many objects to expect in the response. This will also help the client with pagination queries.
- Support a pretty-printing option, for example users/1234?pretty_print. It is also good practice not to cache queries that use the pretty-print query parameter.
- Avoid chattiness by being as complete as possible in the response: if the server does not provide enough detail, the client needs to make more calls to get additional details, which wastes network resources and counts against the client's rate limits.

REST architecture components

This section covers the various components that must be considered when building RESTful APIs.

As seen in the preceding screenshot, REST services can be consumed from a variety of clients and applications running on different platforms and devices, such as mobile devices and web browsers. These requests are sent through a proxy server. The HTTP requests are sent to the resources and, based on the various CRUD operations, the right HTTP method is selected. On the response side there can be pagination, to ensure the server sends a subset of results, and the server can do asynchronous processing, improving responsiveness and scale. There can be links in the response, which relates to HATEOAS.

Here is a summary of the various REST architectural components:

- HTTP requests use the REST API with HTTP verbs for the uniform interface constraint.
- Content negotiation allows selecting a representation for a response when multiple representations are available.
- Logging helps provide traceability to analyze and debug issues.
- Exception handling allows sending application-specific exceptions with HTTP codes.
- Authentication and authorization with OAuth 2.0 give other applications access to take actions without the user having to send their credentials.
- Validation provides support for sending detailed messages with error codes back to the client, as well as validation of the inputs received in the request.
- Rate limiting ensures the server is not burdened with too many requests from a single client.
- Caching helps improve application responsiveness.
- Asynchronous processing enables the server to send responses back to the client asynchronously.
- Microservices comprise breaking up a monolithic service into fine-grained services.
- HATEOAS improves usability, understandability, and navigability by returning a list of links in the response.
- Pagination allows clients to specify the items in a dataset that they are interested in.

The REST architectural components in the image can be chained one after the other, as shown earlier. For example, there can be a filter chain consisting of filters for authentication, rate limiting, caching, and logging. This chain takes care of authenticating the user and checking whether the requests from the client are within rate limits, followed by a caching filter that checks whether the request can be served from the cache, and a logging filter that logs the details of the request.

For more details, check RESTful Patterns and Best Practices.
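The filter chain described above can be sketched as composed middleware functions. This is an illustrative sketch in plain JavaScript, not any particular framework's API; the filter names, the request shape, and the rate limit of two requests are all assumptions made for the example.

```javascript
// Each filter takes (req, next) and either rejects the request early or delegates.
const authFilter = (req, next) =>
  req.token === 'secret' ? next(req) : { status: 401, body: 'unauthenticated' };

// A deliberately tiny rate limiter: at most 2 requests per client, ever.
const counts = new Map();
const rateLimitFilter = (req, next) => {
  const n = (counts.get(req.client) || 0) + 1;
  counts.set(req.client, n);
  return n > 2 ? { status: 429, body: 'rate limit exceeded' } : next(req);
};

// Records one log line per request that reaches this filter.
const log = [];
const loggingFilter = (req, next) => {
  log.push(`${req.method} ${req.url}`);
  return next(req);
};

// Fold the filters right-to-left so requests flow through them left-to-right.
const chain = (filters, handler) =>
  filters.reduceRight((next, filter) => req => filter(req, next), handler);

const handler = req => ({ status: 200, body: 'ok' });
const app = chain([authFilter, rateLimitFilter, loggingFilter], handler);
```

A request with a bad token is rejected by the first filter and never touches the rate limiter or the log; a valid request passes through every filter to the handler, which mirrors how a real servlet-style filter chain short-circuits.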
Quizzes and Interactions in Camtasia Studio

Packt
21 Aug 2014
12 min read
This article by David B. Demyan, the author of the book eLearning with Camtasia Studio, covers the different types of interactions, a description of how interactions are created and how they function, and the quiz feature. In this article, we will cover the following specific topics:

- The types of interactions available in Camtasia Studio
- Video player requirements
- Creating simple action hotspots
- Using the quiz feature

(For more resources related to this topic, see here.)

Why include learner interactions?

Interactions in e-learning support cognitive learning, the application of behavioral psychology to teaching. Students learn a lot when they perform an action based on the information they are presented. Without exhausting the volumes written about this subject, your own background has probably prepared you for creating effective materials that support cognitive learning. To boil it down for our purposes, you present information in chunks and ask learners to demonstrate whether they have received the signal. In the classroom, this is immortalized as a teacher presenting a lecture and asking questions, a basic educational model. In another scenario, it might be an instructor showing a student how to perform a mechanical task and then asking the student to repeat the same task.

We know from experience that learners struggle with concepts if you present too much information too rapidly without checking to see if they understand it. In e-learning, the most effective ways to prevent confusion involve chunking information into small, digestible bites and mapping them into an overall program that allows the learner to progress in a logical fashion, all the while interacting and demonstrating comprehension. Interaction is vital to keep your students awake and aware. Interaction, or two-way communication, can take your e-learning video to the next level: a true cognitive learning experience.
Interaction types

While Camtasia Studio does not pretend to be a full-featured interactive authoring tool, it does contain some features that allow you to build interactions and quizzes. This section defines the features that let learners take action while viewing an e-learning video when you prompt them for an interaction. There are three types of interactions available in Camtasia Studio:

- Simple action hotspots
- Branching hotspots
- Quizzes

You are probably already thinking of ways these techniques can help support cognitive learning.

Simple action hotspots

Hotspots are click areas. You indicate where the hotspot is using a visual cue, such as a callout. Camtasia allows you to designate the area covered by the callout as a hotspot and define the action to take when it is clicked. An example is to take the learner to another time in the video when the hotspot is clicked. Another click could take the learner back to the original place in the video.

Quizzes

Quizzes are simple questions you can insert in the video, created and implemented to conform to your testing strategy. The question types available are as follows:

- Multiple choice
- Fill in the blanks
- Short answer
- True/false

Video player requirements

Before we learn how to create interactions in Camtasia Studio, you should know about some special video player requirements. A simple video file playing on a computer cannot be interactive by itself. A video created and produced in Camtasia Studio, without some additional program elements, cannot react when you click on it, except for what the video player tells it to do. For example, the default player for YouTube videos stops and starts the video when you click anywhere in the video space. Click interactions in videos created with Camtasia are able to recognize where clicks occur and the actions to take. You provide the click instructions when you set up the interaction.
These instructions are required, for example, to intercept the clicking action, determine where exactly the click occurred, and link that spot with a command and destination. These click instructions may be any combination of HyperText Markup Language (HTML), HTML5, JavaScript, and Flash ActionScript. Camtasia takes care of creating the coding behind the scenes, associated with the video player being used. In the case of videos produced with Camtasia Studio, to implement any form of interactivity, you need to select the default Smart Player output options when producing the video.

Creating simple hotspots

The most basic interaction is clicking a hotspot layered over the video. You can create an interactive hotspot for many purposes, including the following:

- Taking learners to a specific marker or frame within the video, as determined on the timeline
- Allowing learners to replay a section of the video
- Directing learners to a website or document to view reference material
- Showing a pop-up with additional information, such as a phone number or web link

Try it – creating a hotspot

If you are building the exercise project featured in this book, let's use it to create an interactive hotspot. The task in this exercise is to pause the video and add a Replay button to allow viewers to review a task. After the replay, a prompt will be added to resume the video from where it was paused.

Inserting the Replay/Continue buttons

The first step is to insert a Replay button to allow viewers to review what they just saw or continue without reviewing. This involves adding two hotspot buttons on the timeline, which can be done by performing the following steps:

1. Open your exercise project in Camtasia Studio, or one of your own projects where you can practice.
2. Position the play head right after the part where text is shown being pasted into the CuePrompter window.
3. From the Properties area, select Callouts from the task tabs above the timeline.
4. In the Shape area, select Filled Rounded Rectangle (at the upper-right corner of the drop-down selection). A shape is added to the timeline.
5. Set the Fade in and Fade out durations to about half a second.
6. Select the Effects dropdown and choose Style. Choose the 3D Edge style; it looks like a raised button. Set any other formatting so the button looks the way you want in the preview window.
7. In the Text area, type your button text. For the sample project, enter Replay Copy & Paste.
8. Select the button in the preview window and make a copy of it. You can use Ctrl + C to copy and Ctrl + V to paste the button.
9. In the second copy of the button, select the text and retype it as Continue. It should be stacked on the timeline as shown in the following screenshot.
10. Select the Continue button in the preview window and drag it to the right-hand side, at the same height and distance from the edge. The final placement of the buttons is shown in the sample project.
11. Save the project.

Adding a hotspot to the Continue button

The buttons are currently inactive images on the timeline. Viewers could click them in the produced video, but nothing would happen. To make them active, enable the Hotspot properties for each button. To add a hotspot to the Continue button, perform the following steps:

1. With the Continue button selected, select the Make hotspot checkbox in the Callouts panel.
2. Click on the Hotspot Properties... button to set properties for the callout button.
3. Under Actions, make sure Click to continue is selected.
4. Click on OK.

The Continue button now has an active hotspot assigned to it. When published, the video will pause when the button appears. When the viewer clicks on Continue, the video will resume playing. You can test the video and the operation of the interactive buttons as described later in this article.
Adding a hotspot to the Replay button

Now, let's move on to create an action for the Replay Copy & Paste button:

1. Select the Replay Copy & Paste button in the preview window.
2. Select the Make hotspot checkbox in the Callouts panel.
3. Click on the Hotspot properties... button.
4. Under Actions, select Go to frame at time.
5. Enter the time code for the spot on the timeline where you want to start the replay. In the sample video, this is around 0:01:43;00, just before text is copied in the script.
6. Click on OK.
7. Save the project.

The Replay Copy & Paste button now has an active hotspot assigned to it. Later, when published, the video will pause when the button appears. When viewers click on Replay Copy & Paste, the video will be repositioned at the time you entered and begin playing from there.

Using the quiz feature

A quiz added to a video sets it apart. The addition of knowledge checks and quizzes to assess your learners' understanding of the material presented puts the video into the true e-learning category. By definition, a knowledge check is a way for the student to check their understanding without worrying about scoring. Typically, feedback is given to the student to help them better understand the material, the question, and their answer. The feedback can be terse, such as correct and incorrect, or it can be verbose, telling them whether the answer is correct and perhaps giving additional information, a hint, or even the correct answers, depending on your strategy in creating the knowledge check.

A quiz can take the same form as a knowledge check, but a record of the student's answer is created and reported to an LMS or via an e-mail report. Feedback to the student is optional, again depending on your testing strategy. In Camtasia Studio, you can insert a quiz question or set of questions anywhere on the timeline you deem appropriate. This is done with the Quizzing task tab.
Try it – inserting a quiz

In this exercise, you will select a spot on the timeline to insert a quiz, enable the Quizzing feature, and write some appropriate questions following the sample project, Using CuePrompter.

Creating a quiz

Place your quiz after you have covered a block of information. The sample project, Using CuePrompter, is a very short task-based tutorial showing some basic steps. Assume for now that you are teaching a course on CuePrompter and need to assess students' knowledge. I believe a good place for a quiz is after the commands to scroll forward, speed up, slow down, and scroll in reverse. Let's give it a try with multiple choice and true/false questions:

1. Position the play head at the appropriate part of the timeline. In the sample video, the end of the scrolling command description is at about 3 minutes 12 seconds.
2. Select Quizzing in the task tabs. If you do not see the Quizzing tab above the timeline, select the More tab to reveal it.
3. Click on the Add quiz button to begin adding questions. A marker appears on the timeline where your quiz will appear during the video, as illustrated in the following screenshot.
4. In the Quiz panel, add a quiz name. In the sample project, the quiz is entitled CuePrompter Commands.
5. Scroll down to Question type. Make sure Multiple Choice is selected from the dropdown.
6. In the Question box, type the question text. In the sample project, the first question is: With text in the prompter ready to go, the keyboard control to start scrolling forward is _________________.
7. In the Answers box, double-click on the checkbox text that says Default Answer Text. Retype the answer Control-F.
8. In the next checkbox text that says <Type an answer choice here>, double-click and add the second possible answer, Spacebar. Check the box next to it to indicate that it is the correct answer.
9. Add two more choices: Alt-Insert and Tab. Your Quiz panel should look like the following screenshot.
10. Click on Add question.
11. From the Question type dropdown, select True/False.
12. In the Question box, type: You can stop CuePrompter with the End key.
13. In Answers, select False.
14. For the final question, click on Add question again.
15. From the Question type dropdown, select Multiple Choice.
16. In the Question box, type: Which keyboard command tells CuePrompter to reverse?
17. Enter the four possible answers: Left arrow, Right arrow, Down arrow, and Up arrow.
18. Select Down arrow as the correct answer.
19. Save the project.

Now you have entered three questions and answer choices, while indicating the choice that will be scored correct if selected. Next, preview the quiz to check its format and function.

Previewing the quiz

Camtasia Studio allows you to preview quizzes for correct formatting, wording, and scoring. Continue to follow along in the exercise project and perform the following steps:

1. Leave checkmarks in the Score quiz and Viewer can see answers after submitting boxes.
2. Click on the Preview button. A web page opens in your Internet browser showing the questions, as shown in the following screenshot.
3. Select an answer and click on Next. The second quiz question is displayed.
4. Select an answer and click on Next. The third quiz question is displayed.
5. Select an answer and click on Submit Answers. As this is the final question, there is no Next. Since we left the Score quiz and Viewer can see answers after submitting options selected, the learner receives a prompt, as shown in the following screenshot.
6. Click on View Answers to review the answers you gave. Correct responses are shown with a green checkmark and incorrect ones with a red X mark.
7. If you do not want your learners to see the answers, remove the checkmark from Viewer can see answers after submitting.
8. Exit the browser to discontinue previewing the quiz.
9. Save the project.

This completes the Try it exercise for inserting and previewing a quiz in your video e-learning project.
Summary

In this article, we learned about the different types of interactions, video player requirements, creating simple action hotspots, and inserting and previewing a quiz.

Resources for Article:

Further resources on this subject:

- Introduction to Moodle [article]
- Installing Drupal [article]
- Make Spacecraft Fly and Shoot with Special Effects using Blender 3D 2.49 [article]
Setting Up The Rig

Packt
21 Aug 2014
16 min read
In this article by Vinci Rufus, the author of the book AngularJS Web Application Development Blueprints, we will see the process of setting up the various tools required to start building AngularJS apps.

I'm sure you have heard the saying, "A tool man is known by the tools he keeps." OK fine, I just made that up, but it's actually true, especially when it comes to programming. Sure, you can build complete and fully functional AngularJS apps using just a simple text editor and a browser, but if you want to work like a ninja, then make sure that you start using some of these tools as a part of your development workflow. Do note that these tools are not mandatory for building AngularJS apps; their use is recommended mainly to help improve productivity. In this article, we will see how to set up and use the following productivity tools:

- Node.js
- Grunt
- Yeoman
- Karma
- Protractor

Since most of us are running a Mac, Windows, Ubuntu, or another flavor of the Linux operating system, we'll be covering the deployment steps common to all of them.

(For more resources related to this topic, see here.)

Setting up Node.js

Depending on your technology stack, I strongly recommend you have either Ruby or Node.js installed. In the case of AngularJS, most of the productivity tools and plugins are available via the Node Package Manager (npm), and hence we will be setting up Node.js along with npm. Node.js is an open source JavaScript-based platform that uses an event-based input/output model, making it lightweight and fast.

Let us head over to www.nodejs.org and install Node.js. Choose the right version as per your operating system. The current version of Node.js at the time of writing this article is v0.10.x, which comes with npm built in, making it a breeze to set up Node.js and npm. Node.js doesn't come with a Graphical User Interface (GUI), so to use Node.js, you will need to open up your terminal and start firing some commands.
Now would also be a good time to brush up on your DOS and Unix/Linux commands. After installing Node.js, the first thing you'll want to check is whether Node.js has been installed correctly. So, let us open up the terminal and run the following command:

node --version

This should output the version number of the Node.js installed on your system. Next, check what version of npm you have installed. The command for that is as follows:

npm --version

This will tell you the version number of your npm.

Creating a simple Node.js web server with ExpressJS

For basic, simple AngularJS apps, you don't really need a web server. You can simply open the HTML files from your filesystem and they will work just fine. However, as you start building complex applications where you are passing data as JSON, calling web services, or using a Content Delivery Network (CDN), you will find the need for a web server. The good thing about AngularJS apps is that they can work within any web server, so if you already have IIS, Apache, Nginx, or any other web server running on your development environment, you can simply run your AngularJS project from within the web root folder.

In case you don't have a web server and are looking for a lightweight one, let us set one up using Node.js and ExpressJS. One could write the entire web server in pure Node.js; however, ExpressJS provides a nice layer of abstraction on top of Node.js so that you can just work with the ExpressJS APIs and don't have to worry about the low-level calls. So, let's first install the ExpressJS module for Node.js. Open up your terminal and run the following command:

npm install -g express-generator

This will globally install ExpressJS. Omit the -g to install ExpressJS locally in the current folder.
When installing ExpressJS globally on Linux or Mac, you will need to run it via sudo as follows:

sudo npm install -g express-generator

This will let npm have the necessary permissions to write to the protected local folder under the user. The next step is to create an ExpressJS app; let us call it my-server. Type the following command in the terminal and hit Enter:

express my-server

You'll see something like this:

   create : my-server
   create : my-server/package.json
   create : my-server/app.js
   create : my-server/public
   create : my-server/public/javascripts
   create : my-server/public/images
   create : my-server/public/stylesheets
   create : my-server/public/stylesheets/style.css
   create : my-server/routes
   create : my-server/routes/index.js
   create : my-server/routes/user.js
   create : my-server/views
   create : my-server/views/layout.jade
   create : my-server/views/index.jade

   install dependencies:
     $ cd my-server && npm install

   run the app:
     $ DEBUG=my-server ./bin/www

This will create a folder called my-server and put a bunch of files inside it. The package.json file is created, which contains the skeleton of your app. Open it and ensure the name says my-server; also note the dependencies listed. Now, to install ExpressJS along with the dependencies, first change into the my-server directory and run the following commands in the terminal:

cd my-server
npm install

Now, in the terminal, type in the following command:

npm start

Open your browser and type http://localhost:3000 in the address bar. You'll get a nice ExpressJS welcome message. Now, to test our Address Book app, we will copy our index.html, scripts.js, and styles.css files into the public folder located within my-server. I'm not copying the angular.js file because we'll use the CDN version of the AngularJS library.
Open up the index.html file and replace the following code:

<script src="angular.min.js" type="text/javascript"></script>

with the CDN version of AngularJS, as follows:

<script src="//ajax.googleapis.com/ajax/libs/angularjs/1.2.17/angular.min.js"></script>

A question might arise as to what happens if the CDN is unreachable. In such cases, we can add a fallback to a local version of the AngularJS library. We do this by adding the following script after the CDN link is called:

<script>window.angular || document.write('<script src="lib/angular/angular.min.js"><\/script>');</script>

Save the file and, in the browser, enter localhost:3000/index.html. Your Address Book app is now running from a server and taking advantage of Google's CDN to serve the AngularJS file.

Referencing files using only // is also called using a protocol-independent absolute path. This means that the files are requested using the same protocol that is being used to call the parent page. For example, if the page you are loading is served via https://, then the CDN link will also be called via HTTPS. This also means that when using // instead of http:// during development, you will need to run your app from within a server instead of the filesystem.

Setting up Grunt

Grunt is a JavaScript-based task runner. It is primarily used for automating tasks such as running unit tests and concatenating, merging, and minifying JS and CSS files. You can also run shell commands. This makes it super easy to perform server clean-ups and deploy code. Essentially, Grunt is to JavaScript what Rake would be to Ruby or Ant/Maven would be to Java.

Installing Grunt-cli

Installing Grunt-cli is slightly different from installing other Node.js modules. We first need to install Grunt's Command Line Interface (CLI) by firing the following command in the terminal:

npm install -g grunt-cli

Mac or Linux users can also directly run the following command:

sudo npm install -g grunt-cli

Make sure you have administrative privileges.
Use sudo if you are on a Mac or Linux system. If you are on Windows, right-click and open the command prompt with administrative rights. An important thing to note is that installing Grunt-cli doesn't automatically install Grunt and its dependencies. Grunt-cli merely invokes the version of Grunt installed alongside the Grunt file. While this may seem a little complicated at first, the reason it works this way is so that we can run different versions of Grunt on the same machine. This comes in handy when your project depends on a specific version of Grunt.

Creating the package.json file

To install Grunt, let's first create a folder called my-project and, inside it, a file called package.json with the following content:

{
  "name": "My-Project",
  "version": "0.1.0",
  "devDependencies": {
    "grunt": "~0.4.5",
    "grunt-contrib-jshint": "~0.10.0",
    "grunt-contrib-concat": "~0.4.0",
    "grunt-contrib-uglify": "~0.5.0",
    "grunt-shell": "~0.7.0"
  }
}

Save the file. The package.json file is where you define the various parameters of your app; for example, the name of your app, the version number, and the list of dependencies needed for the app. Here we are calling our app My-Project with version 0.1.0, and listing out the following dependencies that need to be installed as a part of this app:

grunt (v0.4.5): This is the main Grunt application
grunt-contrib-jshint (v0.10.0): This is used for code analysis
grunt-contrib-concat (v0.4.0): This is used to merge two or more files into one
grunt-contrib-uglify (v0.5.0): This is used to minify the JS file
grunt-shell (v0.7.0): This is the Grunt shell used for running shell commands

Visit http://gruntjs.com/plugins to get a list of all the plugins available for Grunt along with their exact names and version numbers. You may also choose to create a default package.json file by running the following command and answering the questions:

npm init

Open the package.json file and add the dependencies as mentioned earlier.
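The tilde (~) prefix on each version number is npm's way of allowing patch-level updates: ~0.4.5 accepts 0.4.5 and 0.4.9 but not 0.5.0. As a rough sketch of that rule in plain JavaScript (the satisfiesTilde helper is hypothetical; npm's real matching lives in its semver library):

```javascript
// A rough sketch of npm's tilde range rule: "~0.4.5" allows
// patch-level changes only, i.e. >= 0.4.5 and < 0.5.0.
// (Hypothetical helper; not npm's actual implementation.)
function satisfiesTilde(version, range) {
  var min = range.slice(1).split('.').map(Number); // strip the "~"
  var v = version.split('.').map(Number);
  return v[0] === min[0] &&  // major must match
         v[1] === min[1] &&  // minor must match
         v[2] >= min[2];     // patch may move forward
}

console.log(satisfiesTilde('0.4.9', '~0.4.5')); // true
console.log(satisfiesTilde('0.5.0', '~0.4.5')); // false
```

This is why rerunning npm install months later can pull slightly newer patch releases of the same plugins.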
Now that we have the package.json file, load the terminal and navigate into the my-project folder. To install Grunt and the modules specified in the file, type in the following command:

npm install --save-dev

You'll see a series of lines getting printed in the console; let that continue for a while and wait until it returns to the command prompt. Ensure that the last line printed by the previous command ends with OK code 0. Once Grunt is installed, a quick version check will confirm that Grunt is installed correctly. The command is as follows:

grunt --version

There is a possibility that you got a bunch of errors and it ended with a not ok code 0 message. There could be multiple reasons why that happened, ranging from errors in your code to a network connection issue or something changing at Grunt's end due to a new version update. If grunt --version throws up an error, it means Grunt wasn't installed properly. To reinstall Grunt, enter the following commands in the terminal:

rm -rf node_modules
npm cache clean
npm install

Windows users may manually delete the node_modules folder from Windows Explorer before running the cache clean command in the command prompt. Refer to http://www.gruntjs.com to troubleshoot the problem.

Creating your Grunt tasks

To run our Grunt tasks, we'll need a JavaScript file. So, let's copy our scripts.js file and place it in the my-project folder. The next step is to create a Grunt file that will list out the tasks that we need Grunt to perform. For now, we will ask it to do four simple tasks: first, check whether our JS code is clean using JSHint; then merge three JS files into one; then minify the merged JS file; and finally run some shell commands to clean up. Until Version 0.3, the init command was a part of the Grunt tool and one could create a blank project using grunt-init.
With Version 0.4, init is now available as a separate tool called grunt-init and needs to be installed using the npm install -g grunt-init command. Also note that the structure of the grunt.js file from Version 0.4 onwards is fairly different from the earlier versions you may have used. For now, we will resort to creating the Grunt file manually.

In the same location where you have your package.json, create a file called gruntfile.js and type in the following code:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    jshint: {
      all: ['scripts.js']
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');

  // Default task.
  grunt.registerTask('default', ['jshint']);
};

To start, we will add only one task, which is jshint, and specify scripts.js in the list of files that need to be linted. In the next line, we specify grunt-contrib-jshint as the npm task that needs to be loaded. In the last line, we define jshint as the task to be run when Grunt is running in default mode. Save the file, and in the terminal run the following command:

grunt

You will probably see JSHint report that we are missing a semicolon on lines 18 and 24. Oh! Did I mention that JSHint is like your very strict math teacher from high school? Let's open up scripts.js, put in those semicolons, and rerun Grunt. Now you should get a message in green saying 1 file lint free. Done, without errors.

Let's add some more tasks to Grunt. We'll now ask it to concatenate and minify a couple of JS files. Since we currently have just one file, let's go and create two dummy JS files called scripts1.js and scripts2.js.
In scripts1.js, we'll simply write an empty function as follows:

// This is from script 1
function Script1Function(){
  //------//
}

Similarly, in scripts2.js we'll write the following:

// This is from script 2
function Script2Function(){
  //------//
}

Save these files in the same folder where you have scripts.js.

Grunt tasks to merge and concatenate files

Now, let's open our Grunt file and add the code for both tasks, merging the JS files and then minifying them, as follows:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    jshint: {
      all: ['scripts.js']
    },
    concat: {
      dist: {
        src: ['scripts.js', 'scripts1.js', 'scripts2.js'],
        dest: 'merged.js'
      }
    },
    uglify: {
      dist: {
        src: 'merged.js',
        dest: 'build/merged.min.js'
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  // Default task.
  grunt.registerTask('default', ['jshint', 'concat', 'uglify']);
};

As you can see from the preceding code, after the jshint task, we added the concat task. Under the src attribute, we define the files, separated by commas, that need to be concatenated. In the dest attribute, we specify the name of the merged JS file. It is very important that the files are entered in the same sequence in which they need to be merged; if the sequence is incorrect, the merged JS file will cause errors in your app. The uglify task is used to minify the JS file, and its structure is very similar to the concat task. We add the merged.js file to the src attribute, and in the dest attribute we place the merged.min.js file into a folder called build. Grunt will auto-create the build folder. After defining the tasks, we load the necessary plugins, namely grunt-contrib-concat and grunt-contrib-uglify, and finally we register the concat and uglify tasks as part of the default task. Save the file and run Grunt.
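The warning about file sequence deserves a small illustration. The concatSources function below is a stand-in for what grunt-contrib-concat effectively does, not the plugin's real API; it shows why a file that uses a value must come after the file that defines it:

```javascript
// Join source fragments in the order given -- effectively what
// grunt-contrib-concat does with the files listed in `src`.
function concatSources(sources) {
  return sources.join('\n');
}

var defines = 'var app = { name: "myApp" };'; // like scripts.js
var uses    = 'var result = app.name;';       // a later file relying on `app`

// Correct order: the definition comes before the code that uses it.
var merged = concatSources([defines, uses, 'return result;']);
console.log(new Function(merged)()); // "myApp"

// Reversed order fails as soon as the merged file runs: `app` is
// still undefined when `app.name` is read.
var broken = concatSources([uses, defines, 'return result;']);
var threw = false;
try { new Function(broken)(); } catch (e) { threw = true; }
console.log(threw); // true
```

The trailing 'return result;' fragment exists only so the merged source can be executed via the Function constructor for demonstration; in a browser, the merged file would simply run top to bottom with the same ordering constraint.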
If all goes well, you should see Grunt running these tasks and reporting the status of each. If you get the final message saying Done, without errors, it means things went well, and this was your lucky day! If you now open your my-project folder in the file manager, you should see a new file called merged.js. Open it in the text editor and you'll notice that all three files have been merged into it. Also, open the build/merged.min.js file and verify that the file is minified.

Running shell commands via Grunt

Another really helpful plugin for Grunt is grunt-shell. This allows us to effectively run clean-up activities such as deleting .tmp files and moving files from one folder to another. Let's see how to add the shell tasks to our Grunt file. Add the shell task to the configuration as follows:

module.exports = function(grunt) {
  // Project configuration.
  grunt.initConfig({
    jshint: {
      all: ['scripts.js']
    },
    concat: {
      dist: {
        src: ['scripts.js', 'scripts1.js', 'scripts2.js'],
        dest: 'merged.js'
      }
    },
    uglify: {
      dist: {
        src: 'merged.js',
        dest: 'build/merged.min.js'
      }
    },
    shell: {
      multiple: {
        command: [
          'rm -rf merged.js',
          'mkdir deploy',
          'mv build/merged.min.js deploy/merged.min.js'
        ].join('&&')
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-jshint');
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-shell');

  // Default task.
  grunt.registerTask('default', ['jshint', 'concat', 'uglify', 'shell']);
};

As you can see from the code we added, we first delete the merged.js file, then create a new folder called deploy, and finally move our merged.min.js file into it. Windows users will need to use the appropriate DOS commands for deleting and copying the files. Note that .join('&&') is used when you want Grunt to run multiple shell commands. The next steps are to load the npm task and add shell to the default task list.
To see Grunt perform all these tasks, run the grunt command in the terminal. Once it's done, open up the filesystem and verify whether Grunt has done what you asked it to do. Just like the preceding four plugins, there are numerous other plugins that you can use with Grunt to automate your tasks.

A point to note: while the default grunt command will run all the tasks mentioned in the grunt.registerTask statement, if you need to run a specific task instead of all of them, you can simply type the following in the command line:

grunt jshint

Alternatively, you can type the following command:

grunt concat

Alternatively, you can type the following command:

grunt uglify

At times, if you'd like to run just two of the three tasks, you can register them separately as another bundled task in the Grunt file. Open up the gruntfile.js file and, just after the line where you have registered the default task, add the following code:

grunt.registerTask('concat-min', ['concat', 'uglify']);

This will register a new task called concat-min that runs only the concat and uglify tasks. In the terminal, run the following command:

grunt concat-min

Verify that Grunt only concatenated and minified the files and didn't run JSHint or your shell commands. You can run grunt --help to see a list of all the tasks available in your Grunt file.
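Grunt's task aliasing can be pictured as a small lookup table that maps a task name to either a function or a list of other task names. The following is a simplified model for illustration only, not Grunt's actual implementation:

```javascript
// A miniature model of Grunt task registration and aliasing --
// for illustration only, not Grunt's real internals.
function TaskRunner() {
  this.tasks = {}; // task name -> function, or array of task names (an alias)
  this.log = [];   // records the order in which tasks actually ran
}

TaskRunner.prototype.registerTask = function (name, impl) {
  this.tasks[name] = impl;
};

TaskRunner.prototype.run = function (name) {
  var impl = this.tasks[name];
  if (Array.isArray(impl)) {
    impl.forEach(this.run, this); // an alias runs each listed task in order
  } else {
    impl();
    this.log.push(name);
  }
};

var runner = new TaskRunner();
runner.registerTask('jshint', function () {});
runner.registerTask('concat', function () {});
runner.registerTask('uglify', function () {});
runner.registerTask('default', ['jshint', 'concat', 'uglify']);
runner.registerTask('concat-min', ['concat', 'uglify']);

runner.run('concat-min');
console.log(runner.log); // ['concat', 'uglify'] -- jshint never ran
```

Running the concat-min alias touches only the two tasks it lists, which is exactly the behavior you verified above with grunt concat-min.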
Packt
20 Aug 2014
14 min read

Now You're Ready!

In this article by Ryan John, author of the book Canvas LMS Course Design, we will have a look at the key points encountered during the course-building process, along with connections to educational philosophies and practices that support technology as a powerful way to enhance teaching and learning. (For more resources related to this topic, see here.) As you finish teaching your course, you will be well served to export your course to keep as a backup, to upload and reteach later within Canvas to a new group of students, or to import into another LMS. After covering how to export your course, we will tie everything we've learned together through a discussion of how Canvas can help you and your students achieve educational goals while acquiring important 21st century skills. Overall, we will cover the following topics: Exporting your course from Canvas to your computer Connecting Canvas to education in the 21st century Exporting your course Now that your course is complete, you will want to export the course from Canvas to your computer. When you export your course, Canvas compiles all the information from your course and allows you to download a single file to your computer. This file will contain all of the information for your course, and you can use this file as a master template for each time you or your colleagues teach the course. Exporting your course is helpful for two main reasons: It is wise to save a back-up version of your course on a computer. After all the hard work you have put into building and teaching your course, it is always a good decision to export your course and save it to a computer. If you are using a Free for Teachers account, your course will remain intact and accessible online until you choose to delete it. However, if you use Canvas through your institution, each institution has different procedures and policies in place regarding what happens to courses when they are complete. 
Exporting and saving your course will preserve your hard work and protect it from any accidental or unintended deletion. Once you have exported your course, you will be able to import your course into Canvas at any point in the future. You are also able to import your course into other LMSs such as Moodle or BlackBoard. You might wish to import your course back into Canvas if your course is removed from your institution-specific Canvas account upon completion. You will have a copy of the course to import for the next time you are scheduled to teach the same course. You might build and teach a course using a Free for Teachers account, and then later wish to import that version of the course into an institution-specific Canvas account or another LMS. Exporting your course does not remove the course from Canvas—your course will still be accessible on the Canvas site unless it is automatically deleted by your institution or if you choose to delete it. To export your entire course, complete the following steps: Click on the Settings tab at the bottom of the left-hand side menu, as pictured in the following screenshot: On the Settings page, look to the right-hand side menu. Click on the Export Course Content button, which is highlighted in the following screenshot: A screen will appear asking you whether you would like to export the Course or export a Quiz. To export your entire course, select the Course option and then click on Create Export, as shown in the following screenshot: Once you click on Create Export, a progress bar will appear. As indicated in the message below the progress bar, the export might take a while to complete, and you can leave the page while Canvas exports the content. The following screenshot displays this progress bar and message: When the export is complete, you will receive an e-mail from notifications@instructure.com that resembles the following screenshot. 
Click on the Click to view exports link in the e-mail: A new window or tab will appear in your browser that shows your Content Exports. Below the heading of the page, you will see your course export listed with a link that reads Click here to download, as pictured in the following screenshot. Go ahead and click on the link, and the course export file will be downloaded to your computer. Your course export file will be downloaded to your computer as a single .imscc file. You can then move the downloaded file to a folder on your computer's hard drive for later access. Your course export is complete, and you can save the exported file for later use. To access the content stored in the exported .imscc file, you will need to import the file back into Canvas or another LMS. You might notice an option to Conclude this Course on the course Settings page if your institution has not hidden or disabled this option. In most cases, it is not necessary to conclude your course if you have set the correct course start and end dates in your Course Details. Concluding your course prevents you from altering grades or accessing course content, and you cannot unconclude your course on your own. Some institutions conclude courses automatically, which is why it is always best to export your course to preserve your work. Now that we have covered the last how-to aspects of Canvas, let's close with some ways to apply the skills we have learned in this book to contemporary educational practices, philosophies, and requirements that you might encounter in your teaching. Connecting Canvas to education in the 21st century While learning how to use the features of Canvas, it is easy to forget the main purpose of Canvas' existence—to better serve your students and you in the process of education. In the midst of rapidly evolving technology, students and teachers alike require skills that are as adaptable and fluid as the technologies and new ideas swirling around them. 
While the development of various technologies might seem daunting, those involved in education in the 21st century have access to new and exciting tools that have never before existed. As an educator seeking to refine your craft, utilizing tools such as Canvas can help you and your students develop the skills that are becoming increasingly necessary to live and thrive in the 21st century. As attainment of these skills is indeed proving more and more valuable in recent years, many educational systems have begun to require evidence that instructors are cognizant of these skills and actively working to ensure that students are reaching valuable goals. Enacting the Framework for 21st Century Learning As education across the world continues to evolve through time, the development of frameworks, methods, and philosophies of teaching have shaped the world of formal education. In recent years, one such approach that has gained prominence in the United States' education systems is the Framework for 21st Century Learning, which was developed over the last decade through the work of the Partnership for 21st Century Skills (P21). This partnership between education, business, community, and government leaders was founded to help educators provide children in Kindergarten through 12th Grade (K-12) with the skills they would need going forward into the 21st century. Though the focus of P21 is on children in grades K-12, the concepts and knowledge articulated in the Framework for 21st Century Learning are valuable for learners at all levels, including those in higher education. In the following sections, we will apply our knowledge of Canvas to the desired 21st century student outcomes, as articulated in the P21 Framework for 21st Century Learning, to brainstorm the ways in which Canvas can help prepare your students for the future. 
Core subjects and 21st century themes The Framework for 21st Century Learning describes the importance of learning certain core subjects including English, reading or language arts, world languages, the arts, Mathematics, Economics, Science, Geography, History, Government, and Civics. In connecting these core subjects to the use of Canvas, the features of Canvas and the tips throughout this book should enable you to successfully teach courses in any of these subjects. In tandem with teaching and learning within the core subjects, P21 also advocates for schools to "promote understanding of academic content at much higher levels by weaving 21st century interdisciplinary themes into core subjects." The following examples offer insight and ideas for ways in which Canvas can help you integrate these interdisciplinary themes into your course. As you read through the following suggestions and ideas, think about strategies that you might be able to implement into your existing curriculum to enhance its effectiveness and help your students engage with the P21 skills: Global awareness: Since it is accessible from anywhere with an Internet connection, Canvas opens the opportunity for a myriad of interactions across the globe. Utilizing Canvas as the platform for a purely online course enables students from around the world to enroll in your course. As a distance-learning tool in colleges, universities, or continuing education departments, Canvas has the capacity to unite students from anywhere in the world to directly interact with one another: You might utilize the graded discussion feature for students to post a reflection about a class reading that considers their personal cultural background and how that affects their perception of the content. Taking it a step further, you might require students to post a reply comment on other students' reflections to further spark discussion, collaboration, and cross-cultural connections. 
As a reminder, it is always best to include an overview of online discussion etiquette somewhere within your course—you might consider adding a "Netiquette" section to your syllabus to maintain focus and a professional tone within these discussions. You might set up a conference through Canvas with an international colleague as a guest lecturer for a course in any subject. As a prerequisite assignment, you might ask students to prepare three questions to ask the guest lecturer to facilitate a real-time international discussion within your class. Financial, economic, business, and entrepreneurial literacy: As the world becomes increasingly digitized, accessing and incorporating current content from the Web is a great way to incorporate financial, economic, business, and entrepreneurial literacy into your course: In a Math course, you might consider creating a course module centered around the stock market. Within the module, you could build custom content pages offering direct instruction and introductions to specific topics. You could upload course readings and embed videos of interviews with experts with the YouTube app. You could link to live steam websites of the movement of the markets and create quizzes to assess students' understanding. Civic literacy: In fostering students' understanding of their role within their communities, Canvas can serve as a conduit of information regarding civic responsibilities, procedures, and actions: You might create a discussion assignment in which students search the Internet for a news article about a current event and post a reflection with connections to other content covered in the course. Offering guidance in your instructions to address how local and national citizenship impacts students' engagement with the event or incident could deepen the nature of responses you receive. 
Since discussion posts are visible to all participants in your course, a follow-up assignment might be for students to read one of the articles posted by another student and critique or respond to their reflection. Health literacy: Canvas can allow you to facilitate the exploration of health and wellness through the wide array of submission options for assignments. By utilizing the variety of assignment types you can create within Canvas, students are able to explore course content in new and meaningful ways: In a studio art class, you can create an out-of-class assignment to be submitted to Canvas in which students research the history, nature, and benefits of art therapy online and then create and upload a video sharing their personal relationship with art and connecting it to what they have found in the art therapy stories of others. Environmental literacy: As a cloud-based LMS, Canvas allows you to share files and course content with your students while maintaining and fostering an awareness of environmental sustainability: In any course you teach that involves readings uploaded to Canvas, encourage your students to download the readings to their computers or mobile devices rather than printing the content onto paper. Downloading documents to read on a device instead of printing them saves paper, reduces waste, and helps foster sustainable environmental habits. For PDF files embedded into content pages on Canvas, students can click on the preview icon that appears next to the document link and read the file directly on the content page without downloading or printing anything. Make a conscious effort to mention or address the environmental impacts of online learning versus traditional classroom settings, perhaps during a synchronous class conference or on a discussion board. 
Learning and innovation skills A number of specific elements combined can enable students to develop learning and innovation skills to prepare them for the increasingly "complex life and work environments in the 21st century." The communication setup of Canvas allows for quick and direct interactions while offering students the opportunity to contemplate and revise their contributions before posting to the course, submitting an assignment, or interacting with other students. This flexibility, combined with the ways in which you design your assignments, can help incorporate the following elements into your course to ensure the development of learning and innovation skills: Creativity and innovation: There are many ways in which the features of Canvas can help your students develop their creativity and innovation. As you build your course, finding ways for students to think creatively, work creatively with others, and implement innovations can guide the creation of your course assignments: You might consider assigning groups of students to assemble a content page within Canvas dedicated to a chosen or assigned topic. Do so by creating a content page, and then enable any user within the course to edit the page. Allowing students to experiment with the capabilities of the Rich Content Editor, embedding outside content and synthesizing ideas within Canvas allows each group's creativity to shine. As a follow-up assignment, you might choose to have students transfer the content of their content page to a public website or blog using sites such as Wikispaces, Wix, or Weebly. Once the sites are created, students can post their group site to a Canvas discussion page, where other students can view and interact with the work of their peers. Asking students to disseminate the class sites to friends or family around the globe could create international connections stemming from the creativity and innovation of your students' web content. 
Critical thinking and problem solving: As your students learn to overcome obstacles and find multiple solutions to complex problems, Canvas offers a place for students to work together to develop their critical thinking and problem-solving skills: Assign pairs of students to debate and posit solutions to a global issue that connects to topics within your course. Ask students to use the Conversations feature of Canvas to debate the issue privately, finding supporting evidence in various forms from around the Internet. Using the Collaborations feature, ask each pair of students to assemble and submit a final e-report on the topic, presenting the various solutions they came up with as well as supporting evidence in various electronic forms such as articles, videos, news clips, and websites. Communication and collaboration: With the seemingly impersonal nature of electronic communication, communication skills are incredibly important to maintain intended meanings across multiple means of communication. As the nature of online collaboration and communication poses challenges for understanding, connotation, and meaning, honing communication skills becomes increasingly important: As a follow-up assignment to the preceding debate suggestion, use the conferences tool in Canvas to set up a full class debate. During the debate, ask each pair of students to present their final e-report to the class, followed by a group discussion of each pair's findings, solutions, and conclusions. You might find it useful for each pair to explain their process and describe the challenges and/or benefits of collaborating and communicating via the Internet in contrast to collaborating and communicating in person.
Packt
20 Aug 2014
15 min read

AngularJS

In this article by Rodrigo Branas, author of the book AngularJS Essentials, we will go through the basics of AngularJS. Created by Miško Hevery and Adam Abrons in 2009, AngularJS is an open source, client-side JavaScript framework that promotes a high-productivity web development experience. It was built on the belief that declarative programming is the best choice for constructing the user interface, while imperative programming is better suited to implementing an application's business logic. To achieve that, AngularJS empowers traditional HTML by extending its current vocabulary, making the life of developers easier. The result is the development of expressive, reusable, and maintainable application components, leaving behind a lot of unnecessary code and keeping the team focused on the valuable and important things.

(For more resources related to this topic, see here.)

Architectural concepts

It's been a long time since the famous Model-View-Controller pattern, also known as MVC, started to be widely used in the software development industry, thereby becoming one of the legends of enterprise architecture design. Basically, the model represents the knowledge that the view is responsible for presenting, while the controller mediates the relationship between them. However, these concepts are a little bit abstract, and this pattern may have different implementations depending on the language, platform, and purposes of the application. After a lot of discussion about which architectural pattern the framework follows, its authors declared that from now on, AngularJS adopts Model-View-Whatever (MVW). Regardless of the name, the most important benefit is that the framework provides a clear separation of concerns between the application layers, providing modularity, flexibility, and testability.
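That separation of concerns can be made concrete with a few lines of plain JavaScript. This is a toy illustration of the MVC split, not AngularJS code; the object names and the parking example are for demonstration only:

```javascript
// A toy illustration of separating the model, view, and controller
// concerns in plain JavaScript (not AngularJS code).
var model = { cars: ['6MBV006'] }; // the knowledge to be presented

var view = { // responsible only for presenting the model
  render: function (m) {
    return 'Parked cars: ' + m.cars.join(', ');
  }
};

var controller = { // mediates between the view and the model
  park: function (m, plate) {
    m.cars.push(plate);
  }
};

controller.park(model, '5BBM299');
console.log(view.render(model)); // "Parked cars: 6MBV006, 5BBM299"
```

Because each concern lives in its own object, the view can be swapped or tested in isolation from the business logic, which is the property AngularJS formalizes at framework level.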
In terms of concepts, a typical AngularJS application consists primarily of a view, a model, and a controller, but there are other important components, such as services, directives, and filters. The view, also called the template, is entirely written in HTML, which becomes a great opportunity to see web designers and JavaScript developers working side by side. It also takes advantage of the directives mechanism, a kind of extension of the HTML vocabulary that brings the ability to perform programming-language tasks, such as iterating over an array or even evaluating an expression conditionally. Behind the view, there is the controller. At first, the controller contains all the business logic used by the view. However, as the application grows, it becomes really important to perform some refactoring activities, such as moving code from the controller to other components like services, in order to keep cohesion high. The connection between the view and the controller is made by a shared object called the scope. It is located between them and is used to exchange information related to the model. The model is a simple Plain-Old-JavaScript-Object (POJO). It looks very clear and easy to understand, bringing simplicity to development by not requiring any special syntax to be created.

Setting up the framework

The configuration process is very simple. In order to set up the framework, we start by importing the angular.js script into our HTML file. After that, we need to create the application module by calling the module function, from Angular's API, with its name and dependencies. With the module already created, we just need to place the ng-app attribute with the module's name inside the html element or any other element that surrounds the application. This attribute is important because it supports the initialization process of the framework. In the following code, there is an introductory application about a parking lot.
At first, we are able to add and also list the parked cars, storing their plates in memory. Throughout the book, we will evolve this parking control application by incorporating each newly studied concept.

index.html

<!doctype html>
<!-- Declaring the ng-app -->
<html ng-app="parking">
  <head>
    <title>Parking</title>
    <!-- Importing the angular.js script -->
    <script src="angular.js"></script>
    <script>
      // Creating the module called parking
      var parking = angular.module("parking", []);
      // Registering the parkingCtrl to the parking module
      parking.controller("parkingCtrl", function ($scope) {
        // Binding the cars array to the scope
        $scope.cars = [
          {plate: '6MBV006'},
          {plate: '5BBM299'},
          {plate: '5AOJ230'}
        ];
        // Binding the park function to the scope
        $scope.park = function (car) {
          $scope.cars.push(angular.copy(car));
          delete $scope.car;
        };
      });
    </script>
  </head>
  <!-- Attaching the view to the parkingCtrl -->
  <body ng-controller="parkingCtrl">
    <h3>[Packt] Parking</h3>
    <table>
      <thead>
        <tr>
          <th>Plate</th>
        </tr>
      </thead>
      <tbody>
        <!-- Iterating over the cars -->
        <tr ng-repeat="car in cars">
          <!-- Showing the car's plate -->
          <td>{{car.plate}}</td>
        </tr>
      </tbody>
    </table>
    <!-- Binding the car object, with plate, to the scope -->
    <input type="text" ng-model="car.plate"/>
    <!-- Binding the park function to the click event -->
    <button ng-click="park(car)">Park</button>
  </body>
</html>

The ngController directive was used to bind the parkingCtrl to the view, while ngRepeat iterated over the cars array. Also, we employed expressions like {{car.plate}} to display the plate of the car. Finally, to add new cars, we applied ngModel, which creates a new object called car with a plate property, passing it as a parameter of the park function, called through the ngClick directive. To improve page loading performance, it is recommended to use the minified and obfuscated version of the script, identified as angular.min.js.
Both the minified and regular distributions of the framework can be found on the official site of AngularJS, http://www.angularjs.org, or they can be referenced directly from Google's Content Delivery Network (CDN).

What is a directive?

A directive is an extension of the HTML vocabulary that allows the creation of new behaviors. This technology lets developers create reusable components that can be used within the whole application, and even provide their own custom components. A directive may be applied as an attribute, element, class, or even a comment. Directives are named using camelCase syntax; however, because HTML is case-insensitive, we need to use a lowercase, delimited form in the markup. For the ngModel directive, we can use ng-model, ng:model, ng_model, data-ng-model, or x-ng-model in the HTML markup.

Using AngularJS built-in directives

By default, the framework brings a basic set of directives, such as directives to iterate over an array, execute custom behavior when an element is clicked, or show a given element based on a conditional expression, among many others.

ngBind

This directive is generally applied to a span element and replaces the content of the element with the result of the provided expression. It has the same meaning as the double curly markup, for example, {{expression}}. Why would anyone want to use this directive when a less verbose alternative is available? The reason is that, while the page is being compiled, there is a moment when the raw state of the expressions is shown. Since the directive is defined as an attribute of the element, it is invisible to the user.
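The attribute forms mentioned earlier (ng-model, data-ng-model, x-ng-model, and so on) are all normalized back to a single camelCase directive name. A rough plain-JavaScript sketch of that normalization is shown below — this is an illustrative re-implementation, not AngularJS's actual source:

```javascript
// Sketch of how an attribute name such as "data-ng-model" is
// normalized to the camelCase directive name "ngModel".
// Illustrative only -- not AngularJS's real implementation.
function normalizeDirectiveName(attrName) {
  return attrName
    .toLowerCase()
    .replace(/^(x[-_:]|data[-_:])/, '')       // strip the x- / data- prefixes
    .replace(/[-_:](.)/g, function (m, ch) {  // -, _ or : starts a new word
      return ch.toUpperCase();
    });
}
```

With this sketch, normalizeDirectiveName('data-ng-model'), normalizeDirectiveName('ng:model'), and normalizeDirectiveName('x-ng-model') all resolve to 'ngModel'.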
Here is an example of the ngBind directive usage:

index.html

<!doctype html>
<html ng-app="parking">
  <head>
    <title>[Packt] Parking</title>
    <script src="angular.js"></script>
    <script>
      var parking = angular.module("parking", []);
      parking.controller("parkingCtrl", function ($scope) {
        $scope.appTitle = "[Packt] Parking";
      });
    </script>
  </head>
  <body ng-controller="parkingCtrl">
    <h3 ng-bind="appTitle"></h3>
  </body>
</html>

ngRepeat

The ngRepeat directive is really useful for iterating over arrays and objects. It can be used with any kind of element, such as the rows of a table, the elements of a list, and even the options of a select. We must provide a special repeat expression that describes the array to iterate over and the variable that will hold each item in the iteration. The most basic expression format allows us to iterate over an array, assigning each element to a variable:

variable in array

In the following code, we will iterate over the cars array and assign each element to the car variable:

index.html

<!doctype html>
<html ng-app="parking">
  <head>
    <title>[Packt] Parking</title>
    <script src="angular.js"></script>
    <script>
      var parking = angular.module("parking", []);
      parking.controller("parkingCtrl", function ($scope) {
        $scope.appTitle = "[Packt] Parking";
        $scope.cars = [];
      });
    </script>
  </head>
  <body ng-controller="parkingCtrl">
    <h3 ng-bind="appTitle"></h3>
    <table>
      <thead>
        <tr>
          <th>Plate</th>
          <th>Entrance</th>
        </tr>
      </thead>
      <tbody>
        <tr ng-repeat="car in cars">
          <td><span ng-bind="car.plate"></span></td>
          <td><span ng-bind="car.entrance"></span></td>
        </tr>
      </tbody>
    </table>
  </body>
</html>

ngModel

The ngModel directive attaches the element to a property in the scope, binding the view to the model. In this case, the element can be an input (of any type), select, or textarea.

<input type="text" ng-model="car.plate" placeholder="What's the plate?" />

There is an important piece of advice regarding the use of this directive.
We must pay attention to the purpose of the field that uses the ngModel directive. Whenever the field takes part in the construction of an object, we must declare to which object the property should be attached. In this case, the object being constructed is a car, so we use car.plate inside the directive expression. However, sometimes there is an input field that is just used to change a flag, allowing control of the state of a dialog or another UI component. In these cases, we may use the ngModel directive without any object, as long as it will not be used together with other properties or persisted.

ngClick and other event directives

The ngClick directive is one of the most useful directives in the framework. It allows you to bind any custom behavior to the click event of an element. The following code is an example of the ngClick directive calling a function:

index.html

<!doctype html>
<html ng-app="parking">
  <head>
    <title>[Packt] Parking</title>
    <script src="angular.js"></script>
    <script>
      var parking = angular.module("parking", []);
      parking.controller("parkingCtrl", function ($scope) {
        $scope.appTitle = "[Packt] Parking";
        $scope.cars = [];
        $scope.park = function (car) {
          car.entrance = new Date();
          $scope.cars.push(car);
          delete $scope.car;
        };
      });
    </script>
  </head>
  <body ng-controller="parkingCtrl">
    <h3 ng-bind="appTitle"></h3>
    <table>
      <thead>
        <tr>
          <th>Plate</th>
          <th>Entrance</th>
        </tr>
      </thead>
      <tbody>
        <tr ng-repeat="car in cars">
          <td><span ng-bind="car.plate"></span></td>
          <td><span ng-bind="car.entrance"></span></td>
        </tr>
      </tbody>
    </table>
    <input type="text" ng-model="car.plate" placeholder="What's the plate?" />
    <button ng-click="park(car)">Park</button>
  </body>
</html>

Here there is another pitfall. Inside the ngClick directive, we call the park function, passing the car as a parameter.
Since we have access to the scope through the controller, wouldn't it be easier to access it directly, without passing any parameter at all? Keep in mind that we must take care of the level of coupling between the view and the controller. One way to keep it low is to avoid reading the scope object directly from the controller, instead passing everything the controller needs as parameters from the view. This improves the controller's testability and also makes things clearer and more explicit. Other directives that have the same behavior, but are triggered by other events, are ngBlur, ngChange, ngCopy, ngCut, ngDblClick, ngFocus, ngKeyPress, ngKeyDown, ngKeyUp, ngMousedown, ngMouseenter, ngMouseleave, ngMousemove, ngMouseover, ngMouseup, and ngPaste.

Filters

Filters, together with other technologies such as directives and expressions, are responsible for the extraordinary expressiveness of the framework. They let us easily manipulate and transform any value, not only combined with expressions inside a template, but also injected into other components such as controllers and services. They are really useful when we need to format dates or money according to our current locale, or even to support the filtering feature of a grid component. Filters are the perfect answer for easily performing any data manipulation.

currency

The currency filter is used to format a number based on a currency. The basic usage of this filter is without any parameter:

{{ 10 | currency }}

The result of the evaluation will be the number $10.00, formatted and prefixed with the dollar sign. In order to achieve the correct output, in this case R$10,00 instead of R$10.00, we need to configure the Brazilian (pt-BR) locale, available inside the AngularJS distribution package.
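Locale-dependent currency formatting of this kind can also be seen with the standard Intl.NumberFormat API built into modern JavaScript engines. This is only an analogy to make the behavior concrete — AngularJS 1.x ships its own locale files and does not use Intl:

```javascript
// Approximating the currency filter's locale behavior with the
// standard Intl.NumberFormat API. Analogy only -- Angular 1.x uses
// its own bundled locale files (e.g. angular-locale_pt-br.js).
function currency(amount, locale, currencyCode) {
  return new Intl.NumberFormat(locale, {
    style: 'currency',
    currency: currencyCode
  }).format(amount);
}

currency(10, 'en-US', 'USD');  // "$10.00"
currency(10, 'pt-BR', 'BRL');  // e.g. "R$ 10,00" -- note the decimal comma
```

The same number renders with a decimal point in the en-US locale and a decimal comma in the pt-BR locale, which is exactly the difference the Angular locale file takes care of.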
Inside the distribution package, we can find locales for most countries, and we just need to import the one we need into our application:

<script src="js/lib/angular-locale_pt-br.js"></script>

After importing the locale, we no longer need to pass the currency symbol because it is already wrapped inside. Besides the currency, the locale also defines the configuration of many other variables, such as the days of the week and the months, which is very useful when combined with the next filter, used to format dates.

date

The date filter is one of the most useful filters of the framework. Generally, a date value comes from the database or any other source in a raw, generic format. Filters like this are therefore essential to any kind of application. Basically, we can use this filter by declaring it inside any expression. In the following example, we use the filter on a date variable attached to the scope:

{{ car.entrance | date }}

The output will be Dec 10, 2013. However, there are thousands of combinations that we can make with the optional format mask:

{{ car.entrance | date:'MMMM dd/MM/yyyy HH:mm:ss' }}

Using this format, the output changes to December 10/12/2013 21:42:10.

filter

Have you ever needed to filter a list of data? This filter performs exactly that task, acting over an array and applying any filtering criteria. Now, let's include in our car parking application a field to search for any parked car, and use this filter to do the job.

index.html

<input type="text" ng-model="criteria" placeholder="What are you looking for?" />
<table>
  <thead>
    <tr>
      <th></th>
      <th>Plate</th>
      <th>Color</th>
      <th>Entrance</th>
    </tr>
  </thead>
  <tbody>
    <tr ng-class="{selected: car.selected}" ng-repeat="car in cars | filter:criteria">
      <td><input type="checkbox" ng-model="car.selected" /></td>
      <td>{{car.plate}}</td>
      <td>{{car.color}}</td>
      <td>{{car.entrance | date:'dd/MM/yyyy hh:mm'}}</td>
    </tr>
  </tbody>
</table>

The result is really impressive. With just an input field and the filter declaration, we did the whole job.
Integrating the backend with AJAX

AJAX, also known as Asynchronous JavaScript and XML, is a technology that allows applications to send and retrieve data from the server asynchronously, without refreshing the page. The $http service wraps the low-level interaction with the XMLHttpRequest object, providing an easy way to perform calls. This service can be called by just passing a configuration object, used to set important information such as the method, the URL of the requested resource, the data to be sent, and many others:

$http({method: "GET", url: "/resource"})
  .success(function (data, status, headers, config, statusText) {
  })
  .error(function (data, status, headers, config, statusText) {
  });

To make it easier to use, the following shortcut methods are available for this service. In this case, the configuration object is optional:

$http.get(url, [config])
$http.post(url, data, [config])
$http.put(url, data, [config])
$http.head(url, [config])
$http.delete(url, [config])
$http.jsonp(url, [config])

Now, it's time to integrate our parking application with the backend by calling the cars resource with the GET method. It will retrieve the cars, binding them to the $scope object. In case something goes wrong, we log it to the console:

controllers.js

parking.controller("parkingCtrl", function ($scope, $http) {
  $scope.appTitle = "[Packt] Parking";
  $scope.park = function (car) {
    car.entrance = new Date();
    $scope.cars.push(car);
    delete $scope.car;
  };
  var retrieveCars = function () {
    $http.get("/cars")
      .success(function (data, status, headers, config) {
        $scope.cars = data;
      })
      .error(function (data, status, headers, config) {
        switch (status) {
          case 401: {
            $scope.message = "You must be authenticated!";
            break;
          }
          case 500: {
            $scope.message = "Something went wrong!";
            break;
          }
        }
        console.log(data, status);
      });
  };
  retrieveCars();
});

Summary

This article introduced you to the fundamentals of AngularJS in order to design and construct reusable, maintainable, and modular web applications.

Resources for Article:

Further resources on this subject:
AngularJS Project [article]
Working with Live Data and AngularJS [article]
CreateJS – Performing Animation and Transforming Function [article]
Packt
20 Aug 2014
4 min read

Transforming data in the service

This article, written by Jim Lavin, author of the book AngularJS Services, will cover ways to transform data. Sometimes, you need to return a subset of your data for a directive or controller, or you need to translate your data into another format for use by an external service. This can be handled in several different ways; you can use AngularJS filters, or you could use an external library such as underscore or lodash. (For more resources related to this topic, see here.) How often you need to do such transformations will help you decide which route to take. If you are going to transform data just a few times, it isn't necessary to add another library to your application; however, if you are going to do it often, using a library such as underscore or lodash will be a big help. We are going to limit our discussion to using AngularJS filters to transform our data. Filters are an often-overlooked component in the AngularJS arsenal. Often, developers will end up writing a lot of methods in a controller or service to filter an array of objects that is iterated over in an ngRepeat directive, when a simple filter could easily have been written and applied to the ngRepeat directive, removing the excess code from the service or controller. First, let's look at creating a filter that will reduce your data based on a property of the object, which is one of the simplest filters to create. This filter is designed to be used as an option to the ngRepeat directive to limit the number of items displayed by the directive. The following fermentableType filter expects an array of fermentable objects as the input parameter and a type value to filter on as the arg parameter. If the fermentable's type value matches the arg parameter passed into the filter, it is pushed onto the resultant array, which will in turn cause the object to be included in the set provided to the ngRepeat directive.
angular.module('brew-everywhere').filter('fermentableType', function () {
  return function (input, arg) {
    var result = [];
    angular.forEach(input, function (item) {
      if (item.type === arg) {
        result.push(item);
      }
    });
    return result;
  };
});

To use the filter, you include it in your partial in an ngRepeat directive as follows:

<table class="table table-bordered">
  <thead>
    <tr>
      <th>Name</th>
      <th>Type</th>
      <th>Potential</th>
      <th>SRM</th>
      <th>Amount</th>
      <th>&nbsp;</th>
    </tr>
  </thead>
  <tbody>
    <tr ng-repeat="fermentable in fermentables | fermentableType:'Grain'">
      <td class="col-xs-4">{{fermentable.name}}</td>
      <td class="col-xs-2">{{fermentable.type}}</td>
      <td class="col-xs-2">{{fermentable.potential}}</td>
      <td class="col-xs-2">{{fermentable.color}}</td>
    </tr>
  </tbody>
</table>

The result of calling fermentableType with the value Grain is that only those fermentable objects that have a type property with a value of Grain are displayed. Using filters to reduce an array of objects can be as simple or as complex as you like. The next filter we are going to look at is one that uses an object to reduce the fermentable object array based on properties in the passed-in object. The following filterFermentable filter expects an array of fermentable objects as input and an object that defines the various properties and their required values that are needed to return a matching object. To build the resulting array of objects, you walk through each object and compare each property with those of the object passed in as the arg parameter. If all the properties match, the object is added to the array, which is then returned.

angular.module('brew-everywhere').filter('filterFermentable', function () {
  return function (input, arg) {
    var result = [];
    angular.forEach(input, function (item) {
      var add = true;
      for (var key in arg) {
        if (item.hasOwnProperty(key)) {
          if (item[key] !== arg[key]) {
            add = false;
          }
        }
      }
      if (add) {
        result.push(item);
      }
    });
    return result;
  };
});
Packt
19 Aug 2014
3 min read

The Bootstrap grid system

This article is written by Pieter van der Westhuizen, the author of Bootstrap for ASP.NET MVC. Many websites are reporting an increasing amount of mobile traffic, and this trend is expected to grow over the coming years. The Bootstrap grid system is mobile-first, which means it is designed to target devices with smaller displays and then grow as the display size increases. Fortunately, this is not something you need to be too concerned about, as Bootstrap takes care of most of the heavy lifting. (For more resources related to this topic, see here.)

Bootstrap grid options

Bootstrap 3 introduced a number of predefined grid classes in order to specify the sizes of columns in your design. These class names are listed in the following table:

Class name | Type of device           | Resolution          | Container width | Column width
col-xs-*   | Phones                   | Less than 768 px    | Auto            | Auto
col-sm-*   | Tablets                  | Larger than 768 px  | 750 px          | 60 px
col-md-*   | Desktops                 | Larger than 992 px  | 970 px          | 78 px
col-lg-*   | High-resolution desktops | Larger than 1200 px | 1170 px         | 95 px

The Bootstrap grid is divided into 12 columns. When laying out your web page, keep in mind that all columns combined should total 12. To illustrate this, consider the following HTML code:

<div class="container">
  <div class="row">
    <div class="col-md-3" style="background-color:green;">
      <h3>green</h3>
    </div>
    <div class="col-md-6" style="background-color:red;">
      <h3>red</h3>
    </div>
    <div class="col-md-3" style="background-color:blue;">
      <h3>blue</h3>
    </div>
  </div>
</div>

In the preceding code, we have a <div> element, container, with one child <div> element, row. The row div element in turn has three columns. You will notice that two of the columns have a class name of col-md-3 and one of the columns has a class name of col-md-6. When combined, they add up to 12. The preceding code will work well on all devices with a resolution of 992 pixels or higher.
To preserve the preceding layout on devices with smaller resolutions, you'll need to combine the various CSS grid classes. For example, to allow our layout to work on tablets, phones, and medium-sized desktop displays, change the HTML to the following code:

<div class="container">
  <div class="row">
    <div class="col-xs-3 col-sm-3 col-md-3" style="background-color:green;">
      <h3>green</h3>
    </div>
    <div class="col-xs-6 col-sm-6 col-md-6" style="background-color:red;">
      <h3>red</h3>
    </div>
    <div class="col-xs-3 col-sm-3 col-md-3" style="background-color:blue;">
      <h3>blue</h3>
    </div>
  </div>
</div>

By adding the col-xs-* and col-sm-* class names to the div elements, we'll ensure that our layout appears the same across a wide range of device resolutions.

Bootstrap HTML elements

Bootstrap provides a host of different HTML elements that are styled and ready to use. These elements include the following:
Tables
Buttons
Forms
Images
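The rule that a row's column widths should total 12 per breakpoint can be checked mechanically. The helper below is purely illustrative (not part of Bootstrap); it scans each column's class attribute for the col-*-N width at one breakpoint and verifies the sum:

```javascript
// Illustrative helper (not part of Bootstrap): given the class
// attributes of a row's columns, check that the col-<breakpoint>-N
// widths add up to the full 12-column grid.
function columnsSumTo12(classNames, breakpoint) {
  var re = new RegExp('col-' + breakpoint + '-(\\d+)');
  var total = classNames.reduce(function (sum, cls) {
    var m = cls.match(re);
    return sum + (m ? parseInt(m[1], 10) : 0);
  }, 0);
  return total === 12;
}

columnsSumTo12(['col-md-3', 'col-md-6', 'col-md-3'], 'md');  // → true
```

Running it per breakpoint ('xs', 'sm', 'md', 'lg') against the combined-class markup above confirms the layout stays consistent at every screen size.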
Packt
19 Aug 2014
22 min read

Foundation

In this article by Kevin Horek, author of Learning Zurb Foundation, we will be covering the following points:

How to move away from showing clients wireframes and how to create responsive prototypes
Why these prototypes are better and quicker than doing traditional wireframes
The different versions of Foundation
What does Foundation include?
How to use the documentation
How to migrate from an older version
Getting support when you can't figure something out
What browsers does Foundation support?
How to extend Foundation
Our demo site

(For more resources related to this topic, see here.)

Over the last couple of years, showing wireframes to most clients has not really worked well for me. They never seem to quite get it, and if they do, they never seem to fully understand all the functionality through a wireframe. For some people, it is really hard to picture things in their heads; they need to see exactly what something will look like and how it will function to truly understand what they are looking at. You should still do a rough wireframe, either on paper, on a whiteboard, or on the computer. Then, once you and/or your team are happy with these rough wireframes, jump right into the prototype.

Rough wireframing and prototyping

You might think prototyping this early on, when the client has only seen a sitemap, is crazy, but the thing is, once you master Foundation, you can build prototypes in about the same time you would spend doing traditional high-quality wireframes in Illustrator or whatever program you currently use. With these prototypes, you can make things clickable and interactive, and they are super fast to edit after you get feedback from the client. With the default Foundation components, you can work out how things will work on a phone, a tablet, and a desktop/laptop. This way, you can work with your team to fully understand how things will function and start seeing where the project's potential issues will be.
You can then assign people to start dealing with these potential problems early on in the process. When you are ready to show the client, you can walk them through their project on multiple devices and platforms. You can easily show them what content they are going to need and how that content will flow and reflow based on the medium the user is viewing their project on. You should try to get content as early as possible; a lot of companies are hiring content strategists. These content strategists handle working with the client to get, write, and rework content to fit the responsive web medium. This allows you to design around a client's content, or at least some of it. We all know that what a client says they will get you for content is not always what you get, so you may need to tweak the design to fit the actual content you receive. Making theming changes to accommodate these content changes can be a pain, but with Foundation, you can just reflow part of the page and try some ideas out in the prototype before you put them back into the working development site. Once you have built up a bunch of prototypes, you can easily combine and reuse parts of them to create new designs really fast for current or new projects. When prototyping, you should keep everything grayscale, without custom fonts or any theme beyond the base Foundation one. These prototypes do not have to look pretty; the less they look like a full design, the better off you will be. You will have to inform your client that an actual design for their project will be coming, and that it will be done after they sign off on this prototype. When you show the client, you should bring a phone, a tablet, and a laptop to show them how the project will flow on each of these devices. This takes out all the confusion about what happens to the layouts on different screen sizes and on touch and non-touch devices.
It also allows your client and your team to fully understand what they are building and how everything works and functions. Trying to take a PDF of wireframes and a Photoshop file and piece them together to build a responsive web project can be really challenging. With that approach, so many details can get lost in translation that you have to keep going back to talk to the client or your team about how certain things should work or function. Even worse, you may have to make huge changes to a section close to the end of the project because something was designed without being fully thought through, and now your developers have to scramble to make it work within the budget. Prototyping can sort out all the issues, or at least the major issues, that could arise in the project. With these Foundation prototypes, you keep building on the code at each step of the web-building process. Your designer can work with your frontend/backend team to come up with a prototype that everyone is happy with and commit to being able to build before the client sees anything. If you are familiar with version control, you can use it to keep track of your prototypes and collaborate with another person or a team of people. The two most popular version control applications are Git (http://git-scm.com/) and Subversion (http://subversion.apache.org/). Git is the more popular of the two right now; however, if you are working on a project that has been around for a number of years, chances are that it will be in Subversion. You can migrate from one to the other, but it might take a bit of work. These prototypes keep your team on the same page right from the beginning of the project and allow the client to sign off on functionality and on how the project will work on different mediums.
Yes, you are spending more time at the beginning getting everyone on the same page and figuring out functionality early on, but this process should sort out all the confusion later in the project and save you time and money at the end. When the client has changes that are out of scope, it is easy to refer back to the prototype and show them how that change will impact what they signed off on. If the change is major enough, you will need to provide them with a cost for making that change happen. You should test your prototypes on an iPhone, an Android phone, an iPad, and your desktop or laptop. I would also figure out what browser your client uses and make sure you test on that as well. If they are using an older version of IE, 8 or earlier, you will need to have a conversation with them about the fact that Foundation 4+ does not support IE8. If that support is needed, you will have to come up with a solution to handle this outdated version of IE. Looking at a client's analytics to see which versions of IE their visitors use will help you decide how to handle older versions of IE. Analytics might tell you that you can drop the version altogether. Another great component that is included with Foundation is Modernizr (http://modernizr.com/); this allows you to write conditional JS and/or CSS for a specific situation or browser version. This really can be a lifesaver.

Prototyping smaller projects

While you are learning Foundation, you might think that using Foundation on a smaller project will eat up your entire budget. However, these are the best projects on which to learn Foundation. Basically, you take the prototype to a place where you can show a client the rough look and feel using Foundation. Then, you create a theme board in Photoshop with colors, fonts, photos, and anything else to show the client. This first version will be a grayscale prototype that functions across multiple screen sizes.
Then you can pull up your theme board to show the direction you are thinking of for the look and feel. If you still feel more comfortable doing your designs in Photoshop, there are some really good Photoshop grid templates at http://www.yeedeen.com/downloads/category/30-psd. If you want to create a custom grid that you can take a screenshot of, paste into Photoshop, and then drag your guidelines over to make your own template, you can refer to http://www.gridlover.net/foundation/.

Prototyping wrap-up

These methods are not perfect and may not always work for you, but you're going to see my workflow and how Foundation can be used on all of your web projects. You will figure out what works with your clients, your projects, and your workflow. Also, you might have slightly different workflows based on the type of project and/or the project budget. If a client does not see value in having a responsive site, you should decide whether you want to work with that type of client. The Web is not one standard resolution anymore, and it never will be again; if a client does not understand that, you might want to consider not working with them. These types of clients are usually super hard to work with, and your time is better spent on clients who get it, or who are willing to let you teach them and trust you to build their project for the modern Web. Personally, clients who have fought with me against being responsive usually come back a few months later wondering why their site does not work well on their new smartphone or tablet and wanting me to fix it. So try to address this up front; it will save you grief later on and make your clients happier and their experience better. Like anything, there are exceptions to this, but just make sure you have a contract in place to outline that you are not building the project as responsive, and that it could cause the client a lot of grief and cost later to go back and make it responsive.
No matter what you do for a client, you should have a contract in place; this makes sure you both understand what each party is responsible for. Personally, I like to use a modified version of https://gist.github.com/malarkey/4031110. This contract does not have any legal mumbo jumbo that people do not understand; it is written in plain English and has a slightly less serious tone. Now that we have covered why prototyping with Foundation is faster than doing wireframes or prototypes in Photoshop, let's talk about what comes in the base Foundation framework. Then we will cover which version to install, and then go through each file and folder.

Introducing the framework

Before we get started, please refer to the http://foundation.zurb.com/develop/download.html webpage. You will see that there are four versions of Foundation: complete, essentials, custom, and SCSS. The complete version is the one we will download, but let's talk about the other versions. The essentials is a smaller, barebones version of Foundation that does not include all the components of the framework. Once you are familiar with Foundation, you will likely only include the components that you need for a specific project. By only including the components you need, you can speed up the load time of your project, and you do not make the user download files that are not being used by your project. The custom version allows you to pick the components and basic sizes, colors, radius, and text direction. You will likely use this or the SCSS version of Foundation once you are more comfortable with the framework. The SCSS or Sass version of Foundation is the most powerful version. If you do not know what Sass is, it basically gives you additional features on top of CSS that can speed up how you theme your projects.
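To make the Sass point concrete, here is a minimal sketch of the kind of override an SCSS setup allows: you change a variable once, before importing the framework, and the new value flows through all the generated CSS. The variable names and import path below are illustrative of Foundation-style settings; check the settings file shipped with your version for the real names.

```scss
// Override framework defaults before importing Foundation so the
// variables take effect throughout the generated CSS.
// Variable names are illustrative; your version's settings file
// lists the real ones.
$primary-color: #2ba6cb;
$global-radius: 4px;

@import "foundation";
```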
There is actually another version of Foundation that is not listed on this page, which can be found by hitting the blue Getting Started option in the top-right corner and then clicking on App Guide under Building an App. You can also visit this version at http://foundation.zurb.com/docs/applications.html. This is the Ruby Gem version of Foundation; unless you are building a Ruby on Rails project, you will never use it. Zurb keeps the gem pretty up to date; you will likely get the new version of the gem about a week or two after the other versions come out. Alright, let's get into Foundation. If you have not already, hit the blue Download Everything button below the complete heading on the webpage. We will be building a one-page demo site from the base Foundation theme that you just downloaded. This way, you can see how to take what you are given by default and customize this base theme to look any way you want. We will give this base theme a custom look and feel, and make it look like you are not using a responsive framework at all; the only way to tell will be to view the source of the website. The Zurb components have very little theming applied to them. This means you do not have to worry much about overriding existing CSS; you can just start adding additional CSS to customize these components. Once we have covered how to use all the major components of the framework, you will have an advanced understanding of it and how you can use it on all your projects going forward. Foundation has been used on small-to-large websites, web apps, at startups, with content management systems, and with enterprise-level applications.

Going over the base theme

The base theme that you download is made up of an HTML index file, a folder of CSS files, JavaScript files, and an empty img folder for images, which are explained in the following points: The index.html file has a few Foundation components to get you started.
You have three 12-column grids at three screen sizes: small, medium, and large. You can also control how many columns are in the grid, the spacing (also called the gutter) between the columns, and the other grid options. You will soon notice that you have full control over pretty much anything; you can control how things are rendered on any screen size or device, whether that device is in portrait or landscape. You also have the ability to render different code for different devices and screen sizes. In the CSS folder, there is the un-minified version of Foundation with the filename foundation.css. There is also a minified version with the filename foundation.min.css. If you are not familiar with minification, it has the same code as the foundation.css file; just all the spacing, comments, and code formatting have been taken out. This makes the file really hard to read and edit, but the file size is smaller and will speed up your project's load time. Most of the time, minified files have all the code on one really long line. You should use the foundation.css file as a reference but include the minified one in your project. The minified version makes debugging and error fixing almost impossible, so we use the un-minified version for development and the minified version for production. The last file in that folder is normalize.css; this file could be called a reset file, but it is more of an alternative to one. It is used to set defaults on a bunch of CSS elements and tries to get all browsers to the same starting point. The thinking behind this is that every browser will look and render things the same and, therefore, there should not be a lot of browser-specific theming fixes. These types of files do a pretty good job but are not perfect, and you will have to do little fixes for different browsers, even the modern ones.
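As a flavor of what such a file does, here is a small fragment in the normalize style. It is written for illustration, not copied from normalize.css itself:

```css
/* Browsers ship different defaults for form controls; rules like
   these pull them to a common starting point. */
button,
input,
select,
textarea {
  font-family: inherit; /* form controls inherit the page font */
  font-size: 100%;      /* consistent sizing across browsers */
  margin: 0;            /* browsers disagree on default margins */
}
```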
We will also cover how to use some extra CSS to take resetting certain elements a little further than the normalize file does for you. This will mainly include showing you how to render form elements and buttons the same across browsers and devices. We will also talk about browser version, platform, OS, and screen resolution detection when we talk about testing. We will also be adding our own CSS file to hold our customizations, so if you ever decide to update the framework as a new version comes out, you will not have to worry about overwriting your changes. We will never add to or modify the core files of the framework; I highly recommend you do not do this either. Once we get into Sass, we will cover how you can really start customizing the framework defaults using the custom variables that are built right into Foundation. These variables are one of the reasons that Foundation is the most advanced responsive framework out there. They are super powerful and one of my favorite things about Foundation. Once you understand how to use variables, you can write your own or extend your setup of Foundation as much as you like. In the JS folder, you will find a few files and some folders. In the Foundation folder, you will find each of the JavaScript components that you need to make Foundation work properly across devices and browsers, and responsively. These JavaScript components can also be used to extend Foundation's functionality even further. You can include only the components that you need in your project. This allows you to keep the framework lean and can help with load times; this is especially useful on mobile. You can also use CSS to theme each of these components to render differently on each device or at different screen sizes. The foundation.min.js file is a minified version of all the files in the Foundation folder.
You can decide based on your needs whether you want to include only the JavaScripts you are using on that project or include them all. When you are learning, you should include them all. When you are comfortable with the framework and are ready to make your project live, you should include only the JavaScripts you are actually using. This helps with load times and can make troubleshooting easier. Many of the Foundation components will not work without the JavaScript for that component included. The next file you will notice is jquery.js; it might be either in the root of this folder or in the vendor folder if you are using a newer version of Foundation 5. If you are not familiar with jQuery, it is a JavaScript library that makes DOM manipulation, event handling, animation, and Ajax a lot easier. It also makes all of this work cross-browser and cross-device. The next file in the JS folder, or in the vendor folder under JS, is modernizr.js; this file helps you write conditional JavaScript and/or CSS to make things work cross-browser and to make progressive enhancements. The vendor folder is also where you put third-party JavaScript libraries that you are using on your project: libraries that you either wrote yourself or found online, that are not part of Foundation, and that are not required for Foundation to work properly.
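Putting the files above together, a page from the base theme typically wires up its assets in the following order. This is a sketch: exact paths (for example, whether jquery.js sits in js/ or js/vendor/) depend on your Foundation version, "app.css" is an illustrative name for your own stylesheet, and `$(document).foundation()` is the call that initializes the framework's JavaScript components.

```html
<!-- Stylesheets: the framework first, then your own file so your
     rules win the cascade. Never edit foundation.css itself. -->
<link rel="stylesheet" href="css/foundation.min.css">
<link rel="stylesheet" href="css/app.css">

<!-- Scripts, just before </body>: jQuery must load before
     Foundation's scripts. -->
<script src="js/vendor/jquery.js"></script>
<script src="js/foundation.min.js"></script>
<script>
  // Initialize all included Foundation JavaScript components.
  $(document).foundation();
</script>
```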
You are taken to a page about that specific part, where you can read the section's overview, view code samples and working examples, and learn how to customize that part of the framework. Each section has a pretty good walkthrough of how to use each piece. Zurb is constantly updating Foundation, so you should check the change log every once in a while at http://foundation.zurb.com/docs/changelog.html. If you need documentation on an older version of Foundation, it is at the bottom of the documentation site in the left-hand column. Zurb keeps all the documentation back to Foundation 2. The only reason you will ever need to use Foundation 2 is if you need to support a really, really old version of IE, such as version 7. Foundation never supported IE6, but you will likely never have to worry about that version of IE.

Migrating to a newer version of Foundation

If you have an older version of Foundation, each version has a migration guide. The migration guide from Foundation 4 to 5 can be found at http://foundation.zurb.com/docs/upgrading.html. Personally, I have migrated websites and web apps in multiple languages, and as long as Zurb does not change the grid, as they did from Foundation 3 to 4, we usually just copy the new Foundation CSS, JavaScript, and images over the old ones. You will likely have to change some JavaScript calls, do some testing, and make some minor fixes here and there, but it is usually a pretty smooth process as long as you did not modify the core framework or write a bunch of custom overrides. If you did either of these things, you will be in for a lot of work or a full rebuild of your project, so you should never modify the core. For old versions of Foundation, or if your version has been heavily modified, it might be easier to start with a fresh version of Foundation and copy in the parts that you still want to use. Personally, I have done both, and it really depends on the project.
Before you do any migration, make sure you are using some sort of version control, such as GIT. If you do not know what GIT is, you should look into it; here is a good place to start: http://git-scm.com/book/en/Getting-Started. GIT has saved me from losing code so many times. If GIT is a little overwhelming right now, at the very least duplicate your project folder as a backup and then copy the new version of the framework over your files. If things are really broken, you can at least still use your old version while you work out the kinks in the new version.

Framework support

At some point, you will likely have questions about something in the framework, or will be trying to get something to work and, for some reason, can't figure it out. Foundation has multiple ways to get support, including e-mail, Twitter, GitHub, StackOverflow, and the forums. To visit or get in touch with support, go to http://foundation.zurb.com/support/support.html.

Browser support

Foundation 5 supports the majority of browsers and devices but, like anything modern, it drops support for older browser versions. If you need IE8 or (cringe) IE7 support, you will need to use an older version of Foundation. You can see a full browser and device compatibility list at http://foundation.zurb.com/docs/compatibility.html.

Extending Foundation

Zurb also builds a bunch of other components that usually make their way into Foundation at some point, and that work well with Foundation even though they are not officially part of it. These components range from new JavaScript libraries to fonts, icons, templates, and so on. You can visit their playground at http://zurb.com/playground. This playground also has other great resources and tools that you can use on other projects and in other mediums. The things in Zurb's playground can make designing with Foundation a lot easier, even if you are not a designer.
It can take quite a while to find icons or turn them into SVGs or fonts for use in your projects, but Zurb has provided these in their playground.

Overview of our one-page demo website

The best way to learn the Zurb Foundation responsive framework is to actually build a demo site along with me. You can visit the final demo site we will be building at http://www.learningzurbfoundation.com/demo. We will be taking the base starter theme that we downloaded and making a one-page demo site. The demo site is built to teach you how to use the components and how they work together. You can also add outside components, but you can try those on your own. The demo site will show you how to build a responsive website; it might not look like an ideal site, but I am trying to use as many components as possible to show you how to use the framework. Once you complete this site, you will have a deep understanding of the framework. You can then use this site as a starter theme or, at the very least, as a reference for all your Foundation projects going forward.

Summary

In this article, we covered how to rough wireframe and quickly moved into prototyping. We also covered what is included in the base Foundation theme, explored the documentation and how to migrate between Foundation versions, looked at how to get framework support, started you thinking about browser support, noted that you can extend Foundation beyond its defaults, and quickly covered our one-page demo site.

Resources for Article: Further resources on this subject: Quick start – using Foundation 4 components for your first website [Article] Zurb Foundation – an Overview [Article] Best Practices for Modern Web Applications [Article]
Packt
19 Aug 2014
17 min read

Social Media and Magento

Social networks such as Twitter and Facebook are ever popular and can be a great source of new customers if used correctly on your store. This article by Richard Carter, the author of Learning Magento Theme Development, covers the following topics: integrating a Twitter feed into your Magento store, integrating a Facebook Like Box into your Magento store, including social share buttons in your product pages, and integrating product videos from YouTube into the product page. (For more resources related to this topic, see here.)

Integrating a Twitter feed into your Magento store

If you're active on Twitter, it can be worthwhile to let your customers know. While you can't (yet, anyway!) accept payment for your goods through Twitter, it can be a great way to develop a long-term relationship with your store's customers and increase repeat orders. One way you can tell customers you're active on Twitter is to place a Twitter feed that contains some of your recent tweets on your store's home page. While you need to be careful not to get in the way of your store's true content, such as your most recent products and offers, you could add the Twitter feed in the footer of your website.

Creating your Twitter widget

To embed your tweets, you will need to create a Twitter widget. Log in to your Twitter account, navigate to https://twitter.com/settings/widgets, and follow the instructions given there to create a widget that contains your most recent tweets. This will create a block of code for you that looks similar to the following code:

<a class="twitter-timeline" href="https://twitter.com/RichardCarter" data-widget-id="123456789999999999">Tweets by @RichardCarter</a>
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.
insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script>

Embedding your Twitter feed into a Magento template

Once you have the Twitter widget code to embed, you're ready to embed it into one of Magento's template files. This Twitter feed will be embedded in your store's footer area, so open your theme's /app/design/frontend/default/m18/template/page/html/footer.phtml file and add the highlighted section of the following code:

<div class="footer-about footer-col">
<?php echo $this->getLayout()->createBlock('cms/block')->setBlockId('footer_about')->toHtml(); ?>
<?php
$_helper = Mage::helper('catalog/category');
$_categories = $_helper->getStoreCategories();
if (count($_categories) > 0): ?>
<ul>
<?php foreach($_categories as $_category): ?>
<li>
<a href="<?php echo $_helper->getCategoryUrl($_category) ?>">
<?php echo $_category->getName() ?>
</a>
</li>
<?php endforeach; ?>
</ul>
<?php endif; ?>
<a class="twitter-timeline" href="https://twitter.com/RichardCarter" data-widget-id="123456789999999999">Tweets by @RichardCarter</a>
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script>
</div>

The result of the preceding code is a Twitter feed similar to the following one embedded on your store. As you can see, the Twitter widget is quite cumbersome, so it's wise to be sparing when adding it to your website. Sometimes, a simple Twitter icon that links to your account is all you need!

Integrating a Facebook Like Box into your Magento store

Facebook is one of the world's most popular social networks; with careful integration, you can help drive your customers to your Facebook page and increase long-term interaction. This will drive repeat sales and new potential customers to your store.
One way to integrate your store's Facebook page into your Magento site is to embed your Facebook page's news feed into it.

Getting the embedding code from Facebook

Getting the necessary embedding code from Facebook is relatively easy; navigate to the Facebook Developers website at https://developers.facebook.com/docs/plugins/like-box-for-pages. Here, you are presented with a form. Complete the form to generate your embedding code; enter your Facebook page's URL in the Facebook Page URL field (the following example uses Magento's Facebook page). Click on the Get Code button on the screen to tell Facebook to generate the code you will need, and you will see a pop-up with the code appear as shown in the following screenshot.

Adding the embed code into your Magento templates

Now that you have the embedding code from Facebook, you can alter your templates to include the code snippets. The first block of code, for the JavaScript SDK, is required in the header.phtml file in your theme's directory at /app/design/frontend/default/m18/template/page/html/; add it at the top of the file:

<div id="fb-root"></div>
<script>(function(d, s, id) {
var js, fjs = d.getElementsByTagName(s)[0];
if (d.getElementById(id)) return;
js = d.createElement(s); js.id = id;
js.src = "//connect.facebook.net/en_GB/all.js#xfbml=1";
fjs.parentNode.insertBefore(js, fjs);
}(document, 'script', 'facebook-jssdk'));</script>

Next, you can add the second code snippet provided by the Facebook Developers site where you want the Facebook Like Box to appear in your page. For flexibility, you can create a static block in Magento's CMS tool to contain this code and then use the Magento XML layout to assign the static block to a template's sidebar. Navigate to CMS | Static Blocks in Magento's administration panel and add a new static block by clicking on the Add New Block button at the top-right corner of the screen.
Enter a suitable name for the new static block in the Block Title field and give it the value facebook in the Identifier field. Disable Magento's rich text editor tool by clicking on the Show / Hide Editor button above the Content field. In the Content field, enter the second snippet of code that the Facebook Developers website provided, which will be similar to the following code:

<div class="fb-like-box" data-href="https://www.facebook.com/Magento" data-width="195" data-colorscheme="light" data-show-faces="true" data-header="true" data-stream="false" data-show-border="true"></div>

Once complete, your new block should look like the following screenshot. Click on the Save Block button to create a new block for your Facebook widget. Now that you have created the block, you can alter your Magento theme's layout files to include the block in the right-hand column of your store. Next, open your theme's local.xml file located at /app/design/frontend/default/m18/layout/ and add the following highlighted block of XML to it. This will add the static block that contains the Facebook widget:

<reference name="right">
<block type="cms/block" name="cms_facebook">
<action method="setBlockId"><block_id>facebook</block_id></action>
</block>
<!-- other layout instructions -->
</reference>

If you save this change and refresh your Magento store on a page that uses the right-hand column page layout, you will see your new Facebook widget appear in the right-hand column. This is shown in the following screenshot.

Including social share buttons in your product pages

Particularly if you are selling to consumers rather than other businesses, you can make use of social share buttons in your product pages to help customers share the products they love with their friends on social networks such as Facebook and Twitter. One of the most convenient ways to do this is to use a third-party service such as AddThis, which also allows you to track your most shared content.
This is useful for learning which products are the most shared within your store!

Styling the product page a little further

Before you begin to integrate the share buttons, you can style your product page to provide a little more layout and distinction between the blocks of content. Open your theme's styles.css file (located at /skin/frontend/default/m18/css/) and append the following CSS to provide a column for the product image and a column for the introductory content of the product:

.product-img-box, .product-shop {
float: left;
margin: 1%;
padding: 1%;
width: 46%;
}

You can also add some additional CSS to style some of the elements that appear on the product view page in your Magento store:

.product-name { margin-bottom: 10px; }
.or {
color: #888;
display: block;
margin-top: 10px;
}
.add-to-box {
background: #f2f2f2;
border-radius: 10px;
margin-bottom: 10px;
padding: 10px;
}
.more-views ul {
list-style-type: none;
}

If you refresh a product page on your store, you will see the new layout take effect.

Integrating AddThis

Now that you have styled the product page a little, you can integrate AddThis with your Magento store. You will need to get a code snippet from the AddThis website at http://www.addthis.com/get/sharing.
Your snippet will look similar to the following code:

<div class="addthis_toolbox addthis_default_style">
<a class="addthis_button_facebook_like" fb:like:layout="button_count"></a>
<a class="addthis_button_tweet"></a>
<a class="addthis_button_pinterest_pinit" pi:pinit:layout="horizontal"></a>
<a class="addthis_counter addthis_pill_style"></a>
</div>
<script type="text/javascript">var addthis_config = {"data_track_addressbar":true};</script>
<script type="text/javascript" src="//s7.addthis.com/js/300/addthis_widget.js#pubid=youraddthisusername"></script>

Once this code is included in a page, it produces a social share tool that will look similar to the following screenshot. Copy the product view template view.phtml from /app/design/frontend/base/default/catalog/product/ to /app/design/frontend/default/m18/catalog/product/ and open your theme's view.phtml file for editing. You probably don't want the share buttons to obstruct the page name, add-to-cart area, or the brief description field, so positioning the social share tool underneath those items is usually a good idea.
Locate the snippet in your view.phtml file that has the following code:

<?php if ($_product->getShortDescription()):?>
<div class="short-description">
<h2><?php echo $this->__('Quick Overview') ?></h2>
<div class="std"><?php echo $_helper->productAttribute($_product, nl2br($_product->getShortDescription()), 'short_description') ?></div>
</div>
<?php endif;?>

Below this block, you can insert your AddThis social share tool, highlighted in the following code, so that the code is similar to the following block (the youraddthisusername value on the last line becomes your AddThis account's username):

<?php if ($_product->getShortDescription()):?>
<div class="short-description">
<h2><?php echo $this->__('Quick Overview') ?></h2>
<div class="std"><?php echo $_helper->productAttribute($_product, nl2br($_product->getShortDescription()), 'short_description') ?></div>
</div>
<?php endif;?>
<div class="addthis_toolbox addthis_default_style">
<a class="addthis_button_facebook_like" fb:like:layout="button_count"></a>
<a class="addthis_button_tweet"></a>
<a class="addthis_button_pinterest_pinit" pi:pinit:layout="horizontal"></a>
<a class="addthis_counter addthis_pill_style"></a>
</div>
<script type="text/javascript">var addthis_config = {"data_track_addressbar":true};</script>
<script type="text/javascript" src="//s7.addthis.com/js/300/addthis_widget.js#pubid=youraddthisusername"></script>

If you want to reuse this block in multiple places throughout your store, consider adding it to a static block in Magento and using Magento's XML layout to add the block as required. Once again, refresh the product page on your Magento store and you will see the AddThis toolbar appear as shown in the following screenshot. It allows your customers to begin sharing their favorite products on their preferred social networking sites. If you can't see your changes, don't forget to clear your caches by navigating to System | Cache Management.
If you want to provide some space between other elements and the AddThis toolbar, add the following CSS to your theme's styles.css file:

.addthis_toolbox {
margin: 10px 0;
}

The resulting product page will now look similar to the following screenshot. You have successfully integrated social sharing tools on your Magento store's product page.

Integrating product videos from YouTube into the product page

An increasingly common sight on ecommerce stores is the use of video in addition to product photography. Videos in product pages can help customers overcome any fear that they're not buying the right item, and give them a better chance to see the quality of the product they're buying. You can, of course, simply add the HTML provided by YouTube's embedding tool to your product description. However, if you want to insert your video at a specific place within your product template, you can follow the steps described in this section.

Product attributes in Magento

Magento products are constructed from a number of attributes (different fields), such as product name, description, and price. Magento allows you to customize the attributes assigned to products, so you can add new fields to contain more information on your product. Using this method, you can add a new Video attribute that will contain the video embedding HTML from YouTube and then insert it into your store's product page template. An attribute value is text or other content that relates to the attribute; for example, the attribute value for the Product Name attribute might be Blue T-shirt. Magento allows you to create different types of attribute:

• Text Field: This is used for short lines of text.
• Text Area: This is used for longer blocks of text.
• Date: This is used to allow a date to be specified.
• Yes/No: This is used to allow a Boolean true or false value to be assigned to the attribute.
• Dropdown: This is used to allow just one selection from a list of options to be selected.
• Multiple Select: This is used for a combination box type to allow one or more selections to be made from a list of options provided.
• Price: This is used to allow a value other than the product's price, special price, tier price, and cost. These fields inherit your store's currency settings.
• Fixed Product Tax: This is required in some jurisdictions for certain types of products (for example, those that require an environmental tax to be added).

Creating a new attribute for your video field

Navigate to Catalog | Attributes | Manage Attributes in your Magento store's control panel. From there, click on the Add New Attribute button located near the top-right corner of your screen. In the Attribute Properties panel, enter a value in the Attribute Code field that will be used internally in Magento to refer to this attribute. Remember the value you enter here, as you will require it in the next step! We will use video as the Attribute Code value in this example (this is shown in the following screenshot). You can leave the remaining settings in this panel as they are to allow this newly created attribute to be used with all types of products within your store. In the Frontend Properties panel, ensure that Allow HTML Tags on Frontend is set to Yes (you'll need this enabled to allow you to paste the YouTube embedding HTML into your store and for it to work in the template). This is shown in the following screenshot. Now select the Manage Labels / Options tab in the left-hand column of your screen and enter a value in the Admin and Default Store View fields in the Manage Titles panel. Then, click on the Save Attribute button located near the top-right corner of the screen.
Finally, navigate to Catalog | Attributes | Manage Attribute Sets and select the attribute set you wish to add your new video attribute to (we will use the Default attribute set for this example). In the right-hand column of this screen, you will see the list of Unassigned Attributes, with the newly created video attribute in this list. Drag-and-drop this attribute into the Groups column under the General group, as shown in the following screenshot. Click on the Save Attribute Set button at the top-right corner of the screen to add the new video attribute to the attribute set.

Adding a YouTube video to a product using the new attribute

Once you have added the new attribute to your Magento store, you can add a video to a product. Navigate to Catalog | Manage Products and select a product to edit (ensure that it uses one of the attribute sets you added the new video attribute to). The new Video field will be visible under the General tab. Insert the embedding code from the YouTube video you wish to use on your product page into this field. The embed code will look like the following:

<iframe width="320" height="240" src="https://www.youtube.com/embed/dQw4w9WgXcQ?rel=0" frameborder="0" allowfullscreen></iframe>

Once you have done that, click on the Save button to save the changes to the product.

Inserting the video attribute into your product view template

Your final task is to allow the content of the video attribute to be displayed in your product page templates in Magento.
Open your theme's view.phtml file from /app/design/frontend/default/m18/catalog/product/ and locate the following snippet of code:

<div class="product-img-box">
  <?php echo $this->getChildHtml('media') ?>
</div>

Add the following highlighted code to the preceding code to check whether a video for the product exists and show it if it does exist:

<div class="product-img-box">
  <?php
  $_videoHtml = $_product->getResource()->getAttribute('video')->getFrontend()->getValue($_product);
  if ($_videoHtml) echo $_videoHtml;
  ?>
  <?php echo $this->getChildHtml('media') ?>
</div>

If you now refresh the product page that you have added a video to, you will see that the video appears in the same column as the product image. This is shown in the following screenshot:

Summary

In this article, we looked at expanding the customization of your Magento theme to include elements from social networking sites. We learned about integrating a Twitter feed and a Facebook feed into your Magento store, including social share buttons in your product pages, and integrating product videos from YouTube.

Resources for Article:

Further resources on this subject:
Optimizing Magento Performance — Using HHVM [article]
Installing Magento [article]
Magento Fundamentals for Developers [article]

Packt
19 Aug 2014
14 min read

AngularJS Project

This article by Jonathan Spratley, the author of the book Learning Yeoman, covers the steps to create an AngularJS project and preview the application. (For more resources related to this topic, see here.)

Anatomy of an Angular project

Generally, in a single-page application (SPA), you create modules that contain a set of functionality, such as a view to display data, a model to store data, and a controller to manage the relationship between the two. Angular incorporates the basic principles of the MVC pattern into how it builds client-side web applications. The major Angular concepts are as follows:

•	Templates: A template is used to write plain HTML with the use of directives and JavaScript expressions
•	Directives: A directive is a reusable component that extends HTML with custom attributes and elements
•	Models: A model is the data that is displayed to the user and manipulated by the user
•	Scopes: A scope is the context in which the model is stored and made available to controllers, directives, and expressions
•	Expressions: An expression allows access to variables and functions defined on the scope
•	Filters: A filter formats data from an expression for visual display to the user
•	Views: A view is the visual representation of a model displayed to the user, also known as the Document Object Model (DOM)
•	Controllers: A controller is the business logic that manages the view
•	Injector: The injector is the dependency injection container that handles all dependencies
•	Modules: A module is what configures the injector by specifying what dependencies the module needs
•	Services: A service is a piece of reusable business logic that is independent of views
•	Compiler: The compiler handles parsing templates and instantiating directives and expressions
•	Data binding: Data binding handles keeping model data in sync with the view

Why Angular?
AngularJS is an open source JavaScript framework known as the Superheroic JavaScript MVC Framework, which is actively maintained by the folks over at Google. Angular attempts to minimize the effort of creating web applications by teaching the browser new tricks. This enables developers to use declarative markup (known as directives or expressions) to attach custom logic to DOM elements. Angular includes many built-in features that allow easy implementation of the following:

•	Two-way data binding in views using double mustaches {{ }}
•	DOM control for repeating, showing, or hiding DOM fragments
•	Form submission and validation handling
•	Reusable HTML components with self-contained logic
•	Access to RESTful and JSONP API services

The major benefit of Angular is the ability to create individual modules that handle specific responsibilities, which come in the form of directives, filters, or services. This enables developers to leverage the functionality of custom modules by passing in the name of the module in the dependencies.

Creating a new Angular project

Now it is time to build a web application that uses some of Angular's features. The application that we will be creating will be based on the scaffold files created by the Angular generator; we will add functionality that enables CRUD operations on a database.

Installing generator-angular

To install the Yeoman Angular generator, execute the following command:

$ npm install -g generator-angular

For Karma testing, generator-karma also needs to be installed.

Scaffolding the application

To scaffold a new AngularJS application, create a new folder named learning-yeoman-ch3 and then open a terminal in that location.
Then, execute the following command:

$ yo angular --coffee

This command will invoke the AngularJS generator to scaffold an AngularJS application, and the output should look similar to the following screenshot:

Understanding the directory structure

Take a minute to become familiar with the directory structure of an Angular application created by the Yeoman generator:

app: This folder contains all of the front-end code, HTML, JS, CSS, images, and dependencies:
  images: This folder contains images for the application
  scripts: This folder contains the AngularJS codebase and business logic:
    app.coffee: This contains the application module definition and routing
    controllers: Custom controllers go here:
      main.coffee: This is the main controller created by default
    directives: Custom directives go here
    filters: Custom filters go here
    services: Reusable application services go here
  styles: This contains all CSS/LESS/SASS files:
    main.css: This is the main style sheet created by default
  views: This contains the HTML templates used in the application:
    main.html: This is the main view created by default
  index.html: This is the application's entry point
bower_components: This folder contains client-side dependencies
node_modules: This contains all project dependencies as node modules
test: This contains all the tests for the application:
  spec: This contains unit tests mirroring the structure of the app/scripts folder
  karma.conf.coffee: This file contains the Karma runner configuration
Gruntfile.js: This file contains all project tasks
package.json: This file contains project information and dependencies
bower.json: This file contains frontend dependency settings

The directories (directives, filters, and services) get created when the subgenerator is invoked.

Configuring the application

Let's go ahead and create a configuration file that will allow us to store application-wide properties; we will use the Angular value service to reference the configuration object.
Open up a terminal and execute the following command:

$ yo angular:value Config

This command will create a configuration service located in the app/scripts/services directory. This service will store global properties for the application. For more information on Angular services, visit http://goo.gl/Q3f6AZ.

Now, let's add some settings to the file that we will use throughout the application. Open the app/scripts/services/config.coffee file and replace its contents with the following code:

'use strict'

angular.module('learningYeomanCh3App').value('Config', Config =
  baseurl: document.location.origin
  sitetitle: 'learning yeoman'
  sitedesc: 'The tutorial for Chapter 3'
  sitecopy: '2014 Copyright'
  version: '1.0.0'
  email: 'jonniespratley@gmail.com'
  debug: true
  feature:
    title: 'Chapter 3'
    body: 'A starting point for a modern angular.js application.'
    image: 'http://goo.gl/YHBZjc'
  features: [
    title: 'yo'
    body: 'yo scaffolds out a new application.'
    image: 'http://goo.gl/g6LO99'
  ,
    title: 'Bower'
    body: 'Bower is used for dependency management.'
    image: 'http://goo.gl/GpxBAx'
  ,
    title: 'Grunt'
    body: 'Grunt is used to build, preview and test your project.'
    image: 'http://goo.gl/9M00hx'
  ]
  session:
    authorized: false
    user: null
  layout:
    header: 'views/_header.html'
    content: 'views/_content.html'
    footer: 'views/_footer.html'
  menu: [
    title: 'Home', href: '/'
  ,
    title: 'About', href: '/about'
  ,
    title: 'Posts', href: '/posts'
  ]
)

The preceding code does the following:

•	It creates a new Config value service on the learningYeomanCh3App module
•	The baseurl property is set to the location where the document originated from
•	The sitetitle, sitedesc, sitecopy, and version properties are set to default values that will be displayed throughout the application
•	The feature property is an object that contains some defaults for displaying a feature on the main page
•	The features property is an array of feature objects that will display on the main page as well
•	The session property is defined with authorized set to false and user set to null; this value gets set to the current authenticated user
•	The layout property is an object that defines the paths of view templates, which will be used for the corresponding keys
•	The menu property is an array that contains the different pages of the application

Usually, a generic configuration file is created at the top level of the scripts folder for easier access.

Creating the application definition

During the initial scaffold of the application, an app.coffee file is created by Yeoman, located in the app/scripts directory. The scripts/app.coffee file is the definition of the application: the first argument is the name of the module, and the second argument is an array of dependencies, which come in the form of Angular modules and will be injected into the application upon page load.

The app.coffee file is the main entry point of the application and does the following:

•	Initializes the application module with dependencies
•	Configures the application's router

Any module dependencies that are declared inside the dependencies array are the Angular modules that were selected during the initial scaffold.
Consider the following code:

'use strict'

angular.module('learningYeomanCh3App', [
  'ngCookies',
  'ngResource',
  'ngSanitize',
  'ngRoute'
])
.config ($routeProvider) ->
  $routeProvider
    .when '/',
      templateUrl: 'views/main.html'
      controller: 'MainCtrl'
    .otherwise
      redirectTo: '/'

The preceding code does the following:

•	It defines an Angular module named learningYeomanCh3App with dependencies on the ngCookies, ngSanitize, ngResource, and ngRoute modules
•	The .config function on the module configures the application's routes by passing route options to the $routeProvider service

Bower downloaded and installed these modules during the initial scaffold.

Creating the application controller

Generally, when creating an Angular application, you should define a top-level controller that uses the $rootScope service to configure some global application-wide properties or methods. To create a new controller, use the following command:

$ yo angular:controller app

This command will create a new AppCtrl controller located in the app/scripts/controllers directory. Open the newly created controller file and replace its contents with the following code:

'use strict'

angular.module('learningYeomanCh3App')
  .controller('AppCtrl', ($rootScope, $cookieStore, Config) ->
    $rootScope.name = 'AppCtrl'
    App = angular.copy(Config)
    App.session = $cookieStore.get('App.session')
    window.App = $rootScope.App = App)

The preceding code does the following:

•	It creates a new AppCtrl controller with dependencies on the $rootScope and $cookieStore services and the Config value service
•	Inside the controller definition, an App variable is copied from the Config value service
•	The session property is set to the App.session cookie, if available

Creating the application views

The Angular generator will create the application's index.html view, which acts as the container for the entire application. The index view is used as the shell for the other views of the application; the router handles mapping URLs to views, which then get injected into the element that declares the ng-view directive.
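At its heart, the $routeProvider configuration above is a lookup from URL paths to route definitions with a fallback rule. The following plain JavaScript sketch is illustrative only; the routes table and the resolveRoute function are invented names, not part of Angular's API:

```javascript
// Illustrative route table mirroring the app.coffee configuration above.
var routes = {
  '/': { templateUrl: 'views/main.html', controller: 'MainCtrl' }
};

// Fallback applied when no route matches, like the .otherwise rule.
var otherwise = { redirectTo: '/' };

// Resolve a path to its route definition, or to the fallback rule.
function resolveRoute(path) {
  return Object.prototype.hasOwnProperty.call(routes, path)
    ? routes[path]
    : otherwise;
}
```

Angular performs a resolution of this kind on every location change and then injects the matched template into the element carrying the ng-view directive.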
Modifying the application's index.html

Let's modify the default view that was created by the generator. Open the app/index.html file and add the content right below the following HTML comment. The structure of the application will consist of an article element that contains a header, a content section, and a footer:

<article id="app" ng-controller="AppCtrl" class="container">
  <header id="header" ng-include="App.layout.header"></header>
  <section id="content" class="view-animate-container">
    <div class="view-animate" ng-view=""></div>
  </section>
  <footer id="footer" ng-include="App.layout.footer"></footer>
</article>

In the preceding code:

•	The article element declares the ng-controller directive to the AppCtrl controller
•	The header element uses an ng-include directive that specifies what template to load, in this case, the header property on the App.layout object
•	The div element has the view-animate-container class that will allow the use of CSS transitions
•	The ng-view attribute directive will inject the current route's view template into the content
•	The footer element uses an ng-include directive to load the footer specified on the App.layout.footer property

Use ng-include to load partials, which allows you to easily swap out templates.

Creating Angular partials

Use the yo angular:view command to create view partials that will be included in the application's main layout. So far, we need to create three partials that the index view (app/index.html) will be consuming from the App.layout property on the $rootScope service that defines the location of the templates.

Names of view partials typically begin with an underscore (_).

Creating the application's header

The header partial will contain the site title and navigation of the application. Open a terminal and execute the following command:

$ yo angular:view _header

This command creates a new view template file in the app/views directory.
Open the app/views/_header.html file and add the following contents:

<div class="header">
  <ul class="nav nav-pills pull-right">
    <li ng-repeat="item in App.menu" ng-class="{'active': App.location.path() === item.href}">
      <a ng-href="#{{item.href}}">{{item.title}}</a>
    </li>
  </ul>
  <h3 class="text-muted">{{ App.sitetitle }}</h3>
</div>

The preceding code does the following:

•	It uses the {{ }} data binding syntax to display App.sitetitle in a heading element
•	The ng-repeat directive is used to repeat each item in the App.menu array defined on $rootScope

Creating the application's footer

The footer partial will contain the copyright message and current version of the application. Open the terminal and execute the following command:

$ yo angular:view _footer

This command creates a view template file in the app/views directory. Open the app/views/_footer.html file and add the following markup:

<div class="app-footer container clearfix">
  <span class="app-sitecopy pull-left">{{ App.sitecopy }}</span>
  <span class="app-version pull-right">{{ App.version }}</span>
</div>

The preceding code does the following:

•	It uses a div element to wrap two span elements
•	The first span element contains data binding syntax referencing App.sitecopy to display the application's copyright message
•	The second span element also contains data binding syntax referencing App.version to display the application's version

Customizing the main view

The Angular generator creates the main view during the initial scaffold.
Open the app/views/main.html file and replace its contents with the following markup:

<div class="jumbotron">
  <h1>{{ App.feature.title }}</h1>
  <img ng-src="{{ App.feature.image }}"/>
  <p class="lead">{{ App.feature.body }}</p>
</div>

<div class="marketing">
  <ul class="media-list">
    <li class="media feature" ng-repeat="item in App.features">
      <a class="pull-left" href="#">
        <img alt="{{ item.title }}" src="http://placehold.it/80x80" ng-src="{{ item.image }}" class="media-object"/>
      </a>
      <div class="media-body">
        <h4 class="media-heading">{{item.title}}</h4>
        <p>{{ item.body }}</p>
      </div>
    </li>
  </ul>
</div>

The preceding code does the following:

•	At the top of the view, we use the {{ }} data binding syntax to display the title and body properties declared on the App.feature object
•	Next, inside the div.marketing element, a list item is declared with the ng-repeat directive to loop over each item in the App.features property
•	Then, using the {{ }} data binding syntax wrapped around the title and body properties of the item being repeated, we output the values

Previewing the application

To preview the application, execute the following command:

$ grunt serve

Your browser should open, displaying something similar to the following screenshot:

Download the AngularJS Batarang (http://goo.gl/0b2GhK) developer tool extension for Google Chrome for debugging.

Summary

In this article, we learned the concepts of AngularJS and how to leverage the framework in a new or existing project.

Resources for Article:

Further resources on this subject:
Best Practices for Modern Web Applications [article]
Spring Roo 1.1: Working with Roo-generated Web Applications [article]
Understand and Use Microsoft Silverlight with JavaScript [article]
Packt
18 Aug 2014
5 min read

Shapefiles in Leaflet

This article, written by Paul Crickard III, the author of Leaflet.js Essentials, describes the use of shapefiles in Leaflet. It shows us how a shapefile can be used to create geographical features on a map, and explains how shapefiles can be used to add pop ups or to apply styling. (For more resources related to this topic, see here.)

Using shapefiles in Leaflet

A shapefile is the most common geographic file type that you will most likely encounter. A shapefile is not a single file, but rather several files used to create geographic features on a map. When you download a shapefile, you will have .shp, .shx, and .dbf files at a minimum. These are the files that contain the geometry, the index, and a database of attributes. Your shapefile will most likely include a projection file (.prj) that tells the application the projection of the data so that the coordinates make sense to it. In the examples, you will also have a .shp.xml file that contains metadata and two spatial index files, .sbn and .sbx.

To find shapefiles, you can usually search for open data and a city name. In this example, we will be using a shapefile from ABQ Data, the City of Albuquerque data portal. You can find more data at http://www.cabq.gov/abq-data. When you download a shapefile, it will most likely be in the ZIP format because it will contain multiple files.

To open a shapefile in Leaflet using the leaflet-shpfile plugin, follow these steps:

First, add references to two JavaScript files. The first, leaflet-shpfile, is the plugin, and the second is the shapefile parser it depends on, shp.js:

<script src="leaflet.shpfile.js"></script>
<script src="shp.js"></script>

Next, create a new shapefile layer and add it to the map.
Pass the layer path to the zipped shapefile:

var shpfile = new L.Shapefile('council.zip');
shpfile.addTo(map);

Your map should display the shapefile as shown in the following screenshot:

Performing the preceding steps will add the shapefile to the map, but you will not be able to see any individual feature properties. When you create a shapefile layer, you specify the data first, followed by the options. The options are passed to the L.geoJson class. The following code shows you how to add a pop up to your shapefile layer:

var shpfile = new L.Shapefile('council.zip', {
  onEachFeature: function (feature, layer) {
    layer.bindPopup("<a href='" + feature.properties.WEBPAGE + "'>Page</a><br><a href='" + feature.properties.PICTURE + "'>Image</a>");
  }
});

In the preceding code, you pass council.zip to the shapefile, and for the options, you use the onEachFeature option, which takes a function. In this case, you use an anonymous function and bind the pop up to the layer. In the text of the pop up, you concatenate your HTML with the name of the property you want to display using the format feature.properties.NAME-OF-PROPERTY. To find the names of the properties in a shapefile, you can open the .dbf file and look at the column headers. However, this can be cumbersome, and you may want to add all of the shapefiles in a directory without knowing their contents. If you do not know the names of the properties for a given shapefile, the following example shows you how to get them and then display them with their values in a pop up:

var shpfile = new L.Shapefile('council.zip', {
  onEachFeature: function (feature, layer) {
    var holder = [];
    for (var key in feature.properties) {
      holder.push(key + ": " + feature.properties[key] + "<br>");
    }
    var popupContent = holder.join("");
    layer.bindPopup(popupContent);
  }
});
shpfile.addTo(map);

In the preceding code, you first create an array to hold all of the lines in your pop up, one for each key/value pair. Next, you run a for loop that iterates through the object, grabbing each key and concatenating the key name with the value and a line break.
You push each line into the array and then join all of the elements into a single string. By default, the .join() method separates each element of the array with a comma in the resulting string; passing empty quotes removes the comma. Lastly, you bind the pop up with the string as its content and then add the shapefile to the map. You now have a map that looks like the following screenshot:

The shapefile also takes a style option. You can pass any of the path class options, such as the color, opacity, or stroke, to change the appearance of the layer. The following code creates a red polygon with a black outline and makes it slightly transparent:

var shpfile = new L.Shapefile('council.zip', {
  style: function (feature) {
    return { color: "black", fillColor: "red", fillOpacity: 0.75 };
  }
});

Summary

In this article, we learned how shapefiles can be added to a geographical map, how pop ups are added to them, and how these pop ups look once added to the map. You will also learn how to connect to an ESRI server that has an exposed REST service.

Resources for Article:

Further resources on this subject:
Getting started with Leaflet [Article]
Using JavaScript Effects with Joomla! [Article]
Quick start [Article]

Packt
14 Aug 2014
10 min read

Additional SOA Patterns – Supporting Composition Controllers

In this article by Sergey Popov, author of the book Applied SOA Patterns on the Oracle Platform, we will learn some complex SOA patterns, realized on two very interesting Oracle products: Coherence and Oracle Event Processing. (For more resources related to this topic, see here.)

We have to admit that for SOA Suite developers and architects (especially from the old BPEL school), the Oracle Event Processing platform can be a bit outlandish. This could be the reason why some people oppose service-oriented and event-driven architecture, or see them as different architectural approaches. The situation is aggravated by the abundance of acronyms flying around, such as EDA, EPN, EDN, CEP, and so on. Even here, we use EPN and EDN interchangeably, as Oracle calls it event processing, and generically, it is used in an event delivery network.

The main argument used for distinguishing SOA and EDN is that SOA relies on the application of the standardized contract principle, whereas EDN has to deal with all types of events. This is true, and we have mentioned this fact before. We also mentioned that we have to declare all the event parameters in the form of key-value pairs with their types in <event-type-repository>. We also mentioned that the reference to the event type from the event type repository is not mandatory for a standard EPN adapter, but it's essential when you are implementing a custom inbound adapter in the EPN framework, which is an extremely powerful Java-based feature. As long as it's Java, you can do practically everything!
Just follow the programming flow explained in the Oracle documentation; see the EP Input Adapter Implementation section:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import com.bea.wlevs.ede.api.EventProperty;
import com.bea.wlevs.ede.api.EventRejectedException;
import com.bea.wlevs.ede.api.EventType;
import com.bea.wlevs.ede.api.EventTypeRepository;
import com.bea.wlevs.ede.api.RunnableBean;
import com.bea.wlevs.ede.api.StreamSender;
import com.bea.wlevs.ede.api.StreamSink;
import com.bea.wlevs.ede.api.StreamSource;
import com.bea.wlevs.util.Service;
import java.lang.RuntimeException;

public class cargoBookingAdapter implements RunnableBean, StreamSource, StreamSink {
    static final Log v_logger = LogFactory.getLog("cargoBookingAdapter");
    private String v_eventTypeName;
    private EventType v_eventType;
    private StreamSender v_eventSender;
    private EventTypeRepository v_EvtRep = null;

    public cargoBookingAdapter() {
        super();
    }

    /**
     * Called by the server to pass in the name of the event
     * type to which event data should be bound.
     */
    public void setEventType(String v_EvType) {
        v_eventTypeName = v_EvType;
    }

    /**
     * Called by the server to set an event type repository
     * instance that knows about the event types configured
     * for this application.
     *
     * This repository instance will be used to retrieve an
     * event type instance that will be populated with event
     * data retrieved from the event data file.
     * @param etr The event repository.
     */
    @Service(filter = EventTypeRepository.SERVICE_FILTER)
    public void setEventTypeRepository(EventTypeRepository etr) {
        v_EvtRep = etr;
    }

    /**
     * Executes to retrieve raw event data and create event
     * type instances from it, then sends the events to the
     * next stage in the EPN.
     * This method, implemented from the RunnableBean
     * interface, executes when this adapter instance is active.
     */
    public void run() {
        if (v_EvtRep == null) {
            throw new RuntimeException("EventTypeRepository is not set");
        }
        // Get the event type from the repository by using the
        // event type name specified as a property of this
        // adapter in the EPN assembly file.
        v_eventType = v_EvtRep.getEventType(v_eventTypeName);
        if (v_eventType == null) {
            throw new RuntimeException("EventType(" + v_eventType + ") is not found.");
        }
        /*
         * Actual adapter implementation:
         *
         * 1. Create an object and assign to it an event type
         *    instance generated from event data retrieved by
         *    the reader
         *
         * 2. Send the newly created event type instance to a
         *    downstream stage that is listening to this adapter
         */
    }
}

The presented code snippet demonstrates the injection of a dependency into the Adapter class using the setEventTypeRepository method, implanting the event type definition that is specified in the adapter's configuration.

So, it appears that we, in fact, have the data format and model declarations in an XML form for the event, and we put some effort into adapting the inbound flows to our underlying component. Thus, the Adapter Framework is essential in EDN, and dependency injection can be seen here as a form of dynamic Data Model/Format Transformation of the object's data. Going further, just following the SOA reusability principle, a single adapter can be used in multiple event-processing networks, and for that, we can employ the Adapter Factory pattern discussed earlier (although it's not an official SOA pattern, remember?). For that, we will need the Adapter Factory class and the registration of this factory in the EPN assembly file with a dedicated provider name, which we will use further in applications employing an instance of this adapter.
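The Adapter Factory idea just described can be sketched in a few lines. The real EPN contract is the Java AdapterFactory interface registered as an OSGi service; the registerFactory/createAdapter functions and the provider name below are purely illustrative:

```javascript
// Registry of adapter factories, keyed by provider name.
var factories = {};

// Register a factory exactly once under a dedicated provider name,
// mirroring the register-only-once OSGi service rule.
function registerFactory(providerName, factoryFn) {
  if (factories[providerName]) {
    throw new Error('Factory already registered: ' + providerName);
  }
  factories[providerName] = factoryFn;
}

// Create a new adapter instance through the registered factory, so one
// adapter implementation can serve many event-processing networks.
function createAdapter(providerName, config) {
  var factoryFn = factories[providerName];
  if (!factoryFn) {
    throw new Error('Unknown provider: ' + providerName);
  }
  return factoryFn(config);
}

registerFactory('cargoBookingAdapterProvider', function (config) {
  return { eventTypeName: config.eventTypeName, running: false };
});

var adapter = createAdapter('cargoBookingAdapterProvider',
  { eventTypeName: 'CargoBookingEvent' });
```

Each application that names the provider gets its own adapter instance, while the factory itself stays registered only once.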
You must follow the OSGi service registry rules if you want to specify additional service properties in the <osgi:service interface="com.bea.wlevs.ede.api.AdapterFactory"> section, and register it only once as an OSGi service.

We also use Asynchronous Queuing and persistent storage to provide reliable delivery of event aggregations to event subscribers, as we demonstrated in the previous paragraph. Talking about aggregation on our CQL processors, we have practically unlimited possibilities to merge and correlate various event sources, such as streams:

<query id="cargoQ1"><![CDATA[
  select * from CargoBookingStream, VoyPortCallStream
  where CargoBookingStream.POL_CODE = VoyPortCallStream.PORT_CODE
  and VoyPortCallStream.PORT_CALL_PURPOSE = "LOAD"
]]></query>

Here, we employ Intermediate Routing (content-based routing) to scale and balance our event processors and also to achieve a desirable level of high availability. Combined together, all these basic SOA patterns are represented in the Event-Driven Network, which has Event-Driven Messaging as one of its forms.

Simply put, the entire EDN has one main purpose: effective decoupling of event (message) providers and consumers (the Loose Coupling principle) with reliable event identification and delivery capabilities. So, what is it really? It is a subset of the Enterprise Service Bus compound SOA pattern, and yes, it is a form of an extended Publish-Subscribe pattern.

Some may say that CQL processors (or bean processors) are not completely aligned with the classic ESB pattern. Well, you will not find OSB XQuery in the canonical ESB patterns catalog either; it's just a tool that supports ESB VETRO operations in this matter. In ESB, we can also call Java beans when it's necessary for message processing; for instance, doing complex sorts in Java Collections is far easier than in XML/XSLT, and it is worth the serialization/deserialization efforts.
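To make the semantics of the cargoQ1 query concrete, here is a rough JavaScript analogy that joins two in-memory arrays standing in for the streams. CQL actually evaluates continuously over live streams, and the sample records below are invented for illustration; only the field names follow the query:

```javascript
// Sample "stream" contents; field names follow the cargoQ1 query.
var cargoBookings = [
  { BOOKING_ID: 1, POL_CODE: 'NLRTM' },
  { BOOKING_ID: 2, POL_CODE: 'USNYC' }
];
var voyPortCalls = [
  { PORT_CODE: 'NLRTM', PORT_CALL_PURPOSE: 'LOAD' },
  { PORT_CODE: 'USNYC', PORT_CALL_PURPOSE: 'DISCHARGE' }
];

// Equi-join the two "streams" on port code, keeping only LOAD
// port calls, as the where clause of cargoQ1 does.
function joinLoadCalls(bookings, portCalls) {
  var out = [];
  bookings.forEach(function (b) {
    portCalls.forEach(function (p) {
      if (b.POL_CODE === p.PORT_CODE && p.PORT_CALL_PURPOSE === 'LOAD') {
        out.push({ BOOKING_ID: b.BOOKING_ID, PORT_CODE: p.PORT_CODE });
      }
    });
  });
  return out;
}

var matches = joinLoadCalls(cargoBookings, voyPortCalls);
```

Here only the first booking survives the join, because the second booking's port call has a DISCHARGE purpose.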
In a similar way, EDN extends the classic ESB by providing the following functionalities:

•	Continuous Query Language
•	The ability to operate on multiple streams of disparate data
•	The ability to join incoming data with persisted data
•	The ability to plug in any type of adapter

Combined together, all these features can cover almost any range of practical challenges, and the logistics example we used here in this article is probably too insignificant for such a powerful event-driven platform; however, for a more insightful look at Oracle CEP, refer to Getting Started with Oracle Event Processing 11g, Alexandre Alves, Robin J. Smith, Lloyd Williams, Packt Publishing. Using exactly the same principles and patterns, you can employ the already existing tools in your arsenal. The world is apparently bigger, and this tool can demonstrate all its strength in the following use cases:

•	As already mentioned, Cablecom Enterprise strives to improve the overall customer experience (not only for VOD). It does so by gathering and aggregating information about user preferences through purchasing history, watch lists, channel switching, activity in social networks, search history and the meta tags used in searches, the experiences of other users from the same target group, upcoming related public events (shows, performances, or premieres), and even the duration of the cursor's position over certain elements of corporate web portals. The task is complex and comprises many activities, including meta tag updates in metadata storage that depend on new findings for predicting trends, and so on; however, here we can tolerate (to some extent) events that aren't processed or are not received.

•	For bank transaction monitoring, we do not have such a luxury. All online events must be accounted for and processed with the maximum speed possible.
If the last transaction with your credit card was an ATM cash withdrawal at Bond Street in London, and 5 minutes later the same card is used to purchase expensive jewellery online with a peculiar delivery address, then someone should flag the card with a possible fraud case and contact the card holder. This is the simplest example that we can provide. When it comes to tracking money laundering cases in our borderless world, a decision-parsing tree like the one in the very first figure in this article, based on all possible correlated events, would require all the pages of this book, and you would need a strong magnifying glass to read it; the stratagem of the web of nodes and links would drive even the most worldly-wise spider crazy.

For these use cases, Oracle EPN is simply compulsory, with some spice such as Coherence for cache management and adequate hardware. It would be prudent to avoid implementing homebrewed solutions (without dozens of years of relevant experience), and following the SOA design patterns is essential.

Let's now assemble all that we discussed in the preceding paragraphs in one final figure. Installation routines will not give you any trouble; just download and install OEPE 3.5, install the CEP components for Eclipse, and you are done with the client/dev environment. The installation of the server should not pose many difficulties either (http://docs.oracle.com/cd/E28280_01/doc.1111/e14476/install.htm#CEPGS472). When the server is up and running, you can register it in Eclipse (1). The graphical interface will support you in assembling event-handling applications from adapters, processor channels, and event beans; however, knowledge of the internal organization of the XML config and application assembly files (as demonstrated in the earlier code snippets) is always beneficial.
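At its core, the card-fraud scenario above is a correlation of events sharing a key (the card) within a time window. A minimal JavaScript sketch of the idea (purely illustrative; the event fields and thresholds are invented, and this is not OEP/CQL code):

```javascript
// Flag a card when two of its events fall inside the time window but at
// locations that are implausibly far apart for that window.
const WINDOW_MS = 5 * 60 * 1000;  // 5-minute correlation window
const MAX_PLAUSIBLE_KM = 50;      // farthest plausible travel inside the window

function detectImplausibleTravel(events, distanceKm) {
  const flagged = new Set();
  // Group events by card
  const byCard = new Map();
  for (const e of events) {
    if (!byCard.has(e.card)) byCard.set(e.card, []);
    byCard.get(e.card).push(e);
  }
  // Compare each consecutive pair of events per card
  for (const [card, list] of byCard) {
    list.sort((a, b) => a.time - b.time);
    for (let i = 1; i < list.length; i++) {
      const withinWindow = list[i].time - list[i - 1].time <= WINDOW_MS;
      if (withinWindow && distanceKm(list[i - 1], list[i]) > MAX_PLAUSIBLE_KM) {
        flagged.add(card);
      }
    }
  }
  return [...flagged];
}

// ATM withdrawal in London, then an online purchase "from" Paris 4 minutes later
const events = [
  { card: 'card-1', time: 0, city: 'London' },
  { card: 'card-1', time: 4 * 60 * 1000, city: 'Paris' },
];
const distances = { 'London|Paris': 344 };
const dist = (a, b) => distances[a.city + '|' + b.city] || 0;

console.log(detectImplausibleTravel(events, dist)); // [ 'card-1' ]
```

A real CEP engine evaluates such windows continuously over unbounded streams instead of over a finished array, but the correlation logic is the same.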
In addition to the Eclipse development environment, you have the CEP server web console (visualizer) with almost identical functionality, which gives you a quick hand with practically all CQL constructs (2).

Parallel Complex Events Processing
Packt
13 Aug 2014
11 min read

Lightning Introduction

In this article by Jorge González and James Watts, the authors of CakePHP 2 Application Cookbook, we will cover the following recipes:

• Listing and viewing records
• Adding and editing records
• Deleting records
• Adding a login
• Including a plugin

(For more resources related to this topic, see here.)

CakePHP is a web framework for rapid application development (RAD), which admittedly covers a wide range of areas and possibilities. However, at its core, it provides a solid architecture for the CRUD (create/read/update/delete) interface. This article is a set of quick-start recipes to dive head first into using the framework and build out a simple CRUD around product management. If you want to try the code examples on your own, make sure that you have CakePHP 2.5.2 installed and configured to use a database—you should see something like this:

Listing and viewing records

To begin, we'll need a way to view the products available and also allow the option to select and view any one of those products. In this recipe, we'll create a listing of products as well as a page where we can view the details of a single product.

Getting ready

To go through this recipe, we'll first need a table of data to work with. So, create a table named products using the following SQL statement:

CREATE TABLE products (
id VARCHAR(36) NOT NULL,
name VARCHAR(100),
details TEXT,
available TINYINT(1) UNSIGNED DEFAULT 1,
created DATETIME,
modified DATETIME,
PRIMARY KEY(id)
);

We'll then need some sample data to test with, so now run this SQL statement to insert some products:

INSERT INTO products (id, name, details, available, created, modified) VALUES
('535c460a-f230-4565-8378-7cae01314e03', 'Cake', 'Yummy and sweet', 1, NOW(), NOW()),
('535c4638-c708-4171-985a-743901314e03', 'Cookie', 'Browsers love cookies', 1, NOW(), NOW()),
('535c49d9-917c-4eab-854f-743801314e03', 'Helper', 'Helping you all the way', 1, NOW(), NOW());

Before we begin, we'll also need to create ProductsController.
To do so, create a file named ProductsController.php in app/Controller/ and add the following content:

<?php
App::uses('AppController', 'Controller');

class ProductsController extends AppController {
public $helpers = array('Html', 'Form');
public $components = array('Session', 'Paginator');
}

Now, create a directory named Products/ in app/View/. Then, in this directory, create one file named index.ctp and another named view.ctp.

How to do it...

Perform the following steps:

1. Define the pagination settings to sort the products by adding the following property to the ProductsController class:

public $paginate = array('limit' => 10);

2. Add the following index() method in the ProductsController class:

public function index() {
$this->Product->recursive = -1;
$this->set('products', $this->paginate());
}

3. Introduce the following content in the index.ctp file that we created:

<h2><?php echo __('Products'); ?></h2>
<table>
<tr>
<th><?php echo $this->Paginator->sort('id'); ?></th>
<th><?php echo $this->Paginator->sort('name'); ?></th>
<th><?php echo $this->Paginator->sort('created'); ?></th>
</tr>
<?php foreach ($products as $product): ?>
<tr>
<td><?php echo $product['Product']['id']; ?></td>
<td><?php
echo $this->Html->link($product['Product']['name'],
array('controller' => 'products', 'action' => 'view',
$product['Product']['id']));
?></td>
<td><?php echo $this->Time->nice($product['Product']['created']); ?></td>
</tr>
<?php endforeach; ?>
</table>
<div><?php echo $this->Paginator->counter(array('format' => __('Page {:page} of {:pages}, showing {:current} records out of {:count} total, starting on record {:start}, ending on {:end}'))); ?></div>
<div><?php
echo $this->Paginator->prev(__('< previous'), array(), null,
array('class' => 'prev disabled'));
echo $this->Paginator->numbers(array('separator' => ''));
echo $this->Paginator->next(__('next >'), array(), null,
array('class' => 'next disabled'));
?></div>

4. Returning to the ProductsController class, add the following view() method to it:

public function view($id) {
if (!($product = $this->Product->findById($id))) {
throw new NotFoundException(__('Product not found'));
}
$this->set(compact('product'));
}

5. Introduce the following content in the view.ctp file:

<h2><?php echo h($product['Product']['name']); ?></h2>
<p><?php echo h($product['Product']['details']); ?></p>
<dl>
<dt><?php echo __('Available'); ?></dt>
<dd><?php echo __((bool)$product['Product']['available'] ? 'Yes' : 'No'); ?></dd>
<dt><?php echo __('Created'); ?></dt>
<dd><?php echo $this->Time->nice($product['Product']['created']); ?></dd>
<dt><?php echo __('Modified'); ?></dt>
<dd><?php echo $this->Time->nice($product['Product']['modified']); ?></dd>
</dl>

Now, navigating to /products in your web browser will display a listing of the products, as shown in the following screenshot:

Clicking on one of the product names in the listing will redirect you to a detailed view of the product, as shown in the following screenshot:

How it works...

We started by defining the pagination setting in our ProductsController class, which defines how the results are treated when returning them via the Paginator component (previously defined in the $components property of the controller). Pagination is a powerful feature of CakePHP, which extends well beyond simply defining the number of results or sort order. We then added an index() method to our ProductsController class, which returns the listing of products. You'll first notice that we accessed a $Product property on the controller. This is the model that we are acting against to read from our table in the database. We didn't create a file or class for this model, as we're taking full advantage of the framework's ability to determine the aspects of our application through convention. Here, as our controller is called ProductsController (in plural), it automatically assumes a Product (in singular) model. Then, in turn, this Product model assumes a products table in our database.
This alone is a prime example of how CakePHP can speed up development by making use of these conventions. You'll also notice that in our ProductsController::index() method, we set the $recursive property of the Product model to -1. This is to tell our model that we're not interested in resolving any associations on it. Associations are other models that are related to this one. This is another powerful aspect of CakePHP. It allows you to determine how models are related to each other, allowing the framework to dynamically generate those links so that you can return results with the relations already mapped out for you. We then called the paginate() method to handle the resolving of the results via the Paginator component. It's common practice to set the $recursive property of all models to -1 by default. This saves heavy queries where associations are resolved to return the related models, when it may not be necessary for the query at hand. This can be done via the AppModel class, which all models extend, or via an intermediate class that you may be using in your application.

We also defined a view($id) method, which is used to resolve a single product and display its details. First, you probably noticed that our method receives an $id argument. By default, CakePHP treats the arguments in methods for actions as parts of the URL. So, if we have a product with an ID of 123, the URL would be /products/view/123. In this case, as our argument doesn't have a default value, in its absence from the URL, the framework would return an error page stating that an argument was required.

You will also notice that our IDs in the products table aren't sequential numbers in this case. This is because we defined our id field as VARCHAR(36). When doing this, CakePHP will use a Universally Unique Identifier (UUID) instead of an auto_increment value. To use a UUID instead of a sequential ID, you can use either CHAR(36) or BINARY(36).
Here, we used VARCHAR(36), but note that it can be less performant than BINARY(36) due to collation. The use of a UUID versus a sequential ID is usually preferred due to obfuscation, where it's harder to guess a string of 36 characters, but also, more importantly, if you use database partitioning, replication, or any other means of distributing or clustering your data.

We then used the findById() method on the Product model to return a product by its ID (the one passed to the action). This method is actually a magic method: just as you can return a record by its ID, by changing the method to findByAvailable(), for example, you would be able to get all records that have the given value for the available field in the table. These methods are very useful to easily perform queries on the associated table without having to define the methods in question. We also threw NotFoundException for the cases in which a product isn't found for the given ID. This exception is HTTP aware, so it results in an error page if thrown from an action. Finally, we used the set() method to assign the result to a variable in the view. Here we're using the compact() function in PHP, which converts the given variable names into an associative array, where the key is the variable name, and the value is the variable's value. In this case, this provides a $product variable with the results array in the view. You'll find this function useful to rapidly assign variables for your views.

We also created our views using HTML, making use of the Paginator, Html, and Time helpers. You may have noticed that the usage of TimeHelper was not declared in the $helpers property of our ProductsController. This is because CakePHP is able to find and instantiate helpers from the core or the application automatically when they're used in the view for the first time. Then, the sort() method on the Paginator helper helps you create links, which, when clicked on, toggle the sorting of the results by that field.
Likewise, the counter(), prev(), numbers(), and next() methods create the paging controls for the table of products. You will also notice the structure of the array that we assigned from our controller. This is the common structure of results returned by a model. This can vary slightly, depending on the type of find() performed (in this case, all), but the typical structure would be as follows (using the real data from our products table here):

Array
(
    [0] => Array
        (
            [Product] => Array
                (
                    [id] => 535c460a-f230-4565-8378-7cae01314e03
                    [name] => Cake
                    [details] => Yummy and sweet
                    [available] => true
                    [created] => 2014-06-12 15:55:32
                    [modified] => 2014-06-12 15:55:32
                )
        )
    [1] => Array
        (
            [Product] => Array
                (
                    [id] => 535c4638-c708-4171-985a-743901314e03
                    [name] => Cookie
                    [details] => Browsers love cookies
                    [available] => true
                    [created] => 2014-06-12 15:55:33
                    [modified] => 2014-06-12 15:55:33
                )
        )
    [2] => Array
        (
            [Product] => Array
                (
                    [id] => 535c49d9-917c-4eab-854f-743801314e03
                    [name] => Helper
                    [details] => Helping you all the way
                    [available] => true
                    [created] => 2014-06-12 15:55:34
                    [modified] => 2014-06-12 15:55:34
                )
        )
)

We also used the link() method on the Html helper, which provides us with the ability to perform reverse routing to generate the link to the desired controller and action, with arguments if applicable. Here, the absence of a controller assumes the current controller, in this case, products. Finally, you may have seen that we used the __() function when writing text in our views. This function is used to handle translations and internationalization of your application. When using this function, if you were to provide your application in various languages, you would only need to handle the translation of your content and would have no need to revise and modify the code in your views. There are other variations of this function, such as __d() and __n(), which allow you to enhance how you handle the translations.
Even if you have no initial intention of providing your application in multiple languages, it's always recommended that you use these functions. You never know, using CakePHP might enable you to create a world class application, which is offered to millions of users around the globe!
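Conceptually, a gettext-style function such as __() is just a lookup in a per-locale catalog that falls back to the original text when no translation exists. A minimal JavaScript sketch of the concept (illustrative only, not CakePHP's actual implementation; the catalog below is invented):

```javascript
// Minimal gettext-style lookup: return the translation for the current
// locale if one exists, otherwise fall back to the original text.
const catalogs = {
  es: { 'Products': 'Productos', 'Product not found': 'Producto no encontrado' },
};

let currentLocale = 'en';

function __(text) {
  const catalog = catalogs[currentLocale];
  return (catalog && catalog[text]) || text;
}

console.log(__('Products'));   // 'Products' (no catalog for "en", falls back)
currentLocale = 'es';
console.log(__('Products'));   // 'Productos'
console.log(__('Available'));  // 'Available' (missing key falls back)
```

The fallback behavior is what makes wrapping every string in __() cheap: with no catalogs at all, the function simply returns its argument unchanged.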
Packt
24 Jul 2014
17 min read

Making a Better FAQ Page

(For more resources related to this topic, see here.) Marking up the FAQ page We'll get started by taking some extra care and attention with the way we mark up our FAQ list. As with most things that deal with web development, there's no right way of doing anything, so don't assume this approach is the only correct one. Any markup that makes sense semantically and makes it easy to enhance your list with CSS and JavaScript is perfectly acceptable. Time for action – setting up the HTML file Perform the following steps to get the HTML file set up for our FAQ page: We'll get started with our sample HTML file, the jQuery file, the scripts.js file, and the styles.css file. In this case, our HTML page will contain a definition list with the questions inside the <dt> tags and the answers wrapped in the <dd> tags. By default, most browsers will indent the <dd> tags, which means the questions hang into the left margin, making them easy to scan. Inside the <body> tag of your HTML document, add a heading and a definition list as shown in the following code: <h1>Frequently Asked Questions</h1> <dl> <dt>What is jQuery?</dt> <dd> <p>jQuery is an awesome JavaScript library</p> </dd> <dt>Why should I use jQuery?</dt> <dd> <p>Because it's awesome and it makes writing JavaScript faster and easier</p> </dd> <dt>Why would I want to hide the answers to my questions?</dt> <dd> <p>To make it easier to peruse the list of available questions - then you simply click to see the answer you're interested in reading.</p> </dd> <dt>What if my answers were a lot longer and more complicated than these examples?</dt> <dd> <p>The great thing about the &lt;dd&gt; element is that it's a block level element that can contain lots of other elements.</p> <p>That means your answer could contain:</p> <ul> <li>Unordered</li> <li>Lists</li> <li>with lots</li> <li>of items</li> <li>(or ordered lists or even another definition list)</li> </ul> <p>Or it might contain text with lots of <strong>special</strong> 
<em>formatting</em>.</p> <h2>Other things</h2> <p>It can even contain headings. Your answers could take up an entire screen or more all on their own - it doesn't matter since the answer will be hidden until the user wants to see it.</p> </dd> <dt>What if a user doesn't have JavaScript enabled?</dt> <dd> <p>You have two options for users with JavaScript disabled - which you choose might depend on the content of your page.</p> <p>You might just leave the page as it is - and make sure the &lt;dt&gt; tags are styled in a way that makes them stand out and easy to pick up when you're scanning down through the page. This would be a great solution if your answers are relatively short.</p> <p>If your FAQ page has long answers, it might be helpful to put a table of contents list of links to individual questions at the top of the page so users can click it to jump directly to the question and answer they're interested in. This is similar to what we did in the tabbed example, but in this case, we'd use jQuery to hide the table of contents when the page loaded since users with JavaScript wouldn't need to see the table of contents.</p> </dd> </dl>

You can adjust the style of the page however you'd like by adding in some CSS styles. The following screenshot shows how the page is styled: For users with JavaScript disabled, this page works fine as is. The questions hang into the left margin and are bolder and larger than the rest of the text on the page, making them easy to scan.

What just happened?

We set up a basic definition list to hold our questions and answers. The default style of the definition list lends itself nicely to making the list of questions scannable for site visitors without JavaScript. We can enhance that further with our own custom CSS code to make the style of our list match our site.
As this simple collapse-and-show (or accordion) action is such a common one, two new elements have been proposed for HTML5: <summary> and <details> that will enable us to build accordions in HTML without the need for JavaScript interactivity. However, at the time of writing this, the new elements are only supported in Webkit browsers, which require some finagling to get them styled with CSS, and are also not accessible. Do keep an eye on these new elements to see if more widespread support for them develops. You can read about the elements in the HTML5 specs (http://www.whatwg.org/specs/web-apps/current-work/multipage/interactive-elements.html). If you'd like to understand the elements better, the HTML5 Doctor has a great tutorial that explains their use and styling at http://html5doctor.com/the-details-and-summary-elements/. Time for action – moving around an HTML document Perform the following steps to move from one element to another in JavaScript: We're going to keep working with the files we set up in the previously. Open up the scripts.js file that's inside your scripts folder. Add a document ready statement, then write a new empty function called dynamicFAQ, as follows: $(document).ready(function(){ }); function dynamicFAQ() { // Our function will go here } Let's think through how we'd like this page to behave. We'd like to have all the answers to our questions hidden when the page is loaded. Then, when a user finds the question they're looking for, we'd like to show the associated answer when they click on the question. This means the first thing we'll need to do is hide all the answers when the page loads. 
Get started by adding a class jsOff to the <body> tag, as follows: <body class="jsOff"> Now, inside the document ready statement in scripts.js, add the line of code that removes the jsOff class and adds a class selector of jsOn: $(document).ready(function(){ $('body').removeClass('jsOff').addClass('jsOn'); }); Finally, in the styles.css file, add this bit of CSS to hide the answers for the site visitors who have JavaScript enabled: .jsOn dd { display: none; } Now if you refresh the page in the browser, you'll see that the <dd> elements and the content they contain are no longer visible (see the following screenshot): Now, we need to show the answer when the site visitor clicks on a question. To do that, we need to tell jQuery to do something whenever someone clicks on one of the questions or the <dt> tags. Inside the dynamicFAQ function, add a line of code to add a click event handler to the <dt> elements, as shown in the following code: function dynamicFAQ() { $('dt').on('click', function(){ //Show function will go here }); } When the site visitor clicks on a question, we want to get the answer to that question and show it because our FAQ list is set up as follows: <dl> <dt>Question 1</dt> <dd>Answer to Question 1</dd> <dt>Question 2</dt> <dd>Answer to Question 2</dd> ... </dl> We know that the answer is the next node or element in the DOM after our question. We'll start from the question. When a site visitor clicks on a question, we can get the current question by using jQuery's $(this) selector. The user has just clicked on a question, and we say $(this) to mean the question they just clicked on. Inside the new click function, add $(this) so that we can refer to the clicked question, as follows: $('dt').on('click', function(){ $(this); }); Now that we have the question that was just clicked, we need to get the next thing, or the answer to that question so that we can show it. This is called traversing the DOM in JavaScript. 
It just means that we're moving to a different element in the document. jQuery gives us the next method to move to the next node in the DOM. We'll select our answer by inserting the following code: $('dt').on('click', function(){ $(this).next(); }); Now, we've moved from the question to the answer. Now all that's left to do is show the answer. To do so, add a line of code as follows: $('dt').on('click', function(){ $(this).next().show(); }); If you refresh the page in the browser, you might be disappointed to see that nothing happens when we click the questions. Don't worry—that's easy to fix. We wrote a dynamicFAQ() function, but we didn't call it. Functions don't work until they're called. Inside the document ready statement, call the function as follows: $(document).ready(function(){ $('body').removeClass('jsOff').addClass('jsOn'); dynamicFAQ(); }); Now, if we load the page in the browser, you can see that all of our answers are hidden until we click on the question. This is nice and useful, but it would be even nicer if the site visitor could hide the answer again when they're done reading it to get it out of their way. Luckily, this is such a common task, jQuery makes this very easy for us. All we have to do is replace our call to the show method with a call to the toggle method as follows: $('dt').on('click', function(){ $(this).next().toggle(); }); Now when you refresh the page in the browser, you'll see that clicking on the question once shows the answer and clicking on the question a second time hides the answer again. What just happened? We learned how to traverse the DOM—how to get from one element to another. Toggling the display of elements on a page is a common JavaScript task, so jQuery already has built-in methods to handle it and make it simple and straightforward to get this up and running on our page. That was pretty easy—just a few lines of code. 
Sprucing up our FAQ page

That was so easy, in fact, that we have plenty of time left over to enhance our FAQ page to make it even better. This is where the power of jQuery becomes apparent—you can not only create a show/hide FAQ page, but you can make it a fancy one and still meet your deadline. How's that for impressing a client or your boss?

Time for action – making it fancy

Perform the following steps to add some fancy new features to the FAQ page: Let's start with a little CSS code to change the cursor to a pointer and add a little hover effect to our questions to make it obvious to site visitors that the questions are clickable. Open up the styles.css file that's inside the styles folder and add the following bit of CSS code:

.jsOn dt { cursor: pointer; }
.jsOn dt:hover { color: #ac92ec; }

We're only applying these styles for those site visitors that have JavaScript enabled. These styles definitely help to communicate to the site visitor that the questions are clickable. You might also choose to change something other than the font color for the hover effect. Feel free to style your FAQ list however you'd like. Have a look at the following screenshot:

Now that we've made it clear that our <dt> elements can be interacted with, let's take a look at how to show the answers in a nicer way. When we click on a question to see the answer, the change isn't communicated to the site visitor very well; the jump in the page is a little disconcerting and it takes a moment to realize what just happened. It would be nicer and easier to understand if the answers were to slide into view. The site visitor could literally see the answer appearing and would understand immediately what change just happened on the screen. jQuery makes that easy for us.
We just have to replace our call to the toggle method with a call to the slideToggle method:

$('dt').on('click', function(){
$(this).next().slideToggle();
});

Now if you view the page in your browser, you can see that the answers slide smoothly in and out of view when the question is clicked. It's easy to understand what's happening when the page changes, and the animation is a nice touch. Now, there's just one little detail we've still got to take care of. Depending on how you've styled your FAQ list, you might see a little jump in the answer at the end of the animation. This is caused by some extra margins around the <p> tags inside the <dd> element. They don't normally cause any issues in HTML, and browsers can figure out how to display them correctly. However, when we start working with animation, sometimes this becomes a problem. It's easy to fix. Just remove the top margin from the <p> tags inside the FAQ list as follows:

.content dd p { margin-top: 0; }

If you refresh the page in the browser, you'll see that the little jump is now gone and our animation smoothly shows and hides the answers to our questions.

What just happened?

We replaced our toggle method with the slideToggle method to animate the showing and hiding of the answers. This makes it easier for the site visitor to understand the change that's taking place on the page. We also added some CSS to make the questions appear to be clickable to communicate the abilities of our page to our site visitors. We're almost there! jQuery made animating that show and hide so easy that we've still got time left over to enhance our FAQ page even more. It would be nice to add some sort of indicator to our questions to show that they're collapsed and can be expanded, and to add some sort of special style to our questions once they're opened to show that they can be collapsed again.
Time for action – adding some final touches

Perform the following steps to add some finishing touches to our FAQ list: Let's start with some simple CSS code to add a small arrow icon to the left side of our questions. Head back into styles.css and modify the styles a bit to add an arrow as follows:

.jsOn dt:before {
border: 0.5em solid;
border-color: transparent transparent transparent #f2eeef;
content: '';
display: inline-block;
height: 0;
margin-right: 0.5em;
vertical-align: middle;
width: 0;
}
.jsOn dt:hover:before {
border-left-color: #ac92ec;
}

You might be wondering about this sort of odd bit of CSS. This is a technique to create triangles in pure CSS without having to use any images. If you're not familiar with this technique, I recommend checking out appendTo's blog post that explains pure CSS triangles at http://appendto.com/2013/03/pure-css-triangles-explained/. We've also included a hover style so that the triangle will match the text color when the site visitor hovers his/her mouse over the question. Note that we're using the jsOn class so that arrows don't get added to the page unless the site visitors have JavaScript enabled. See the triangles created in the following screenshot:

Next, we'll change the arrow to a different orientation when the question is opened. We'll create a new CSS class open and use it to define some new styles for our CSS arrow using the following code:

.jsOn dt.open:before {
border-color: #f2eeef transparent transparent transparent;
border-bottom-width: 0;
}
.jsOn dt.open:hover:before {
border-left-color: transparent;
border-top-color: #ac92ec;
}

Just make sure you add these new classes after the other CSS we're using to style our <dt> tags. This will ensure that the CSS cascades the way we intended. So we have our CSS code to change the arrows and show our questions are open, but how do we actually use that new class? We'll use jQuery to add the class to our question when it is opened and to remove the class when it's closed.
jQuery provides some nice methods to work with CSS classes. The addClass method will add a class to a jQuery object and the removeClass method will remove a class. However, we want to toggle our class just like we're toggling the show and hide phenomenon of our questions. jQuery's got us covered for that too. We want the class to change when we click on the question, so we'll add a line of code inside our dynamicFAQ function's click handler, which runs each time a <dt> tag is clicked, as follows:

$('dt').on('click', function(){
$(this).toggleClass('open');
$(this).next().slideToggle();
});

Now when you view the page, you'll see your open styles being applied to the <dt> tags when they're open and removed again when they're closed. To see this, have a look at the following screenshot: However, we can actually crunch our code to be a little bit smaller. Remember how we chain methods in jQuery? We can take advantage of chaining again. We have a bit of redundancy in our code because we're starting two different lines with $(this). We can remove this extra $(this) and just add our toggleClass method to the chain we've already started as follows:

$(this).toggleClass('open').next().slideToggle();

This helps keep our code short and concise, and just look at what we're accomplishing in one line of code!

What just happened?

We created the CSS styles to style the open and closed states of our questions, and then we added a bit of code to our JavaScript to change the CSS class of the question to use our new styles. jQuery provides a few different methods to update CSS classes, which is often a quick and easy way to update the display of our document in response to input from the site visitor. In this case, since we wanted to add and remove a class, we used the toggleClass method. It saved us from having to figure out on our own whether we needed to add or remove the open class.
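Chaining works because almost every jQuery method returns the jQuery object it was called on, so the next method call applies to the same object. A minimal sketch of the underlying pattern in plain JavaScript (illustrative only, not jQuery's actual implementation):

```javascript
// A tiny fluent wrapper: every mutating method returns `this`,
// so calls can be chained just like jQuery's.
class FakeElement {
  constructor() {
    this.classes = new Set();
    this.visible = true;
  }
  toggleClass(name) {
    if (this.classes.has(name)) {
      this.classes.delete(name);
    } else {
      this.classes.add(name);
    }
    return this; // returning `this` is what enables chaining
  }
  slideToggle() {
    this.visible = !this.visible;
    return this;
  }
}

const el = new FakeElement();
el.toggleClass('open').slideToggle();            // one statement, two operations
console.log(el.classes.has('open'), el.visible); // true false
```

jQuery's chain in the recipe also hops to a different object mid-chain (next() returns a new jQuery object wrapping the sibling), but the mechanism is the same: each link in the chain returns an object the next method can act on.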
We also took advantage of chaining to simply add this new functionality to our existing line of code, making the animated show and hide phenomenon of the answer and the change of CSS class of our question happen all in just one line of code. How's that for impressive power in a small amount of code? Summary You learned how to set up a basic FAQ page that hides the answers to the questions until the site visitor needs to see them. Because jQuery made this so simple, we had plenty of time left over to enhance our FAQ page even more, adding animations to our show and hide phenomenon for the answers, and taking advantage of CSS to style our questions with special open and closed classes to communicate to our site visitors how our page works. And we did all of that with just a few lines of code! Resources for Article: Further resources on this subject: Calendars in jQuery 1.3 with PHP using jQuery Week Calendar Plugin: Part 1 [article] Using jQuery and jQuery Animation: Tips and Tricks [article] Using jQuery and jQueryUI Widget Factory plugins with RequireJS [article]
Packt
23 Jul 2014
8 min read

Creating an Application using ASP.NET MVC, AngularJS and ServiceStack

Routing considerations for ASP.NET MVC and AngularJS

In the previous example, we had to make changes to the ASP.NET MVC routing so it ignores the requests handled by the ServiceStack framework. Since the AngularJS application currently uses hashbang URLs, we don't need to make any other changes to the ASP.NET MVC routing.

Changing an AngularJS application to use the HTML5 History API instead of hashbang URLs requires a lot more work, as it will conflict directly with the ASP.NET MVC routing. You need to set up IIS URL rewriting and use the URL Rewrite module for IIS 7 and higher, which is available at www.iis.net/downloads/microsoft/url-rewrite. AngularJS application routes have to be mapped using this module to the ASP.NET MVC view that hosts the client-side application. We also need to ensure that web service request paths are excluded from URL rewriting. You can explore some of the changes required for the HTML5 navigation mode in the project found in the Example2 folder from the source code for this article. Note that the HTML5 History API is not supported in Internet Explorer 8 and 9.

Using ASP.NET bundling and minification features for AngularJS files

So far, we have referenced and included JavaScript and CSS files directly in the _Layout.cshtml file. This makes it difficult to reuse script references between different views, and the assets are not concatenated and minified when deployed to a production environment. Microsoft provides a NuGet package called Microsoft.AspNet.Web.Optimization that contains this essential functionality. When you create a new ASP.NET MVC project, it gets installed and configured with default options. First, we need to add a new BundleConfig.cs file, which will define collections of scripts and style sheets under a virtual path, such as ~/bundles/app, that does not match a physical file.
This file will contain the following code:

bundles.Add(new ScriptBundle("~/bundles/app").Include(
    "~/scripts/app/app.js",
    "~/scripts/app/services/*.js",
    "~/scripts/app/controllers/*.js"));

You can explore these changes in the project found in the Example3 folder from the source code for this article. If you take a look at the BundleConfig.cs file, you will see three script bundles and one style sheet bundle defined. Nothing is stopping you from defining only one script bundle instead, to reduce the resource requests further.

We can now reference the bundles in the _Layout.cshtml file and replace the previous scripts with the following code:

@Scripts.Render("~/bundles/basejs")
@Scripts.Render("~/bundles/angular")
@Scripts.Render("~/bundles/app")

Each time we add a new file to a location like ~/scripts/app/services/, it will automatically be included in its bundle. If we add the following line of code to the BundleConfig.RegisterBundles method, the scripts and style sheets defined in a bundle will be minified (all of the whitespace, line separators, and comments will be removed) and concatenated into a single file when we run the application:

BundleTable.EnableOptimizations = true;

If we take a look at the page source, the script section now looks like the following code:

<script src="/bundles/basejs?v=bWXds_q0E1qezGAjF9o48iD8-hlMNv7nlAONwLLM0Wo1"></script>
<script src="/bundles/angular?v=k-PtTeaKyBiBwT4gVnEq9YTPNruD0u7n13IOEzGTvfw1"></script>
<script src="/bundles/app?v=OKa5fFQWjXSQCNcBuWm9FJLcPFS8hGM6uq1SIdZNXWc1"></script>

Using this process, the previous separate request for each script or style sheet file is reduced to a request for one or more bundles that are much smaller due to concatenation and minification. For convenience, there is a new EnableOptimizations value in web.config that will enable or disable the concatenation and minification of the asset bundles.
Securing the AngularJS application

We previously discussed that, for specific scenarios, we need to ensure that all browser requests are secured and validated on the server. Any browser request can be manipulated and changed, even unintentionally, so we cannot rely on client-side validation alone. When discussing securing an AngularJS application, there are a couple of alternatives available, of which I'll mention the following:

- You can use client-side authentication and employ a web service call to authenticate the current user. You can create a time-limited authentication token that will be passed with each data request. This approach involves additional code in the AngularJS application to handle authentication.
- You can rely on server-side authentication and use an ASP.NET MVC view that will handle any unauthenticated request. This view will redirect to the view that hosts the AngularJS application only when the authentication is successful. The AngularJS application will implicitly use an authentication cookie that is set on the server side, and it does not need any additional code to handle authentication.

I prefer server-side authentication, as it can be reused with other server-side views and reduces the code required to implement it on both the client side and the server side. We can implement server-side authentication in at least two ways, as follows:

- We can use the ASP.NET Identity system or the older ASP.NET Membership system, for scenarios where we need to integrate with an existing application.
- We can use the built-in ServiceStack authentication features, which offer a wide range of options with support for many authentication providers. This approach has the benefit that we can add a set of web service methods that can be used for authentication outside of the ASP.NET MVC context.
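The first alternative above relies on a time-limited token passed with each data request. The core idea can be sketched as follows (illustrative only; a real implementation should use a signed format such as a JWT rather than a plain object, and the names here are assumptions):

```javascript
// Issue a token that expires after ttlMs milliseconds; validation is just a
// comparison against the current time. The clock is passed in as a parameter
// so the logic is deterministic and easy to test.
function issueToken(userId, ttlMs, now = Date.now()) {
  return { userId, expiresAt: now + ttlMs };
}

function isTokenValid(token, now = Date.now()) {
  return token.expiresAt > now;
}
```

Every data request would then carry the token, and the server would reject any request whose token has expired, forcing the client to re-authenticate.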
The last approach ensures the best integration between ASP.NET MVC and ServiceStack, and it allows us to introduce a ServiceStack NuGet package that provides new productivity benefits for our sample application.

Using the ServiceStack.Mvc library

ServiceStack has a library that allows deeper integration with ASP.NET MVC through the ServiceStack.Mvc NuGet package. This library provides access to the ServiceStack dependency injection system for ASP.NET MVC applications. It also introduces a new base controller class called ServiceStackController; this can be used by ASP.NET MVC controllers to gain access to the ServiceStack caching, session, and authentication infrastructures. To install this package, you need to run the following command in the NuGet Package Manager Console:

Install-Package ServiceStack.Mvc -Version 3.9.71

The following line needs to be added to the AppHost.Configure method; it will register a ServiceStack controller factory class for ASP.NET MVC:

ControllerBuilder.Current.SetControllerFactory(new FunqControllerFactory(container));

The ControllerBuilder.Current.SetControllerFactory method is an ASP.NET MVC extension point that allows the replacement of its DefaultControllerFactory class with a custom one. This class is tasked with matching requests with controllers, among other responsibilities. The FunqControllerFactory class provided in the new NuGet package inherits the DefaultControllerFactory class and ensures that all controllers that have dependencies managed by the ServiceStack dependency injection system will be resolved at application runtime.
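What a DI-backed controller factory does can be pictured as resolving a controller's dependencies from a shared container at creation time. The sketch below shows that idea in JavaScript rather than C#; the class and function names are illustrative, not the actual ServiceStack or Funq APIs:

```javascript
// A minimal dependency container plus a controller factory that performs
// property injection: each named dependency is resolved from the container
// and assigned to a property on the new controller instance.
class Container {
  constructor() { this.registrations = new Map(); }
  register(name, instance) { this.registrations.set(name, instance); }
  resolve(name) { return this.registrations.get(name); }
}

function createController(container, ControllerClass, dependencyNames) {
  const controller = new ControllerClass();
  for (const name of dependencyNames) {
    controller[name] = container.resolve(name);
  }
  return controller;
}
```

Because the controllers and the web services resolve from the same container, a repository registered once is shared by both sides of the application.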
To exemplify this, the BicycleRepository class is now referenced in the HomeController class, as shown in the following code:

public class HomeController : Controller
{
    public BicycleRepository BicycleRepository { get; set; }

    //
    // GET: /Home/
    public ActionResult Index()
    {
        ViewBag.BicyclesCount = BicycleRepository.GetAll().Count();
        return View();
    }
}

The application menu now displays the current number of bicycles as initialized in the BicycleRepository class. If we add a new bicycle and refresh the browser page, the menu bicycle count is updated. This highlights the fact that the ASP.NET MVC application uses the same BicycleRepository instance as the ServiceStack web services. You can explore this example in the project found in the Example4 folder from the source code for this article.

Using the ServiceStack.Mvc library, we have reached a new milestone by bridging ASP.NET MVC controllers with ServiceStack services. In the next section, we will effectively transition to a single server-side application with unified caching, session, and authentication infrastructures.

The building blocks of the ServiceStack security infrastructure

ServiceStack has built-in, optional authentication and authorization provided by its AuthFeature plugin, which builds on two other important components, as follows:

- Caching: Every service or controller powered by ServiceStack has optional access to an ICacheClient interface instance that provides cache-related methods. The interface needs to be registered as an instance of one of the many caching providers available: an in-memory cache, a relational database cache, a cache based on a key-value data store using Redis, a memcached-based cache, a Microsoft Azure cache, and even a cache based on Amazon DynamoDB.
- Sessions: These are enabled by the SessionFeature ServiceStack plugin and rely on the caching component when the AuthFeature plugin is not enabled. Every service or controller powered by ServiceStack has an ISession property that provides read and write access to the session data. Each ServiceStack request automatically has two cookies set: an ss-id cookie, which is a regular session cookie, and an ss-pid cookie, which is a permanent cookie with an expiry date set far in the future. You can also gain access to a typed session as part of the AuthFeature plugin, which will be explored next.
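The relationship between the two components can be sketched as a session facade over a cache client, with all session data namespaced by the session id carried in the ss-id cookie. The sketch below is in JavaScript for illustration, and the names are assumptions rather than the actual ICacheClient or ISession members:

```javascript
// A cache-backed session: every session read and write goes through the
// cache, keyed by a per-session prefix. This is the shape that lets a
// session feature work against any registered cache provider (in-memory,
// Redis, memcached, and so on) without changing the session code.
class InMemoryCache {
  constructor() { this.store = new Map(); }
  get(key) { return this.store.get(key); }
  set(key, value) { this.store.set(key, value); }
}

class CacheBackedSession {
  constructor(cache, sessionId) {
    this.cache = cache;
    this.prefix = `sess:${sessionId}:`;
  }
  get(key) { return this.cache.get(this.prefix + key); }
  set(key, value) { this.cache.set(this.prefix + key, value); }
}
```

Two sessions backed by the same cache stay isolated because their keys never collide, which is exactly what the per-request session cookies provide.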