
How-To Tutorials - Front-End Web Development


Hello World Program

Packt
20 Apr 2016
12 min read
In this article by Manoj Kumar, author of the book Learning Sinatra, we will write an application. Make sure that you have Ruby installed. We will get a basic skeleton app up and running and see how to structure the application. In this article, we will discuss the following topics:

- A project that will be used to understand Sinatra
- The Bundler gem
- The file structure of the application
- The responsibilities of each file

Before we begin writing our application, let's write the Hello World application.

Getting started

The Hello World program is as follows:

```ruby
require 'sinatra'

get '/' do
  return 'Hello World!'
end
```

To run the code, execute the following from the command line:

```
ruby helloworld.rb
```

This runs the application, and the server listens on port 4567. If we point our browser to http://localhost:4567/, we will see the Hello World! output.

The application

To understand how to write a Sinatra application, we will take a small project and discuss every part of the program in detail.

The idea

We will make a ToDo app and use Sinatra along with a number of other libraries. The features of the app will be as follows:

- Each user can have multiple to-do lists
- Each to-do list will have multiple items
- To-do lists can be private, public, or shared with a group
- Items in each to-do list can be assigned to a user or group

The modules that we build are as follows:

- Users: This will manage the users and groups
- List: This will manage the to-do lists
- Items: This will manage the items for all the to-do lists

Before we start writing the code, let's see what the file structure will be like, understand why each file is required, and learn about some new files.

The file structure

It is always better to keep certain files in certain folders for better readability.
We could dump all the files in the home folder; however, that would make it difficult for us to manage the code.

The app.rb file

This file is the base file that loads all the other files (such as models, libs, and so on) and starts the application. We can configure various settings of Sinatra here according to the various deployment environments.

The config.ru file

The config.ru file is generally used when we need to deploy our application with different application servers, such as Passenger, Unicorn, or Heroku. It also makes it easy to maintain the different deployment environments.

Gemfile

This is one of the interesting things that we can do with Ruby applications. As we know, we can use a variety of gems for different purposes. The gems are just pieces of code and are constantly updated. Therefore, sometimes, we need to use specific versions of gems to maintain the stability of our application. We list all the gems that we are going to use for our application, along with their versions. Before we discuss how to use this Gemfile, we will talk about the gem bundler.

Bundler

The gem bundler manages the installation of all the gems and their dependencies. Of course, we need to install the gem bundler manually:

```
gem install bundler
```

This will install the latest stable version of the bundler gem. Once we are done with this, we need to create a new file with the name Gemfile (yes, with a capital G) and add the gems that we will use. It is not necessary to add all the gems to the Gemfile before starting to write the application. We can add and remove gems as we require; however, after every change, we need to run the following:

```
bundle install
```

This will make sure that all the required gems and their dependencies are installed. It will also create a Gemfile.lock file. Make sure that we do not edit this file; it contains the information about all the gems and their dependencies. Therefore, we now know why we should use a Gemfile.
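The Gemfile we will write later pins exact gem versions, but Bundler also accepts looser constraints, such as the pessimistic operator ~>. The following sketch uses only the Gem::Requirement and Gem::Version classes that ship with Ruby (it is an illustration, not part of our ToDo app) to show how these constraints match:

```ruby
# Exploring how Bundler-style version constraints match, using the Gem
# classes bundled with Ruby. This is an illustration only.
require 'rubygems'

exact       = Gem::Requirement.new('1.4.4')   # only version 1.4.4 will do
pessimistic = Gem::Requirement.new('~> 1.4')  # >= 1.4 and < 2.0

puts exact.satisfied_by?(Gem::Version.new('1.4.4'))        # true
puts exact.satisfied_by?(Gem::Version.new('1.4.5'))        # false
puts pessimistic.satisfied_by?(Gem::Version.new('1.9.0'))  # true
puts pessimistic.satisfied_by?(Gem::Version.new('2.0.0'))  # false
```

Pinning exact versions gives maximum stability; the pessimistic operator trades a little of that for automatic minor updates.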
The lib/routes.rb file

This is the routes file, kept in the lib folder. What is a route? A route is the URL path for which the application serves a web page when requested. For example, when we type http://www.example.com/, the URL path is / and when we type http://www.example.com/something/, /something/ is the URL path. Now, we need to explicitly define all the routes for which we will be serving requests so that our application knows what to return. It is not important to have this file in the lib folder, or even to have it at all; we can also write the routes in the app.rb file. Consider the following examples:

```ruby
get '/' do
  # code
end

post '/something' do
  # code
end
```

Both of the preceding routes are valid. get and post correspond to the HTTP methods. The first code block will be executed when a GET request is made on / and the second one will be executed when a POST request is made on /something. The only reason we are writing the routes in a separate file is to maintain clean code.

The responsibilities of the remaining folders are as follows:

- models/: This folder contains all the files that define the models of the application. When we write the models for our application, we will save them in this folder.
- public/: This folder contains all our CSS, JavaScript, and image files.
- views/: This folder contains all the files that define the views, such as HTML, HAML, and ERB files.

The code

Now we know what we want to build, and you have a rough idea of what our file structure will be. When we run the application, the rackup file that we load will be config.ru. This file tells the server which environment to use and which file is the main application to load. Before running the server, we need to write a minimum amount of code. It includes writing three files, as follows:

- app.rb
- config.ru
- Gemfile

We can, of course, write these files in any order we want; however, we need to make sure that all three files have sufficient code for the application to work.
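To build an intuition for what those get and post blocks register, here is a toy sketch in plain Ruby of the idea behind routing: a table keyed by HTTP method and URL path, mapping each pair to a block. This is not Sinatra's actual implementation, only an illustration of the concept:

```ruby
# A toy routing table: not Sinatra's implementation, just the idea behind it.
ROUTES = Hash.new { |hash, method| hash[method] = {} }

def route(method, path, &block)
  ROUTES[method][path] = block
end

# Register handlers, like `get '/' do ... end` and `post '/something' do ... end`.
route('GET', '/') { 'Hello World!' }
route('POST', '/something') { 'Created' }

# Dispatching a request looks up the block for the method/path pair and calls it.
def dispatch(method, path)
  handler = ROUTES[method][path]
  handler ? handler.call : '404 Not Found'
end

puts dispatch('GET', '/')        # Hello World!
puts dispatch('GET', '/missing') # 404 Not Found
```

Sinatra does far more (pattern matching, parameters, filters), but the core mapping from method and path to a block is the same idea.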
Let's start with the app.rb file.

The app.rb file

This is the file that config.ru loads when the application is executed. This file, in turn, loads all the other files that help it understand the available routes and the underlying model:

```ruby
require 'sinatra'

class Todo < Sinatra::Base
  set :environment, ENV['RACK_ENV']

  configure do
  end

  Dir[File.join(File.dirname(__FILE__), 'models', '*.rb')].each { |model| require model }
  Dir[File.join(File.dirname(__FILE__), 'lib', '*.rb')].each { |lib| load lib }
end
```

What does this code do? The require 'sinatra' line loads the sinatra gem into memory. The class definition that follows defines our main application's class:

```ruby
class Todo < Sinatra::Base
```

This skeleton is enough to start the basic application. We inherit from the Base class of the Sinatra module. Before starting the application, we may want to change some basic configuration settings, such as logging, error display, user sessions, and so on. We handle all these configurations through the configure blocks. Also, we might need different configurations for different environments. For example, in development mode we might want to see all the errors, but in production we don't want the end user to see the error dump. Therefore, we can define configurations for different environments. The first step is to set the application environment to the concerned one, as follows:

```ruby
set :environment, ENV['RACK_ENV']
```

We will later see that we can have multiple configure blocks for multiple environments. This line reads the RACK_ENV system environment variable and sets the same environment for the application.
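To see concretely what reading RACK_ENV buys us, here is a small plain-Ruby sketch, independent of Sinatra, of how an environment variable can drive different settings. The keys :show_errors and :log_level are made up for this illustration:

```ruby
# Plain-Ruby illustration of environment-driven settings; the keys below
# (:show_errors, :log_level) are invented for this example.
ENV['RACK_ENV'] ||= 'development'

settings =
  case ENV['RACK_ENV']
  when 'production'
    { show_errors: false, log_level: :warn }
  else
    { show_errors: true, log_level: :debug }
  end

puts settings[:log_level]
```

Sinatra's configure blocks do essentially this for us: they run a block of settings only when the application environment matches.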
When we discuss config.ru, we will see how to set RACK_ENV in the first place:

```ruby
configure do
end
```

This is how we define a configure block. Note that here we have not told the application which environment these configurations apply to. In such cases, this becomes the generic configuration for all the environments, and it is generally the last configuration block. All the environment-specific configurations should be written before this block in order to avoid being overridden:

```ruby
Dir[File.join(File.dirname(__FILE__), 'models', '*.rb')].each { |model| require model }
```

If we look at the file structure discussed earlier, we can see that models/ is a directory that contains the model files, and we need to import all of these files into the application:

```ruby
Dir[File.join(File.dirname(__FILE__), 'models', '*.rb')]
```

This returns an array of the files having the .rb extension in the models folder. Doing this avoids writing one require line for each file and modifying app.rb again for every new model:

```ruby
Dir[File.join(File.dirname(__FILE__), 'lib', '*.rb')].each { |lib| load lib }
```

Similarly, we import all the files in the lib/ folder. In short, app.rb configures our application according to the deployment environment and imports the model files and the other library files before starting the application. Now, let's proceed to write our next file.

The config.ru file

The config.ru file is the rackup file of the application. It loads all the gems and app.rb. We generally pass this file as a parameter to the server:

```ruby
require 'sinatra'
require 'bundler/setup'
Bundler.require

ENV['RACK_ENV'] = 'development'

require File.join(File.dirname(__FILE__), 'app.rb')

Todo.start!
```

Working of the code

Let's go through each of the lines:

```ruby
require 'sinatra'
require 'bundler/setup'
```

The first two lines import the gems. This is exactly what we do in other languages.
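The glob pattern is worth trying in isolation. The following self-contained sketch builds a throwaway temporary models/ directory (rather than using our app's real folder) and shows that Dir[File.join(...)] returns only the .rb files:

```ruby
require 'tmpdir'
require 'fileutils'

# Build a throwaway models/ directory and glob it, mimicking the app.rb line.
basenames = Dir.mktmpdir do |root|
  models = File.join(root, 'models')
  FileUtils.mkdir_p(models)
  File.write(File.join(models, 'user.rb'), '# a model')
  File.write(File.join(models, 'list.rb'), '# a model')
  File.write(File.join(models, 'notes.txt'), 'not Ruby, ignored by the glob')

  Dir[File.join(models, '*.rb')].sort.map { |path| File.basename(path) }
end

puts basenames.inspect # ["list.rb", "user.rb"]
```

Note how File.join builds the path portably and the *.rb pattern filters out anything that is not a Ruby file, which is exactly why app.rb never needs a require line per model.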
The require 'sinatra' line will include all the Sinatra classes and help in listening to requests, while the bundler gem will manage all the other gems. As we discussed earlier, we will always use bundler to manage our gems.

```ruby
Bundler.require
```

This line checks the Gemfile and makes sure that all the available gems match the versions and that all the dependencies are met. It does not import all the gems, as not all gems need to be in memory at all times.

```ruby
ENV['RACK_ENV'] = 'development'
```

This sets the RACK_ENV system environment variable to development, which helps the server know which configurations it needs to use. We will later see how to manage a single configuration file with different settings for different environments and use one particular set of configurations for the given environment. If we use version control for our application, config.ru is not version controlled; it has to be customized depending on whether our environment is development, staging, testing, or production. We may version control a sample config.ru. We will discuss this when we talk about deploying our application. Next, we require the main application file:

```ruby
require File.join(File.dirname(__FILE__), 'app.rb')
```

We see here that we have used the File class to include app.rb:

```ruby
File.dirname(__FILE__)
```

It is a convention to keep config.ru and app.rb in the same folder. It is good practice to give the complete file path whenever we require a file, in order to avoid breaking the code. This part of the code returns the path of the folder containing config.ru. Since our main application file is in the same folder as config.ru, we do the following:

```ruby
File.join(File.dirname(__FILE__), 'app.rb')
```

This returns the complete file path of app.rb, and the require line loads the main application file into memory. Now, all we need to do is execute app.rb to start the application:

```ruby
Todo.start!
```

Note that we did not define the start! method in the Todo class in app.rb; it is inherited from the Sinatra::Base class. It starts the application and listens for incoming requests. In short, config.ru checks the availability of all the gems and their dependencies, sets the environment variables, and starts the application.

The easiest file to write is the Gemfile. It has no complex code or logic; it just contains a list of gems and their version details.

Gemfile

In the Gemfile, we need to specify the source from which the gems will be downloaded and the list of the gems. Therefore, let's write a Gemfile with the following lines:

```ruby
source 'https://rubygems.org'
gem 'bundler', '1.6.0'
gem 'sinatra', '1.4.4'
```

The first line specifies the source. The https://rubygems.org website is a trusted place to download gems, and it hosts a large collection of them. We can view this page, search for gems that we want to use, read the documentation, and select the exact version for our application. Generally, the latest stable version of bundler is used, so we search the site for bundler and find out its version. We do the same for the Sinatra gem.

Summary

In this article, you learned how to build a Hello World program using Sinatra.


Getting Started with Aurelia

Packt
03 Jan 2017
28 min read
In this article by Manuel Guilbault, the author of the book Learning Aurelia, we will see what makes Aurelia such a modern framework. The brainchild of Rob Eisenberg, father of Durandal, it is based on cutting-edge web standards, and is built on modern software architecture concepts and ideas, to offer a powerful toolset and an awesome developer experience. This article will teach you how Aurelia works, and how you can use it to build real-world applications from A to Z. In fact, while reading the article and following the examples, that's exactly what you will do. You will start by setting up your development environment and creating the project, then I will walk you through concepts such as routing, templating, data-binding, automated testing, internationalization, and bundling. We will discuss application design, communication between components, and integration of third parties. We will cover every topic most modern, real-world single-page applications require.

In this first article, we will start by defining some terms that will be used throughout the article. We will quickly cover some core Aurelia concepts. Then we will take a look at the core Aurelia libraries and see how they interact with each other to form a complete, full-featured framework. We will also see what tools are needed to develop an Aurelia application and how to install them. Finally, we will start creating our application and explore its global structure.

Terminology

As this article is about a JavaScript framework, JavaScript plays a central role in it. If you are not completely up to date with the terminology, which has changed a lot in the last few years, let me clear things up. JavaScript (or JS) is a dialect, or implementation, of the ECMAScript (ES) standard. It is not the only implementation, but it definitely is the most popular.
In this article, I will use the JS acronym to talk about actual JavaScript code or code files, and the ES acronym when talking about an actual version of the ECMAScript standard.

Like everything in computer programming, the ECMAScript standard evolves over time. At the moment of writing, the latest version is ES2016, published in June 2016. It was originally called ES7, but TC39, the committee drafting the specification, decided to change their approval and naming model, hence the new name. The previous version, named ES2015 (ES6) before the naming model changed, was published in June 2015 and was a big step forward compared to the version before it. This older version, named ES5, was published in 2009 and was the most recent version for 6 years, so it is now widely supported by all modern browsers. If you have been writing JavaScript in the last five years, you should be familiar with ES5.

When they decided to change the ES naming model, the TC39 committee also chose to change the specification's approval model. This decision was made in an effort to publish new versions of the language at a quicker pace. As such, new features are being drafted and discussed by the community, and must pass through an approval process. Each year, a new version of the specification will be released, comprising the features that were approved during the year.

Those upcoming features are often referred to as ESNext. This term encompasses language features that are approved, or at least pretty close to approval, but not yet published. It can be reasonable to expect that most, or at least some, of those features will be published in the next language version. As ES2015 and ES2016 are still recent, they are not fully supported by most browsers. Moreover, ESNext features typically have no browser support at all. Those multiple names can be pretty confusing.
To make things simpler, I will stick with the official names: ES5 for the previous version, ES2016 for the current version, and ESNext for the next version. Before going any further, you should make yourself familiar with the features introduced by ES2016 and with ESNext decorators, if you are not already. We will use these features throughout the article. If you don't know where to start with ES2015 and ES2016, you can find a great overview of the new features on Babel's website: https://babeljs.io/docs/learn-es2015/

As for ESNext decorators, Addy Osmani, a Google engineer, explained them pretty well: https://medium.com/google-developers/exploring-es7-decorators-76ecb65fb841

For further reading, you can take a look at the feature proposals (decorators, class property declarations, async functions, and so on) for future ES versions: https://github.com/tc39/proposals

Core concepts

Before we start getting our hands dirty, there are a couple of core concepts that need to be explained.

Conventions

First, Aurelia relies a lot on conventions. Most of those conventions are configurable and can be changed if they don't suit your needs. Each time we encounter a convention throughout the article, we will see how to change it whenever possible.

Components

Components are first-class citizens of Aurelia. What is an Aurelia component? It is a pair made of an HTML template, called the view, and a JavaScript class, called the view-model. The view is responsible for displaying the component, while the view-model controls its data and behavior. Typically, the view sits in an .html file and the view-model in a .js file. By convention, those two files are bound through a naming rule: they must be in the same directory and have the same name (except for their extension, of course).
Here's an example of an empty component with no data, no behavior, and a static template:

component.js:

```javascript
export class MyComponent {}
```

component.html:

```html
<template>
  <p>My component</p>
</template>
```

A component must comply with two constraints: the view's root HTML element must be the template element, and the view-model class must be exported from the .js file. As a rule of thumb, the only function that should be exported by a component's JS file is the view-model class. If multiple classes or functions are exported, Aurelia will iterate over the file's exported functions and classes and will use the first one it finds as the view-model. However, since the enumeration order of an object's keys is not deterministic as per the ES specification, nothing guarantees that the exports will be iterated in the same order they were declared, so Aurelia may pick the wrong class as the component's view-model.

The only exception to that rule is view resources. In addition to its view-model class, a component's JS file can export things like value converters, binding behaviors, and custom attributes: basically any view resource that can't have a view, which excludes custom elements.

Components are the main building blocks of an Aurelia application. Components can use other components; they can be composed to form bigger or more complex components. Thanks to the slot mechanism, you can design a component's template so parts of it can be replaced or customized.

Architecture

Aurelia is not your average monolithic framework. It is a set of loosely coupled libraries with well-defined abstractions. Each of its core libraries solves a specific and well-defined problem common to single-page applications. Aurelia leverages dependency injection and a plugin architecture, so you can discard parts of the framework and replace them with third-party or even your own implementations. Or you can just throw away features you don't need so your application is lighter and faster to load.
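To make the view/view-model pairing less abstract, here is a hypothetical counter component; the names Counter, count, and increment are invented for this sketch. The view-model is a plain ES class that works without the framework, and the commented-out view shows how a template would bind to it:

```javascript
// counter.js — a hypothetical view-model: a plain class holding the
// component's data (count) and behavior (increment). In a real Aurelia
// project this class would be exported: `export class Counter { ... }`.
class Counter {
  constructor() {
    this.count = 0;
  }

  increment() {
    this.count += 1;
  }
}

// The view-model behaves as ordinary JavaScript, with no framework involved:
const counter = new Counter();
counter.increment();
console.log(counter.count); // 1

// counter.html — the matching view binds to the class above:
// <template>
//   <button click.delegate="increment()">Clicked ${count} times</button>
// </template>
```

This is the appeal of Aurelia's convention-based components: the view-model stays a plain, easily testable class, while the view declares how its data and behavior appear on screen.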
The core Aurelia libraries can be divided into multiple categories. Let's have a quick glance.

Core features

The following libraries are mostly independent and can be used by themselves if needed. They each provide a focused set of features and are at the core of Aurelia:

- aurelia-dependency-injection: A lightweight yet powerful dependency injection container. It supports multiple lifetime management strategies and child containers.
- aurelia-logging: A simple logger, supporting log levels and pluggable consumers.
- aurelia-event-aggregator: A lightweight message bus, used for decoupled communication.
- aurelia-router: A client-side router, supporting static, parameterized, or wildcard routes, and child routers.
- aurelia-binding: An adaptive and pluggable data-binding library.
- aurelia-templating: An extensible HTML templating engine.

Abstraction layers

The following libraries mostly define interfaces and abstractions in order to decouple concerns and enable extensibility and pluggable behaviors. This does not mean that some of the libraries in the previous section do not expose their own abstractions besides their features; some of them do. But the libraries described in this section have almost no other purpose than defining abstractions:

- aurelia-loader: An abstraction defining an interface for loading JS modules, views, and other resources.
- aurelia-history: An abstraction defining an interface for history management used by routing.
- aurelia-pal: An abstraction for platform-specific capabilities. It is used to abstract away the platform on which the code is running, such as a browser or Node.js. Indeed, this means that some Aurelia libraries can be used on the server side.

Default implementations

The following libraries are the default implementations of abstractions exposed by libraries from the two previous sections:

- aurelia-loader-default: An implementation of the aurelia-loader abstraction for SystemJS and require-based loaders.
- aurelia-history-browser: An implementation of the aurelia-history abstraction based on standard browser hash change and push state mechanisms.
- aurelia-pal-browser: An implementation of the aurelia-pal abstraction for the browser.
- aurelia-logging-console: An implementation of the aurelia-logging abstraction for the browser console.

Integration layers

The following libraries' purpose is to integrate some of the core libraries together. They provide interface implementations and adapters, along with default configuration or behaviors:

- aurelia-templating-router: An integration layer between the aurelia-router and the aurelia-templating libraries.
- aurelia-templating-binding: An integration layer between the aurelia-templating and the aurelia-binding libraries.
- aurelia-framework: An integration layer that brings together all of the core Aurelia libraries into a full-featured framework.
- aurelia-bootstrapper: An integration layer that brings default configuration for aurelia-framework and handles application starting.

Additional tools and plugins

If you take a look at Aurelia's organization page on GitHub at https://github.com/aurelia, you will see many more repositories. The libraries listed in the previous sections are just the core of Aurelia, the tip of the iceberg, if I may. Many other libraries exposing additional features or integrating third-party libraries are available on GitHub, some of them developed and maintained by the Aurelia team, many others by the community. I strongly suggest that you explore the Aurelia ecosystem by yourself after reading this article, as it is rapidly growing, and the Aurelia community is doing some very exciting things.

Tooling

In the following section, we will go over the tools needed to develop our Aurelia application.

Node.js and NPM

Aurelia being a JavaScript framework, it just makes sense that its development tools are also in JavaScript.
This means that the first thing you need to do when getting started with Aurelia is to install Node.js and NPM on your development environment. Node.js is a server-side runtime environment based on Google's V8 JavaScript engine. It can be used to build complete websites or web APIs, but it is also used by a lot of front-end projects to perform development and build tasks, such as transpiling, linting, and minimizing.

NPM is the de facto package manager for Node.js. It uses http://www.npmjs.com as its main repository, where all available packages are stored. It is bundled with Node.js, so if you install Node.js on your computer, NPM will also be installed. To install Node.js and NPM on your development environment, you simply need to go to https://nodejs.org/ and download the proper installer for your environment.

If Node.js and NPM are already installed, I strongly recommend that you make sure you use at least version 3 of NPM, as older versions may have issues collaborating with some of the other tools we'll use. If you are not sure which version you have, you can check it by running the following command in a console:

```
> npm -v
```

If Node.js and NPM are already installed but you need to upgrade NPM, you can do so by running the following command:

```
> npm install npm -g
```

The Aurelia CLI

Even though an Aurelia application can be built using any package manager, build system, or bundler you want, the preferred tool to manage an Aurelia project is the command line interface, a.k.a. the CLI. At the moment of writing, the CLI only supports NPM as its package manager and requirejs as its module loader and bundler, probably because they both are the most mature and stable. It also uses Gulp 4 behind the scenes as its build system. CLI-based applications are always bundled when running, even in development environments. This means that the performance of an application during development will be very close to what it should be like in production.
This also means that bundling is a recurring concern, as new external libraries must be added to some bundle in order to be available at runtime. In this article, we'll stick with the preferred solution and use the CLI. There are, however, two appendices at the end of the article covering alternatives: the first for Webpack, and the second for SystemJS with JSPM.

Installing the CLI

The CLI being a command line tool, it should be installed globally, by opening a console and executing the following command:

```
> npm install -g aurelia-cli
```

You may have to run this command with administrator privileges, depending on your environment. If you already have it installed, make sure you have the latest version, by running the following command:

```
> au -v
```

You can then compare the version this command outputs with the latest version number tagged on GitHub, at https://github.com/aurelia/cli/releases/latest. If you don't have the latest version, you can simply update it by running the following command:

```
> npm install -g aurelia-cli
```

If for some reason the command to update the CLI fails, simply uninstall and then reinstall it:

```
> npm uninstall aurelia-cli -g
> npm install aurelia-cli -g
```

This should reinstall the latest version.

The project skeletons

As an alternative to the CLI, project skeletons are available at https://github.com/aurelia/skeleton-navigation. This repository contains multiple sample projects, based on different technologies such as SystemJS with JSPM, Webpack, ASP.NET Core, or TypeScript. Prepping up a skeleton is easy: you simply need to download and unzip the archive from GitHub, or clone the repository locally. Each directory contains a distinct skeleton. Depending on which one you choose, you'll need to install different tools and run setup commands. Generally, the instructions in the skeleton's README.md file are pretty clear.

Our application

Creating an Aurelia application using the CLI is extremely simple.
You just need to open a console in the directory where you want to create your project and run the following command:

```
> au new
```

The CLI's project creation process will start. The first thing the CLI will ask for is the name you want to give to your project. This name will be used both to create the directory in which the project will live and to set some values, such as the name property in the package.json file it will create. Let's name our application learning-aurelia.

Next, the CLI asks what technologies we want to use to develop our application. Here, you can select a custom transpiler such as TypeScript and a CSS preprocessor such as LESS or SASS.

Transpiler: the little cousin of the compiler, it translates one programming language into another. In our case, it will be used to transform ESNext code, which may not be supported by all browsers, into ES5, which is understood by all modern browsers.

The default choice is to use ESNext and plain CSS, and this is what we will choose. The following steps simply recap the choices we made and ask for confirmation to create the project, then ask if we want to install our project's dependencies, which it does by default. At this point, the CLI will create the project and run an npm install behind the scenes. Once it completes, our application is ready to roll.

At this point, the directory you ran au new in will contain a new directory named learning-aurelia. This subdirectory will contain the Aurelia project; we'll explore it a bit in the following section. The CLI is likely to change and offer more options in the future, as there are plans to support additional tools and technologies. Don't be surprised if you see different or new options when you run it.

The path we followed to create our project uses Visual Studio Code as the default code editor.
If you want to use another editor such as Atom, Sublime, or WebStorm, which are the other supported options at the moment of writing, you simply need to select option #3, custom transpilers, CSS pre-processors and more, at the beginning of the creation process, then select the default answer for each question until asked to select your default code editor. The rest of the creation process should stay pretty much the same. Note that if you select a different code editor, your own experience may differ from the examples and screenshots you'll find in this article, as Visual Studio Code is the editor that was used during writing.

If you are a TypeScript developer, you may want to create a TypeScript project. I however recommend that you stick with plain ESNext, as every example and code sample in this article has been written in JS. Trying to follow along with TypeScript may prove cumbersome, although you can try if you like the challenge.

The Structure of a CLI-Based Project

If you open the newly created project in a code editor, you should see the following file structure:

- node_modules: The standard NPM directory containing the project's dependencies
- src: The directory containing the application's source code
- test: The directory containing the application's automated test suites
- .babelrc: The configuration file for Babel, which is used by the CLI to transpile our application's ESNext code into ES5 so most browsers can run it
- index.html: The HTML page that loads and launches the application
- karma.conf.js: The configuration file for Karma, which is used by the CLI to run unit tests
- package.json: The standard Node.js project file

The directory contains other files, such as .editorconfig, .eslintrc.json, and .gitignore, that are of little interest when learning Aurelia, so we won't cover them. In addition to all of this, you should see a directory named aurelia_project. This directory contains things related to the building and bundling of the application using the CLI.
Let’s see what it’s made of. The aurelia.json file The first thing of importance in this directory is a file named aurelia.json. This file contains the configuration used by the CLI to test, build, and bundle the application. This file can change drastically depending on the choices you make during the project creation process. There are very few scenarios where this file needs to be modified by hand. Adding an external library to the application is such a scenario. Apart from this, this file should mostly never be updated manually. The first interesting section in this file is the platform: "platform": { "id": "web", "displayName": "Web", "output": "scripts", "index": "index.html" }, This section tells the CLI that the output directory where the bundles are written is named scripts. It also tells that the HTML index page, which will load and launch the application, is the index.html file. The next interesting part is the transpiler section: "transpiler": { "id": "babel", "displayName": "Babel", "fileExtension": ".js", "options": { "plugins": [ "transform-es2015-modules-amd" ] }, "source": "src/**/*.js" }, This section tells the CLI to transpile the application’s source code using Babel. It also defines additional plugins as some are already configured in .babelrc to be used when transpiling the source code. In this case, it adds a plugin that will output transpiled files as AMD-compliant modules, for requirejs compatibility. Tasks The aurelia_project directory contains a subdirectory named tasks. This subdirectory contains various Gulp tasks to build, run, and test the application. These tasks can be executed using the CLI. The first thing you can try is to run au without any argument: > au This will list all available commands, along with their available arguments. This list includes built-in commands such as new, which we used already, or generate, which we’ll see in the next section along with the Gulp tasks declared in the tasks directory. 
To run one of those tasks, simply execute au with the name of the task as its first argument:

> au build

This command will run the build task, which is defined in aurelia_project/tasks/build.js. This task transpiles the application code using Babel, executes the CSS and markup preprocessors if any, and bundles the code in the scripts directory. After running it, you should see two new files in scripts: app-bundle.js and vendor-bundle.js. Those are the actual files that will be loaded by index.html when the application is launched. The former contains all application code (both JS files and templates), while the latter contains all external libraries used by the application (including the Aurelia libraries).

You may have noticed a command named run in the list of available commands. This task is defined in aurelia_project/tasks/run.js, and executes the build task internally before spawning a local HTTP server to serve the application:

> au run

By default, the HTTP server will listen for requests on port 9000, so you can open your favorite browser and go to http://localhost:9000/ to see the default demo application in action. If you ever need to change the port number on which the development HTTP server runs, open aurelia_project/tasks/run.js and locate the call to the browserSync function. The object passed to this function contains a property named port; you can change its value accordingly.

The run task can accept a --watch switch:

> au run --watch

If this switch is present, the task will keep monitoring the source code and, when any code file changes, will rebuild the application and automatically refresh the browser. This can be pretty useful during development.

Generators

The CLI also offers a way to generate code, using classes defined in the aurelia_project/generators directory.
At the moment of writing, there are generators to create custom attributes, custom elements, binding behaviors, value converters, and even tasks and generators (yes, there is a generator to generate generators). If you are not familiar with Aurelia at all, most of those concepts (value converters, binding behaviors, and custom attributes and elements) probably mean nothing to you. Don't worry.

A generator can be executed using the built-in generate command:

> au generate attribute

This command will run the custom attribute generator. It will ask for the name of the attribute to generate, then create it in the src/resources/attributes directory. If you take a look at this generator, which is found in aurelia_project/generators/attribute.js, you'll see that the file exports a single class named AttributeGenerator. This class uses the @inject decorator to declare various classes from the aurelia-cli library as dependencies and have instances of them injected in its constructor. It also defines an execute method, which is called by the CLI when running the generator. This method leverages the services provided by aurelia-cli to interact with the user and generate code files.

The exact generator names available by default are attribute, element, binding-behavior, value-converter, task, and generator.

Environments

CLI-based applications support environment-specific configuration values. By default, the CLI supports three environments—development, staging, and production. The configuration object for each of these environments can be found in the dev.js, stage.js, and prod.js files located in the aurelia_project/environments directory. A typical environment file looks like this:

aurelia_project/environments/dev.js

export default {
  debug: true,
  testing: true
};

By default, the environment files are used to enable debug logging and test-only templating features in the Aurelia framework, depending on the environment; we'll see this in a later section.
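The selection logic behind environments can be modeled in a few lines of plain JavaScript: look up the configuration object for the requested name, falling back to dev. The environments map below is illustrative (the stage values in particular are an assumption); in a real project each entry lives in its own file under aurelia_project/environments:

```javascript
// Illustrative map of environment configurations, mirroring the
// dev.js / stage.js / prod.js files described above.
const environments = {
  dev:   { debug: true,  testing: true  },
  stage: { debug: true,  testing: false }, // assumed values for illustration
  prod:  { debug: false, testing: false },
};

// Resolve an environment the way the build tooling conceptually does:
// use the requested name if it exists, otherwise fall back to "dev".
function resolveEnvironment(name) {
  return environments[name] || environments.dev;
}
```

Calling resolveEnvironment("prod") yields the production settings, while calling it with no argument falls back to the development defaults.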
The environment objects can however be enhanced with whatever properties you may need. Typically, it could be used to configure different URLs for a backend, depending on the environment. Adding a new environment is simply a matter of adding a file for it in the aurelia_project/environments directory. For example, you can add a local environment by creating a local.js file in the directory. Many tasks, basically build and all other tasks using it, such as run and test expect an environment to be specified using the env argument: > au build --env prod Here, the application will be built using the prod.js environment file. If no env argument is provided, dev will be used by default. When executed, the build task just copies the proper environment file to src/environment.js before running the transpiler and bundling the output. This means that src/environment.js should never be modified by hand, as it will be automatically overwritten by the build task. The Structure of an Aurelia application The previous section described the files and folders that are specific to a CLI-based project. However, some parts of the project are pretty much the same whatever the build system and package manager are. These are the more global topics we will see in this section. The hosting page The first entry point of an Aurelia application is the HTML page loading and hosting it. By default, this page is named index.html and is located at the root of the project. The default hosting page looks like this: index.html <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <title>Aurelia</title> </head> <body aurelia-app="main"> <script src="scripts/vendor-bundle.js" data-main="aurelia-bootstrapper"></script> </body> </html> When this page loads, the script element inside the body element loads the scripts/vendor-bundle.js file, which contains requirejs itself along with definitions for all external libraries and references to app-bundle.js. 
When loading, requirejs checks the data-main attribute and uses its value as the entry point module. Here, aurelia-bootstrapper kicks in. The bootstrapper first looks in the DOM for elements with the aurelia-app attribute, we can find such an attribute on the body element in the default index.html file. This attribute identifies elements acting as application viewports. The bootstrapper uses the attribute’s value as the name of the application’s main module and locates the module, loads it, and renders the resulting DOM inside the element, overwriting any previous content. The application is now running. Even though the default application doesn’t illustrate this scenario, it is possible for an HTML file to host multiple Aurelia applications. It just needs to contain multiple elements with an aurelia-app attribute, each element referring to its own main module. The main module By convention, the main module referred to by the aurelia-app attribute is named main, and as such is located under src/main.js. This file is expected to export a configure function, which will be called by the Aurelia bootstrapping process and will be passed a configuration object used to configure and boot the framework. By default, the main configure function looks like this: src/main.js import environment from './environment'; export function configure(aurelia) { aurelia.use .standardConfiguration() .feature('resources'); if (environment.debug) { aurelia.use.developmentLogging(); } if (environment.testing) { aurelia.use.plugin('aurelia-testing'); } aurelia.start().then(() => aurelia.setRoot()); } The configure function starts by telling Aurelia to use its defaults configuration, and to load the resources feature. It also conditionally loads the development logging plugin based on the environment’s debug property, and the testing plugin based on the environment’s testing property. 
This means that, by default, both plugins will be loaded in development, while none will be loaded in production. Lastly, the function starts the framework, then attaches the root component to the DOM. The start method returns a Promise, whose resolution triggers the call to setRoot. If you are not familiar with Promises in JavaScript, I strongly suggest that you look them up before going any further, as they are a core concept in Aurelia.

The root component

At the root of any Aurelia application is a single component, which contains everything within the application. By convention, this root component is named app. It is composed of two files—app.html, which contains the template to render the component, and app.js, which contains its view-model class. In the default application, the template is extremely simple:

src/app.html

<template>
  <h1>${message}</h1>
</template>

This template is made of a single h1 element, which will contain the value of the view-model's message property as text, thanks to string interpolation. The app view-model looks like this:

src/app.js

export class App {
  constructor() {
    this.message = 'Hello World!';
  }
}

This file simply exports a class having a message property containing the string "Hello World!". This component will be rendered when the application starts. If you run the application and open it in your favorite browser, you'll see an h1 element containing "Hello World!".

You may notice that there is no reference to Aurelia in this component's code. In fact, the view-model is just plain ESNext and it can be used by Aurelia as is. Of course, we're going to leverage many Aurelia features in many of our view-models later on, so most of our view-models will in fact have dependencies on Aurelia libraries, but the key point here is that you don't have to use any Aurelia library in your view-models if you don't want to, because Aurelia is designed to be as unintrusive as possible.
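The ${message} interpolation syntax deliberately resembles JavaScript's own template literals. The toy interpolate function below is not how Aurelia's templating engine works — it is just a plain-JavaScript illustration of the core idea that a template plus a view-model instance yields rendered output:

```javascript
// The same plain view-model class as in src/app.js.
class App {
  constructor() {
    this.message = "Hello World!";
  }
}

// A toy interpolation: replace ${name} markers in a template string
// with the matching properties of a view-model instance.
function interpolate(template, viewModel) {
  return template.replace(/\$\{(\w+)\}/g, (_, name) => viewModel[name]);
}

const html = interpolate("<h1>${message}</h1>", new App());
console.log(html); // "<h1>Hello World!</h1>"
```

The important point carries over: the view-model stays a plain class, and the templating layer reads from it without the class ever referencing the framework.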
Conventional bootstrapping It is possible to leave the aurelia-app attribute empty in the hosting page: <body aurelia-app> In such a case, the bootstrapping process is much simpler. Instead of loading a main module containing a configure function, the bootstrapper will simply use the framework’s default configuration and load the app component as the application root. This can be a simpler way to get started for a very simple application, as it negates the need for the src/main.js file you can simply delete it. However, it means that you are stuck with the default framework configuration. You cannot load features nor plugins. For most real-life applications, you’ll need to keep the main module, which means specifying it as the aurelia-app attribute’s value. Customizing Aurelia configuration The configure function of the main module receives a configuration object, which is used to configure the framework: src/main.js //Omitted snippet… aurelia.use .standardConfiguration() .feature('resources'); if (environment.debug) { aurelia.use.developmentLogging(); } if (environment.testing) { aurelia.use.plugin('aurelia-testing'); } //Omitted snippet… Here, the standardConfiguration() method is a simple helper that encapsulates the following: aurelia.use .defaultBindingLanguage() .defaultResources() .history() .router() .eventAggregator(); This is the default Aurelia configuration. It loads the default binding language, the default templating resources, the browser history plugin, the router plugin, and the event aggregator. This is the default set of features that a typical Aurelia application uses. All those plugins will be covered at one point or another throughout this article. All those plugins are optional except the binding language, which is needed by the templating engine. If you don’t need one, just don’t load it. In addition to the standard configuration, some plugins are loaded depending on the environment’s settings. 
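That conditional loading is plain control flow, and can be modeled as a pure function from an environment object to the list of extra plugins that end up loaded (the plugin names here are shorthand labels for illustration, not exact module names):

```javascript
// Given an environment object, return the extra plugins that
// configure() would load on top of the standard configuration.
function extraPlugins(environment) {
  const plugins = [];
  if (environment.debug) {
    plugins.push("development-logging"); // console logger for dev
  }
  if (environment.testing) {
    plugins.push("aurelia-testing"); // component-debugging resources
  }
  return plugins;
}
```

With the default environment files, both labels are returned in development and none in production, matching the behavior described above.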
When the environment’s debug property is true, Aurelia’s console logger is loaded using the developmentLogging() method, so traces and errors can be seen in the browser console. When the environment’s testing property is true, the aurelia-testing plugin is loaded using the plugin method. This plugin registers some resources that are useful when debugging components. The last line in the configure function starts the application and displays its root component, which is named app by convention. You may, however, bypass the convention and pass the name of your root component as the first argument to setRoot, if you named it otherwise: aurelia.start().then(() => aurelia.setRoot('root')); Here, the root component is expected to sit in the src/root.html and src/root.js files. Summary Getting started with Aurelia is very easy, thanks to the CLI. Installing the tooling and creating an empty project is simply a matter of running a couple of commands, and it takes typically more time waiting for the initial npm install to complete than doing the actual setup. We’ll go over dependency injection and logging, and we’ll start building our application by adding components and configuring routes to navigate between them. Resources for Article: Further resources on this subject: Introduction to JavaScript [article] Breaking into Microservices Architecture [article] Create Your First React Element [article]

Packt
15 Feb 2016
25 min read

Changing Views

In this article, Christopher Pitt, author of the book React Components, explains how to change sections without reloading the page. We'll use this knowledge to create the public pages of the website our CMS is meant to control. (For more resources related to this topic, see here.)

Location, location, and location

Before we can learn about alternatives to reloading pages, let's take a look at how the browser manages reloads. You've probably encountered the window object. It's a global catch-all object for browser-based functionality and state. It's also the default this scope in any HTML page. We've even accessed window before. When we rendered to document.body or used document.querySelector, the window object was assumed. It's the same as if we were to call window.document.querySelector. Most of the time document is the only property we need. That doesn't mean it's the only property useful to us. Try the following, in the console:

console.log(window.location);

You should see something similar to:

Location {
    hash: ""
    host: "127.0.0.1:3000"
    hostname: "127.0.0.1"
    href: "http://127.0.0.1:3000/examples/login.html"
    origin: "http://127.0.0.1:3000"
    pathname: "/examples/login.html"
    port: "3000"
    ...
}

If we were trying to work out which components to show based on the browser URL, this would be an excellent place to start. Not only can we read from this object, but we can also write to it:

<script>
    window.location.href = "http://material-ui.com";
</script>

Putting this in an HTML page or entering that line of JavaScript in the console will redirect the browser to material-ui.com. It's the same as if you had clicked on a link. And if it's to a different page (than the one the browser is pointing at), then it will cause a full page reload.

A bit of history

So how does this help us? We're trying to avoid full page reloads, after all. Let's experiment with this object.
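You can experiment with the same properties outside the browser console too: the standard WHATWG URL class, available in modern browsers and in Node.js, exposes fields matching those of window.location:

```javascript
// Parse a full URL into the same parts window.location exposes.
const parsed = new URL("http://127.0.0.1:3000/examples/login.html#page-admin");

console.log(parsed.hostname); // "127.0.0.1"
console.log(parsed.pathname); // "/examples/login.html"
console.log(parsed.hash);     // "#page-admin"
```

Unlike window.location, assigning to a URL instance never navigates anywhere, which makes it handy for testing URL-handling logic in isolation.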
Let's see what happens when we add something like #page-admin to the URL: Adding #page-admin to the URL leads to the window.location.hash property being populated with the same page. What's more, changing the hash value won't reload the page! It's the same as if we clicked on a link that had that hash in the href attribute. We can modify it without causing full page reloads, and each modification will store a new entry in the browser history. Using this trick, we can step through a number of different "states" without reloading the page, and be able to backtrack each with the browser's back button. Using browser history Let's put this trick to use in our CMS. First, let's add a couple functions to our Nav component: export default (props) => {     // ...define class names       var redirect = (event, section) => {         window.location.hash = `#${section}`;         event.preventDefault();     }       return <div className={drawerClassNames}>         <header className="demo-drawer-header">             <img src="images/user.jpg"                  className="demo-avatar" />         </header>         <nav className={navClassNames}>             <a className="mdl-navigation__link"                href="/examples/login.html"                onClick={(e) => redirect(e, "login")}>                 <i className={buttonIconClassNames}                    role="presentation">                     lock                 </i>                 Login             </a>             <a className="mdl-navigation__link"                href="/examples/page-admin.html"                onClick={(e) => redirect(e, "page-admin")}>                 <i className={buttonIconClassNames}                    role="presentation">                     pages                 </i>                 Pages             </a>         </nav>     </div>; }; We add an onClick attribute to our navigation links. 
We've created a special function that will change window.location.hash and prevent the default full page reload behavior the links would otherwise have caused. This is a neat use of arrow functions, but we're ultimately creating three new functions in each render call. Remember that this can be expensive, so it's best to move the function creation out of render. We'll replace this shortly. It's also interesting to see template strings in action. Instead of "#" + section, we can use `#${section}` to interpolate the section name. It's not as useful in small strings, but becomes increasingly useful in large ones. Clicking on the navigation links will now change the URL hash. We can add to this behavior by rendering different components when the navigation links are clicked: import React from "react"; import ReactDOM from "react-dom"; import Component from "src/component"; import Login from "src/login"; import Backend from "src/backend"; import PageAdmin from "src/page-admin";   class Nav extends Component {     render() {         // ...define class names           return <div className={drawerClassNames}>             <header className="demo-drawer-header">                 <img src="images/user.jpg"                      className="demo-avatar" />             </header>             <nav className={navClassNames}>                 <a className="mdl-navigation__link"                    href="/examples/login.html"                    onClick={(e) => this.redirect(e, "login")}>                     <i className={buttonIconClassNames}                        role="presentation">                         lock                     </i>                     Login                 </a>                 <a className="mdl-navigation__link"                    href="/examples/page-admin.html"                    onClick={(e) => this.redirect(e, "page-admin")}>                     <i className={buttonIconClassNames}                        role="presentation">                         pages       
              </i>                     Pages                 </a>             </nav>         </div>;     }       redirect(event, section) {         window.location.hash = `#${section}`;           var component = null;           switch (section) {             case "login":                 component = <Login />;                 break;             case "page-admin":                 var backend = new Backend();                 component = <PageAdmin backend={backend} />;                 break;         }           var layoutClassNames = [             "demo-layout",             "mdl-layout",             "mdl-js-layout",             "mdl-layout--fixed-drawer"         ].join(" ");           ReactDOM.render(             <div className={layoutClassNames}>                 <Nav />                 {component}             </div>,             document.querySelector(".react")         );           event.preventDefault();     } };   export default Nav; We've had to convert the Nav function to a Nav class. We want to create the redirect method outside of render (as that is more efficient) and also isolate the choice of which component to render. Using a class also gives us a way to name and reference Nav, so we can create a new instance to overwrite it from within the redirect method. It's not ideal packaging this kind of code within a component, so we'll clean that up in a bit. We can now switch between different sections without full page reloads. There is one problem still to solve. When we use the browser back button, the components don't change to reflect the component that should be shown for each hash. We can solve this in a couple of ways. 
The first approach we can try is checking the hash frequently: componentDidMount() {     var hash = window.location.hash;       setInterval(() => {         if (hash !== window.location.hash) {             hash = window.location.hash;             this.redirect(null, hash.slice(1), true);         }     }, 100); }   redirect(event, section, respondingToHashChange = false) {     if (!respondingToHashChange) {         window.location.hash = `#${section}`;     }       var component = null;       switch (section) {         case "login":             component = <Login />;             break;         case "page-admin":             var backend = new Backend();             component = <PageAdmin backend={backend} />;             break;     }       var layoutClassNames = [         "demo-layout",         "mdl-layout",         "mdl-js-layout",         "mdl-layout--fixed-drawer"     ].join(" ");       ReactDOM.render(         <div className={layoutClassNames}>             <Nav />             {component}         </div>,         document.querySelector(".react")     );       if (event) {         event.preventDefault();     } } Our redirect method has an extra parameter, to apply the new hash whenever we're not responding to a hash change. We've also wrapped the call to event.preventDefault in case we don't have a click event to work with. Other than those changes, the redirect method is the same. We've also added a componentDidMount method, in which we have a call to setInterval. We store the initial window.location.hash and check 10 times a second to see if it has change. The hash value is #login or #page-admin, so we slice the first character off and pass the rest to the redirect method. Try clicking on the different navigation links, and then use the browser back button. The second option is to use the newish pushState and popState methods on the window.history object. 
They're not very well supported yet, so you need to be careful to handle older browsers or sure you don't need to handle them. You can learn more about pushState and popState at https://developer.mozilla.org/en-US/docs/Web/API/History_API. Using a router Our hash code is functional but invasive. We shouldn't be calling the render method from inside a component (at least not one we own). So instead, we're going to use a popular router to manage this stuff for us. Download it with the following: $ npm install react-router --save Then we need to join login.html and page-admin.html back into the same file: <!DOCTYPE html> <html>     <head>         <script src="/node_modules/babel-core/browser.js"></script>         <script src="/node_modules/systemjs/dist/system.js"></script>         <script src="https://storage.googleapis.com/code.getmdl.io/1.0.6/material.min.js"></script>         <link rel="stylesheet" href="https://storage.googleapis.com/code.getmdl.io/1.0.6/material.indigo-pink.min.css" />         <link rel="stylesheet" href="https://fonts.googleapis.com/icon?family=Material+Icons" />         <link rel="stylesheet" href="admin.css" />     </head>     <body class="         mdl-demo         mdl-color--grey-100         mdl-color-text--grey-700         mdl-base">         <div class="react"></div>         <script>             System.config({                 "transpiler": "babel",                 "map": {                     "react": "/examples/react/react",                     "react-dom": "/examples/react/react-dom",                     "router": "/node_modules/react-router/umd/ReactRouter"                 },                 "baseURL": "../",                 "defaultJSExtensions": true             });               System.import("examples/admin");         </script>     </body> </html> Notice how we've added the ReactRouter file to the import map? We'll use that in admin.js. 
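The map section of System.config is essentially a lookup table from bare module names to file paths, and defaultJSExtensions: true appends .js to extensionless paths. The sketch below is a conceptual model of that resolution, not SystemJS's real resolver:

```javascript
// Illustrative module map, mirroring the System.config call above.
const moduleMap = {
  "react": "/examples/react/react",
  "react-dom": "/examples/react/react-dom",
  "router": "/node_modules/react-router/umd/ReactRouter",
};

// Resolve a bare name through the map, then emulate the
// defaultJSExtensions option by appending ".js" when missing.
function resolveModule(name) {
  const path = moduleMap[name] || name;
  return path.endsWith(".js") ? path : `${path}.js`;
}
```

So an import of "router" ends up loading /node_modules/react-router/umd/ReactRouter.js, which is how the Link component import in the next section finds the library.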
First, let's define our layout component: var App = function(props) {     var layoutClassNames = [         "demo-layout",         "mdl-layout",         "mdl-js-layout",         "mdl-layout--fixed-drawer"     ].join(" ");       return (         <div className={layoutClassNames}>             <Nav />             {props.children}         </div>     ); }; This creates the page layout we've been using and allows a dynamic content component. Every React component has a this.props.children property (or props.children in the case of a stateless component), which is an array of nested components. For example, consider this component: <App>     <Login /> </App> Inside the App component, this.props.children will be an array with a single item—an instance of the Login. Next, we'll define handler components for the two sections we want to route: var LoginHandler = function() {     return <Login />; }; var PageAdminHandler = function() {     var backend = new Backend();     return <PageAdmin backend={backend} />; }; We don't really need to wrap Login in LoginHandler but I've chosen to do it to be consistent with PageAdminHandler. PageAdmin expects an instance of Backend, so we have to wrap it as we see in this example. Now we can define routes for our CMS: ReactDOM.render(     <Router history={browserHistory}>         <Route path="/" component={App}>             <IndexRoute component={LoginHandler} />             <Route path="login" component={LoginHandler} />             <Route path="page-admin" component={PageAdminHandler} />         </Route>     </Router>,     document.querySelector(".react") ); There's a single root route, for the path /. It creates an instance of App, so we always get the same layout. Then we nest a login route and a page-admin route. These create instances of their respective components. We also define an IndexRoute so that the login page will be displayed as a landing page. 
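At each URL change, the router's core job is to match the current path against the declared route paths and render the matching component. The following is a much-simplified, illustrative matcher — not react-router's actual algorithm — that also shows how :param segments are captured into a params object:

```javascript
// Match a route pattern like "pages/:page" against a concrete path.
// Returns a params object on success, or null when there is no match.
function matchRoute(pattern, path) {
  const patternParts = pattern.split("/");
  const pathParts = path.split("/");
  if (patternParts.length !== pathParts.length) {
    return null; // segment counts must line up for a match
  }
  const params = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(":")) {
      params[patternParts[i].slice(1)] = pathParts[i]; // capture parameter
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // literal segment mismatch
    }
  }
  return params;
}
```

For example, matchRoute("pages/:page", "pages/1") yields a params object with page set to "1", while a non-matching path yields null.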
We need to remove our custom history code from Nav:

import React from "react";
import ReactDOM from "react-dom";
import { Link } from "router";

export default (props) => {
    // ...define class names

    return <div className={drawerClassNames}>
        <header className="demo-drawer-header">
            <img src="images/user.jpg"
                 className="demo-avatar" />
        </header>
        <nav className={navClassNames}>
            <Link className="mdl-navigation__link" to="login">
                <i className={buttonIconClassNames}
                   role="presentation">
                    lock
                </i>
                Login
            </Link>
            <Link className="mdl-navigation__link" to="page-admin">
                <i className={buttonIconClassNames}
                   role="presentation">
                    pages
                </i>
                Pages
            </Link>
        </nav>
    </div>;
};

And since we no longer need a separate redirect method, we can convert the class back into a stateless component (function). Notice we've swapped the anchor elements for the new Link component. This interacts with the router to show the correct section when we click on the navigation links. We can also change the route paths without needing to update this component (unless we also change the route names).

Creating public pages

Now that we can easily switch between CMS sections, we can use the same trick to show the public pages of our website.
Let's create a new HTML page just for these:

<!DOCTYPE html>
<html>
    <head>
        <script src="/node_modules/babel-core/browser.js"></script>
        <script src="/node_modules/systemjs/dist/system.js"></script>
    </head>
    <body>
        <div class="react"></div>
        <script>
            System.config({
                "transpiler": "babel",
                "map": {
                    "react": "/examples/react/react",
                    "react-dom": "/examples/react/react-dom",
                    "router": "/node_modules/react-router/umd/ReactRouter"
                },
                "baseURL": "../",
                "defaultJSExtensions": true
            });

            System.import("examples/index");
        </script>
    </body>
</html>

This is a reduced form of admin.html without the material design resources. I think we can ignore the appearance of these pages for the moment, while we focus on the navigation. The public pages are almost 100% static, so we can use stateless components for them. Let's begin with the layout component:

var App = function(props) {
    return (
        <div className="layout">
            <Nav pages={props.route.backend.all()} />
            {props.children}
        </div>
    );
};

This is similar to the App admin component, but it also has a reference to a Backend.
We define that when we render the components:

var backend = new Backend();

ReactDOM.render(
    <Router history={browserHistory}>
        <Route path="/" component={App} backend={backend}>
            <IndexRoute component={StaticPage} backend={backend} />
            <Route path="pages/:page" component={StaticPage} backend={backend} />
        </Route>
    </Router>,
    document.querySelector(".react")
);

For this to work, we also need to define a StaticPage:

var StaticPage = function(props) {
    var id = props.params.page || 1;
    var backend = props.route.backend;

    var pages = backend.all().filter(
        (page) => {
            return page.id == id;
        }
    );

    if (pages.length < 1) {
        return <div>not found</div>;
    }

    return (
        <div className="page">
            <h1>{pages[0].title}</h1>
            {pages[0].content}
        </div>
    );
};

This component is more interesting. We access the params property, which is a map of all the URL path parameters defined for this route. We have :page in the path (pages/:page), so when we go to pages/1, the params object is {"page":1}. We also pass a Backend to StaticPage, so we can fetch all pages and filter them by page.id. If no page.id is provided, we default to 1. After filtering, we check to see if there are any pages. If not, we return a simple not found message. Otherwise, we render the content of the first page in the array (since we expect the array to have a length of at least 1). We now have a page for the public pages of the website.

Summary

In this article, we learned about how the browser stores URL history and how we can manipulate it to load different sections without full page reloads.

Resources for Article:

Further resources on this subject:
Introduction to Akka [article]
An Introduction to ReactJs [article]
ECMAScript 6 Standard [article]
Packt
22 Oct 2009
32 min read

Enhancing the User Interface with Ajax

Since our project is a Web 2.0 application, it should be heavily focused on the user experience. The success of our application depends on getting users to post and share content on it. Therefore, the user interface of our application is one of our major concerns. This article will improve the interface of our application by introducing Ajax features, making it more user-friendly and interactive.

Ajax and Its Advantages

Ajax, which stands for Asynchronous JavaScript and XML, consists of the following technologies:

- HTML and CSS for structuring and styling information.
- JavaScript for accessing and manipulating information dynamically.
- XMLHttpRequest, which is an object provided by modern browsers for exchanging data with the server without reloading the current web page.
- A format for transferring data between the client and server. XML is sometimes used, but it could be HTML, plain text, or a JavaScript-based format called JSON.

Ajax technologies let code on the client-side exchange data with the server behind the scenes, without having to reload the entire page each time the user makes a request. By using Ajax, web developers are able to increase the interactivity and usability of web pages. Ajax offers the following advantages when implemented in the right places:

- Better user experience. With Ajax, the user can do a lot without refreshing the page, which brings web applications closer to regular desktop applications.
- Better performance. By exchanging only the required data with the server, Ajax saves bandwidth and increases the application's speed.

There are numerous examples of web applications that use Ajax. Google Maps and Gmail are perhaps two of the most prominent examples. In fact, these two applications played an important role in spreading the adoption of Ajax, because of the success that they enjoyed.
What sets Gmail apart from other web mail services is its user interface, which enables users to manage their emails interactively without waiting for a page reload after every action. This creates a better user experience and makes Gmail feel like a responsive and feature-rich application rather than a simple web site.

This article explains how to use Ajax with Django so as to make our application more responsive and user-friendly. We are going to implement three of the most common Ajax features found in web applications today. But before that, we will learn about the benefits of using an Ajax framework as opposed to working with raw JavaScript functions.

Using an Ajax Framework in Django

In this section we will choose and install an Ajax framework in our application. This step isn't entirely necessary when using Ajax in Django, but it can greatly simplify working with Ajax. There are many advantages to using an Ajax framework:

- JavaScript implementations vary from browser to browser. Some browsers provide more complete and feature-rich implementations, whereas others contain implementations that are incomplete or don't adhere to standards. Without an Ajax framework, the developer must keep track of browser support for the JavaScript features that they are using, and work around the limitations that are present in some browser implementations of JavaScript. On the other hand, when using an Ajax framework, the framework takes care of this for us; it abstracts access to the JavaScript implementation and deals with the differences and quirks of JavaScript across browsers. This way, we concentrate on developing features instead of worrying about browser differences and limitations.
- The standard set of JavaScript functions and classes is a bit lacking for fully fledged web application development. Various common tasks require many lines of code even though they could have been wrapped in simple functions.
Therefore, even if you decide not to use an Ajax framework, you will find yourself having to write a library of functions that encapsulates JavaScript facilities and makes them more usable. But why reinvent the wheel when there are many excellent Open Source libraries already available?

Ajax frameworks available on the market today range from comprehensive solutions that provide server-side and client-side components to light-weight client-side libraries that simplify working with JavaScript. Given that we are already using Django on the server-side, we only want a client-side framework. In addition, the framework should be easy to integrate with Django without requiring additional dependencies. And finally, it is preferable to pick a light and fast framework.

There are many excellent frameworks that fulfil our requirements, such as Prototype, the Yahoo! UI Library, and jQuery. I have worked with them all and they are all great. But for our application, I'm going to pick jQuery, because it's the lightest of the three. It also enjoys a very active development community and a wide range of plugins. If you already have experience with another framework, you can continue using it during this article. It is true that you will have to adapt the JavaScript code in this article to your framework, but the Django code on the server-side will remain the same no matter which framework you choose.

Now that you know the benefits of using an Ajax framework, we will move on to installing jQuery into our project.

Downloading and Installing jQuery

One of the advantages of jQuery is that it consists of a single light-weight file. To download it, head to http://jquery.com/ and choose the latest version (1.2.3 at the time of writing). You will find two choices:

- Uncompressed version: This is the standard version that I recommend you use during development. You will get a .js file with the library's code in it.
- Compressed version: You will also get a .js file if you download this version.
However, the code will look obfuscated. jQuery developers produce this version by applying many operations to the uncompressed .js file to reduce its size, such as removing white space and renaming variables, as well as many other techniques. This version is useful when you deploy your application, because it offers exactly the same features as the uncompressed one, but with a smaller file size.

I recommend the uncompressed version during development because you may want to look into jQuery's code and see how a particular method works. However, the two versions offer exactly the same set of features, and switching from one to another is just a matter of replacing one file.

Once you have the jquery-xxx.js file (where xxx is the version number), rename it to jquery.js and copy it to the site_media directory of our project (remember that this directory holds static files which are not Python code). Next, you will have to include this file in the base template of our site. This will make jQuery available to all of our project pages. To do so, open templates/base.html and add the highlighted code to the head section in it:

    <head>
        <title>Django Bookmarks | {% block title %}{% endblock %}</title>
        <link rel="stylesheet" href="/site_media/style.css" type="text/css" />
        <script type="text/javascript" src="/site_media/jquery.js"></script>
    </head>

To add your own JavaScript code to an HTML page, you can either put the code in a separate .js file and link it to the HTML page by using the script tag as above, or write the code directly in the body of a script tag:

    <script type="text/javascript">
        // JavaScript code goes here.
    </script>

The first method, however, is recommended over the second one, because it helps keep the source tree organized by putting HTML and JavaScript code in different files. Since we are going to write our own .js files during this article, we need a way to link .js files to templates without having to edit base.html every time.
We will do this by creating a template block in the head section of the base.html template. When a particular page wants to include its own JavaScript code, this block may be overridden to add the relevant script tag to the page. We will call this block external, because it is used to link external files to pages. Open templates/base.html and modify its head section as follows:

    <head>
        <title>Django Bookmarks | {% block title %}{% endblock %}</title>
        <link rel="stylesheet" href="/site_media/style.css" type="text/css" />
        <script type="text/javascript" src="/site_media/jquery.js"></script>
        {% block external %}{% endblock %}
    </head>

And we have finished. From now on, when a view wants to use some JavaScript code, it can link a JavaScript file to its template by overriding the external template block. Before we start to implement Ajax enhancements in our project, let's go through a quick introduction to the jQuery framework.

The jQuery JavaScript Framework

jQuery is a library of JavaScript functions that facilitates interacting with HTML documents and manipulating them. The library is designed to reduce the time and effort spent on writing code and achieving cross-browser compatibility, while at the same time taking full advantage of what JavaScript offers to build interactive and responsive web applications.

The general workflow of using jQuery consists of two steps:

- Select an HTML element or a group of elements to work on.
- Apply a jQuery method to the selected group.

Element Selectors

jQuery provides a simple approach to selecting elements; it works by passing a CSS selector string to a function called $.
Here are some examples to illustrate the usage of this function:

- If you want to select all anchor (<a>) elements on a page, you can use the following function call: $("a")
- If you want to select anchor elements which have the .title CSS class, use $("a.title")
- To select an element whose ID is #nav, you can use $("#nav")
- To select all list item (<li>) elements inside #nav, use $("#nav li")

And so on. The $() function constructs and returns a jQuery object. After that, you can call methods on this object to interact with the selected HTML elements.

jQuery Methods

jQuery offers a variety of methods to manipulate HTML documents. You can hide or show elements, attach event handlers to events, modify CSS properties, manipulate the page structure and, most importantly, perform Ajax requests.

Before we go through some of the most important methods, I highly recommend using the Firefox web browser and an extension called Firebug to experiment with jQuery. This extension provides a JavaScript console that is very similar to the interactive Python console. With it, you can enter JavaScript statements and see their output directly without having to create and edit files. To obtain Firebug, go to http://www.getfirebug.com/, and click on the install link. Depending on the security settings of Firefox, you may need to approve the website as a safe source of extensions. If you do not want to use Firefox for any reason, Firebug's website offers a "lite" version of the extension for other browsers in the form of a JavaScript file.
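As a rough mental model of how a selector string maps onto a set of elements, here is a small sketch that matches the three selector forms above against a toy list of element records. The select helper and the element records are invented for illustration; this is not jQuery's actual selector engine, which handles far more selector syntax than this:

```javascript
// Toy element records standing in for DOM nodes (hypothetical data).
const elements = [
    { tag: "a", id: null, classes: ["title"] },
    { tag: "a", id: null, classes: [] },
    { tag: "div", id: "nav", classes: [] },
];

// A minimal matcher for three selector forms: "a", "a.title", "#nav".
function select(selector, pool) {
    if (selector.startsWith("#")) {
        // ID selector: match on the id attribute.
        return pool.filter((el) => el.id === selector.slice(1));
    }
    // Tag selector, optionally qualified by a class: "tag.class".
    const [tag, cls] = selector.split(".");
    return pool.filter(
        (el) => el.tag === tag && (cls === undefined || el.classes.includes(cls))
    );
}

console.log(select("a", elements).length);       // 2
console.log(select("a.title", elements).length); // 1
console.log(select("#nav", elements).length);    // 1
```

The point of the sketch is only the shape of the API: a selector string goes in, a collection of matched elements comes out, and methods are then applied to that collection.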
Download the file to the site_media directory, and then include it in the templates/base.html template as we did with jquery.js:

    <head>
        <title>Django Bookmarks | {% block title %}{% endblock %}</title>
        <link rel="stylesheet" href="/site_media/style.css" type="text/css" />
        <script type="text/javascript" src="/site_media/firebug.js"></script>
        <script type="text/javascript" src="/site_media/jquery.js"></script>
        {% block external %}{% endblock %}
    </head>

To experiment with the methods outlined in this section, launch the development server and navigate to the application's main page. Open the Firebug console by pressing F12, and try selecting elements and manipulating them.

Hiding and Showing Elements

Let's start with something simple. To hide an element on the page, call the hide() method on it. To show it again, call the show() method. For example, try this on the navigation menu of your application:

    >>> $("#nav").hide()
    >>> $("#nav").show()

You can also animate the element while hiding and showing it. Try the fadeOut(), fadeIn(), slideUp() or slideDown() methods to see two of these animated effects. Of course, these methods (like all other jQuery methods) also work if you select more than one element at once. For example, if you open your user page and enter the following method call into the Firebug console, all of the tags will disappear:

    >>> $('.tags').slideUp()

Accessing CSS Properties and HTML Attributes

Next, we will learn how to change the CSS properties of elements. jQuery offers a method called css() for performing CSS operations.
If you call this method with a CSS property name passed as a string, it returns the value of this property:

    >>> $("#nav").css("display")
    Result: "block"

If you pass a second argument to this method, it sets the specified CSS property of the selected element to the additional argument:

    >>> $("#nav").css("font-size", "0.8em")
    Result: <div id="nav" style="font-size: 0.8em;">

In fact, you can manipulate any HTML attribute, not just CSS properties. To do so, use the attr() method, which works in a similar way to css(). Calling it with an attribute name returns the attribute's value, whereas calling it with an attribute name/value pair sets the attribute to the passed value. To test this, go to the bookmark submission form and enter the following into the console:

    >>> $("input").attr("size", "48")
    Result:
    <input id="id_url" type="text" size="48" name="url">
    <input id="id_title" type="text" size="48" name="title">
    <input id="id_tags" type="text" size="48" name="tags">

(Output may differ slightly depending on the versions of Firefox and Firebug.) This will change the sizes of all input elements on the page at once to 48.

In addition, there are shortcut methods to get and set commonly used attributes, such as val(), which returns the value of an input field when called without arguments, and sets this value to an argument if you pass one. There is also html(), which controls the HTML code inside an element. Finally, there are two methods that can be used to attach or detach a CSS class to an element; they are called addClass() and removeClass(). A third method is provided to toggle a CSS class, and it is called toggleClass(). All of these class methods take the name of the class to be changed as a parameter.

Manipulating HTML Documents

Now that you are comfortable with manipulating HTML elements, let's see how to add new elements or remove existing ones.
To insert HTML code before an element, use the before() method, and to insert code after an element, use the after() method. Notice how jQuery methods are well-named and very easy to remember!

Let's test these methods by inserting parentheses around the tag lists on the user page. Open your user page and enter the following in the Firebug console:

    >>> $(".tags").before("<strong>(</strong>")
    >>> $(".tags").after("<strong>)</strong>")

You can pass any string you want to before() or after(); the string may contain plain text, one HTML element, or more. These methods offer a very flexible way to dynamically add HTML elements to an HTML document.

If you want to remove an element, use the remove() method. For example:

    $("#nav").remove()

Not only does this method hide the element, it also removes it completely from the document tree. If you try to select the element again after using the remove() method, you will get an empty set:

    >>> $("#nav")
    Result: []

Of course, this only removes the elements from the current instance of the page. If you reload the page, the elements will appear again.

Traversing the Document Tree

Although CSS selectors offer a very powerful way to select elements, there are times when you want to traverse the document tree starting from a particular element. For this, jQuery provides several methods. The parent() method returns the parent of the currently selected element. The children() method returns all the immediate children of the selected element. Finally, the find() method returns all the descendants of the currently selected element. All of these methods take an optional CSS selector string to limit the result to elements that match the selector. For example, $("#nav").find("li") returns all the <li> descendants of #nav. If you want to access an individual element of a group, use the get() method, which takes the index of the element as a parameter. $("li").get(0), for example, returns the first <li> element of the selected group.
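The distinction between children() (immediate children only) and find() (all descendants) is easy to demonstrate over a plain JavaScript tree. The node and find helpers below are invented for illustration; real DOM nodes work differently, but the traversal semantics are the same:

```javascript
// Toy tree nodes: each node has a tag, children, and a parent pointer.
function node(tag, children = []) {
    const n = { tag, children, parent: null };
    children.forEach((c) => { c.parent = n; });
    return n;
}

// find(): all descendants, optionally limited to a tag (like a selector).
function find(root, tag) {
    const out = [];
    for (const child of root.children) {
        if (tag === undefined || child.tag === tag) out.push(child);
        out.push(...find(child, tag));   // recurse into the subtree
    }
    return out;
}

// <div><ul><li></li><li></li></ul></div>
const nav = node("div", [node("ul", [node("li"), node("li")])]);

console.log(nav.children.length);           // 1  (immediate children only)
console.log(find(nav, "li").length);        // 2  (all matching descendants)
console.log(find(nav, "li")[0].parent.tag); // "ul"
```

The last line mirrors parent(): starting from a matched descendant, one step up the tree reaches its containing element.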
Handling Events

Next, we will learn about event handlers. An event handler is a JavaScript function that is invoked when a particular event happens, for example, when a button is clicked or a form is submitted. jQuery provides a large set of methods to attach handlers to events; the events of particular interest in our application are mouse clicks and form submissions.

To handle the event of clicking on an element, we select this element and call the click() method on it. This method takes an event handler function as a parameter. Let's try this using the Firebug console. Open the main page of the application, and insert a button after the welcome message:

    >>> $("p").after("<button id=\"test-button\">Click me!</button>")

(Notice that we had to escape the quotations in the string passed to the after() method.) If you try to click this button, nothing will happen, so let's attach an event handler to it:

    >>> $("#test-button").click(function () { alert("You clicked me!"); })

Now, when you click the button, a message box will appear. How did this work? The argument that we passed to click() may look a bit complicated, so let's examine it again:

    function () { alert("You clicked me!"); }

This appears to be a function declaration, but without a function name. Indeed, this construct creates what is called an anonymous function in JavaScript terminology, and it is used when you need to create a function on the fly and pass it as an argument to another function. We could have avoided using anonymous functions and declared the event handler as a regular function:

    >>> function handler() { alert("You clicked me!"); }
    >>> $("#test-button").click(handler)

The above code achieves the same effect, but the first version is more concise and compact. I highly recommend you to get used to anonymous functions in JavaScript (if you are not already), as I'm sure you will appreciate this construct and find it more readable after using it for a while.
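To see that a named function and an anonymous function behave identically as callbacks, here is a small, self-contained sketch. No DOM is needed; the on/fire registry is invented to stand in for jQuery's click() binding and the click event itself:

```javascript
// A minimal handler registry standing in for click() and the click event.
const handlers = [];
function on(handler) { handlers.push(handler); }
function fire() { return handlers.map((h) => h()); }

// Named handler, bound by name...
function handler() { return "You clicked me!"; }
on(handler);

// ...and an equivalent anonymous function, created inline at the call site.
on(function () { return "You clicked me!"; });

console.log(fire()); // [ 'You clicked me!', 'You clicked me!' ]
```

Both registrations store a reference to a function and invoke it the same way; the only difference is whether the function also has a name in the enclosing scope.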
Handling form submissions is very similar to handling mouse clicks. First, you select the form, and then you call the submit() method on it and pass the handler as an argument. We will use this method many times while adding Ajax features to our project in later sections.

Sending Ajax Requests

Before we finish this section, let's talk about Ajax requests. jQuery provides many ways to send Ajax requests to the server. There is, for example, the load() method, which takes a URL and loads the page at this URL into the selected element. There are also methods for sending GET or POST requests and receiving the results. We will examine these methods in more depth while implementing Ajax features in our project.

What Next?

This wraps up our quick introduction to jQuery. The information provided in this section will be enough to continue with this article, and once you finish the article, you will be able to implement many interesting Ajax features on your own. But please keep in mind that this jQuery introduction is only the tip of the iceberg. If you want a comprehensive treatment of the jQuery framework, I highly recommend the book "Learning jQuery" from Packt Publishing, as it covers jQuery in much more detail. You can find out more about the book at: http://www.packtpub.com/jQuery

Implementing Live Searching of Bookmarks

We will start introducing Ajax into our application by implementing live searching. The idea behind this feature is simple: when the user types a few keywords into a text field and clicks search, a script works behind the scenes to fetch search results and present them on the same page. The search page does not reload, thus saving bandwidth and providing a better and more responsive user experience.

Before we start implementing this, we need to keep in mind an important rule while working with Ajax: write your application so that it works without Ajax, and then introduce Ajax to it.
If you do so, you ensure that everyone will be able to use your application, including users who don't have JavaScript enabled and those who use browsers without Ajax support.

Implementing Searching

So before we work with Ajax, let's write a simple view that searches bookmarks by title. First of all, we need to create a search form, so open bookmarks/forms.py and add the following class to it:

    class SearchForm(forms.Form):
        query = forms.CharField(
            label='Enter a keyword to search for',
            widget=forms.TextInput(attrs={'size': 32})
        )

As you can see, it's a pretty straightforward form class with only one text field. This field will be used by the user to enter search keywords. Next, let's create a view for searching. Open bookmarks/views.py and enter the following code into it:

    def search_page(request):
        form = SearchForm()
        bookmarks = []
        show_results = False
        if request.GET.has_key('query'):
            show_results = True
            query = request.GET['query'].strip()
            if query:
                form = SearchForm({'query': query})
                bookmarks = Bookmark.objects.filter(title__icontains=query)[:10]
        variables = RequestContext(request, {
            'form': form,
            'bookmarks': bookmarks,
            'show_results': show_results,
            'show_tags': True,
            'show_user': True
        })
        return render_to_response('search.html', variables)

Apart from a couple of method calls, the view should be very easy to understand. We first initialize three variables: form, which holds the search form; bookmarks, which holds the bookmarks that we will display in the search results; and show_results, which is a Boolean flag. We use this flag to distinguish between two cases:

- The search page was requested without a search query. In this case, we shouldn't display any search results, not even a "No bookmarks found" message.
- The search page was requested with a search query. In this case, we display the search results, or a "No bookmarks found" message if the query does not match any bookmarks.
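The core of the view — strip the query, match case-insensitively against titles, and keep at most ten results — can be emulated outside Django. This is a hypothetical in-memory sketch of that filtering logic over plain objects, not the ORM itself:

```javascript
// In-memory stand-in for Bookmark.objects and the title__icontains lookup.
const bookmarks = [
    { title: "Django tutorial" },
    { title: "Learning jQuery" },
    { title: "django snippets" },
];

function searchByTitle(query, limit = 10) {
    const q = query.trim().toLowerCase();
    if (!q) return [];                       // blank/whitespace query: no results
    return bookmarks
        .filter((b) => b.title.toLowerCase().includes(q)) // icontains
        .slice(0, limit);                    // the [:10] slice
}

console.log(searchByTitle("django").length); // 2  (match is case-insensitive)
console.log(searchByTitle("   ").length);    // 0  (whitespace is stripped first)
```

The strip() check in the view plays the same role as the trim() guard here: a query of pure whitespace binds no form and hits no database.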
We need the show_results flag because the bookmarks variable alone is not enough to distinguish between the above two cases. bookmarks will be empty when the search page is requested without a query, and it will also be empty when the query does not match any bookmarks. Next, we check whether a query was sent by calling the has_key method on the request.GET dictionary:

    if request.GET.has_key('query'):
        show_results = True
        query = request.GET['query'].strip()
        if query:
            form = SearchForm({'query': query})
            bookmarks = Bookmark.objects.filter(title__icontains=query)[:10]

We use GET instead of POST here because the search form does not create or change data; it merely queries the database. The general rule is to use GET with forms that query the database, and POST with forms that create, change, or delete records from the database.

If a query was submitted by the user, we set show_results to True and call strip() on the query string to ensure that it contains non-whitespace characters before we proceed with searching. If this is indeed the case, we bind the form to the query and retrieve a list of bookmarks that contain the query in their titles.

Searching is done by using a method called filter in Bookmark.objects. This is the first time that we have used this method; you can think of it as the equivalent of a SELECT statement in Django models. It receives the search criteria in its arguments and returns the search results. The name of each argument must adhere to the following naming convention:

    field__operator

Note that field and operator are separated by two underscores: field is the name of the field that we want to search by, and operator is the lookup method that we want to use. Here is a list of the commonly used operators:

- exact: The value of the argument is an exact match of the field.
- contains: The field contains the value of the argument.
- startswith: The field starts with the value of the argument.
- lt: The field is less than the value of the argument.
- gt: The field is greater than the value of the argument.

Also, there are case-insensitive versions of the first three operators: iexact, icontains, and istartswith.

After this explanation of the filter method, let's get back to our search view. We use the icontains operator to get a list of bookmarks that match the query, and retrieve the first ten items using Python's list slicing syntax. Finally, we pass all the variables to a template called search.html to render the search page. Now create the search.html template in the templates directory with the following content:

    {% extends "base.html" %}
    {% block title %}Search Bookmarks{% endblock %}
    {% block head %}Search Bookmarks{% endblock %}
    {% block content %}
    <form id="search-form" method="get" action=".">
      {{ form.as_p }}
      <input type="submit" value="search" />
    </form>
    <div id="search-results">
      {% if show_results %}
        {% include 'bookmark_list.html' %}
      {% endif %}
    </div>
    {% endblock %}

The template consists of familiar elements that we have used before. We build the results list by including bookmark_list.html, as we did when building the user and tag pages. We gave the search form an ID, and rendered the search results in a div identified by another ID, so that we can interact with them using JavaScript later. Notice how many times the include template tag has saved us from writing additional code? It also lets us modify the look of the bookmarks list by editing a single file. This Django template feature is indeed very helpful in organizing and managing templates. Before you test the new view, add an entry for it in urls.py:

    urlpatterns = patterns('',
        # Browsing
        (r'^$', main_page),
        (r'^user/(\w+)/$', user_page),
        (r'^tag/([^\s]+)/$', tag_page),
        (r'^tag/$', tag_cloud_page),
        (r'^search/$', search_page),
    )

Now test the search view by navigating to http://127.0.0.1:8000/search/ and experimenting with it.
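The field__operator naming convention described above is just a string split away from a predicate. As a rough illustration, this sketch parses lookup names such as "title__icontains" into a field and an operator and applies them to plain objects; the operators table and the filter helper are invented for illustration and are not Django's implementation:

```javascript
// Map operator names to predicate functions, mirroring the operator list above.
const operators = {
    exact: (field, value) => field === value,
    contains: (field, value) => field.includes(value),
    startswith: (field, value) => field.startsWith(value),
    lt: (field, value) => field < value,
    gt: (field, value) => field > value,
    icontains: (field, value) =>
        field.toLowerCase().includes(value.toLowerCase()),
};

// filter(rows, {"title__icontains": "django"}) — a toy version of the
// QuerySet.filter keyword convention: every criterion must match.
function filter(rows, criteria) {
    return rows.filter((row) =>
        Object.entries(criteria).every(([key, value]) => {
            const [field, op] = key.split("__");
            return operators[op](row[field], value);
        })
    );
}

const rows = [{ title: "Django tips" }, { title: "jQuery tips" }];
console.log(filter(rows, { title__icontains: "django" }).length);  // 1
console.log(filter(rows, { title__startswith: "jQuery" }).length); // 1
```

In real Django the same convention is compiled into SQL (a LIKE clause for icontains, a comparison for lt/gt, and so on) rather than evaluated in memory.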
You can also add a link to it in the navigation menu if you want; edit templates/base.html and add the highlighted code:

    <div id="nav">
      <a href="/">home</a> |
      {% if user.is_authenticated %}
        <a href="/save/">submit</a> |
        <a href="/search/">search</a> |
        <a href="/user/{{ user.username }}/">{{ user.username }}</a> |
        <a href="/logout/">logout</a>
      {% else %}
        <a href="/login/">login</a> |
        <a href="/register/">register</a>
      {% endif %}
    </div>

We now have a functional (albeit very basic) search page. Thanks to our modular code, the task will turn out to be much simpler than it may seem.

Implementing Live Searching

To implement live searching, we need to do two things:

- Intercept and handle the event of submitting the search form. This can be done using the submit() method of jQuery.
- Use Ajax to load the search results behind the scenes, and insert them into the page. This can be done using the load() method of jQuery, as we will see next.

jQuery offers a method called load() that retrieves a page from the server and inserts its contents into the selected element. In its simplest form, the function takes the URL of the remote page to be loaded as a parameter.

First of all, let's modify our search view a little so that it only returns search results, without the rest of the search page, when it receives an additional GET variable called ajax. We do so to enable JavaScript code on the client-side to easily retrieve search results without the rest of the search page's HTML. This can be done by simply using the bookmark_list.html template instead of search.html when request.GET contains the key ajax. Open bookmarks/views.py and modify search_page (towards the end) so that it becomes as follows:

    def search_page(request):
        [...]
        variables = RequestContext(request, {
            'form': form,
            'bookmarks': bookmarks,
            'show_results': show_results,
            'show_tags': True,
            'show_user': True
        })
        if request.GET.has_key('ajax'):
            return render_to_response('bookmark_list.html', variables)
        else:
            return render_to_response('search.html', variables)

Next, create a file called search.js in the site_media directory and link it to templates/search.html like this:

    {% extends "base.html" %}
    {% block external %}
      <script type="text/javascript" src="/site_media/search.js"></script>
    {% endblock %}
    {% block title %}Search Bookmarks{% endblock %}
    {% block head %}Search Bookmarks{% endblock %}
    [...]

Now for the fun part! Let's create a function that loads search results and inserts them into the corresponding div. Write the following code into site_media/search.js:

    function search_submit() {
        var query = $("#id_query").val();
        $("#search-results").load(
            "/search/?ajax&query=" + encodeURIComponent(query)
        );
        return false;
    }

Let's go through this function line by line:

- The function first gets the query string from the text field using the val() method.
- We use the load() method to get search results from the search_page view and insert them into the #search-results div.
- The request URL is constructed by first calling encodeURIComponent on query, which works exactly like the urlencode filter we used in Django templates. Calling this function is important to ensure that the constructed URL remains valid even if the user enters special characters into the text field, such as &. After escaping query, we concatenate it with /search/?ajax&query=. This URL invokes the search_page view and passes the GET variables ajax and query to it. The view returns the search results, and the load() method in turn loads them into the #search-results div.
- We return false from the function to tell the browser not to submit the form after calling our handler.
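encodeURIComponent is a standard JavaScript global, so the effect it has on the constructed URL can be checked outside the browser. The buildSearchUrl helper below is just a named wrapper around the same expression search_submit uses:

```javascript
// Build the Ajax URL the same way search_submit does, including for a
// query containing characters that would otherwise break the query string.
function buildSearchUrl(query) {
    return "/search/?ajax&query=" + encodeURIComponent(query);
}

console.log(buildSearchUrl("django"));
// "/search/?ajax&query=django"
console.log(buildSearchUrl("tips & tricks"));
// "/search/?ajax&query=tips%20%26%20tricks"
```

Without the escaping, the literal & in "tips & tricks" would start a new GET variable, and the server would see query=tips plus a stray parameter instead of the full search phrase.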
If we don't return false from the function, the browser will continue to submit the form as usual, and we don't want that.

One little detail remains: where and when do we attach search_submit to the submit event of the search form? A rule of thumb when writing JavaScript is that we cannot manipulate elements in the document tree before the document finishes loading. Therefore, our function must be invoked as soon as the search page is loaded. Fortunately for us, jQuery provides a method to execute a function when the HTML document is loaded. Let's utilize it by appending the following code to site_media/search.js:

    $(document).ready(function () {
        $("#search-form").submit(search_submit);
    });

$(document) selects the document element of the current page. Notice that there are no quotations around document; it's a variable provided by the browser, not a string. ready() is a method that takes a function and executes it as soon as the selected element finishes loading. So, in effect, we are telling jQuery to execute the passed function as soon as the HTML document is loaded. We pass an anonymous function to the ready() method; this function simply binds search_submit to the submit event of the #search-form form.

That's it. We've implemented live searching with less than fifteen lines of code. To test the new functionality, navigate to http://127.0.0.1:8000/search/, submit queries, and notice how the results are displayed without reloading the page.

The information covered in this section can be applied to any form that needs to be processed behind the scenes without reloading the page. You can, for example, create a comment form with a preview button that loads the preview into the same page without reloading. In the next section, we will enhance the user page to let users edit their bookmarks in place, without navigating away from the user page.

Editing Bookmarks in Place

Editing of posted content is a very common task in web sites.
It's usually implemented by offering an edit link next to the content. When clicked, this link takes the user to a form located on another page where the content can be edited. When the user submits the form, they are redirected back to the content page.

Imagine, on the other hand, that you could edit content without navigating away from the content page. When you click edit, the content is replaced with a form. When you submit the form, it disappears and the updated content appears in its place. Everything happens on the same page; edit form rendering and submission are done using JavaScript and Ajax. Wouldn't such a workflow be more intuitive and responsive?

The technique described above is called in-place editing. It is now finding its way into web applications and becoming more common. We will implement this feature in our application by letting the user edit their bookmarks in place on the user page. Since our application doesn't support the editing of bookmarks yet, we will implement this first, and then modify the editing procedure to work in place.

Implementing Bookmark Editing

We already have most of the parts that are needed to implement bookmark editing, thanks to the get_or_create method provided by data models. This little detail greatly simplifies the implementation of bookmark editing. Here is what we need to do:

- We pass the URL of the bookmark that we want to edit as a GET variable named url to the bookmark_save_page view.
- We modify bookmark_save_page so that it populates the fields of the bookmark form if it receives the GET variable. The form is populated with the data of the bookmark that corresponds to the passed URL.
- When the populated form is submitted, the bookmark will be updated as we explained earlier, because it will look like the user submitted the same URL another time.
Before we implement the technique described above, let's reduce the size of bookmark_save_page by moving the part that saves a bookmark to a separate function. We will call this function _bookmark_save. The underscore at the beginning of the name tells Python not to import this function when the views module is imported. The function expects a request and a valid form object as parameters; it saves a bookmark out of the form data, and returns this bookmark. Open bookmarks/views.py and create the following function; you can cut and paste the code from bookmark_save_page if you like, as we are not making any changes to it except for the return statement at the end.

def _bookmark_save(request, form):
    # Create or get link.
    link, dummy = Link.objects.get_or_create(url=form.clean_data['url'])
    # Create or get bookmark.
    bookmark, created = Bookmark.objects.get_or_create(
        user=request.user,
        link=link
    )
    # Update bookmark title.
    bookmark.title = form.clean_data['title']
    # If the bookmark is being updated, clear old tag list.
    if not created:
        bookmark.tag_set.clear()
    # Create new tag list.
    tag_names = form.clean_data['tags'].split()
    for tag_name in tag_names:
        tag, dummy = Tag.objects.get_or_create(name=tag_name)
        bookmark.tag_set.add(tag)
    # Save bookmark to database and return it.
    bookmark.save()
    return bookmark

Now in the same file, replace the code that you removed from bookmark_save_page with a call to _bookmark_save:

@login_required
def bookmark_save_page(request):
    if request.method == 'POST':
        form = BookmarkSaveForm(request.POST)
        if form.is_valid():
            bookmark = _bookmark_save(request, form)
            return HttpResponseRedirect(
                '/user/%s/' % request.user.username
            )
    else:
        form = BookmarkSaveForm()
    variables = RequestContext(request, {
        'form': form
    })
    return render_to_response('bookmark_save.html', variables)

The current logic in bookmark_save_page works like this:

if there is POST data:
    Validate and save bookmark.
    Redirect to user page.
else:
    Create an empty form.
Render page.
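The `created` flag returned by `get_or_create` is what lets `_bookmark_save` distinguish a new bookmark from an update, and it is the reason the old tag list is cleared only on updates. Here is a minimal stand-alone sketch of those semantics, using a plain dictionary in place of Django's ORM; the helper names here are illustrative, not Django's:

```python
# Illustrative stand-in for Django's Model.objects.get_or_create():
# returns (object, created), where created is True only when the key was new.
def get_or_create(store, key, default):
    if key in store:
        return store[key], False
    store[key] = default
    return store[key], True

bookmarks = {}

def save_bookmark(user, url, title, tags):
    # Create or get the bookmark for (user, url), as _bookmark_save does.
    bookmark, created = get_or_create(
        bookmarks, (user, url), {'title': '', 'tags': set()}
    )
    bookmark['title'] = title
    # On an update, the old tag list must be cleared first;
    # otherwise stale tags would accumulate across edits.
    if not created:
        bookmark['tags'].clear()
    bookmark['tags'].update(tags.split())
    return bookmark
```

Saving the same URL twice replaces the tag list instead of merging it, which is exactly the behavior that bookmark editing relies on.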
To implement bookmark editing, we need to slightly modify the logic as follows:

if there is POST data:
    Validate and save bookmark.
    Redirect to user page.
else if there is a URL in GET data:
    Create a form and populate it with the URL's bookmark.
else:
    Create an empty form.
Render page.

Let's translate the above pseudo code into Python. Modify bookmark_save_page in bookmarks/views.py so that it looks like the following (the new code is the import at the top and the elif branch):

from django.core.exceptions import ObjectDoesNotExist

@login_required
def bookmark_save_page(request):
    if request.method == 'POST':
        form = BookmarkSaveForm(request.POST)
        if form.is_valid():
            bookmark = _bookmark_save(request, form)
            return HttpResponseRedirect(
                '/user/%s/' % request.user.username
            )
    elif request.GET.has_key('url'):
        url = request.GET['url']
        title = ''
        tags = ''
        try:
            link = Link.objects.get(url=url)
            bookmark = Bookmark.objects.get(
                link=link,
                user=request.user
            )
            title = bookmark.title
            tags = ' '.join(
                tag.name for tag in bookmark.tag_set.all()
            )
        except ObjectDoesNotExist:
            pass
        form = BookmarkSaveForm({
            'url': url,
            'title': title,
            'tags': tags
        })
    else:
        form = BookmarkSaveForm()
    variables = RequestContext(request, {
        'form': form
    })
    return render_to_response('bookmark_save.html', variables)

This new section of the code first checks whether a GET variable called url exists. If it does, it loads the corresponding Link and Bookmark objects for this URL, and binds all the data to a bookmark saving form. You may wonder why we load the Link and Bookmark objects in a try-except construct that silently ignores exceptions. Indeed, it would be perfectly valid to raise an Http404 exception if no bookmark was found for the requested URL, but our code chooses to only populate the URL field in this situation, leaving the title and tags fields empty.
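The form-population branch is pure control flow and can be isolated from Django entirely. The following is a hypothetical sketch with plain dictionaries standing in for the request parameters and the bookmark store; none of these helper names come from the book's code:

```python
def initial_form_data(get_params, user, bookmarks):
    """Return the data a bookmark form should be pre-filled with.

    bookmarks maps (user, url) -> {'title': ..., 'tags': [...]}.
    """
    url = get_params.get('url')
    if url is None:
        return {}                      # no URL given: empty form
    data = {'url': url, 'title': '', 'tags': ''}
    bookmark = bookmarks.get((user, url))
    if bookmark is not None:           # known URL: populate title and tags
        data['title'] = bookmark['title']
        data['tags'] = ' '.join(bookmark['tags'])
    return data                        # unknown URL: only the url field is filled
```

Just like the view, the sketch silently falls back to a URL-only form when the bookmark does not exist, rather than raising an error.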
Packt
15 Oct 2015
27 min read

Testing Your Site and Monitoring Your Progress

In this article by Michael David, author of the book WordPress Search Engine Optimization, you will learn about Google Analytics and Google Webmaster/Search Console. Once you have built your website and started promoting it, you'll want to monitor your progress to ensure that your hard work is yielding high rankings, search engine visibility, and web traffic. In this article, we'll cover a range of tools with which you will monitor the quality of your website, learn how search spiders interact with your site, measure your rankings in search engines for various keywords, and analyze how your visitors behave when they are on your site. With this information, you can gauge your progress and make adjustments to your strategy.

(For more resources related to this topic, see here.)

Obviously, you'll want to know where your web traffic is coming from, what search terms are being used to find your website, and where you are ranking in the search engines for each of these terms. This information will allow you to see what you still need to work on in terms of building links to the pages on your website. There are five main tools you will use to analyze your site and evaluate your traffic and rankings, and in this article, we will cover each in turn. They are Google Analytics, Google Search Console (formerly Webmaster Tools), HTML Validator, Bing Webmaster, and Link Assistant's Rank Tracker. As an alternative to Bing Webmaster, you may also want to employ Majestic SEO to check your backlinks.

Google Analytics

Google Analytics monitors and analyzes your website's traffic. With this tool, you can see how many website visitors you have had, whether they found your site through the search engines or clicked through from another website, how these visitors behaved once they were on your site, and much more.
You can even connect your Google AdSense account to one or more of the domains you are monitoring in Google Analytics to get information about which pages and keywords generate the most income. While there are other analytics services available, none match the scope and scale of what Google Analytics offers.

Setting up Google Analytics for your website

To set up Google Analytics for your website, perform the following steps:

1. To sign up for Google Analytics, visit http://www.google.com/analytics/. In the top-right corner of the page, you'll see a button that says Sign In to Google Analytics. You'll need to have a Google account before signing in.
2. Click Sign up on the next page if you haven't already, and on the following page enter your website's URL and time zone. If you have more than one website, just start with one; you can add the other sites later. The account name you use is not important, but you can add all of your sites to one account, so you might want to use something generic like your name or your business name.
3. Select the time zone country and time zone, and then click on Continue. Enter your name and country, and click on Continue again. Then you will need to accept the Google Analytics terms of service.
4. After you have accepted the terms, you will be given a snippet of HTML code that you'll need to insert into the pages of your website. The snippet will look like the following:

<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');

ga('create', 'UA-xxxxxx-x', 'auto');
ga('send', 'pageview');
</script>

The code must be placed just before the closing head tag on each page of your website. There are several ways to install the code.
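If you ever insert the snippet programmatically rather than through a template option or plugin, the placement rule reduces to string manipulation: put the snippet immediately before the closing </head> tag. A rough sketch of that transformation follows; this is a naive string replacement for illustration, not a real HTML parser:

```python
def inject_before_head_close(html, snippet):
    # Place the tracking snippet just before </head>, as Google requires.
    marker = '</head>'
    pos = html.lower().find(marker)   # tolerate </HEAD> etc.
    if pos == -1:
        return html  # no head section found; leave the page untouched
    return html[:pos] + snippet + '\n' + html[pos:]

page = '<html><head><title>Demo</title></head><body></body></html>'
tracked = inject_before_head_close(page, '<script>/* GA code */</script>')
```

A template option or a plugin remains the safer route on a live WordPress site, since it survives theme updates.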
Check first to see if your WordPress template offers a field to insert the code in the template admin area; most modern templates offer this feature. If not, you can insert the code manually just before the closing </head> tag, which you will find in the file called header.php within your WordPress template files. There is a third way to do this if you don't want to tinker with your WordPress code: you can download and install one of the many WordPress analytics plugins. Two sound choices for an analytics plugin would be Google Analyticator or Google Analytics by Yoast; both plugins are available at WordPress.org.

If you have more than one website, you can add more websites within your analytics account. Personally, I like to keep one website per analytics account just to keep things neat, but some like to have all their websites under one analytics account. To add a second or third website to an analytics account, navigate to Admin on the top menu bar, pull down the menu under Account in the left column, and then click Create a New Account. When you are done adding websites, click Home on the top menu bar to navigate to the main analytics page. From here, you can select a web property.

This is the screen that greets you when you log in to Google Analytics, the Audience Overview report. It offers an effective quick look at how your website is performing.

Using Google Analytics

Once you have installed Google Analytics on your websites, you'll need to wait a few weeks, or perhaps longer, for Analytics to collect enough data from your websites to be useful. Remember that with website analytics, as with any type of statistical analysis, larger sets of data reveal more accurate and useful information. It may not take that long if you have a high-traffic site, but if you have a fairly new site that doesn't get a lot of traffic, it will take some time, so come back in a few weeks to a month to check how your sites are doing.
The Audience Overview report is your default report, displayed when you first log in to Analytics, and it gives you a quick overview of each site's traffic. You can see at a glance whether the tracking code is receiving data, how many visits (sessions) your website has gotten in the past 30 days, the average time visitors stay on your site (session duration), your bounce rate (the percentage of users who come to your site and leave without visiting another page), and the percentage of new sessions (users who haven't visited before) in the past 30 days.

The data displayed on the dashboard is just a small taste of what you can learn from Google Analytics. To drill down to more detailed information, you'll navigate to other sections of Analytics using the left menu. Your main report areas in Analytics are Audience (who your visitors are), Acquisition (how your visitors found your site), Behavior (what your visitors did on your site), and Conversions (did they make a purchase or complete a contact form). Within any of the main report areas are dozens of available reports. Google Analytics can offer you tremendous detail on almost any imaginable metric related to your website. Most of your time, however, is best spent with a few key reports.

Understanding key analytics reports

Your key analytics reports will be the Audience Overview, Acquisition Overview, and Behavior Overview. You access each of these reports by navigating to the left menu area; each corresponding overview report is the first link in each list. The Audience Overview report is your default report, discussed earlier. The Acquisition Overview report drills down into how your visitors are finding your site, whether it is through organic search, pay-per-click programs, social media, or referrals from other websites. These different pathways by which customers find your site are referred to as channels or mediums on other reports, but mean essentially the same thing.
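The overview metrics and channel groupings described above are straightforward to compute from raw session records, which makes it easier to understand what the reports are actually showing. A small sketch, assuming each session records its acquisition channel, page count, and whether the visitor is new (illustrative data, not the Analytics API):

```python
def acquisition_summary(sessions):
    """sessions: dicts with 'channel' (str), 'pages' (int), 'new' (bool)."""
    total = len(sessions)
    bounces = sum(1 for s in sessions if s['pages'] == 1)  # left after one page
    new = sum(1 for s in sessions if s['new'])
    by_channel = {}
    for s in sessions:                                     # sessions per channel
        by_channel[s['channel']] = by_channel.get(s['channel'], 0) + 1
    return {
        'sessions': total,
        'bounce_rate': round(100.0 * bounces / total, 1),
        'pct_new_sessions': round(100.0 * new / total, 1),
        'by_channel': by_channel,
    }

summary = acquisition_summary([
    {'channel': 'organic', 'pages': 1, 'new': True},
    {'channel': 'organic', 'pages': 4, 'new': False},
    {'channel': 'referral', 'pages': 1, 'new': True},
    {'channel': 'direct', 'pages': 2, 'new': True},
])
```

The channel tallies are exactly what the Acquisition Overview graphs: the relative strength of each inbound pathway.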
The Acquisition Overview report also shows some very basic conversion data, although the reports in the Conversions section offer much more meaningful conversion data. The following screenshot is an Acquisition Overview report:

Why is the Acquisition Overview report important? It shows us the relative strength of our inbound channels. In the case above, organic search is delivering the highest number of users. This tells us that we are getting most of our users from the organic search results; our organic campaign is running strong. Referrals generated 384 visits, which means we've got good links delivering traffic. Our 369 direct visitors don't come from other websites; a direct visit begins with a fresh browser page where users type our URL directly or follow a bookmark they've saved. That is a welcome figure, because it means we've got strong brand and name recognition, and in the conversions column on the right side of the table, we can see that our direct traffic generated a measurable conversion rate. That's a positive sign that our brand recognition and reputation are strong.

The Behavior Overview report tells us how users behave once they visit our site. Are they bouncing immediately or viewing several pages? What content are they viewing the most? Here's a sample Behavior Overview report:

Some information is repeated on the Behavior Overview report, such as Pageviews and Bounce Rate. What is important here is the table under the graph. This table shows you the relative popularity of your pages. This data is important because it shows you what content is generating the most user interest. As you can see from the table, the page titled /add-sidebar-wordpress generated over 1,000 pageviews, more than the home page, which is indicated in Analytics by a single slash (/). This table shows you your most popular content; these pages are your greatest success. And remember, you can click on any individual page and then see individual metrics for that page.
One way to maximize your site earnings is to focus on improving the performance of the pages that are already earning money. Chances are, you earn at least 80 percent of your income from the top 20 percent of the pages on your site. Focus on improving the rankings for those pages in order to get the best return for your efforts.

With any analytics report, the statistics shown will by default cover the past month, but you can adjust the time period by clicking on the down arrow next to the dates in the upper-right corner.

Setting up automated analytics reports

You can also use Google Analytics to e-mail you daily, weekly, or monthly reports. To enable this feature, simply navigate to the report that you'd like to have sent to you. Then, click the Email link just under the report title. The following pop-up will appear:

The Frequency field lets you determine how often the report is sent, or you can simply send the report one time.

Google Webmasters/Search Console

Now, we are going to go into a bit more detail and show you how to use Google Webmaster Tools to obtain information that you can use to improve your website.

Understanding your website's search queries

The Search Console now shows you data on what search queries users entered in Google search to find their way to your website. This is valuable: it teaches you which query terms are effectively delivering customers. To see the report, expand Search Traffic on the left navigation menu and select Search Analytics. The following report will display:

Examine the top queries that Search Console shows you are getting traffic for, and add them to your list of keywords to work on, if they are not already there. You will probably find that it is most beneficial to focus on those keywords that are currently ranked between #4 and #10 in Google, to try to get them moved up to one of the top three spots. You'll also want to check for crawl errors on each of your sites while you work in the Search Console.
A crawl error is an error that Google's spider encounters when trying to find pages on your site. A dead link (a link to a page on your site that no longer exists) is a common and perfect example of a crawl error. Crawl errors are detrimental to rankings because they send a poor quality signal to search engines. Remember that Google wants to deliver a great experience to users of its search engine and properties; dead links and lost pages do not deliver a great experience.

To see the crawl error data, expand the Crawl link on the left navigation bar and then click Crawl Errors. The crawl error report will show you any pages that Google attempted to read but could not reach. Not all entries on this report are bad news. If you remove a piece of content that Google crawled in the past, Google will attempt for months to find that piece of content again, so a removed page will generate a crawl error. Crawl errors are also generated if you have a sitemap file with page entries that no longer exist. Even inbound links from third-party websites to pages that don't exist will generate crawl errors.

Other errors you find might be the result of a typo or another mistake that requires going into a specific page on your website to fix. For example, perhaps you made a mistake typing in the URL when linking from one page to another, resulting in a 404 error. To fix that, you need to edit the page that the faulty URL was linked from and correct the URL. Other errors might require editing your template files to fix multiple errors simultaneously. For example, if you see that the Googlebot is getting a 404 (page not found) error every time it attempts to crawl the comments feed for a post, then the template file probably doesn't have the right format for creating those URLs. Once you correct the template file, all of the crawl errors related to that problem will be fixed.
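One of the crawl-error causes above, sitemap entries that point at removed pages, is easy to audit yourself before Google flags it. A hedged sketch of that audit as simple set arithmetic over URL lists; a real audit would first fetch and parse the sitemap and the list of live pages:

```python
def stale_sitemap_entries(sitemap_urls, live_urls):
    # Any sitemap URL with no matching live page will eventually
    # show up in Search Console as a 404 crawl error.
    return sorted(set(sitemap_urls) - set(live_urls))

stale = stale_sitemap_entries(
    ['/about/', '/old-post/', '/services/'],   # what the sitemap advertises
    ['/about/', '/services/', '/new-post/'],   # what actually exists
)
```

Removing the stale entries (or redirecting the old URLs) clears the corresponding errors at the source.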
There are other things you can do in Google Webmaster Tools; for example, you can check how many links to your site Google is detecting.

Checking your website's code with an HTML Validator

HyperText Markup Language (HTML) is a coding standard with a reasonable degree of complexity. HTML standards develop over time, and valid HTML code displays more accurately in a wider range of browsers. Sites that have substantial numbers of HTML coding errors can potentially be punished by search engines. For this reason, you should periodically check your website for HTML coding errors. There are two principal tools that web professionals use to check the quality of their websites' code: the W3C HTML Validator and the CSE HTML Validator.

The W3C HTML Validator (http://validator.w3.org) is the less robust of the two validators, but it is free. Both validators work in the same way: they examine the code on your website and issue a report advising you of any errors. The CSE HTML Validator (http://htmlvalidator.com) is not a free tool, but you can get a free trial of the software that is good for 30 days or 200 validations, whichever comes first. This software catches errors in HTML, XHTML, CSS, PHP, and JavaScript. It also checks your links to ensure that they are all valid, and it includes an accessibility checker, a spell checker, and an SEO checker. With all of these functions, there is a good chance that if your website has any problems, the CSE HTML Validator will be able to find them.

After downloading and installing the demo version of CSE HTML Validator, you will be given the option to go to a page that contains two video demos. It is a good idea to watch these videos, or at least the HTML validation video, before trying to use the program. They are not very long, and watching them will reduce the amount of time it takes you to learn to use the program.
To validate a file that is on your hard drive, first open the file in the editor, then click the down arrow next to the Validate button on the task bar. You will see several options. Selecting Full will give you not only errors, but messages containing suggestions as well. Errors only will show you only actual errors, and Errors and warnings only will also flag things that could be errors but might not be. You can experiment with the different options to see which one you like best. After you select a validation option, a box will appear at the bottom of the screen listing all of the errors, as well as warnings and messages, depending on the option you chose.

You might be surprised at how many errors there are, especially if you are using WordPress to create the code for your site. The code is often not as clean as you might expect it to be. However, not all of the errors you see will be things that you need to worry about or correct. Yes, it is better to have a website with perfect coding, but one of the advantages of using WordPress is that you don't have to know how to code to build a website. If you do not know anything about HTML coding, you may do more harm than good by trying to fix minor errors, such as the omission of a slash at the end of a tag. Most of these errors will not cause problems anyhow. Look through the errors and fix the ones you know how to fix. If you are completely mystified by what you see here, don't worry about it too much unless you are having a problem with the way your website loads or displays. If the errors are causing problems, you'll either have to learn a bit about coding or hire someone who knows what they're doing to fix your website.

If you want to be able to check the code of an entire website at once, you'll need to buy the Pro version of CSE HTML Validator. You can then use the batch wizard to check your website. This feature is not available in the Standard or Lite versions of the software.
To use the batch wizard, click on Tools, then Batch Wizard. A new window will pop up, allowing you to choose the files you want to check. Click on the button with the large green plus sign, and select the appropriate option to add files or URLs to your list. You can add files individually, add an entire folder, or even add a URL. To check an entire site, you can add the root file for your domain from your hard drive, or you can add the URL for the root domain. Once you have added your target, click on it. Now, click on Target in the main menu bar, then on Properties. Click on the Follow Links tab in the box that pops up, then check the box in front of Follow and validate links. Click on the OK button and then click on the Process button to start validating.

Checking your inbound link count with Bing Webmaster

Bing Webmaster allows you to get information about backlinks, crawl errors, and search traffic to your websites. In order to get the maximum value from this tool, you'll need to authenticate each of your websites to prove that you own them. To get started, go to https://www.bing.com/webmaster/ and sign up with your Microsoft account. As part of the sign-up process, you'll get an HTML file that you'll install in the root directory of your WordPress installation. This file validates that you are the owner of the website and entitled to see the data that Bing collects.

One core use for Bing Webmaster is that it presents a highly accurate picture of your inbound link counts. If you recall, Google does not present accurate inbound link counts to users. Thus, Bing Webmaster is the most authoritative picture from a search engine that you'll have of how many backlinks your site enjoys. To see your inbound links, simply log in and navigate to the Dashboard. At the lower right, you'll see the Inbound Links table shown here:

The table shows you the inbound links to your website for each page of content.
This helpful feature lets you determine which articles of content are garnering the most interest from other webmasters. High link counts are always good, but you also want to make sure you are getting high-quality links from websites in the same general category as your site.

Bing offers an additional feature: link keyword research. Expand the Diagnostics & Tools entry on the navigation bar on the left, and click Keyword Research. The search traffic section will give you valuable information about the search terms you should be targeting for your site, as well as allowing you to see which of the terms you are already targeting are getting traffic. Just as you did with the keywords shown in Google Analytics and Google Webmaster Tools, you want to find keywords that are getting traffic but are not currently ranked in the top one to three positions in the search engine results pages. Send more links to these pages with the keyword phrase you want to target as anchor text, in order to move these pages up in the rankings.

Monitoring ranking positions and movement with Rank Tracker

Rank Tracker is a paid tool, and we've included it because it is tremendously valuable and noteworthy. It is a software tool that can monitor your website's rankings for thousands of terms in hundreds of different search engines. Even better, it maintains historical ranking data that helps you gauge your progress as you work. It is a valuable tool used by many SEO professionals. There is a free version, although the free version does not allow you to save your work (and thus does not let you keep historical information). To really harness the power of this software, you'll need the paid version. You can download either version from http://www.link-assistant.com/rank-tracker/.

After you install the program, you will be prompted to enter the URL of your website. On the next screen, the program will ask you to input the keywords you wish to track.
If you have them all listed in a spreadsheet somewhere, you can just copy the column that has your keywords in it and paste them all into the tool. Then Rank Tracker will ask you which search engines you are targeting and will check the rank of each keyword in all of the search engines you select. It only takes a few minutes for Rank Tracker to update hundreds of keywords, so you can find out where you are ranking in very little time. This screenshot shows the main interface of the Rank Tracker software. For monitoring progress on large numbers of keywords on several different search engines, Rank Tracker can be a real time-saver:

Once your rankings have been updated, you can sort them by rank to see which ones will benefit the most from additional link building. Remember that more than half of the people who search for something in Google click on the first result. If you are not in the top three to five results, you will get very little traffic from people searching for your targeted keyword. For this reason, you will usually get the best results by working on the keywords that are ranking between #4 and #10 in the search engine results pages.

If you want more data to play with, you can click on Update KEI to find out the keyword effectiveness index for each keyword. This tool gathers data from Google to tell you how many searches per month and how much competition there is for each keyword, and then calculates the KEI score based on that data. As a general rule, the higher the KEI number is, the easier it will be to rank for a given keyword. Keep in mind, however, that if you are already ranking for a keyword, it will usually not be that difficult to move up in the rankings, even if the KEI is low. To make it easier to see which keywords will be easy to rank for, they are color-coded in the Rank Tracker tool. The easiest ones are green, and the hardest are red. Yellow and orange fall in between.
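Rank Tracker does not publish its exact scoring, but the classic keyword effectiveness index is usually given as monthly searches squared divided by the number of competing pages. Treat the sketch below as that common approximation, not the tool's internal formula:

```python
def kei(monthly_searches, competing_pages):
    # Classic KEI: popularity squared over competition.
    # Higher values suggest an easier keyword to rank for.
    if competing_pages == 0:
        return float('inf')   # no competition at all
    return monthly_searches ** 2 / competing_pages

# A keyword with modest volume but little competition can
# out-score a high-volume, heavily contested one.
easy = kei(500, 1_000)
hard = kei(10_000, 5_000_000)
```

This is why the color coding favors niche phrases: squaring the search volume still cannot outweigh millions of competing pages.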
In addition to checking your rankings on keywords you are targeting, you can use the Rank Tracker tool to find more keywords to target. If you navigate to Tools and then Get Keyword Suggestions, a window will pop up that will let you choose from a range of different methods of finding keywords. It is recommended that you start with the Google AdWords Keyword Tool. After you choose the method, you'll be asked to enter keywords that are relevant to the content of your website. You will only get about 100 results no matter how many keywords you enter, so it's best to work on just one at a time. On the next page, you will be presented with a list of keywords. Select the ones you want to add and click on Next. When the tool is done updating the data for the selected keywords, click on the Finish button to add them to your project. You can repeat this process as many times as you want to find new keywords for your website, and you can experiment with the other fifteen tools as well, which should give you more variety. When you find keywords that look promising, put them on a list, and plan on writing posts to target those keywords in the future. Look for keywords that have at least 100 searches per month with a relatively low amount of competition.

Rank Tracker not only allows you to check your current rankings, but also keeps track of your historical data. Every time you update, a new point is added to the progress graph so you can see your progress over time. This allows you to see whether what you are doing is working or not. If you see that your rank is dropping for a certain keyword, you can use that information to figure out whether something you changed had the opposite effect to the one you intended. If you are doing SEO for clients, you'll find the reports in Rank Tracker to be extremely useful (if not mandatory).
You can create a monthly report for each client that shows how many keywords are ranked #1, as well as how many are in the top 10, 20, or 100. The report also shows the number of keywords that moved up and the number that moved down. You can use these reports to show your clients how much progress you are making and to demonstrate your value to the client. If you want to take advantage of the historical data, you'll have to purchase the paid version of Rank Tracker. The free version does not support saving your projects, so you won't have data from your past rankings to compare to see whether you are moving up or down in the search engine results. You also won't be able to generate reports that tell you what has changed since the last time you updated your rankings.

Monitoring backlinks with Majestic SEO

If you want to see how many backlinks you have pointing to your site, along with a range of additional data, the king of free backlink tools is the powerful Majestic SEO backlink checker. To use this tool, go to https://majestic.com/ and enter your domain in the box at the top of the page. The tool will generate a report that shows how many URLs are indexed for your domain, how many total backlinks you have, and how many unique domains link to your website. For heavy-duty link reconnaissance, you'll want the paid upgrade.

Underneath the site info, you can see the stats for each page on your site. The tool shows the number of backlinks for each page, as well as the number of domains linking to each page. You can only see ten results per page, so you'll have to click through numerous pages to see all of the results if you have a large site. Majestic SEO offers a few details that you won't get from Google or Bing Webmaster. Majestic SEO calculates and reports the number of separate C class subnets upon which your backlinks appear.
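Counting separate C class subnets is mechanical once you know the IP address of each linking site: two IPs share a C class when their first three octets match, that is, when they fall in the same /24 network. A sketch of the count using the standard library (the IP addresses are illustrative):

```python
import ipaddress

def c_class_subnets(link_ips):
    # Map each linking IP to its /24 network; the distinct networks
    # correspond to the separate C class subnets a backlink report counts.
    return {ipaddress.ip_network(ip + '/24', strict=False) for ip in link_ips}

# Two links from the same /24 collapse into one subnet.
subnets = c_class_subnets(['203.0.113.5', '203.0.113.77', '198.51.100.9'])
```

The more distinct subnets a link profile spans, the less likely the links are to come from one network of related sites.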
As we have learned, links from sites on separate C class subnets are more valuable, because they are perceived by search engines as being truly non-duplicate links. Majestic SEO also reports the number of .edu and .gov domains upon which your links appear. This extra information gives you a clear picture of how effectively your link building efforts are progressing. Majestic offers another feature of note: it crawls the web more deeply, so you'll see higher link counts. Majestic is particularly useful when doing link cleanup, because it scrapes low-value sites that Google and Bing don't bother indexing. This screenshot highlights some of Majestic SEO's more robust features: it shows you the number of backlinks from .edu and .gov domains as well as the number of separate Class C subnets upon which your inbound links appear:

Majestic SEO offers one more special feature: it records and graphs your backlink acquisition over time. The graph in the screenshot just above shows this feature in action. Majestic SEO is not a search engine, so it will show you a count of inbound links without any regard to the quality of the pages on which your links appear. Put another way, Bing Webmaster will only show you links that appear on indexed pages. Lower-value pages, such as pages with duplicate content or pages in low-value link directories, tend not to appear in search engine indexes. As such, Majestic SEO reports higher link counts than Bing Webmaster or Google.

Summary

In this article, we learned how to monitor your progress through the use of free and paid tools. We learned how to set up and employ Google Analytics to measure and monitor where your website visitors are coming from and how they behave on your site. We learned how to set up and use Google Webmaster Tools to detect crawling errors on your site, and we learned how Googlebot interacts with your site.
We discovered two HTML validation tools that you can use to ensure that your website's code meets current HTML standards. Finally, we learned how to measure and monitor your backlink efforts with Bing Webmaster and Majestic SEO. With the tools and techniques in this article, you can ensure that your optimization efforts are effective and remain on track. Resources for Article: Further resources on this subject: Creating Blog Content in WordPress [Article] Responsive Web Design with WordPress [Article] Introduction to a WordPress application's frontend [Article]
Packt
11 Jun 2014
4 min read
Building a Web Application with PHP and MariaDB - Introduction to caching

Let's begin with database caching. All the data for our application is stored on MariaDB. When a request is made for retrieving the list of available students, we run a query on our course_registry database. Running a single query at a time is simple but as the application gets popular, we will have more concurrent users. As the number of concurrent connections to the database increases, we will have to make sure that our database server is optimized to handle that load. In this section, we will look at the different types of caching that can be performed in the database. Let's start with query caching. Query caching is available by default on MariaDB; to verify if the installation has a query cache, we will use the have_query_cache global variable. Let's use the SHOW VARIABLES command to verify if the query cache is available on our MariaDB installation, as shown in the following screenshot: Now that we have a query cache, let's verify if it is active. To do this, we will use the query_cache_type global variable, shown as follows: From this query, we can verify that the query cache is turned on. Now, let's take a look at the memory that is allocated for the query cache by using the query_cache_size command, shown as follows: The query cache size is currently set to 64 MB; let's modify our query cache size to 128 MB. The following screenshot shows the usage of the SET GLOBAL syntax: We use the SET GLOBAL syntax to set the value for the query_cache_size command, and we verify this by reloading the value of the query_cache_size command. Now that we have the query cache turned on and working, let's look at a few statistics that would give us an idea of how often the queries are being cached. To retrieve this information, we will query the Qcache variable, as shown in the following screenshot: From this output, we can verify whether we are retrieving a lot of statistics about the query cache. 
One thing to notice is that the Qcache_not_cached variable is high for our database. This is due to the use of prepared statements. The prepared statements are not cached by MariaDB. Another important variable to keep an eye on is the Qcache_lowmem_prunes variable that will give us an idea of the number of queries that were deleted due to low memory. This will indicate that the query cache size has to be increased. From these stats, we understand that for as long as we use the prepared statements, our queries will not be cached on the database server. So, we should use a combination of prepared statements and raw SQL statements, depending on our use cases. Now that we understand a good bit about query caches, let's look at the other caches that MariaDB provides, such as the table open cache, the join cache, and the memory storage cache. The table open cache allows us to define the number of tables that can be left open by the server to allow faster look-ups. This will be very helpful where there is a huge number of requests for a table, and so the table need not be opened for every request. The join buffer cache is commonly used for queries that perform a full join, wherein there are no indexes to be used for finding rows for the next table. Normally, indexes help us avoid these problems. The memory storage cache, previously known as the heap cache, is commonly used for read-only caches of data from other tables or for temporary work areas. Let's look at the variables that are available with MariaDB, as shown in the following screenshot: Database caching is a very important step towards making our application scalable. However, it is important to understand when to cache, the correct caching techniques, and the size for each cache. Allocation of memory for caching has to be done very carefully as the application can run out of memory if too much space is allocated. 
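The query-cache checks walked through above can be recapped as one sequence of statements. This is a sketch for reference only: the 128 MB value matches the example in the text, the other values will vary per server, and on newer MariaDB versions the query cache may be disabled by default, so check query_cache_type first.

```sql
-- Is the query cache compiled in, and is it turned on?
SHOW VARIABLES LIKE 'have_query_cache';
SHOW VARIABLES LIKE 'query_cache_type';

-- Check the current size, then grow it to 128 MB as in the example
SHOW VARIABLES LIKE 'query_cache_size';
SET GLOBAL query_cache_size = 128 * 1024 * 1024;

-- Cache statistics; watch Qcache_not_cached and Qcache_lowmem_prunes
SHOW STATUS LIKE 'Qcache%';
```

Remember that SET GLOBAL changes are lost on server restart; to make them permanent, the setting belongs in the server configuration file.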
A good method to allocate memory for caching is by running benchmarks to see how the queries perform, and have a list of popular queries that will run often so that we can begin by caching and optimizing the database for those queries. Now that we have a good understanding of database caching, let's proceed to application-level caching. Resources for Article: Introduction to Kohana PHP Framework Creating and Consuming Web Services in CakePHP 1.3 Installing MariaDB on Windows and Mac OS X
Packt
04 Mar 2015
38 min read
KnockoutJS Templates

 In this article by Jorge Ferrando, author of the book KnockoutJS Essentials, we are going to talk about how to design our templates with the native engine and then we will speak about mechanisms and external libraries we can use to improve the Knockout template engine. When our code begins to grow, it's necessary to split it into several parts to keep it maintainable. When we split JavaScript code, we are talking about modules, classes, functions, libraries, and so on. When we talk about HTML, we call these parts templates. KnockoutJS has a native template engine that we can use to manage our HTML. It is very simple, but it also has a big drawback: templates must be loaded in the current HTML page. This is not a problem if our app is small, but it could be a problem if our application begins to need more and more templates. (For more resources related to this topic, see here.) Preparing the project First of all, we are going to add some style to the page. Add a file called style.css into the css folder. Add a reference in the index.html file, just below the bootstrap reference. 
The following is the content of the file: .container-fluid { margin-top: 20px; } .row { margin-bottom: 20px; } .cart-unit { width: 80px; } .btn-xs { font-size:8px; } .list-group-item { overflow: hidden; } .list-group-item h4 { float:left; width: 100px; } .list-group-item .input-group-addon { padding: 0; } .btn-group-vertical > .btn-default { border-color: transparent; } .form-control[disabled], .form-control[readonly] { background-color: transparent !important; } Now remove all the content from the body tag except for the script tags and paste in these lines: <div class="container-fluid"> <div class="row" id="catalogContainer">    <div class="col-xs-12"       data-bind="template:{name:'header'}"></div>    <div class="col-xs-6"       data-bind="template:{name:'catalog'}"></div>    <div id="cartContainer" class="col-xs-6 well hidden"       data-bind="template:{name:'cart'}"></div> </div> <div class="row hidden" id="orderContainer"     data-bind="template:{name:'order'}"> </div> <div data-bind="template: {name:'add-to-catalog-modal'}"></div> <div data-bind="template: {name:'finish-order-modal'}"></div> </div> Let's review this code. We have two row classes. They will be our containers. The first container is named with the id value as catalogContainer and it will contain the catalog view and the cart. The second one is referenced by the id value as orderContainer and we will set our final order there. We also have two more <div> tags at the bottom that will contain the modal dialogs to show the form to add products to our catalog and the other one will contain a modal message to tell the user that our order is finished. Along with this code you can see a template binding inside the data-bind attribute. This is the binding that Knockout uses to bind templates to the element. It contains a name parameter that represents the ID of a template. 
<div class="col-xs-12" data-bind="template:{name:'header'}"></div> In this example, this <div> element will contain the HTML that is inside the <script> tag with the ID header. Creating templates Template elements are commonly declared at the bottom of the body, just above the <script> tags that have references to our external libraries. We are going to define some templates and then we will talk about each one of them: <!-- templates --> <script type="text/html" id="header"></script> <script type="text/html" id="catalog"></script> <script type="text/html" id="add-to-catalog-modal"></script> <script type="text/html" id="cart-widget"></script> <script type="text/html" id="cart-item"></script> <script type="text/html" id="cart"></script> <script type="text/html" id="order"></script> <script type="text/html" id="finish-order-modal"></script> Each template name is descriptive enough by itself, so it's easy to know what we are going to set inside them. Let's see a diagram showing where we place each template on the screen:   Notice that the cart-item template will be repeated for each item in the cart collection. Modal templates will appear only when a modal dialog is displayed. Finally, the order template is hidden until we click to confirm the order. In the header template, we will have the title and the menu of the page. The add-to-catalog-modal template will contain the modal that shows the form to add a product to our catalog. The cart-widget template will show a summary of our cart. The cart-item template will contain the template of each item in the cart. The cart template will have the layout of the cart. The order template will show the final list of products we want to buy and a button to confirm our order. 
The header template Let's begin with the HTML markup that should contain the header template: <script type="text/html" id="header"> <h1>    Catalog </h1>   <button class="btn btn-primary btn-sm" data-toggle="modal"     data-target="#addToCatalogModal">    Add New Product </button> <button class="btn btn-primary btn-sm" data-bind="click:     showCartDetails, css:{ disabled: cart().length < 1}">    Show Cart Details </button> <hr/> </script> We define a <h1> tag, and two <button> tags. The first button tag is attached to the modal that has the ID #addToCatalogModal. Since we are using Bootstrap as the CSS framework, we can attach modals by ID using the data-target attribute, and activate the modal using the data-toggle attribute. The second button will show the full cart view and it will be available only if the cart has items. To achieve this, there are a number of different ways. The first one is to use the CSS-disabled class that comes with Twitter Bootstrap. This is the way we have used in the example. CSS binding allows us to activate or deactivate a class in the element depending on the result of the expression that is attached to the class. The other method is to use the enable binding. This binding enables an element if the expression evaluates to true. We can use the opposite binding, which is named disable. There is a complete documentation on the Knockout website http://knockoutjs.com/documentation/enable-binding.html: <button class="btn btn-primary btn-sm" data-bind="click:   showCartDetails, enable: cart().length > 0"> Show Cart Details </button>   <button class="btn btn-primary btn-sm" data-bind="click:   showCartDetails, disable: cart().length < 1"> Show Cart Details </button> The first method uses CSS classes to enable and disable the button. The second method uses the HTML attribute, disabled. We can use a third option, which is to use a computed observable. 
We can create a computed observable variable in our view-model that returns true or false depending on the length of the cart: //in the viewmodel. Remember to expose it var cartHasProducts = ko.computed(function(){ return (cart().length > 0); }); //HTML <button class="btn btn-primary btn-sm" data-bind="click:   showCartDetails, enable: cartHasProducts"> Show Cart Details </button> To show the cart, we will use the click binding. Now we should go to our viewmodel.js file and add all the information we need to make this template work: var cart = ko.observableArray([]); var showCartDetails = function () { if (cart().length > 0) {    $("#cartContainer").removeClass("hidden"); } }; And you should expose these two objects in the view-model: return {    searchTerm: searchTerm,    catalog: filteredCatalog,    newProduct: newProduct,    totalItems:totalItems,    addProduct: addProduct,    cart: cart,    showCartDetails: showCartDetails, }; The catalog template The next step is to define the catalog template just below the header template: <script type="text/html" id="catalog"> <div class="input-group">    <span class="input-group-addon">      <i class="glyphicon glyphicon-search"></i> Search    </span>    <input type="text" class="form-control" data-bind="textInput:       searchTerm"> </div> <table class="table">    <thead>    <tr>      <th>Name</th>      <th>Price</th>      <th>Stock</th>      <th></th>    </tr>    </thead>    <tbody data-bind="foreach:catalog">    <tr data-bind="style: {color: stock() < 5 ? 'red' : 'black'}">      <td data-bind="text:name"></td>      <td data-bind="text:price"></td>      <td data-bind="text:stock"></td>      <td>        <button class="btn btn-primary"          data-bind="click:$parent.addToCart">          <i class="glyphicon glyphicon-plus-sign"></i> Add        </button>      </td>    </tr>    </tbody>    <tfoot>    <tr>      <td colspan="3">        <strong>Items:</strong><span           data-bind="text:catalog().length"></span>      </td>      
<td colspan="1">        <span data-bind="template:{name:'cart-widget'}"></span>      </td>    </tr>    </tfoot> </table> </script> Now, each line uses the style binding to alert the user, while they are shopping, that the stock is running low. The style binding works the same way that CSS binding does with classes. It allows us to add style attributes depending on the value of the expression. In this case, the color of the text in the line must be black if the stock is higher than five, and red if it is four or less. We can use other CSS attributes, so feel free to try other behaviors. For example, set the line of the catalog to green if the element is inside the cart. We should remember that if an attribute has dashes, you should wrap it in single quotes. For example, background-color will throw an error, so you should write 'background-color'. When we work with bindings that are activated depending on the values of the viewmodel, it is good practice to use computed observables. Therefore, we can create a computed value in our product model that returns the value of the color that should be displayed: //In the Product.js var _lineColor = ko.computed(function(){ return (_stock() < 5)? 'red' : 'black'; }); return { lineColor:_lineColor }; //In the template <tr data-bind="style:lineColor"> ... </tr> It would be even better if we created a class in our style.css file called stock-alert and used the CSS binding: //In the style file .stock-alert { color: #f00; } //In the Product.js var _hasStock = ko.computed(function(){ return (_stock() < 5);   }); return { hasStock: _hasStock }; //In the template <tr data-bind="css: hasStock"> ... </tr> Now, look inside the <tfoot> tag. <td colspan="1"> <span data-bind="template:{name:'cart-widget'}"></span> </td> As you can see, we can have nested templates. In this case, we have the cart-widget template inside our catalog template. 
This gives us the possibility of having very complex templates, splitting them into very small pieces, and combining them, to keep our code clean and maintainable. Finally, look at the last cell of each row: <td> <button class="btn btn-primary"     data-bind="click:$parent.addToCart">    <i class="glyphicon glyphicon-plus-sign"></i> Add </button> </td> Look at how we call the addToCart method using the magic variable $parent. Knockout gives us some magic words to navigate through the different contexts we have in our app. In this case, we are in the catalog context and we want to call a method that lies one level up. We can use the magical variable called $parent. There are other variables we can use when we are inside a Knockout context. There is complete documentation on the Knockout website http://knockoutjs.com/documentation/binding-context.html. In this project, we are not going to use all of them. But we are going to quickly explain these binding context variables, just to understand them better. If we don't know how many levels deep we are, we can navigate to the top of the view-model using the magic word $root. When we have many parents, we can get the magic array $parents and access each parent using indexes, for example, $parents[0], $parents[1]. Imagine that you have a list of categories where each category contains a list of products. These products are a list of IDs and the category has a method to get the name of their products. We can use the $parents array to obtain the reference to the category: <ul data-bind="foreach: {data: categories}"> <li data-bind="text: $data.name"></li> <ul data-bind="foreach: {data: $data.products, as: 'prod'}">    <li data-bind="text:       $parents[0].getProductName(prod.ID)"></li> </ul> </ul> Look how helpful the as attribute is inside the foreach binding. It makes code more readable. 
But if you are inside a foreach loop, you can also access each item using the $data magic variable, and you can access the position index that each element has in the collection using the $index magic variable. For example, if we have a list of products, we can do this: <ul data-bind="foreach: cart"> <li><span data-bind="text:$index">    </span> - <span data-bind="text:$data.name"></span> </ul> This should display: 0 – Product 1 1 – Product 2 2 – Product 3 ...  KnockoutJS magic variables to navigate through contexts Now that we know more about what binding variables are, let's go back to our code. We are now going to write the addToCart method. We are going to define the cart items in our js/models folder. Create a file called CartProduct.js and insert the following code in it: //js/models/CartProduct.js var CartProduct = function (product, units) { "use strict";   var _product = product,    _units = ko.observable(units);   var subtotal = ko.computed(function(){    return _product.price() * _units(); });   var addUnit = function () {    var u = _units();    var _stock = _product.stock();    if (_stock === 0) {      return;    } _units(u+1);    _product.stock(--_stock); };   var removeUnit = function () {    var u = _units();    var _stock = _product.stock();    if (u === 0) {      return;    }    _units(u-1);    _product.stock(++_stock); };   return {    product: _product,    units: _units,    subtotal: subtotal,    addUnit : addUnit,    removeUnit: removeUnit, }; }; Each cart product is composed of the product itself and the units of the product we want to buy. We will also have a computed field that contains the subtotal of the line. We should give the object the responsibility for managing its units and the stock of the product. For this reason, we have added the addUnit and removeUnit methods. These methods add one unit or remove one unit of the product if they are called. 
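Because the unit/stock bookkeeping in CartProduct is plain arithmetic once the observables are unwrapped, it can be sanity-checked outside the browser. The following is only a stripped-down sketch, not the article's code: observables are replaced by plain properties, and the factory name makeCartLine is invented for illustration.

```javascript
// Plain-object sketch of CartProduct's unit/stock bookkeeping
// (Knockout observables replaced by plain properties for illustration).
function makeCartLine(product, units) {
  return {
    product: product,
    units: units,
    subtotal: function () { return this.product.price * this.units; },
    addUnit: function () {
      if (this.product.stock === 0) { return; } // nothing left in stock
      this.units += 1;
      this.product.stock -= 1;
    },
    removeUnit: function () {
      if (this.units === 0) { return; } // nothing to give back
      this.units -= 1;
      this.product.stock += 1;
    }
  };
}

var line = makeCartLine({ name: 'Sample', price: 10, stock: 3 }, 0);
line.addUnit();
line.addUnit();
console.log(line.units, line.product.stock, line.subtotal()); // 2 1 20
```

Note how adding a unit and removing a unit always move stock and units in opposite directions, so the total quantity is conserved, which is exactly the invariant the observable version must keep as well.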
We should reference this JavaScript file into our index.html file with the other <script> tags. In the viewmodel, we should create a cart array and expose it in the return statement, as we have done earlier: var cart = ko.observableArray([]); It's time to write the addToCart method: var addToCart = function(data) { var item = null; var tmpCart = cart(); var n = tmpCart.length; while(n--) {    if (tmpCart[n].product.id() === data.id()) {      item = tmpCart[n];    } } if (item) {    item.addUnit(); } else {    item = new CartProduct(data,0);    item.addUnit();    tmpCart.push(item);       } cart(tmpCart); }; This method searches the product in the cart. If it exists, it updates its units, and if not, it creates a new one. Since the cart is an observable array, we need to get it, manipulate it, and overwrite it, because we need to access the product object to know if the product is in the cart. Remember that observable arrays do not observe the objects they contain, just the array properties. The add-to-cart-modal template This is a very simple template. 
We just wrap the code to add a product to a Bootstrap modal: <script type="text/html" id="add-to-catalog-modal"> <div class="modal fade" id="addToCatalogModal">    <div class="modal-dialog">      <div class="modal-content">        <form class="form-horizontal" role="form"           data-bind="with:newProduct">          <div class="modal-header">            <button type="button" class="close"               data-dismiss="modal">              <span aria-hidden="true">&times;</span>              <span class="sr-only">Close</span>            </button><h3>Add New Product to the Catalog</h3>          </div>          <div class="modal-body">            <div class="form-group">              <div class="col-sm-12">                <input type="text" class="form-control"                  placeholder="Name" data-bind="textInput:name">              </div>            </div>            <div class="form-group">              <div class="col-sm-12">                <input type="text" class="form-control"                   placeholder="Price" data-bind="textInput:price">              </div>            </div>            <div class="form-group">              <div class="col-sm-12">                <input type="text" class="form-control"                   placeholder="Stock" data-bind="textInput:stock">              </div>            </div>          </div>          <div class="modal-footer">            <div class="form-group">              <div class="col-sm-12">                <button type="submit" class="btn btn-default"                  data-bind="{click:$parent.addProduct}">                  <i class="glyphicon glyphicon-plus-sign">                  </i> Add Product                </button>              </div>            </div>          </div>        </form>      </div><!-- /.modal-content -->    </div><!-- /.modal-dialog --> </div><!-- /.modal --> </script> The cart-widget template This template gives the user information quickly about how many items are in the cart and how much all 
of them cost: <script type="text/html" id="cart-widget"> Total Items: <span data-bind="text:totalItems"></span> Price: <span data-bind="text:grandTotal"></span> </script> We should define totalItems and grandTotal in our viewmodel: var totalItems = ko.computed(function(){ var tmpCart = cart(); var total = 0; tmpCart.forEach(function(item){    total += parseInt(item.units(),10); }); return total; }); var grandTotal = ko.computed(function(){ var tmpCart = cart(); var total = 0; tmpCart.forEach(function(item){    total += (item.units() * item.product.price()); }); return total; }); Now you should expose them in the return statement, as we always do. Don't worry about the format now, you will learn how to format currency or any kind of data in the future. Now you must focus on learning how to manage information and how to show it to the user. The cart-item template The cart-item template displays each line in the cart: <script type="text/html" id="cart-item"> <div class="list-group-item" style="overflow: hidden">    <button type="button" class="close pull-right" data-bind="click:$root.removeFromCart"><span>&times;</span></button>    <h4 class="" data-bind="text:product.name"></h4>    <div class="input-group cart-unit">      <input type="text" class="form-control" data-bind="textInput:units" readonly/>        <span class="input-group-addon">          <div class="btn-group-vertical">            <button class="btn btn-default btn-xs"               data-bind="click:addUnit">              <i class="glyphicon glyphicon-chevron-up"></i>            </button>            <button class="btn btn-default btn-xs"               data-bind="click:removeUnit">              <i class="glyphicon glyphicon-chevron-down"></i>            </button>          </div>        </span>    </div> </div> </script> We set an x button in the top-right of each line to easily remove a line from the cart. 
As you can see, we have used the $root magic variable to navigate to the top context because we are going to use this template inside a foreach loop, and it means this template will be in the loop context. If we consider this template as an isolated element, we can't be sure how deep we are in the context navigation. To be sure that we go to the right context to call the removeFromCart method, it's better to use $root instead of $parent in this case. The code for removeFromCart should lie in the viewmodel context and should look like this: var removeFromCart = function (data) { var units = data.units(); var stock = data.product.stock(); data.product.stock(units+stock); cart.remove(data); }; Notice that in the addToCart method, we get the array that is inside the observable. We did that because we need to navigate inside the elements of the array. In this case, Knockout observable arrays have a method called remove that allows us to remove the object that we pass as a parameter. If the object is in the array, it will be removed. Remember that the data context is always passed as the first parameter in the function we use in the click events. The cart template The cart template should display the layout of the cart: <script type="text/html" id="cart"> <button type="button" class="close pull-right"     data-bind="click:hideCartDetails">    <span>&times;</span> </button> <h1>Cart</h1> <div data-bind="template: {name: 'cart-item', foreach:cart}"     class="list-group"></div> <div data-bind="template:{name:'cart-widget'}"></div> <button class="btn btn-primary btn-sm"     data-bind="click:showOrder">    Confirm Order </button> </script> It's important that you notice the template binding that we have just below <h1>Cart</h1>. We are binding a template with an array using the foreach argument. With this binding, Knockout renders the cart-item template for each element inside the cart collection. 
This considerably reduces the code we write in each template and in addition makes them more readable. We have once again used the cart-widget template to show the total items and the total amount. This is one of the good features of templates, we can reuse content over and over. Observe that we have put a button at the top-right of the cart to close it when we don't need to see the details of our cart, and the other one to confirm the order when we are done. The code in our viewmodel should be as follows: var hideCartDetails = function () { $("#cartContainer").addClass("hidden"); }; var showOrder = function () { $("#catalogContainer").addClass("hidden"); $("#orderContainer").removeClass("hidden"); }; As you can see, to show and hide elements we use jQuery and CSS classes from the Bootstrap framework. The hidden class just adds the display: none style to the elements. We just need to toggle this class to show or hide elements in our view. Expose these two methods in the return statement of your view-model. We will come back to this when we need to display the order template. This is the result once we have our catalog and our cart:   The order template Once we have clicked on the Confirm Order button, the order should be shown to us, to review and confirm if we agree. 
<script type="text/html" id="order"> <div class="col-xs-12">    <button class="btn btn-sm btn-primary"       data-bind="click:showCatalog">      Back to catalog    </button>    <button class="btn btn-sm btn-primary"       data-bind="click:finishOrder">      Buy & finish    </button> </div> <div class="col-xs-6">    <table class="table">      <thead>      <tr>        <th>Name</th>        <th>Price</th>        <th>Units</th>        <th>Subtotal</th>      </tr>      </thead>      <tbody data-bind="foreach:cart">      <tr>        <td data-bind="text:product.name"></td>        <td data-bind="text:product.price"></td>        <td data-bind="text:units"></td>        <td data-bind="text:subtotal"></td>      </tr>      </tbody>      <tfoot>      <tr>        <td colspan="3"></td>        <td>Total:<span data-bind="text:grandTotal"></span></td>      </tr>      </tfoot>    </table> </div> </script> Here we have a read-only table with all cart lines and two buttons. One is to confirm, which will show the modal dialog saying the order is completed, and the other gives us the option to go back to the catalog and keep on shopping. There is some code we need to add to our viewmodel and expose to the user: var showCatalog = function () { $("#catalogContainer").removeClass("hidden"); $("#orderContainer").addClass("hidden"); }; var finishOrder = function() { cart([]); hideCartDetails(); showCatalog(); $("#finishOrderModal").modal('show'); }; As we have done in previous methods, we add and remove the hidden class from the elements we want to show and hide. The finishOrder method removes all the items of the cart because our order is complete; hides the cart and shows the catalog. It also displays a modal that gives confirmation to the user that the order is done.  
Order details template The finish-order-modal template The last template is the modal that tells the user that the order is complete: <script type="text/html" id="finish-order-modal"> <div class="modal fade" id="finishOrderModal">    <div class="modal-dialog">            <div class="modal-content">        <div class="modal-body">        <h2>Your order has been completed!</h2>        </div>        <div class="modal-footer">          <div class="form-group">            <div class="col-sm-12">              <button type="submit" class="btn btn-success"                 data-dismiss="modal">Continue Shopping              </button>            </div>          </div>        </div>      </div><!-- /.modal-content -->    </div><!-- /.modal-dialog --> </div><!-- /.modal --> </script> The following screenshot displays the output:   Handling templates with if and ifnot bindings You have learned how to show and hide templates with the power of jQuery and Bootstrap. This is quite good because you can use this technique with any framework you want. The problem with this type of code is that since jQuery is a DOM manipulation library, you need to reference elements to manipulate them. This means you need to know over which element you want to apply the action. Knockout gives us some bindings to hide and show elements depending on the values of our view-model. Let's update the show and hide methods and the templates. Add both the control variables to your viewmodel and expose them in the return statement. var visibleCatalog = ko.observable(true); var visibleCart = ko.observable(false); Now update the show and hide methods: var showCartDetails = function () { if (cart().length > 0) {    visibleCart(true); } };   var hideCartDetails = function () { visibleCart(false); };   var showOrder = function () { visibleCatalog(false); };   var showCatalog = function () { visibleCatalog(true); }; We can appreciate how the code becomes more readable and meaningful. 
Now, update the cart template, the catalog template, and the order template. In index.html, consider this line: <div class="row" id="catalogContainer"> Replace it with the following line: <div class="row" data-bind="if: visibleCatalog"> Then consider the following line: <div id="cartContainer" class="col-xs-6 well hidden"   data-bind="template:{name:'cart'}"></div> Replace it with this one: <div class="col-xs-6" data-bind="if: visibleCart"> <div class="well" data-bind="template:{name:'cart'}"></div> </div> It is important to know that the if binding and the template binding can't share the same data-bind attribute. This is why we go from one element to two nested elements in this template. In other words, this example is not allowed: <div class="col-xs-6" data-bind="if:visibleCart,   template:{name:'cart'}"></div> Finally, consider this line: <div class="row hidden" id="orderContainer"   data-bind="template:{name:'order'}"> Replace it with this one: <div class="row" data-bind="ifnot: visibleCatalog"> <div data-bind="template:{name:'order'}"></div> </div> With the changes we have made, showing or hiding elements now depends on our data and not on our CSS. This is much better because now we can show and hide any element we want using the if and ifnot binding. 
Let's review, roughly speaking, how our files are now: We have our index.html file that has the main container, templates, and libraries: <!DOCTYPE html> <html> <head> <title>KO Shopping Cart</title> <meta name="viewport" content="width=device-width,     initial-scale=1"> <link rel="stylesheet" type="text/css"     href="css/bootstrap.min.css"> <link rel="stylesheet" type="text/css" href="css/style.css"> </head> <body>   <div class="container-fluid"> <div class="row" data-bind="if: visibleCatalog">    <div class="col-xs-12"       data-bind="template:{name:'header'}"></div>    <div class="col-xs-6"       data-bind="template:{name:'catalog'}"></div>    <div class="col-xs-6" data-bind="if: visibleCart">      <div class="well" data-bind="template:{name:'cart'}"></div>    </div> </div> <div class="row" data-bind="ifnot: visibleCatalog">    <div data-bind="template:{name:'order'}"></div> </div> <div data-bind="template: {name:'add-to-catalog-modal'}"></div> <div data-bind="template: {name:'finish-order-modal'}"></div> </div>   <!-- templates --> <script type="text/html" id="header"> ... </script> <script type="text/html" id="catalog"> ... </script> <script type="text/html" id="add-to-catalog-modal"> ... </script> <script type="text/html" id="cart-widget"> ... </script> <script type="text/html" id="cart-item"> ... </script> <script type="text/html" id="cart"> ... </script> <script type="text/html" id="order"> ... </script> <script type="text/html" id="finish-order-modal"> ... 
</script> <!-- libraries --> <script type="text/javascript" src="js/vendors/jquery.min.js"></script> <script type="text/javascript" src="js/vendors/bootstrap.min.js"></script> <script type="text/javascript" src="js/vendors/knockout.debug.js"></script> <script type="text/javascript" src="js/models/product.js"></script> <script type="text/javascript" src="js/models/cartProduct.js"></script> <script type="text/javascript" src="js/viewmodel.js"></script> </body> </html> We also have our viewmodel.js file: var vm = (function () { "use strict"; var visibleCatalog = ko.observable(true); var visibleCart = ko.observable(false); var catalog = ko.observableArray([...]); var cart = ko.observableArray([]); var newProduct = {...}; var totalItems = ko.computed(function(){...}); var grandTotal = ko.computed(function(){...}); var searchTerm = ko.observable(""); var filteredCatalog = ko.computed(function () {...}); var addProduct = function (data) {...}; var addToCart = function(data) {...}; var removeFromCart = function (data) {...}; var showCartDetails = function () {...}; var hideCartDetails = function () {...}; var showOrder = function () {...}; var showCatalog = function () {...}; var finishOrder = function() {...}; return {    searchTerm: searchTerm,    catalog: filteredCatalog,    cart: cart,    newProduct: newProduct,    totalItems: totalItems,    grandTotal: grandTotal,    addProduct: addProduct,    addToCart: addToCart,    removeFromCart: removeFromCart,    visibleCatalog: visibleCatalog,    visibleCart: visibleCart,    showCartDetails: showCartDetails,    hideCartDetails: hideCartDetails,    showOrder: showOrder,    showCatalog: showCatalog,    finishOrder: finishOrder }; })(); ko.applyBindings(vm); When debugging, it is useful to globalize the view-model. It is not good practice in production environments, but it is very handy while you are debugging your application. 
window.vm = vm; Now you have easy access to your view-model from the browser debugger or from your IDE debugger. In addition to the product model, we have created a new model called CartProduct: var CartProduct = function (product, units) { "use strict"; var _product = product,    _units = ko.observable(units); var subtotal = ko.computed(function(){...}); var addUnit = function () {...}; var removeUnit = function () {...}; return {    product: _product,    units: _units,    subtotal: subtotal,    addUnit: addUnit,    removeUnit: removeUnit }; }; You have learned how to manage templates with Knockout, but maybe you have noticed that keeping all the templates in the index.html file is not the best approach. We are going to talk about two mechanisms. The first one is more home-made and the second one is an external library used by lots of Knockout developers, created by Jim Cowart, called Knockout.js-External-Template-Engine (https://github.com/ifandelse/Knockout.js-External-Template-Engine). Managing templates with jQuery Since we want to load templates from different files, let's move all our templates to a folder called views and make one file per template. Each file will be named after its template's ID. So if the template has the ID cart-item, the file should be called cart-item.html and will contain the full cart-item template: <script type="text/html" id="cart-item"></script>  The views folder with all templates Now in the viewmodel.js file, remove the last line (ko.applyBindings(vm)) and add this code: var templates = [ 'header', 'catalog', 'cart', 'cart-item', 'cart-widget', 'order', 'add-to-catalog-modal', 'finish-order-modal' ];   var busy = templates.length; templates.forEach(function(tpl){ "use strict"; $.get('views/'+ tpl + '.html').then(function(data){    $('body').append(data);    busy--;    if (!busy) {      ko.applyBindings(vm);    } }); }); This code fetches each template we need and appends it to the body. 
Once all the templates are loaded, we call the applyBindings method. We should do it this way because we are loading templates asynchronously and we need to make sure that we bind our view-model only when all the templates are loaded. This is good enough to make our code more maintainable and readable, but it is still problematic if we need to handle lots of templates. Furthermore, if we have nested folders, it becomes a headache to list all our templates in one array. There should be a better approach. Managing templates with koExternalTemplateEngine We have seen two ways of loading templates, both of them good enough to manage a low number of templates, but when the lines of code begin to grow, we need something that allows us to forget about template management. We just want to call a template and get the content. For this purpose, Jim Cowart's library, koExternalTemplateEngine, is perfect. The project was abandoned by the author in 2014, but it is still a good library that we can use when we develop simple projects. We just need to download the library into the js/vendors folder and then link it in our index.html file just below the Knockout library. <script type="text/javascript" src="js/vendors/knockout.debug.js"></script> <script type="text/javascript"   src="js/vendors/koExternalTemplateEngine_all.min.js"></script> Now you should configure it in the viewmodel.js file. Remove the templates array and the forEach statement, and add these three lines of code: infuser.defaults.templateSuffix = ".html"; infuser.defaults.templateUrl = "views"; ko.applyBindings(vm); Here, infuser is a global variable that we use to configure the template engine. We should indicate which suffix our templates will have and in which folder they will live. We no longer need the <script type="text/html" id="template-id"></script> tags, so we should remove them from each file. Now everything should be working, and very little code was needed. 
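The busy counter used with jQuery earlier is hand-rolling what Promise.all provides out of the box. The following is a rough, framework-free sketch of the same coordination; loadTemplate is a hypothetical loader standing in for the $.get('views/' + name + '.html') call, and the final callback stands in for appending the markup and calling ko.applyBindings:

```javascript
// Load every template, then run a completion step exactly once --
// the same job the "busy" counter performs, expressed with Promise.all.
// "loadTemplate" is a stand-in for $.get('views/' + name + '.html').
function loadAllTemplates(names, loadTemplate) {
  return Promise.all(names.map(loadTemplate));
}

// Fake loader for illustration: resolves with the template's markup.
function fakeLoader(name) {
  return Promise.resolve(
    '<script type="text/html" id="' + name + '"><\/script>'
  );
}

var templates = ['header', 'catalog', 'cart'];

loadAllTemplates(templates, fakeLoader).then(function (markup) {
  // Real app: markup.forEach(function (m) { $('body').append(m); });
  //           ko.applyBindings(vm);
  console.log('loaded ' + markup.length + ' templates');
});
```

One advantage of this form is error handling: if a single request fails, Promise.all rejects the whole chain, whereas the busy counter silently never reaches zero.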
KnockoutJS has its own template engine, but you can see that adding new ones is not difficult. If you have experience with other template engines such as jQuery Templates, Underscore, or Handlebars, just load them in your index.html file and use them; there is no problem with that. This is why Knockout is beautiful: you can use any tool you like with it. You have learned a lot of things in this article, haven't you?

- Knockout gives us the CSS binding to activate and deactivate CSS classes according to an expression.
- We can use the style binding to add CSS rules to elements.
- The template binding helps us to manage templates that are already loaded in the DOM.
- We can iterate over collections with the foreach binding.
- Inside a foreach, Knockout gives us some magic variables such as $parent, $parents, $index, $data, and $root.
- We can use the as binding along with the foreach binding to get an alias for each element.
- We can show and hide content using just jQuery and CSS.
- We can show and hide content using the if, ifnot, and visible bindings.
- jQuery helps us to load Knockout templates asynchronously.
- You can use the koExternalTemplateEngine plugin to manage templates in a more efficient way. The project is abandoned, but it is still a good solution.

Summary In this article, you have learned how to split an application using templates that share the same view-model. Now that we know the basics, it would be interesting to extend the application. Maybe we can try to create a detailed view of the product, or maybe we can give the user the option to register where to send the order. Resources for Article: Further resources on this subject: Components [article] Web Application Testing [article] Top features of KnockoutJS [article]

Interfacing React Components with Angular Applications

Patrick Marabeas
26 Sep 2014

There's been talk lately of using React as the view within Angular's MVC architecture. Angular, as we all know, uses dirty checking. As I'll touch on later, it accepts a (minor) performance loss to gain the great two-way data binding it has. React, on the other hand, uses a virtual DOM and only renders the difference. This results in very fast performance. So, how do we leverage React's performance from our Angular application? Can we retain two-way data flow? And just how significant is the performance increase? The nrg module and demo code can be found over on my GitHub. The application To demonstrate communication between the two frameworks, let's build a reusable Angular module (nrg [Angular(ng) + React(r) = energy(nrg)!]) which will render (and re-render) a React component when our model changes. The React component will be composed of an input and a p element that will display our model and will also update the model on change. To show this, we'll add an input and a p to our view, bound to the model. In essence, changes to either input should result in all elements being kept in sync. We'll also add a button to our component that will demonstrate component unmounting on scope destruction. ;(function(window, document, angular, undefined) { 'use strict'; angular.module('app', ['nrg']) .controller('MainController', ['$scope', function($scope) { $scope.text = 'This is set in Angular'; $scope.destroy = function() { $scope.$destroy(); } }]); })(window, document, angular); data-component specifies the React component we want to mount. data-ctrl (optional) specifies the controller we want to inject into the directive; this will allow specific components to be accessible on scope itself rather than scope.$parent. data-ng-model is the model we are going to pass between our Angular controller and our React view. 
<div data-ng-controller="MainController"> <!-- React component --> <div data-component="reactComponent" data-ctrl="" data-ng-model="text"> <!-- <input /> --> <!-- <button></button> --> <!-- <p></p> --> </div> <!-- Angular view --> <input type="text" data-ng-model="text" /> <p>{{text}}</p> </div> As you can see, the view has meaning when using Angular to render React components. <div data-component="reactComponent" data-ctrl="" data-ng-model="text"></div> has meaning when compared to <div id="reactComponent"></div>, which requires referencing a script file to see what component (and settings) will be mounted on that element. The Angular module - nrg.js The main functions of this reusable Angular module will be to: Specify the DOM element that the component should be mounted onto. Render the React component when changes have been made to the model. Pass the scope and element attributes to the component. Unmount the React component when the Angular scope is destroyed. The skeleton of our module looks like this: ;(function(window, document, angular, React, undefined) { 'use strict'; angular.module('nrg', []) To keep our code modular and extensible, we'll create a factory that will house our component functions, which are currently just render and unmount. .factory('ComponentFactory', [function() { return { render: function() { }, unmount: function() { } } }]) This will be injected into our directive. .directive('component', ['$controller', 'ComponentFactory', function($controller, ComponentFactory) { return { restrict: 'EA', If a controller has been specified on the element via data-ctrl, then inject the $controller service. As mentioned earlier, this will allow scope variables and functions used within the React component to be accessible directly on scope, rather than scope.$parent (the controller also doesn't need to be declared in the view with ng-controller). controller: function($scope, $element, $attrs){ return ($attrs.ctrl) ? 
$controller($attrs.ctrl, {$scope:$scope, $element:$element, $attrs:$attrs}) : null; }, Here's an isolated scope with two-way binding on data-ng-model. scope: { ngModel: '=' }, link: function(scope, element, attrs) { // Calling ComponentFactory.render() & watching ng-model } } }]); })(window, document, angular, React); ComponentFactory Fleshing out the ComponentFactory, we'll need to know how to render and unmount components. React.renderComponent( ReactComponent component, DOMElement container, [function callback] ) As such, we'll need to pass the component we wish to mount (component), the container we want to mount it in (element), and any properties (attrs and scope) we wish to pass to the component. This render function will be called every time the model is updated, so the updated scope will be pushed through each time. According to the React documentation, "If the React component was previously rendered into container, this (React.renderComponent) will perform an update on it and only mutate the DOM as necessary to reflect the latest React component." .factory('ComponentFactory', [function() { return { render: function(component, element, scope, attrs) { // If you have name-spaced your components, you'll want to specify that here - or pass it in via an attribute etc React.renderComponent(window[component]({ scope: scope, attrs: attrs }), element[0]); }, unmount: function(element) { React.unmountComponentAtNode(element[0]); } } }]) Component directive Back in our directive, we can now set up when we are going to call these two functions. 
link: function(scope, element, attrs) { // Collect the elements attrs in a nice usable object var attributes = {}; angular.forEach(element[0].attributes, function(a) { attributes[a.name.replace('data-','')] = a.value; }); // Render the component when the directive loads ComponentFactory.render(attrs.component, element, scope, attributes); // Watch the model and re-render the component scope.$watch('ngModel', function() { ComponentFactory.render(attrs.component, element, scope, attributes); }, true); // Unmount the component when the scope is destroyed scope.$on('$destroy', function () { ComponentFactory.unmount(element); }); } This implements dirty checking to see if the model has been updated. I haven't played around too much to see if there's a notable difference in performance between this and using a broadcast/listener. That said, to get a listener working as expected, you will need to wrap the render call in a $timeout to push it to the bottom of the stack to ensure scope is updated. scope.$on('renderMe', function() { $timeout(function() { ComponentFactory.render(attrs.component, element, scope, attributes); }); }); The React component We can now build our React component, which will use the model we defined as well as inform Angular of any updates it performs. /** @jsx React.DOM */ ;(function(window, document, React, undefined) { 'use strict'; window.reactComponent = React.createClass({ This is the content that will be rendered into the container. The properties that we passed to the component ({ scope: scope, attrs: attrs }) when we called React.renderComponent back in our component directive are now accessible via this.props. 
render: function(){ return ( <div> <input type='text' value={this.props.scope.ngModel} onChange={this.handleChange} /> <button onClick={this.deleteScope}>Destroy Scope</button> <p>{this.props.scope.ngModel}</p> </div> ) }, Via the onChange event, we can call for Angular to run a digest, just as we normally would, but accessing scope via this.props: handleChange: function(event) { var _this = this; this.props.scope.$apply(function() { _this.props.scope.ngModel = event.target.value; }); }, Here we deal with the deleteScope click event. The controller is accessible via scope.$parent. If we had injected a controller into the component directive, its contents would be accessible directly on scope, just as ngModel is. deleteScope: function() { this.props.scope.$parent.destroy(); } }); })(window, document, React); 
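Stripped of both frameworks, the handleChange/$apply round trip boils down to a shared model that notifies every registered view when any one of them writes to it. Here is a rough, framework-free sketch of that contract (the names SharedModel, onChange, and set are hypothetical, not part of the nrg module):

```javascript
// Minimal shared model: both "views" (the Angular side and the React side)
// write through set() and are notified of every change.
function SharedModel(initial) {
  this.value = initial;
  this.listeners = [];
}
SharedModel.prototype.onChange = function (fn) { this.listeners.push(fn); };
SharedModel.prototype.set = function (v) {
  this.value = v;
  var self = this;
  this.listeners.forEach(function (fn) { fn(self.value); });
};

var model = new SharedModel('This is set in Angular');
var angularView = model.value; // stands in for the {{text}} binding
var reactView = model.value;   // stands in for this.props.scope.ngModel

model.onChange(function (v) { angularView = v; });
model.onChange(function (v) { reactView = v; });

// Like typing in the React input: handleChange writes, $apply notifies.
model.set('typed in the React input');
```

In the real module, scope.$apply plays the role of set() for writes coming from React, and scope.$watch plays the role of onChange for re-rendering the component.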
Anything more than that is really bad UI, and humans can't process this anyway. Basically, know Angular's limits and your user's limits! That said, the following performance test was certainly accentuated on mobile devices. Though, on the flip side, UI should be simpler on mobile. Brute force performance demonstration ;(function(window, document, angular, undefined) { 'use strict'; angular.module('app') .controller('NumberController', ['$scope', function($scope) { $scope.numbers = []; ($scope.numGen = function(){ for(var i = 0; i < 5000; i++) { $scope.numbers[i] = Math.floor(Math.random() * (999999999999999 - 1)) + 1; } })(); }]); })(window, document, angular); Angular ng-repeat <div data-ng-controller="NumberController"> <button ng-click="numGen()">Refresh Data</button> <table> <tr ng-repeat="number in numbers"> <td>{{number}}</td> </tr> </table> </div> There was definitely lag felt as the numbers were loaded in and refreshed. From start to finish, this took around 1.5 seconds. React component <div data-ng-controller="NumberController"> <button ng-click="numGen()">Refresh Data</button> <div data-component="numberComponent" data-ng-model="numbers"></div> </div> ;(function(window, document, React, undefined) { window.numberComponent = React.createClass({ render: function() { var rows = this.props.scope.ngModel.map(function(number) { return ( <tr> <td>{number}</td> </tr> ); }); return ( <table>{rows}</table> ); } }); })(window, document, React); So that just happened. 270 milliseconds start to finish. Around 80% faster! Conclusion So, should you go rewrite all those Angular modules as React components? Probably not. It really comes down to the application you are developing and how dependent you are on OSS. It's definitely possible that a handful of complex modules could put your application in the realm of “feeling a tad sluggish”, but it should be remembered that perceived performance is all that matters to the user. 
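The timings quoted above were observed with browser devtools. If you want to put rough figures on a block of code yourself, a generic timing harness is easy to sketch; this is an illustration only, not how the article's measurements were taken, and timeIt is a hypothetical helper:

```javascript
// Run fn once and report elapsed wall-clock milliseconds.
function timeIt(label, fn) {
  var start = Date.now();
  fn();
  var elapsed = Date.now() - start;
  console.log(label + ': ' + elapsed + 'ms');
  return elapsed;
}

// The same data generation the NumberController performs.
function numGen(count) {
  var numbers = [];
  for (var i = 0; i < count; i++) {
    numbers[i] = Math.floor(Math.random() * (999999999999999 - 1)) + 1;
  }
  return numbers;
}

var data;
var elapsed = timeIt('generate 5000 numbers', function () {
  data = numGen(5000);
});
```

For real benchmarking, prefer repeated runs and a high-resolution timer such as performance.now(); a single Date.now() sample is noisy.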
Altering the manner in which content is loaded could end up being a better investment of time. Users will feel performance increases sooner on mobile websites, however, which is certainly something to keep in mind. The nrg module and demo code can be found over on my GitHub. Visit our JavaScript page for more JavaScript content and tutorials! About the author A guest post by Patrick Marabeas, a freelance frontend developer who loves learning and working with cutting-edge web technologies. He spends much of his free time developing Angular modules, such as ng-FitText, ng-Slider, ng-YouTubeAPI, and ng-ScrollSpy. You can follow him on Twitter: @patrickmarabeas.

Packaging and publishing an Oracle JET Hybrid mobile application

Vijin Boricha
17 Apr 2018

Today, we will learn how to package and publish an Oracle JET hybrid mobile application on the Apple App Store and the Google Play Store, using framework support and third-party tools. Packaging a mobile application An Oracle JET hybrid mobile application can be packaged with the help of Grunt build release commands, as described in the following steps: The Grunt release command can be issued with the desired platform as follows: grunt build:release --platform={ios|android} Code sign the application for the target platform in the buildConfig.json file. Note: Further details regarding code signing per platform are available at: Android: https://cordova.apache.org/docs/en/latest/guide/platforms/android/tools.html. iOS: https://cordova.apache.org/docs/en/latest/guide/platforms/ios/tools.html. We can pass the code signing details and rebuild the application using the following command: grunt build:release --platform={ios|android} --buildConfig=path/buildConfig.json The application can be tested after the preceding changes using the following serve command: grunt serve:release --platform=ios|android [--web=true --serverPort=server-port-number --livereloadPort=live-reload-port-number --destination=emulator-name|device] --buildConfig=path/buildConfig.json Publishing a mobile application Publishing an Oracle JET hybrid mobile application follows platform-specific guidelines. Each platform has defined certain standards and procedures to distribute an app. Publishing on an iOS platform The steps involved in publishing iOS applications include: Enrolling in the Apple Developer Program to distribute the app. Adding advanced, integrated services based on the application type. Preparing our application with approval and configuration according to iOS standards. Testing our app on numerous devices and application releases. 
Submitting and releasing the application as a mobile app in the store. Alternatively, we can also distribute the app outside the store. The iOS distribution process is represented in the following diagram: Please note that the process for distributing applications on iOS presented in the preceding diagram is the latest as of writing this chapter; it may be altered by the iOS team at a later point. Note: For the latest iOS app distribution procedure, please refer to the official iOS documentation at https://developer.apple.com/library/ios/documentation/IDEs/Conceptual/AppDistributionGuide/Introduction/Introduction.html Publishing on an Android platform There are multiple approaches to publishing our application on Android platforms. The following are the steps involved in publishing the app on Android: 1. Preparing the app for release. We need to perform the following activities to prepare an app for release: Configuring the app for release, including logs and manifests. Testing the release version of the app on multiple devices. Updating the application resources for the release. Preparing any remote applications or services the app interacts with. 2. Releasing the app to the market through Google Play. We need to perform the following activities to release an app through Google Play: Preparing any promotional documentation required for the app. Configuring all the default options and preparing the components. Publishing the prepared release version of the app to Google Play. 3. Alternatively, we can publish the application through email, or through our own website, for users to download and install. The steps involved in publishing Android applications are shown in the following diagram: Please note that the process for distributing applications on Android presented in the preceding diagram is the latest as of writing this chapter; it may be altered by the Android team at a later point. 
Note: For the latest Android app distribution procedure, please refer to the official Android documentation at http://developer.android.com/tools/publishing/publishing_overview.html#publishing-release. You enjoyed an excerpt from Oracle JET for Developers, written by Raja Malleswara Rao Pattamsetti. With this book, you will learn to leverage the Oracle JavaScript Extension Toolkit (JET) to develop efficient client-side applications. Resources for Article: Further resources on this subject: Auditing Mobile Applications [article] Creating and configuring a basic mobile application [article]

Yii: Adding Users and User Management to Your Site

Packt
21 Feb 2013

(For more resources related to this topic, see here.) Mission Checklist This project assumes that you have a web development environment prepared. The files for this project include a Yii project directory with a database schema. To prepare for the project, carry out the following steps, replacing the username lomeara with your own username: Copy the project files into your working directory. cp -r ~/Downloads/project_files/Chapter 3/project_files ~/projects/ch3 Make the directories that Yii uses web writeable. cd ~/projects/ch3/ sudo chown -R lomeara:www-data protected/runtime assets protected/models protected/controllers protected/views If you have a link for a previous project, remove it from the webroot directory. rm /opt/lampp/htdocs/cddb Create a link in the webroot directory to the copied directory. cd /opt/lampp/htdocs sudo ln -s ~/projects/ch3 cbdb Import the project into NetBeans (remember to set the project URL to http://localhost/cbdb) and configure it for Yii development with PHPUnit. Create a database named cbdb and load the database schema (~/projects/ch3/protected/data/schema.sql) into it. If you are not using the XAMPP stack or if your access to MySQL is password protected, you should review and update the Yii configuration file (in NetBeans it is ch3/Source Files/protected/config/main.php). Adding a User Object with CRUD As a foundation for our user management system, we will add a User table to the database and then use Gii to build a quick functional interface. 
Engage Thrusters Let's set the first building block by adding a User table containing the following information: A username Password hash Reference to a person entry for first name and last name In NetBeans, open a SQL Command window for the cbdb database and run the following command: CREATE TABLE `user` ( `id` int(10) unsigned NOT NULL AUTO_INCREMENT, `username` varchar(20) NOT NULL, `pwd_hash` char(34) NOT NULL, `person_id` int(10) unsigned NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `username` (`username`), CONSTRAINT `userperson_ibfk_2` FOREIGN KEY (`person_id`) REFERENCES `person` (`id`) ON DELETE CASCADE ) ENGINE=InnoDB; Open a web browser to the Gii URL http://localhost/cbdb/index.php/gii (the password configured in the sample code is yiibook) and use Gii to generate a model from the user table. Then, use Gii to generate CRUD from the user model. Back in NetBeans, add a link to the user index in your site's logged-in menu (ch3 | Source Files | protected | views | layouts | main.php). It should look like this: } else { $this->widget('zii.widgets.CMenu',array( 'activeCssClass' => 'active', 'activateParents' => true, 'items'=>array( array('label'=>'Home', 'url'=>array('/site/index')), array('label'=>'Comic Books', 'url'=>array('/book'), 'items' => array( array('label'=>'Publishers', 'url'=>array('/publisher')), ) ), array('label'=>'Users', 'url'=>array('/user/index')), array('label'=>'Logout ('.Yii::app()->user->name.')', 'url'=>array('/site/logout')) ), )); } ?> Right-click on the project name, run the site, and log in with the default username and password (admin/admin). You will see a menu that includes a link named Users. If you click on the Users link in the menu and then click on Create User, you will see a pretty awful-looking user-creation screen. We are going to fix that. First, we will update the user form to include fields for first name, last name, password, and repeat password. 
Edit ch3 | Source Files | protected | views | user | _form.php and add those fields. Start by changing all instances of $model to $user. Then, add a call to errorSummary on the person data under the errorSummary call on user. <?php echo $form->errorSummary($user); ?> <?php echo $form->errorSummary($person); ?> Add rows for first name and last name at the beginning of the form. <div class="row"> <?php echo $form->labelEx($person,'fname'); ?> <?php echo $form->textField($person,'fname',array ('size'=>20,'maxlength'=>20)); ?> <?php echo $form->error($person,'fname'); ?> </div> <div class="row"> <?php echo $form->labelEx($person,'lname'); ?> <?php echo $form->textField($person,'lname',array ('size'=>20,'maxlength'=>20)); ?> <?php echo $form->error($person,'lname'); ?> </div> Replace the pwd_hash row with the following two rows: <div class="row"> <?php echo $form->labelEx($user,'password'); ?> <?php echo $form->passwordField($user,'password',array ('size'=>20,'maxlength'=>64)); ?> <?php echo $form->error($user,'password'); ?> </div> <div class="row"> <?php echo $form->labelEx($user,'password_repeat'); ?> <?php echo $form->passwordField($user,'password_repeat',array ('size'=>20,'maxlength'=>64)); ?> <?php echo $form->error($user,'password_repeat'); ?> </div> Finally, remove the row for person_id. These changes are going to completely break the User create/update form for the time being. We want to capture the password data and ultimately make a hash out of it to store securely in the database. To collect the form inputs, we will add password fields to the User model that do not correspond to values in the database. Edit the User model ch3 | Source Files | protected | models | User.php and add two public variables to the class: class User extends CActiveRecord { public $password; public $password_repeat; In the same User model file, modify the attribute labels function to include labels for the new password fields. 
public function attributeLabels() { return array( 'id' => 'ID', 'username' => 'Username', 'password' => 'Password', 'password_repeat' => 'Password Repeat' ); } In the same User model file, update the rules function with the following rules: Require username Limit length of username and password Compare password with password repeat Accept only safe values for username and password We will come back to this and improve it, but for now, it should look like the following: public function rules() { // NOTE: you should only define rules for those attributes //that will receive user inputs. return array( array('username', 'required'), array('username', 'length', 'max'=>20), array('password', 'length', 'max'=>32), array('password', 'compare'), array('password_repeat', 'safe'), ); } In order to store the user's first and last name, we must change the Create action in the User controller ch3 | Source Files | protected | controllers | UserController. php to create a Person object in addition to a User object. Change the variable name $model to $user, and add an instance of the Person model. public function actionCreate() { $user=new User; $person=new Person; // Uncomment the following line if AJAX validation is //needed // $this->performAjaxValidation($user); if(isset($_POST['User'])) { $user->attributes=$_POST['User']; if($user->save()) $this->redirect(array('view','id'=>$user->id)); } $this->render('create',array( 'user'=>$user, 'person'=>$person, )); } Don't reload the create user page yet. First, update the last line of the User Create view ch3 | Source Files | protected | views | user | create.php to send a User object and a Person object. <?php echo $this->renderPartial('_form', array('user'=>$user, 'person' =>$person)); ?> Make a change to the attributeLabels function in the Person model (ch3 | Source Files | protected | models | Person.php) to display clearer labels for first name and last name. 
public function attributeLabels() { return array( 'id' => 'ID', 'fname' => 'First Name', 'lname' => 'Last Name', ); } The resulting user form should look like this: Looks pretty good, but if you try to submit the form, you will receive an error. To fix this, we will change the User Create action in the User controller ch3 | Source Files | protected | controllers | UserController.php to check and save both User and Person data. if(isset($_POST['User'], $_POST['Person'])) { $person->attributes=$_POST['Person']; if($person->save()) { $user->attributes=$_POST['User']; $user->person_id = $person->id; if($user->save()) $this->redirect(array('view','id'=>$user->id)); } } Great! Now you can create users, but if you try to edit a user entry, you see another error. This fix will require a couple of more changes. First, in the user controller ch3 | Source Files | protected | controllers | UserController.php, change the loadModel function to load the user model with its related person information: $model=User::model() ->with('person') ->findByPk((int)$id); Next, in the same file, change the actionUpdate function. Add a call to save the person data, if the user save succeeds: if($model->save()) { $model->person->attributes=$_POST['Person']; $model->person->save(); $this->redirect(array('view','id'=>$model->id)); } Then, in the user update view ch3 | Source Files | protected | views | user | update.php, add the person information to the form render. <?php echo $this->renderPartial('_form', array('user'=>$model, 'person' => $model->person)); ?> One more piece of user management housekeeping; try deleting a user. Look in the database for the user and the person info. Oops. Didn't clean up after itself, did it? Update the User controller ch3 | Source Files | protected | controllers | UserController.php once again. 
Change the call to delete in the User delete action: $this->loadModel($id)->person->delete(); Objective Complete - Mini Debriefing We have added a new object, User, to our site, and associated it with the Person object to capture the user's first and last name. Gii helped us get the basic structure of our user management function in place, and then we altered the model, view, and controller to bring the pieces together.
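The model/view/controller changes above reduce to a simple ordering rule: save the Person first so its id exists, then store that id on the User record. The following is a hypothetical in-memory sketch of that control flow; none of these names are Yii APIs, they only mirror the order of operations.

```python
# Hypothetical in-memory sketch of the create flow above: the Person row is
# saved first so its id exists, then the User row stores that id in person_id.

def create_user_with_person(db, person_data, user_data):
    person_id = db["next_id"]
    db["next_id"] += 1
    db["people"][person_id] = dict(person_data)      # $person->save()
    user = dict(user_data, person_id=person_id)      # $user->person_id = $person->id
    db["users"][user["username"]] = user             # $user->save()
    return user
```

The key point the sketch captures is that the foreign key on the user side can only be filled in after the person save has produced an id.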
Packt
25 Oct 2013
11 min read

Authenticating Your Application with Devise

(For more resources related to this topic, see here.) Signing in using authentication other than e-mails By default, Devise only allows e-mails to be used for authentication. For some people, this condition will lead to the question, "What if I want to use some other field besides e-mail? Does Devise allow that?" The answer is yes; Devise allows other attributes to be used to perform the sign-in process. For example, I will use username as a replacement for e-mail, and you can change it later to whatever you like, including userlogin, adminlogin, and so on. We are going to start by modifying our user model. Create a migration file by executing the following command inside your project folder: $ rails generate migration add_username_to_users username:string This command will produce a file, which is depicted by the following screenshot: The generated migration file Execute the migrate (rake db:migrate) command to alter your users table, and it will add a new column named username. You need to open Devise's main configuration file at config/initializers/devise.rb and modify the code: config.authentication_keys = [:username] config.case_insensitive_keys = [:username] config.strip_whitespace_keys = [:username] You have done enough modification to your Devise configuration, and now you have to modify the Devise views to add a username field to your sign-in and sign-up pages. By default, Devise loads its views from its gem code. The only way to modify the Devise views is to generate copies of them. This action will automatically override its default views. To do this, you can execute the following command: $ rails generate devise:views It will generate some files, which are shown in the following screenshot: Devise views files As I have previously mentioned, these files can also be used to customize the views. But we are going to talk about it a little later in this article. Now, you have the views and you can modify some files to insert the username field.
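Taken together, the three settings above mean: look the user up by the configured authentication key (username instead of email), case-insensitively, with surrounding whitespace stripped. A rough Python model of that behavior follows; it is an illustration only, not Devise code, and every name in it is made up.

```python
# Rough model of Devise's configured lookup: match on whichever attributes
# are listed in authentication_keys, ignoring case and outer whitespace
# (mirroring case_insensitive_keys and strip_whitespace_keys).

AUTHENTICATION_KEYS = ["username"]      # was ["email"] by default

def find_for_authentication(users, params):
    def matches(user):
        return all(
            str(user.get(key, "")).strip().lower()
            == str(params.get(key, "")).strip().lower()
            for key in AUTHENTICATION_KEYS
        )
    return next((u for u in users if matches(u)), None)
```

Swapping the list back to ["email"] restores the default behavior, which is the whole effect of the configuration change.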
These files are listed as follows: app/views/devise/sessions/new.html.erb: This is the view file for the sign-in page. Basically, all you need to do is change the email field into the username field. #app/views/devise/sessions/new.html.erb <h2>Sign in</h2> <%= notice %> <%= alert %> <%= form_for(resource, :as => resource_name, :url => session_path(resource_name)) do |f| %> <div><%= f.label :username %><br /> <%= f.text_field :username, :autofocus => true %></div> <div><%= f.label :password %><br /> <%= f.password_field :password %></div> <% if devise_mapping.rememberable? -%> <div><%= f.check_box :remember_me %> <%= f.label :remember_me %></div> <% end -%> <div><%= f.submit "Sign in" %></div> <% end %> <%= render "devise/shared/links" %> You are now allowed to sign in with your username. The modification will be shown, as depicted in the following screenshot: The sign-in page with username app/views/devise/registrations/new.html.erb: This is the view file for the registration (sign-up) page. It is a bit different from the sign-in page; in this file, you need to add the username field, so that the user can fill in their username when they perform the registration. #app/views/devise/registrations/new.html.erb <h2>Sign Up</h2> <%= form_for(resource, :as => resource_name, :url => registration_path(resource_name)) do |f| %> <%= devise_error_messages! %> <div><%= f.label :email %><br /> <%= f.email_field :email, :autofocus => true %></div> <div><%= f.label :username %><br /> <%= f.text_field :username %></div> <div><%= f.label :password %><br /> <%= f.password_field :password %></div> <div><%= f.label :password_confirmation %><br /> <%= f.password_field :password_confirmation %></div> <div><%= f.submit "Sign up" %></div> <% end %> <%= render "devise/shared/links" %> Especially for registration, you need to perform extra modifications. Mass assignment rules are written in the app/controllers/application_controller.rb file, and now we are going to modify them a little.
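Before changing the sanitizer, it helps to see what it actually does: the permit call is a whitelist, and any parameter not named in it is dropped before mass assignment. The following Python stand-in is only an illustration of that idea, not the Rails API.

```python
# Minimal whitelist filter mimicking what the Devise parameter sanitizer's
# permit call does: keep only the named keys, silently drop everything else.

SIGN_UP_PARAMS = ["email", "username", "password", "password_confirmation"]

def permit(params, allowed):
    return {k: v for k, v in params.items() if k in allowed}
```

This is why adding :username to the sanitizer is mandatory: without it, the username value submitted by the form would be filtered out before it ever reached the model.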
Add username to the sanitizer for sign-in and sign-up, and you will have something as follows: #this code is written inside the configure_permitted_parameters function devise_parameter_sanitizer.for(:sign_in) {|u| u.permit(:email, :username)} devise_parameter_sanitizer.for(:sign_up) {|u| u.permit(:email, :username, :password, :password_confirmation)} These changes will allow you to perform a sign-up along with the username data. The result of the preceding example is shown in the following screenshot: The sign-up page with username I want to add a new case for your sign-in, which uses only one field for both username and e-mail. This means that you can sign in either with your e-mail ID or username, like in Twitter's sign-in form. Based on what we have done before, you already have username and email columns; now, open /app/models/user.rb and add the following line: attr_accessor :signin Next, you need to change the authentication keys for Devise. Open /config/initializers/devise.rb and change the value for config.authentication_keys, as shown in the following code snippet: config.authentication_keys = [ :signin ] Let's go back to our user model. You have to override the lookup function that Devise uses when performing a sign-in. To do this, add the following method inside your model class (note that the :signin value is pulled out of the conditions hash first, so it can be matched against both columns): def self.find_first_by_auth_conditions(warden_conditions) conditions = warden_conditions.dup signin = conditions.delete(:signin) where(conditions).where(["lower(username) = :value OR lower(email) = :value", { :value => signin.downcase }]).first end As an addition, you can add a validation for your username, so its uniqueness check will be case insensitive. Add the following validation code into your user model: validates :username, :uniqueness => {:case_sensitive => false} Please open /app/controllers/application_controller.rb and make sure you have this code to perform parameter filtering: before_filter :configure_permitted_parameters, if: :devise_controller?
protected def configure_permitted_parameters devise_parameter_sanitizer.for(:sign_in) {|u| u.permit(:signin)} devise_parameter_sanitizer.for(:sign_up) {|u| u.permit(:email, :username, :password, :password_confirmation)} end We're almost there! Currently, I assume that you've already stored an account that contains the e-mail ID and username. So, you just need to make a simple change in your sign-in view file (/app/views/devise/sessions/new.html.erb). Make sure that the file contains this code: <h2>Sign in</h2> <%= notice %> <%= alert %> <%= form_for(resource, :as => resource_name, :url => session_path(resource_name)) do |f| %> <div><%= f.label "Username or Email" %><br /> <%= f.text_field :signin, :autofocus => true %></div> <div><%= f.label :password %><br /> <%= f.password_field :password %></div> <% if devise_mapping.rememberable? -%> <div><%= f.check_box :remember_me %> <%= f.label :remember_me %> </div> <% end -%> <div><%= f.submit "Sign in" %></div> <% end %> <%= render "devise/shared/links" %> You can see that you don't have a username or email field anymore. The field is now replaced by a single field named :signin that will accept either the e-mail ID or the username. It's efficient, isn't it? Updating the user account Basically, you are already allowed to access your user account when you activate the registerable module in the model. To access the page, you need to log in first and then go to /users/edit. The page is as shown in the following screenshot: The edit account page But, what if you want to edit your username or e-mail ID? How will you do that? What if you have extra information in your users table, such as addresses, birth dates, bios, and passwords as well? How will you edit these? Let me show you how to edit your user data including your password, or edit your user data without editing your password. Editing your data, including the password: To perform this action, the first thing that you need to do is modify your view.
Your view should contain the following code: <div><%= f.label :username %><br /> <%= f.text_field :username %></div> Now, we are going to overwrite Devise's logic. To do this, you have to create a new controller named registrations_controller. Please use the rails command to generate the controller, as shown: $ rails generate controller registrations update It will produce a file located at app/controllers/. Open the file and make sure you write this code within the controller class: class RegistrationsController < Devise::RegistrationsController def update new_params = params.require(:user).permit(:email, :username, :current_password, :password, :password_confirmation) @user = User.find(current_user.id) if @user.update_with_password(new_params) set_flash_message :notice, :updated sign_in @user, :bypass => true redirect_to after_update_path_for(@user) else render "edit" end end end Let's look at the code. Rails 4 has a new way of organizing whitelisted attributes (strong parameters). Therefore, before performing mass assignment, you have to prepare your data. This is done in the first line of the update method. Now, if you see the code, there's a method defined by Devise named update_with_password. This method will use mass assignment with the provided data. Since we have prepared the data before using it, it will be fine. Next, you have to edit your route file a bit. You should modify the rule defined by Devise, so instead of using the original controller, Devise will use the controller you created before. The modification should look as follows: devise_for :users, :controllers => {:registrations => "registrations"} Now you have modified the original user edit page, and it will be a little different. You can turn on your Rails server and see it in action. The view is as depicted in the following screenshot: The modified account edit page Now, try filling up these fields one by one.
If you fill them in with different values, you will be updating all the data (e-mail, username, and password), and this sounds dangerous. You can modify the controller to have better data update security; it all depends on your application's workflows and rules. Editing your data, excluding the password: Actually, you already have almost everything it takes to update data without changing your password. All you need to do is modify your registrations_controller.rb file. Your update function should be as follows: class RegistrationsController < Devise::RegistrationsController def update new_params = params.require(:user).permit(:email, :username, :current_password, :password, :password_confirmation) change_password = true if params[:user][:password].blank? params[:user].delete("password") params[:user].delete("password_confirmation") new_params = params.require(:user).permit(:email, :username) change_password = false end @user = User.find(current_user.id) is_valid = false if change_password is_valid = @user.update_with_password(new_params) else is_valid = @user.update_without_password(new_params) end if is_valid set_flash_message :notice, :updated sign_in @user, :bypass => true redirect_to after_update_path_for(@user) else render "edit" end end end The main difference from the previous code is that now you have an algorithm that checks whether the user intends to update their data along with the password or not. If not, the code removes the password parameters and calls the update_without_password method instead; note that its return value is also assigned to is_valid, so a successful no-password update redirects properly instead of re-rendering the edit page. Now you have code that allows you to edit with or without a password. Refresh your browser and try editing with or without a password. It won't be a problem anymore. Summary Now, I believe that you will be able to make your own Rails application with Devise. You should be able to make your own customizations based on your needs.
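The controller logic above boils down to one branch: if the password field comes in blank, strip the password keys and take the no-password path, otherwise take the password-checking path. A small Python sketch of just that decision (names are illustrative, not Rails or Devise APIs):

```python
# Decide which update path the controller takes based on whether the
# password parameter is blank, mirroring the branch in the Devise example.

def choose_update(user_params):
    if not user_params.get("password"):
        kept = {k: v for k, v in user_params.items()
                if k not in ("password", "password_confirmation")}
        return ("update_without_password", kept)
    return ("update_with_password", dict(user_params))
```

Isolating the branch like this makes the security property easy to state: password fields never reach the no-password update path.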
Resources for Article: Further resources on this subject: Integrating typeahead.js into WordPress and Ruby on Rails [Article] Facebook Application Development with Ruby on Rails [Article] Designing and Creating Database Tables in Ruby on Rails [Article]

Packt
19 Aug 2016
11 min read

Adding Charts to Dashboards

In this article by Vince Sesto, author of the book Learning Splunk Web Framework, we will study adding charts to dashboards. We have a development branch to work from and we are going to work further with the SimpleXMLDashboard dashboard. We should already be on our development server environment as we have just switched over to our new development branch. We are going to create a new bar chart, showing the daily NASA site access for our top educational users. We will change the label of the dashboard and finally place an average overlay on top of our chart: (For more resources related to this topic, see here.) Get into the local directory of our Splunk App, and into the views directory where all our Simple XML code is for all our dashboards: cd $SPLUNK_HOME/etc/apps/nasa_squid_web/local/data/ui/views We are going to work on the simplexmldashboard.xml file. Open this file with a text editor or your favorite code editor. Don't forget, you can also use the Splunk Code Editor if you are not comfortable with other methods. It is not compulsory to indent and nest your Simple XML code, but it is a good idea to have consistent indentation and commenting to make sure your code is clear and stays as readable as possible. Let's start by changing the name of the dashboard that is displayed to the user. Change line 2 to the following line of code (don't include the line numbers): 2   <label>Educational Site Access</label> Move down to line 16 and you will see that we have closed off our row element with a </row>. We are going to add in a new row where we will place our new chart. 
After the sixteenth line, add the following three lines to create a new row element, a new panel to add our chart, and finally, open up our new chart element: 17 <row> 18 <panel> 19 <chart> The next two lines will give our chart a title and we can then open up our search: 20 <title>Top Educational User</title> 21 <search> To create a new search, just like we would enter in the Splunk search bar, we will use the query tag as listed with our next line of code. In our search element, we can also set the earliest and latest times for our search, but in this instance we are using the entire data source: 22 <query>index=main sourcetype=nasasquidlogs | search calclab1.math.tamu.edu | stats count by MonthDay </query> 23 <earliest>0</earliest> 24 <latest></latest> 25 </search> We have completed our search and we can now modify the way the chart will look on our panel with the option chart elements. In our next four lines of code, we set the chart type as a column chart, set the legend to the bottom of the chart area, remove any master legend, and finally set the height as 250 pixels: 26 <option name="charting.chart">column</option> 27 <option name="charting.legend.placement">bottom</option> 28 <option name="charting.legend.masterLegend">null</option> 29 <option name="height">250px</option> We need to close off the chart, panel, row, and finally the dashboard elements. Make sure you only close off the dashboard element once: 30 </chart> 31 </panel> 32 </row> 33 </dashboard>  We have done a lot of work here. We should be saving and testing our code for every 20 or so lines that we add, so save your changes. And as we mentioned earlier in the article, we want to refresh our cache by entering the following URL into our browser: http://<host:port>/debug/refresh When we view our page, we should see a new column chart at the bottom of our dashboard showing the usage per day for the calclab1.math.tamu.edu domain. But we’re not done with that chart yet. 
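As an aside, the stats count by MonthDay part of that search is just a group-by count, and a few lines of Python over a made-up event list show the shape of the data the column chart receives (the field name MonthDay is taken from the search above; the events are invented for illustration):

```python
# Group-by count equivalent to `... | stats count by MonthDay`, run over a
# toy event list so the chart's expected values can be checked by hand.
from collections import Counter

def counts_by_day(events):
    return dict(Counter(e["MonthDay"] for e in events))
```

Each key/value pair in the result corresponds to one column in the chart: the day on the x axis and the count as the column height.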
We want to put a line overlay showing the average site access per day for our user. Open up simplexmldashboard.xml again and change our query in line 22 to the following: 22 <query>index=main sourcetype=nasasquidlogs | search calclab1.math.tamu.edu | stats count by MonthDay| eventstats avg(count) as average | eval average=round(average,0)</query> Simple XML contains some special characters, which are ', <, >, and &. If you intend to use advanced search queries, you may need to use these characters, and if so you can do so by either using their HTML entity or using the CDATA tags where you can wrap your query with <![CDATA[ and ]]>. We now need to add two new option lines into our Simple XML code. After line 29, add the following two lines, without replacing all of the closing elements that we previously entered. The first will set the chart overlay field to be displayed for the average field; the next will set the color of the overlay: 30 <option name="charting.chart.overlayFields">average</option> 31 <option name="charting.fieldColors">{"count": 0x639BF1, "average":0xFF5A09}</option>    Save your new changes, refresh the cache, and then reload your page. You should be seeing something similar to the following screenshot: The Simple XML of charts As we can see from our example, it is relatively easy to create and configure our charts using Simple XML. When we completed the chart, we used five options to configure the look and feel of the chart, but there are many more that we can choose from. Our chart element needs to always be in between its two parent elements, which are row and panel. Within our chart element, we always start with a title for the chart and a search to power the chart. We can then set out additional optional settings for earliest and latest, and then a list of options to configure the look and feel as we have demonstrated below. 
If these options are not specified, default values are provided by Splunk: 1 <chart> 2 <title></title> 3 <search> 4 <query></query> 5 <earliest>0</earliest> 6 <latest></latest> 7 </search> 8 <option name=""></option> 9 </chart> There is a long list of options that can be set for our charts; the following is a list of the more important options to know: charting.chart: This is where you set the chart type, with area, bar, bubble, column, fillerGauge, line, markerGauge, pie, radialGauge, and scatter being the charts that you can choose from. charting.backgroundColor: Set the background color of your chart with a Hex color value. charting.drilldown: Set to either all or none. This allows the chart to be clicked on to allow the search to be drilled down for further information. charting.fieldColors: This can map a color to a field as we did with our average field in the preceding example. charting.fontColor: Set the value of the font color in the chart with a Hex color value. height: The height of the chart in pixels. The value must be between 100 and 1,000 pixels. A lot of the options seem to be self-explanatory, but a full list of options and a description can be found on the Splunk reference material at the following URL: http://docs.splunk.com/Documentation/Splunk/latest/Viz/ChartConfigurationReference. Expanding our Splunk App with maps We will now go through another example in our NASA Squid and Web Data App to run through a more complex type of visualization to present to our user. We will use the Basic Dashboard that we created, but we will change the Simple XML to give it a more meaningful name, and then set up a map to present to our users where our requests are actually coming from. Maps use a map element and don't rely on the chart element as we have been using.
The Simple XML code for the dashboard we created earlier in this article looks like the following: <dashboard> <label>Basic Dashboard</label> </dashboard> So let's get to work and give our Basic Dashboard a little "bling": Get into the local directory of our Splunk App, and into the views directory where all our Simple XML code is for our Basic Dashboard: cd $SPLUNK_HOME/etc/apps/nasa_squid_web/local/data/ui/views Open the basic_dashboard.xml file with a text editor or your favorite code editor. Don't forget, you can also use the Splunk Code Editor if you are not comfortable with other methods. We might as well remove all of the code that is in there, because it is going to look completely different than the way it did originally. Now start by setting up your dashboard and label elements, with a label that will give you more information on what the dashboard contains: 1 <dashboard> 2 <label>Show Me Your Maps</label> Open your row, panel, and map elements, and set a title for the new visualization. Make sure you use the map element and not the chart element: 3 <row> 4 <panel> 5 <map> 6 <title>User Locations</title>       We can now add our search query within our search elements. We will only search for IP addresses in our data and use the geostats Splunk function to extract a latitude and longitude from the data: 7 <search> 8 <query>index=main sourcetype="nasasquidlogs" | search From=1* | iplocation From | geostats latfield=lat longfield=lon count by From</query> 9 <earliest>0</earliest> 10 <latest></latest> 11 </search>  The search query that we have in our Simple XML code is more advanced than the previous queries we have implemented. If you need further details on the functions provided in the query, please refer to the Splunk search documentation at the following location: http://docs.splunk.com/Documentation/Splunk/6.4.1/SearchReference/WhatsInThisManual. 
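The iplocation-plus-geostats pipeline above is easier to reason about with a toy model: resolve each source IP to a latitude/longitude pair, then count events per location. In the sketch below, the GEO lookup table is invented purely for illustration; in Splunk, the real coordinates come from the built-in GeoIP database.

```python
# Toy model of `iplocation From | geostats ... count by From`: map each IP
# to a (lat, lon) pair and aggregate counts per location. GEO is a made-up
# lookup table standing in for a real GeoIP database.

GEO = {
    "128.194.0.10": (30.62, -96.34),   # fictitious campus address
    "192.0.2.7": (38.48, -102.00),     # documentation-range address
}

def geostats_count(ips):
    out = {}
    for ip in ips:
        loc = GEO.get(ip)
        if loc is None:
            continue  # unresolvable IPs simply drop off the map
        out[loc] = out.get(loc, 0) + 1
    return out
```

Each resulting (lat, lon) key becomes a marker on the map, with the count controlling the marker's size.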
Now all we need to do is close off all our elements, and that is all that is needed to create our new visualization of IP address requests: 12 </map> 13 </panel> 14 </row> 15 </dashboard> If your dashboard looks similar to the image below, I think it looks pretty good. But there is more we can do with our code to make it look even better. We can set extra options in our Simple XML code to zoom in, only display a certain part of the map, set the size of the markers, and finally set the minimum and maximum that can be zoomed into the screen. The map looks pretty good, but it seems that a lot of the traffic is being generated by users in the USA. Let's have a look at setting some extra configurations in our Simple XML to change the way the map displays to our users. Get back to our basic_dashboard.xml file and add the following options: After our search element is closed off, we can add the following options. First we will set the maximum clusters to be displayed on our map as 100. This will hopefully speed up our map being displayed, and allow all the data points to be viewed further with the drilldown option: 12 <option name="mapping.data.maxClusters">100</option> 13 <option name="mapping.drilldown">all</option> We can now set our central point for the map to load using latitude and longitude values. In this instance, we are going to set the heart of the USA as our central point. We are also going to set our zoom value as 4, which will zoom in a little further from the default of 2: 14 <option name="mapping.map.center">(38.48,-102)</option> 15 <option name="mapping.map.zoom">4</option> Remember that we need to have our map, panel, row, and dashboard elements closed off. Save the changes and reload the cache. Let's see what is now displayed: Your map should now be displaying a little faster than it originally did. It will be focused on the USA, where the bulk of the traffic is coming from.
The map element has numerous options to use and configure, and a full list can be found at the following Splunk reference page: http://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML. Summary In this article we covered Simple XML charts and how to expand our Splunk App with maps. Resources for Article: Further resources on this subject: The Splunk Web Framework [article] Splunk's Input Methods and Data Feeds [article] The Splunk Interface [article]

Packt
18 Jul 2013
31 min read

Implementing a Log-in screen using Ext JS

In this article Loiane Groner, author of Mastering Ext JS, talks about developing a login page for an application using Ext JS. It is very common to have a login page for an application, which we can use to control access to the system by identifying and authenticating the user through the credentials presented by him/her. Once the user is logged in, we can track the actions performed by the user. We can also restrict access to some features and screens of the system that we do not want a particular user, or even a specific group of users, to have access to. In this article, we will cover: Creating the login page Handling the login page on the server Adding the Caps Lock warning message in the Password field Submitting the form by pressing the Enter key Encrypting the password before sending to the server (For more resources related to this topic, see here.) The Login screen The Login window will be the first view we are going to implement in this project. We are going to build it step by step and it will have the following capabilities: User will enter the username and password to log in Client-side validation (username and password required to log in) Submit the Login form by pressing Enter Encrypt the password before sending to the server Password Caps Lock warning (similar to Windows OS) Multilingual capability Except for the multilingual capability, we will implement all the other features throughout this topic. So at the end of the implementation, we will have a Login window that looks like the following: So let's get started! Creating the Login screen Under the app/view directory, we will create a new file named Login.js. In this file, we will implement all the code that the user is going to see on the screen.
Inside the Login.js file, we will implement the following code: Ext.define('Packt.view.Login', { // #1 extend: 'Ext.window.Window', // #2 alias: 'widget.login', // #3 autoShow: true, // #4 height: 170, // #5 width: 360, // #6 layout: { type: 'fit' // #7 }, iconCls: 'key', // #8 title: "Login", // #9 closeAction: 'hide', // #10 closable: false // #11 }); On the first line (#1) we have the definition of the class. To define a class we use Ext.define, followed by parentheses (()), and inside the parentheses we first declare the name of the class, followed by a comma (,) and curly brackets ({}), and at the end a semicolon. All the configurations and properties (#2 to #11) go inside the curly brackets. We also need to pay attention to the name of the class. This is the formula suggested by Sencha in Ext JS MVC projects: App Namespace + package name + name of the JS file. We defined the namespace as Packt (configuration name inside the app.js file). We are creating a View for this project, so we will create the JS file under the view package/directory. And then, the name of the file we created is Login.js; therefore, we will lose the .js part and use only Login as the name of the View. Putting it all together, we have Packt.view.Login and this will be the name of our class. Then, we are saying that the Login class will extend from the Window class (#2), because we want it to be displayed inside a window, and not on any other component. We are also assigning this class an alias (#3). The alias for a class that extends from a component always starts with widget., followed by the alias we want to assign. The naming convention for an alias is lowercase. It is also important to remember that the alias must be unique in an application. In this case we want to assign login as the alias to this class so later we can instantiate this same class using its alias (which is the same as xtype).
For example, we can instantiate the Login class using four different options: Using the complete name of the class, which is the most used one: Ext.create('Packt.view.Login'); Using the alias in the Ext.create method: Ext.create('widget.login'); Using Ext.widget, which is a shorthand way of using Ext.ClassManager.instantiateByAlias: Ext.widget('login'); Using the xtype as an item of another component: items: [ { xtype: 'login' } ] In this book we will use the first, third, and fourth options most of the time. Then we have autoShow configured to true (#4). What happens with the window is that instantiating the component is not enough to display it. When we instantiate the window we will have its reference, but it will not be displayed on the screen. If we want it to be displayed we need to call the method show() manually. Another option is to have the autoShow configuration set to true. This way the window will be automatically displayed when we instantiate it. We also have the height (#5) and width (#6) of the window. We set the layout as fit (#7) because we want to add a form inside this window that will contain the username and password fields. Using the fit layout, the form will occupy all the body space of the window. Remember that when using the fit layout we can only have one item as a child component. We are setting an iconCls (#8) property on the window; this way we will have an icon of a key in the header of the window. We can also give a title to the window (#9), and in this case we chose Login. Following is the declaration of the key style used by the iconCls property: .key { background-image:url('../icons/key.png') !important; } All the styles we will create to use as iconCls have a format like the preceding one. And at last we have the closeAction (#10) and closable (#11) configurations. The closeAction configuration tells whether we want to destroy the window when we close it. In this case, we do not want to destroy it; we only want to hide it. The closable configuration tells whether we want to display the X icon on the top-right corner of the window. As this is a Login window, we do not want to give this option to the user. If you would like to, you can also set the resizable and draggable options to false. This will prevent the user from dragging and resizing the Login window. So far, this is the output we have: a single window with an icon at the top-left corner and the title Login. The next step is to add the form with the username and password fields. We are going to add the following code to the Login class:
The closable configuration tells if we want to display the X icon on the top-right corner of the window. As this is a Login window, we do not want to give this option for the user. If you would like to, you can also add the resizable and draggable options as false. This will prevent the user to drag the Login window around and also to resize it. So far, this will be the output we have. A single window with an icon at the top-left corner with a title Login : The next step is to add the form with the username and password fields. We are going to add the following code to the Login class: items: [ { xtype: 'form', // #12 frame: false, // #13 bodyPadding: 15, // #14 defaults: { // #15 xtype: 'textfield', // #16 anchor: '100%', // #17 labelWidth: 60 // #18 }, items: [ { name: 'user', fieldLabel: "User" }, { inputType: 'password', // #19 name: 'password', fieldLabel: "Password" } ] } ] As we are using the fit layout, we can only declare one child item in this class. So we are going to add a form (#12) and to make the form to look prettier, we are going to remove the frame property (#13) and also add padding to the form body (#14). The form's frame property is by default set to false. But by default, there is a blue border that appears if we to do not explicitly add this property set to false. As we are going to add two fields to the form, we probably want to avoid repeating some code. That is why we are going to declare some field configurations inside the defaults configuration of the form (#15); this way the configuration we declare inside defaults will be applied to all items of the form, and we will need to declare only the configurations we want to customize. As we are going to declare two fields, both of them will be of type textfield. The default layout of the form is the anchor layout, so we do not need to make this declaration explicit. However, we want both fields can occupy all the horizontal available space of the body of the form. 
That is why we are declaring anchor as 100% (#17). By default, the width attribute of the label of the TextField class is 100 pixels. It is too much space for a label User and Password, so we are going to decrease this value to 60 pixels (#18). And finally, we have the user text field and the password text field. The configuration name is what we are going to use to identify each field when we submit the form to the server. But there is only one detail missing: when the user types the password into the field the system cannot display its value, we need to mask it somehow. That is why inputType is 'password' (#19) for the password field, as we want to display bullets instead of the original value, and the user will not be able to see the password value. Now we have improved our Login window a little more. This is the output so far: Client-side validations The field component in Ext JS provides some client-side validation capability. This can save time and also bandwidth (the system will only make a server request when it is sure the information has passed the basic validation). It also helps to point out to the user where they have gone wrong in filling out the form. Of course, it is also good to validate the information again on the server side for security reasons, but for now we will focus on the validations we can apply to the form of our Login window. Let's brainstorm some validations we can apply to the username and password fields: The username and password must be mandatory—how are going to authenticate the user without a username and password? The user can only enter alphanumeric characters (A-Z, a-z, and 0-9) in both the fields. The user can only type between 3 and 25 chars in the username field. The user can only type between 3 and 15 chars in the password field. 
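The four brainstormed rules can be collapsed into a small plain-JavaScript check. This is just a toy to make the rules concrete; in the app itself, Ext's field validation (allowBlank, vtype, minLength, maxLength) does this work for us, and the function name here is our own invention:

```javascript
// A toy validator for the brainstormed rules: mandatory value,
// alphanumeric characters only, and a length between min and max.
function validateField(value, minLength, maxLength) {
    if (!value) return 'field is mandatory';
    if (!/^[A-Za-z0-9]+$/.test(value)) return 'alphanumeric characters only';
    if (value.length < minLength || value.length > maxLength) {
        return 'length must be between ' + minLength + ' and ' + maxLength;
    }
    return null; // no error: the value is valid
}

console.log(validateField('loiane', 3, 25)); // null (a valid username)
console.log(validateField('', 3, 25));       // 'field is mandatory'
console.log(validateField('jo', 3, 25));     // 'length must be between 3 and 25'
console.log(validateField('p@ss', 3, 15));   // 'alphanumeric characters only'
```

Note that the alphanumeric rule rejects symbols such as @, which is exactly why a custom VType is needed later for stronger password rules.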
So let's add into the code the ones that are common to both fields: allowBlank: false, // #20 vtype: 'alphanum', // #21 minLength: 3, // #22 msgTarget: 'under' // #23 We are going to add the preceding configurations inside the defaults configuration of the form, as they all apply to both the fields we have. First, both need to be mandatory (#20), we can only allow to enter alphanumeric characters (#21) and the minimum number of characters the user needs to input is three (#22). Then, a last common configuration is that we want to display any validation error message under the field (#23). And the only validation customized for each field is that we can enter a maximum of 25 characters in the User field: name: 'user', fieldLabel: "User", maxLength : 25 And a maximum of 15 characters in the Password field: inputType: 'password', name: 'password', fieldLabel: "Password", maxLength : 15 After we apply the client validations, we will have the following output in case the user went wrong in filling out the Login window: If you do not like it, we can change the place where the error message appears. We just need to change the msgTarget value. The available options are: title, under, side, and none. We can also show the error message as a tooltip (qtip) or display it in a specific target (inner HTML of a specific component). Creating custom VTypes Many systems have a special format for passwords. Let's say we need the password to have at least one digit (0-9), one letter lowercase, one letter uppercase, one special character (@, #, $, %, and so on) and its length between 6 and 20 characters. We can create a regular expression to validate that the password is entering into the app. And to do this, we can create a custom VType to do the validation for us. Creating a custom VType is simple. 
For our case, we can create a custom VType called customPass:

Ext.apply(Ext.form.field.VTypes, {
    customPass: function(val, field) {
        return /^((?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[@#$%]).{6,20})/.test(val);
    },
    customPassText: 'Not a valid password. The password must contain one digit, one lowercase letter, one uppercase letter, one special symbol (@, #, $ or %) and be between 6 and 20 characters long.'
});

customPass is the name of our custom VType, and we need to declare a function that will validate our regular expression. customPassText is the message that will be displayed to the user in case an incorrect password format is entered. The preceding code can be added anywhere in the code: inside the init function of a controller, inside the launch function of app.js, or even in a separate JavaScript file (recommended) where you can put all your custom VTypes. To use it, we simply need to add vtype: 'customPass' to our Password field. To learn more about regular expressions, please visit http://www.regular-expressions.info/.

Adding the toolbar with buttons

So far we have created the Login window, which contains a form with two fields that are already being validated as well. The only things missing are the two buttons: Cancel and Submit. We are going to add the buttons as items of a toolbar, and the toolbar will be added to the form as a docked item. Docked items can be docked to the top, right, left, or bottom of a panel (both the form and window components are subclasses of panel). In this case, we will dock the toolbar to the bottom of the form.
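As a quick aside, the password rule wrapped by the customPass VType can be exercised as a plain regular expression, with no Ext JS involved. This is only a sanity check of the pattern, not part of the application code:

```javascript
// The customPass rule: at least one digit, one lowercase letter,
// one uppercase letter, one of @#$%, and 6 to 20 characters in total.
const passwordRule = /^((?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[@#$%]).{6,20})/;

console.log(passwordRule.test('Abc@12')); // true: meets every requirement
console.log(passwordRule.test('abc@12')); // false: no uppercase letter
console.log(passwordRule.test('Abcdef')); // false: no digit and no symbol
console.log(passwordRule.test('A@b1'));   // false: fewer than 6 characters
```

Running a few known-good and known-bad values like this before wiring the VType into the form saves a round of debugging inside the UI.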
Add the following code right after the items configuration of the form: dockedItems: [ { xtype: 'toolbar', dock: 'bottom', items: [ { xtype: 'tbfill' //#24 }, { xtype: 'button', // #25 itemId: 'cancel', iconCls: 'cancel', text: 'Cancel' }, { xtype: 'button', // #26 itemId: 'submit', formBind: true, // #27 iconCls: 'key-go', text: "Submit" } ] } ] If we take a look back to the screenshot of the Login screen we first presented at the beginning of this article, we will notice that there is a component for the translation/multilingual capability. And after this component there is a space and then we have the Cancel and Submit buttons. As we do not have the multilingual component yet, we can only implement the two buttons, but they need to be at the right end of the form and we need to leave that space. That is why we first need to add a toolbar fill component (#24), which is going to instruct the toolbar's layout to begin using the right-justified button container. Then we will add the Cancel button (#25) and then the Submit button (#26). We are going to add icons to both buttons (iconCls) and later, when we implement the controller class, we will need a way to identify the buttons. This is why we assigned itemId to both of them. We already have the client validations, but even with the validations, the user can click on the Submit button and we want to avoid this behavior. That is why we are binding the Submit button to the form (#27); this way the button will only be enabled if the form has no error from the client validation. In the following screenshot, we can see the current output of the Login form (after we added the toolbar) and also verify the behavior of the Submit button: Running the code To execute the code we have created so far, we need to make a few changes in the app.js file. First, we need to declare views we are using (only one in this case). 
Also, as we are going to instantiate the Login class using its xtype, we need to declare this class in the requires declaration:

requires: [
    'Packt.view.Login'
],
views: [
    'Login'
],

And the last change is inside the launch function. Now we only need to replace the console.log message with the Login instance (#1):

splashscreen.next().fadeOut({
    duration: 1000,
    remove: true,
    listeners: {
        afteranimate: function(el, startTime, eOpts) {
            Ext.widget('login'); // #1
        }
    }
});

Now app.js is OK and we can execute what we have implemented so far!

Using itemId versus id: Ext.getCmp is bad!

Before we create the controller, we will need some knowledge about Ext.ComponentQuery selectors. In this topic we will discuss a subject that helps us better understand why we took some decisions while creating the Login window, and why we are going to take some other decisions in the controller topic. Whenever we can, we will always try to use the itemId configuration instead of id to uniquely identify a component. And here comes the question: why? When using id, we need to make sure that the id is unique, and that none of the other components of the application has the same id. Now imagine a situation where you are working with other developers on the same team and it is a big application. How can you make sure that the id is going to be unique? Pretty difficult, don't you think? It can be a hard task to achieve. Components created with an id may be accessed globally using Ext.getCmp, which is a shorthand reference for Ext.ComponentManager.get. To mention one example: when using Ext.getCmp to retrieve a component by its id, it is going to return the last component declared with the given id. So if the id is not unique, it can return a component that you are not expecting, and this can lead to an error in the application. Do not panic! There is an elegant solution, which is using itemId instead of id.
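To see the difference concretely, here is a plain-JavaScript model of the two lookup styles. This is a toy illustration with ordinary objects, not Ext's actual ComponentManager or ComponentQuery code:

```javascript
// Two "windows", each containing a button whose itemId is 'save'.
const loginWindow = {
    alias: 'login',
    items: [{ itemId: 'save', label: 'Save login' }]
};
const registrationWindow = {
    alias: 'registration',
    items: [{ itemId: 'save', label: 'Save registration' }]
};
const allContainers = [loginWindow, registrationWindow];

// A global lookup (like 'button#save') matches both buttons: ambiguous.
const globalMatches = allContainers
    .flatMap(w => w.items.filter(i => i.itemId === 'save'));
console.log(globalMatches.length); // 2

// Scoping by container first (like 'login button#save') is unambiguous.
const scopedMatches = allContainers
    .filter(w => w.alias === 'login')
    .flatMap(w => w.items.filter(i => i.itemId === 'save'));
console.log(scopedMatches.length);   // 1
console.log(scopedMatches[0].label); // 'Save login'
```

Duplicated itemId values are harmless as long as every query is scoped to the right container, which is exactly the pattern the controller will use.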
The itemId can be used as an alternative way to get a reference to a component. The itemId is an index into the container's internal MixedCollection, and that is why the itemId is scoped locally to the container. This is the biggest advantage of the itemId. For example, we can have a class named MyWindow1, extending from window, and inside this class we can have a button with the item ID submit. Then we can have another class named MyWindow2, also extending from window, and also with a button with the item ID submit. Having two item IDs with the same value is not an issue. We only need to be careful when we use Ext.ComponentQuery to retrieve the component we want. For example, say we have a Login window whose alias is login and another screen called the Registration window whose alias is registration. Both windows have a Save button whose itemId is save. If we simply use Ext.ComponentQuery.query('button#save'), the result will be an array with two results. However, if we narrow down the selector even more, let's say we want the Login window's Save button and not the Registration window's Save button, we need to use Ext.ComponentQuery.query('login button#save'), and the result will be a single item, which is exactly what we expect. You will notice that we will not use Ext.getCmp in the code of our project, because it is not a good practice, especially for Ext JS 4, and also because we can use itemId and Ext.ComponentQuery instead. We will understand Ext.ComponentQuery better during the next topic.

Creating the login controller

We have created the view for the Login screen so far. As we are following the MVC architecture, we are not implementing the user interaction in the View class. If we click on the buttons on the Login class, nothing will happen, because we have not yet implemented this logic. We are going to implement this logic now in the controller class. Under the app/controller directory, we will create a new file named Login.js.
In this file we will implement all the code related to the event management of the Login screen. Inside the Login.js file we will implement the following code, which is only the base of the controller class we are going to implement:

Ext.define('Packt.controller.Login', { // #1
    extend: 'Ext.app.Controller', // #2

    views: [
        'Login' // #3
    ],

    init: function(application) { // #4
        this.control({ // #5
        });
    }
});

As usual, on the first line of the class we have its name (#1). Following the same formula we used for view/Login.js, we will have Packt (app namespace) + controller (name of the package) + Login (the name of the file), resulting in Packt.controller.Login. Note that the controller JS file (controller/Login.js) has the same name as view/Login.js, but that is OK because they are in different packages. It is good to use similar names for the views, models, stores, and controllers because it makes the project easier to maintain later. For example, let's say that after the project is in production, we need to add a new button on the Login screen. With only this information (and a little bit of MVC concept knowledge), we know we will need to add the button code in the view/Login.js file and listen to any events that might be fired by this button in controller/Login.js. Easier maintainability is also a great pro of using the MVC architecture. The controller classes need to extend from Ext.app.Controller (#2), so we will always use this parent class for our controllers. Then we have the views declaration (#3), which is where we are going to declare all the views that this controller will care about. In this case, we only have the Login view so far. We will add more views later in this article. Next, we have the init method declaration (#4). The init method is called before the application boots, before the launch function of Ext.application (app.js). The controller will also load the views, models, and stores declared inside its class.
Then we have the control method configured (#5). This is where we are going to listen to all the events we want the controller to react to. And as we are coding the events fired by the Login window and its child components, this will be our scope in this controller.

Adding the controller to app.js

Now that we already have the base of the login controller, we need to add it to the app.js file. We can remove this code, since the controller will be responsible for loading the view/Login.js file for us:

requires: [
    'Packt.view.Login'
],
views: [
    'Login'
],

And add the controllers declaration:

controllers: [
    'Login'
],

As our project is only starting, declaring the views in the controller classes will help us keep the code more organized, as we do not need to declare all the application's views in the app.js file.

Listening to the button click event

Our next step is to start listening to the Login window events. First, we are going to listen to the Submit and Cancel buttons. We already know that we are going to add the listeners inside the this.control declaration. The format that we need to use is the following:

'Ext.ComponentQuery selector': {
    eventWeWantToListenTo: functionOrMethodWeWantToExecute
}

First, we need to pass the selector that is going to be used by the Ext.ComponentQuery class to find the component. Then we need to list the event that we want to listen to. And then we need to declare the function that is going to be executed when the event we are listening to is fired, or declare the name of the controller method that is going to be executed when the event is fired. In our case, we are going to declare the method, only for code organization purposes. Now let's focus on finding the correct selector for the Submit and Cancel buttons.
According to the Ext.ComponentQuery API documentation, we can retrieve components by using their xtype (if you are already familiar with jQuery, you will notice that Ext.ComponentQuery selectors behave very similarly to jQuery selectors). Well, we are trying to retrieve two buttons, and their xtype is button. We could then try the selector button. But before we start coding, let's make sure that this is the correct selector, so we do not have to keep changing the code while trying to figure out the right one. There is one very useful tip we can try: open the browser console (command editor), type the following command, and click on Run:

Ext.ComponentQuery.query('button');

As we can see in the screenshot, it returned an array of the buttons that were found by the selector we used, and the array contains six buttons; too many buttons, and not what we want. We want to narrow it down to the Submit and Cancel buttons. Let's try to draw a path of the Login window using the component xtypes we used: we have a Login window (xtype: login or window), inside the window we have a form (xtype: form), inside the form we have a toolbar (xtype: toolbar), and inside the toolbar we have two buttons (xtype: button). Therefore, we have login-form-toolbar-button. However, if we use login-form-button we will have the same result, because we do not have any other buttons inside the form. So we can try the following command:

Ext.ComponentQuery.query('login form button');

So let's try this last selector on the command editor. Now the result is an array of two buttons, and these are the buttons that we are looking for! There is still one detail missing: if we use the login form button selector, it will listen to the click event (which is the event we want to listen to) of both buttons. When we click on the Cancel button one thing should happen (reset the form), and when we click on the Submit button, another thing should happen (submit the form to the server to validate the login).
So we still want to narrow down the selector even more, until we have one selector that returns only the Cancel button and another that returns only the Submit button. Going back to the view/Login code, notice that we declared a configuration named itemId on both buttons. We can use these itemId configurations to identify the buttons in a unique way. According to the Ext.ComponentQuery API docs, we can use # as a prefix for itemId. So let's try the following command on the command editor to get the Submit button reference:

Ext.ComponentQuery.query('login form button#submit');

The output will be only one button, as we expect. Now let's try the following command to retrieve the Cancel button reference:

Ext.ComponentQuery.query('login form button#cancel');

The output will be only one button, as we expect. So now we have the selectors that we were looking for! The console command editor is a great tool, and using it can save us a lot of time when trying to find the exact selector that we want, instead of coding, testing, finding out it is not the selector we want, coding again, testing again, and so on. Could we use only button#submit or button#cancel as selectors? Yes, we could use a shorter selector, and it would work perfectly for now. However, as the application grows and we declare many more classes and buttons, the event would be fired for all buttons that have an itemId named submit or cancel, and this could lead to an error in the application. We always need to remember that itemId is scoped locally to the container. By using login form button as the selector, we make sure that the event will come from a button from the Login window.
So let's implement the code inside the controller class: init: function(application) { this.control({ "login form button#submit": { // #1 click: this.onButtonClickSubmit // #2 }, "login form button#cancel": { // #3 click: this.onButtonClickCancel // #4 } }); }, onButtonClickSubmit: function(button, e, options) { console.log('login submit'); // #5 }, onButtonClickCancel: function(button, e, options) { console.log('login cancel'); // #6 } In the preceding code, we have first the listener to the Submit button (#1), and on the following line we say that we want to listen to the click event, and then, when the click event of the Submit button is fired, the onButtonClickSubmit method should be executed (#2). Then we have the same for the Cancel button: we have the listener to the Cancel button (#3), and on the following line we say that we want to listen to the click event, and then, when the click event of the Cancel button is fired, the onButtonClickCancel method should be executed (#4). Next, we have the declaration of the methods onButtonClickSubmit and onButtonClickCancel. For now, we are only going to output a message on the console to make sure that our code is working. So we are going to output login submit (#5) in case the user clicks on the Submit button, and login cancel (#6) in case the user clicks on the Cancel button. But how do you know which are the parameters the event method can receive? You can find the answer to this question in the documentation. If we take a look at the click event in the documentation, this is what we will find: This is exactly what we declared. For all the other event listeners, we will go to the docs and see which are the parameters the event accepts, and then list them as parameters in our code. This is also a very good practice. We should always list out all the arguments from the docs, even if we are only interested in the first one. 
This way we always know that we have the full collection of the parameters, and this can come very handy when we are doing maintenance of the application. Let's go ahead and try it. Click on the Cancel button and then on the Submit button. This should be the output: Cancel button listener implementation Let's remove the console.log messages and add the code we actually want the methods to execute. First, let's work on the onButtonClickCancel method. When we execute this method, we want it to reset the form. So this is the logic sequence we want to program: Get the Login form reference. Call the method getForm, which is going to return the form basic class. Call the reset method to reset the form. The form basic class provides input field management, validation, submission, and form loading services. The Ext.form.Panel class (xtype: form) works as the container, and it is automatically hooked up with an instance of Ext.form.Basic. That is why we need to get the form basic reference to call the reset method. If we take a look at the parameters we have available on the onButtonClickCancel method, we have: button, e, and options, and none of them provides us the form reference. So what can we do about it? We can use the up method from the Button class (inherited from the AbstractComponent class). With this method, we can use a selector to try to retrieve the form. The up method navigates up the component hierarchy, searching from an ancestor container that matches the passed selector. As the button is inside a toolbar that is inside the form we are looking for, if we use button.up('form'), it will retrieve exactly what we want. Ext JS will see what is the first ancestor in the hierarchy of the button and will find a toolbar. Not what we are looking for. So it goes up again and it will find a form, which is what we are looking for. 
So this is the code that we are going to implement inside the onButtonClickCancel method:

button.up('form').getForm().reset();

Some people like to implement the toolbar inside the window instead of the form. No problem at all; it is only a matter of how you like to implement it. In this case, if the toolbar that contains the Submit button is inside the Window class, we can use:

button.up('window').down('form').getForm().reset()

And we will have the same result!

Submit button listener implementation

Now we need to implement the onButtonClickSubmit method. Inside this method, we want to program the logic to send the username and password values to the server so that the user can be authenticated. We can implement two programming logics inside this method: the first one is to use the submit method that is provided by the form basic class, and the second one is to use an Ajax call to submit the values to the server. Either way, we will achieve what we want. However, there is one detail that we need to know before making this decision: if we use the submit method of the form basic class, we will not be able to encrypt the password before we send it to the server, and if we take a look at the parameters sent to the server, the password will be plain text, which is not good. Using the Ajax request gives the same result; however, we can encrypt the password value before sending it to the server. So, apparently, the second option seems better, and that is the one we will implement.
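As an aside, the up() traversal used by the Cancel handler can be sketched in plain JavaScript. This is a toy model of the parent-chain walk, using an ownerCt property like Ext's, but it is not Ext's actual implementation:

```javascript
// A toy version of component.up(selector): walk the ownerCt (parent)
// chain until a component of the requested xtype is found.
function up(component, xtype) {
    let current = component.ownerCt;
    while (current && current.xtype !== xtype) {
        current = current.ownerCt;
    }
    return current || null;
}

// button -> toolbar -> form -> window, wired like the Login screen.
const win = { xtype: 'window' };
const form = { xtype: 'form', ownerCt: win };
const toolbar = { xtype: 'toolbar', ownerCt: form };
const button = { xtype: 'button', ownerCt: toolbar };

console.log(up(button, 'form') === form);  // true: skips the toolbar
console.log(up(button, 'window') === win); // true: keeps climbing
console.log(up(button, 'grid'));           // null: no such ancestor
```

This is why button.up('form') works from inside the toolbar: the toolbar does not match, so the walk continues one level higher.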
So to summarize, following are the steps we need to perform in this method: Get the Login form reference Get the Login window reference (so that we can close it once the user has been authenticated) Get the username and password values from the form Encrypt the password Send login information to the server Handle the server response If user is authenticated display application If not, display an error message First, let's get the references that we need: var formPanel = button.up('form'), login = button.up('login'), user = formPanel.down('textfield[name=user]').getValue(), pass = formPanel.down('textfield[name=password]').getValue(); To get the form reference, we can use the button.up('form') code that we already used in the onButtonClickCancel method; to get the Login window reference we can do the same thing, only changing the selector to login or window. Then to get the values from the User and Password fields we can use the down method, but this time the scope will start from the form reference. For the selector we will use the text field xtype, and to make sure we are retrieving the text field we want, we can create an itemId attribute, but there is no need for it. We can use the name attribute since the user and password fields have different names and they are unique within the Login window. To use attributes within a selector we must wrap it in brackets. The next step is to submit the values to the server: if (formPanel.getForm().isValid()) { Ext.Ajax.request({ url: 'php/login.php', params: { user: user, password: pass } }); } If we try to run this code, the application will send the request to the server, but we will get an error as the response because we do not have the login.php page implemented yet. That's OK because we are interested in other details right now. With Firebug or Chrome Developer Tools enabled, open the Net tab and filter by the XHR requests. 
Make sure to enter a username and password (any valid value so that we can click on the Submit button). This will be the output: We still do not have the password encrypted. The original value is still being displayed and this is not good. We need to encrypt the password. Under the app directory, we will create a new folder named util where we are going to create all the utility classes. We will also create a new file named MD5.js; therefore, we will have a new class named Packt.util.MD5. This class contains a static method called encode and this method encodes the given value using the MD5 algorithm. To understand more about the MD5 algorithm go to http://en.wikipedia.org/wiki/MD5. As Packt.util.MD5 is big, we will not list its code here, but you can download the source code of this book from http://www.packtpub.com/mastering-ext-javascript/book or get the latest version at https://github.com/loiane/masteringextjs). If you would like to make it even more secure, you can also use SSL and ask for a random salt string from the server, salt the password and hash it. You can learn more about it at one the following URLs: http://en.wikipedia.org/wiki/Transport_Layer_Security and http://en.wikipedia.org/wiki/Salt_(cryptography). A static method does not require an instance of the class to be able to be called. In Ext JS, we can declare static attributes and methods inside the static configuration. As the encode method from Packt.util.MD5 class is static, we can call it like Packt.util.MD5.encode(value);. So before Ext.Ajax.request, we will add the following code: pass = Packt.util.MD5.encode(pass); We must not forget to add the Packt.util.MD5 class on the controller's requires declaration (the requires declaration is right after the extend declaration): requires: [ 'Packt.util.MD5' ], Now, if we try to run the code again, and check the XHR requests on the Net tab, we will have the following output: The password is encrypted and it is much safer now.
Packt
20 May 2010
10 min read

Building an Image Slideshow using Scripty2

In our previous Scripty2 article, we saw the basics of the Scripty2 library: the UI elements available, the fx transitions, and some effects. We even built a small example of an image slide effect. This article is going to be a little more complex, just a little, so we are able to build a fully working image slideshow. If you haven't read our previous article, it would be useful to do so before continuing with this one. You can find the article here: Scripty2 in Action. Well, now we can continue. But first, why don't we take a look at what we are going to build in this article: as we can see in the image, we have three main elements. The biggest one will be where the main images are placed; there is also one element where we will make the slide animation take place. There's also a caption and some descriptive text for the image and, to the right, we have the miniatures slider; clicking on a miniature will make the main image slide out and the new one appear. Seems difficult to achieve? Don't worry, we will build it step by step, so it's very easy to follow. Our main steps are going to be these: first, we will create the HTML markup and CSS necessary so our page looks like the design; the next step will be to create the slide effect for the available images to the right; and our final step will be to make the thumbnail slider work. Good, enough talking, let's start with the action.
Creating the necessary HTML markup and CSS

We will start by creating an index.html file, with some basic HTML in it, just like this:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xml:lang="es-ES" lang="es-ES">
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<title>Scripty2</title>
<link rel="stylesheet" href="css/reset.css" type="text/css" />
<link rel="stylesheet" href="css/styles.css" type="text/css" />
</head>
<body id="article">
  <div id="gallery">
  </div>
  <script type="text/javascript" src="js/prototype.s2.min.js"></script>
</body>
</html>

Let's take a look at what we have here. First we are including a reset.css file; this time, for a change, we are going to use the Yahoo one, which we can find at this link: http://developer.yahoo.com/yui/3/cssreset/. We will create a folder called css, and a reset.css file inside it, where we will place the Yahoo code. Note that we could also link the reset directly from the Yahoo site:

<link rel="stylesheet" type="text/css" href="http://yui.yahooapis.com/3.1.1/build/cssreset/reset-min.css">

Just below the link to the reset.css file, we find a styles.css file. This file will be the place where we are going to place our own styles. Next we find a div:

<div id="gallery">
</div>

This will be our placeholder for the entire slideshow gallery. Then one last thing, the link to the Scripty2 library:

<script type="text/javascript" src="js/prototype.s2.min.js"></script>

The next step will be to add some styles in our styles.css file:

html, body{
    background-color: #D7D7D7;
}
#gallery{
    font-family: arial;
    font-size: 12px;
    width: 470px;
    height: 265px;
    background-color: #2D2D2D;
    border: 1px solid #4F4F4F;
    margin: 50px auto;
    position: relative;
}

Mostly, we are defining a background colour for the page, and then some styles for our gallery div, like its width, height, margin, and also a background colour.
With all this, our page will look like the following screenshot. We will need to keep working on this; first, in our index.html file, we need a bit more code:

<div id="gallery">
    <div id="photos_container">
        <img src="./img/image_1.jpg" title="Lorem ipsum" />
        <img src="./img/image_1.jpg" title="Lorem ipsum" />
        <img src="./img/image_1.jpg" title="Lorem ipsum" />
        <img src="./img/image_1.jpg" title="Lorem ipsum" />
    </div>
    <div id="text_container">
        <p><strong>Lorem Ipsum</strong><br/>More lorem ipsum but smaller text</p>
    </div>
    <div id="thumbnail_container">
        <img src="./img/thumbnail_1.jpg" title="Lorem ipsum"/>
        <img src="./img/thumbnail_1.jpg" title="Lorem ipsum"/>
        <img src="./img/thumbnail_1.jpg" title="Lorem ipsum"/>
        <img src="./img/thumbnail_1.jpg" title="Lorem ipsum"/>
    </div>
</div>

We have placed three more divs inside our main gallery one. One of them will hold the photos (photos_container), where we place our big photos. The other two will be one for the text (text_container) and the other for the thumbnail images (thumbnail_container). After we have finished with our HTML, we need to work on our CSS. Remember, we are going to do it in styles.css:

#photos_container{
    width: 452px;
    height: 247px;
    background-color: #fff;
    border: 1px solid #fff;
    margin-top: 8px;
    margin-left: 8px;
    overflow: hidden;
}
#text_container{
    width: 377px;
    height: 55px;
    background-color: #000000;
    color: #ffffff;
    position: absolute;
    z-index: 2;
    bottom: 9px;
    left: 9px;
}
#text_container p{
    padding: 5px;
}
#thumbnail_container{
    width: 75px;
    height: 247px;
    background-color: #000000;
    position: absolute;
    z-index: 3;
    top: 9px;
    right: 9px;
}
#thumbnail_container img{
    margin: 13px 13px 0px 13px;
}
strong{
    font-weight: bold;
}

Here we have added styles for each one of our divs; just quite basic styling for the necessary elements, so our page now looks a bit more like the design. Though this looks a bit better than before, it still does nothing, and that's going to be our next step.
Creating the slide effect

First we need some modifications to our HTML code. This time, though we could use id tags to identify elements, we are going to use rel tags instead, so we can see a different way of doing things. Let's then modify the HTML:

    <div id="photos_container">
      <img src="./img/image_1.jpg" title="Lorem ipsum" rel="1"/>
      <img src="./img/image_1.jpg" title="Lorem ipsum" rel="2"/>
      <img src="./img/image_1.jpg" title="Lorem ipsum" rel="3"/>
      <img src="./img/image_1.jpg" title="Lorem ipsum" rel="4"/>
    </div>
    <div id="text_container">
      <p><strong>Lorem Ipsum</strong><br/>More lorem ipsum but smaller text</p>
    </div>
    <div id="thumbnail_container">
      <img src="./img/thumbnail_1.jpg" title="Lorem ipsum" rel="1"/>
      <img src="./img/thumbnail_1.jpg" title="Lorem ipsum" rel="2"/>
      <img src="./img/thumbnail_1.jpg" title="Lorem ipsum" rel="3"/>
      <img src="./img/thumbnail_1.jpg" title="Lorem ipsum" rel="4"/>
    </div>

Note the difference: we have added rel tags to each one of the images in both divs. Now we are going to add a script after the next line:

    <script type="text/javascript" src="js/prototype.s2.min.js"></script>

The script is going to look like this:

    <script type="text/javascript">
      $$('#thumbnail_container img').each(function(image){
        image.observe('click', function(){
          $$('#photos_container img[rel="'+this.readAttribute('rel')+'"]').each(function(big_image){
            alert(big_image.readAttribute('title'));
          });
        });
      });
    </script>

First we select all the images inside the #thumbnail_container div:

    $$('#thumbnail_container img')

Then we use the each function to loop through the results of this selection, and add the click event to them:

    image.observe('click', function(){

This event will fire a function that will in turn select the bigger image, the one whose rel attribute is equal to the rel attribute of the thumbnail:

    $$('#photos_container img[rel="'+this.readAttribute('rel')+'"]').each(function(big_image){

We know this will return only one value, but as the selector could, theoretically, return more than one value, we need to use the each function again. Note that we use the this.readAttribute('rel') function to read the value of the rel attribute of the thumbnail image. Then we use the alert function to show the title value of the big image:

    alert(big_image.readAttribute('title'));

If we now click on any of the thumbnails, we will get an alert, just like this:

We have done this just to check that our code is working and that we are able to select the image we want. Now we are going to change this alert for something more interesting.
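The lookup logic can also be sketched outside the browser as plain JavaScript. This is just a stand-in, with made-up image data and a hypothetical helper name, that mirrors how the rel attribute ties a clicked thumbnail to its big image:

```javascript
// Stand-in for the DOM (no Prototype needed): each image is modeled
// as a plain object carrying the same rel/title attributes used in
// the markup above. The titles are made up for illustration.
const bigImages = [
  { rel: "1", title: "Lorem ipsum 1" },
  { rel: "2", title: "Lorem ipsum 2" },
  { rel: "3", title: "Lorem ipsum 3" },
  { rel: "4", title: "Lorem ipsum 4" },
];

// Equivalent of $$('#photos_container img[rel="..."]'): collect every
// big image whose rel matches the clicked thumbnail's rel attribute.
function findBigImages(images, thumbnailRel) {
  return images.filter(function (img) {
    return img.rel === thumbnailRel;
  });
}

console.log(findBigImages(bigImages, "2")[0].title); // prints "Lorem ipsum 2"
```

As in the Prototype version, the result is an array even though we expect a single match, which is why the article loops over it with each.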
But first, we need to make some modifications to our styles.css file. We will modify our photos container styles:

    #photos_container{
      width: 452px;
      height: 247px;
      background-color: #fff;
      border: 1px solid #fff;
      margin-top: 8px;
      margin-left: 8px;
      overflow: hidden;
      position: relative;
    }

We are only adding position: relative to it; this way we will be able to absolutely position the images inside it, adding these styles:

    #photos_container img{
      position: absolute;
    }

With these changes we are done with the styles for now. Back in the index.html file, we are going to introduce some modifications here too:

    <script type="text/javascript" src="js/prototype.s2.min.js"></script>
    <script type="text/javascript">
      var current_image = 1;

      $$('#photos_container img').each(function(image){
        var pos_y = (image.readAttribute('rel') * 247) - 247;
        image.setStyle({top: pos_y+'px'});
      })
    ...

I've placed the modifications in bold, so we can concentrate on them. What have we here? We are creating a new variable, current_image, so we can save which is the current active image. At page load this will be the first one, so we assign 1 to it. Next we loop through all of the big images, reading each one's rel attribute to know which image we are working with. We multiply this rel attribute by 247, which is the height of our div, and we also subtract 247 so the first image is at position 0. Just after this, we use Prototype's setStyle function to give each image its proper top value.

Now, I'm going to comment out one of the CSS styles:

    #photos_container{
      width: 452px;
      height: 247px;
      background-color: #fff;
      border: 1px solid #fff;
      margin-top: 8px;
      margin-left: 8px;
      /* overflow: hidden; */
      position: relative;
    }

You don't need to do this; I will do it for you, so we can see the result of our previous coding, and then I will turn back and leave it as it was before, prior to continuing.
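The stacking arithmetic from that loop can be checked on its own. Here it is as a pure function (the function name is mine, not the article's): each big image is 247px tall, so rel * 247 - 247 places image 1 at top 0, image 2 at 247, image 3 at 494, and so on:

```javascript
// Computes the initial top offset for a big image, given its rel
// attribute (1-based). Image 1 sits at 0; each later image sits one
// full container height (247px) below the previous one.
function topOffset(rel) {
  return rel * 247 - 247;
}

console.log([1, 2, 3, 4].map(topOffset)); // [ 0, 247, 494, 741 ]
```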
So our page would look like the next image:

As we see in the image, all the images are one on top of the others; this way we will be able to move them along the y axis. We are going to achieve that by modifying this code:

    $$('#thumbnail_container img').each(function(image){
      image.observe('click', function(){
        $$('#photos_container img[rel="'+this.readAttribute('rel')+'"]').each(function(big_image){
          alert(big_image.readAttribute('title'));
        });
      });
    });
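Before touching the handler, it helps to pin down the arithmetic the slide will need. This is a sketch under my own assumptions (the article's actual modification is not shown in this excerpt): to bring a target image into view, every image's top is shifted so the target lands at 0 while the others keep their 247px spacing above and below it.

```javascript
// Hypothetical helper (not from the article): given the rel values of
// the big images and the rel of the image to show, return the top
// offset each image should end at after the slide.
function slideOffsets(rels, target) {
  return rels.map(function (rel) {
    return (rel - target) * 247;
  });
}

// With image 2 active, image 2 lands at 0; image 1 is hidden above
// the overflow:hidden container, and images 3 and 4 wait below it.
console.log(slideOffsets([1, 2, 3, 4], 2)); // [ -247, 0, 247, 494 ]
```

In the real gallery these offsets would be applied with setStyle, or animated with one of Scripty2's effects, inside the click handler.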

Packt
23 Sep 2010
9 min read

Integrating Discussions, Wiki, and Blog with Oracle WebCenter

The Oracle WebCenter Discussions Service provides a forum tool that can be used within the organization to share information and foster collaboration. Forums are one of the most popular information exchange mechanisms used on the Internet. We are all familiar with the power of forums to create user communities and disseminate information. If you have used the Oracle Technology Network Forums, the WebCenter Discussions Service will look very familiar to you: both use the same powerful forum software.

Some of the common challenges we have found with implementing forums for the enterprise have been related to the following:

1. Integration into the existing technology stack
2. User and group administration using existing LDAP
3. Support
4. Cultural adoption

With Discussions, Oracle has successfully minimized the technical level of effort required during implementation by addressing the main technical challenges (numbers 1 and 2 above). And with formal support offered, as well as the general idea that forums integration using Discussions will not be unique for each implementation, organizations can feel more comfortable about viable long-term support.
Given how well Discussions addresses the first three challenges, we now view cultural adoption as the largest challenge, which can be overcome with effective training and strategic management decisions. Forums in general have become increasingly popular within organizations, and Oracle has implemented an intuitive end-user approach.

The Discussions Server is installed as part of the Oracle WebCenter installation. It can be accessed directly at the default location http://hostname:8890/owc_discussions. While Discussions in itself is a useful forum tool, the real power comes from the Discussions Service, which enables us to embed, view, and interact with the forums as part of custom WebCenter applications, where forums can be used as a means for the enterprise to obtain and share information from within the current application being used.

A Wiki is a website that allows a group of editors to easily create web pages in a collaborative manner using a simple markup language. The markup language helps to organize and format the web pages. Many corporations use Wikis to create documentation internally. Each employee of the company has easy access to edit the information within the wiki, and can therefore incorporate new information for the entire community.

A Blog is a website that allows an individual author to post a stream of content for a community to view. The word "Blog" is derived from the contraction of "Web Log". Each item in the Blog stream can be viewed as an article. An article can be something as simple as a video or image posting, or something as formal as a newspaper article. It is up to the author to decide what content to post. Typically, authors allow members of a community to engage in dialog related to the blog article by allowing them to post comments on it.

Both the Wiki Service and the Blog Service depend on the WebCenter Wiki and Blog Server. Both services use a single connection to the single backend server.
Discussions configuration

Our application is built with JDeveloper 11g and connects to the Oracle WebCenter Discussions Server. Ensure that the Discussions Server is running by connecting to the Discussions Server URL (http://hostname:8890/owc_discussions). It is a good idea to populate the Discussions Server with some example users, categories, and forums.

The Forums Admin Console is used to create new users and forums. The admin console can be accessed at http://hostname:8890/owc_discussions/admin. The default admin username is weblogic and the password is weblogic. This is irrespective of any username/password specified at installation time.

You can configure a wide variety of settings in the Admin Console, including but not limited to the following:

- Content Structure: Categories, Subcategories, Forums
- Users and Groups
- Permissions
- Filters and Interceptors
- Moderation
- User Interface (Colors and themes)
- Reports and Metrics
- System Settings (Cache, e-mail, locale, and so on)
- Plugins

The following is a screenshot of the admin console home page:

We will not focus on configuring all aspects of the forum, but instead on the main pieces required to get started and integrated into WebCenter. Specifically, we will focus on users/groups and the overall forum content structure.

Content structure

The general structure for Discussions is as follows:

- Categories: logical grouping of discussions content
- Subcategories: optional subgrouping(s)
- Forums: lowest-level grouping where end users can create discussion threads
- Threads: entries made by end users, which contain the original entry as well as replies

Within the Forums Admin Console, you can define a structure that makes sense for your organization. The same principles that apply to other forum software will apply to the Discussions content structure. For the purpose of this demo, we have created a Marketing category and a Marketing Ideas forum.
The following is an image of the Category Summary that reflects the new Marketing Ideas forum that was just created:

User and group structure

We have worked on this Discussions Service as an independent service, and hence created our users and groups manually. In an enterprise solution, you will likely hook this into your LDAP (like Oracle Internet Directory) using Enterprise Manager. The following is a screenshot of the users we have configured within the Discussions administration:

As shown previously, we have added two users, amit and jimmy, with the appropriate privileges to post to the Marketing Ideas forum.

Integrating Discussions with WebCenter

During the remainder of this article, we will create a custom WebCenter application that integrates with the Discussions Server, and in which we will expose a view of the forum that allows you to interact with it using task flows. Oracle exposes multiple task flows for the Discussions Service. We have listed all the out-of-the-box task flows in the following table; the ones we will drop into our WebCenter application are Discussions Forums and Discussions - Sidebar View.

- Discussions Forums: Shows all the topics associated with a specified forum. Users can create, read, update, and delete topics based on their privileges.
- Discussions - Popular Topics: Shows the popular topics under a given category ID or forum ID.
- Discussions - Recent Topics: Shows all the recent topics under a given category ID or forum ID.
- Discussions - Watched Forums: Allows users to see all their watched forums under a given category ID.
- Discussions - Watched Topics: Allows users to see all their watched topics under a given category ID or forum ID.
- Discussions - Sidebar View: Shows a combined view of the Popular Topics, Recent Topics, Watched Topics, and Watched Forums task flows.
The main steps we will complete are as follows:

1. Ensure the Discussions Server is running
2. Create a WebCenter application
3. Create a connection to the Discussions Server
4. Create a JSF page
5. Select the appropriate Discussions Service task flow and embed it in the page
6. Deploy, run, and test the application

Ensuring the Discussions Server is running

Before we start developing the application, ensure that the WebCenter Discussions Server is running. If the Discussions Server is not running, then start the WLS_Services managed server. Using a browser, open the URL for Discussions (for example, http://hostname:8890/owc_discussions). Log in with the newly created users and post some sample articles.

Creating a new WebCenter application

Using JDeveloper, create a new application, selecting the WebCenter application template.

Creating a JSF page

Next, we will create a JSF page to host the view for the Discussion Forum. In the Application Navigator, highlight the ViewController project, right-click on it, and from the context menu select New. In the New Gallery, select JSF Page to create a new JSF page. In the Create JSF Page dialog, select a name for the page and create the page.

Creating a connection to the Discussion Forum

In order to use the Discussions Service in our application, we need to create a Discussions Forum connection to the Discussions Server. This connection will be used by the application to connect to the backend Discussions Server. Note that it is possible to modify the connection after the application is deployed to the WebLogic server by using the Enterprise Manager Fusion Middleware Control.

To set up the discussions connection, right-click on the Connections node in the Application Resources pane and select New Connection | Discussions Forum. Select a unique name for the connection, such as MyDiscussions. Ensure that the Make this the connection default checkbox is ticked. Next, we have to set the property values for forum.url and admin.user.
The value of forum.url should be the URL of the Discussions Server. The default admin user is weblogic. Click on Test Connection to ensure that the connection is set up properly. Click Finish to complete this step.

Click on the Resource Palette and open My Catalogs. You should now see the WebCenter Services Catalog. Expand the catalog to view the various task flows available. Among the task flows shown, you will see the various discussions-related task flows, such as Discussion Forums, Discussions - Popular Topics, Discussions - Recent Topics, and so on. We will use these task flows in our application to create a view of the Discussion Forum.