
How-To Tutorials - Web Development

Using Node.js dependencies in NW.js

Max Gfeller
19 Nov 2015
6 min read
NW.js (formerly known as node-webkit) is a framework that makes it possible to write multi-platform desktop applications using the technologies you already know well: HTML, CSS and JavaScript. It bundles a Chromium and a Node (or io.js) runtime and provides additional APIs to implement native-like features such as real menu bars or desktop notifications. A big advantage of having a Node/io.js runtime is being able to use all the modules that are available to Node developers. We can distinguish three different types of modules that we can use.

Internal modules

Node comes with a solid set of internal modules such as fs or http. It is built on the UNIX philosophy of doing only one thing and doing it very well, so you won't find too much functionality in Node core. The following modules are shipped with Node:

- assert: used for writing unit tests
- buffer: raw memory allocation used for dealing with binary data
- child_process: spawn and use child processes
- cluster: take advantage of multi-core systems
- crypto: cryptographic functions
- dgram: use datagram sockets
- dns: perform DNS lookups
- domain: handle multiple different I/O operations as a single group
- events: provides the EventEmitter
- fs: operations on the file system
- http: perform HTTP requests and create HTTP servers
- https: perform HTTPS requests and create HTTPS servers
- net: asynchronous network wrapper
- os: basic operating-system-related utility functions
- path: handle and transform file paths
- punycode: deal with punycode domain names
- querystring: deal with query strings
- stream: abstract interface implemented by various objects in Node
- timers: setTimeout, setInterval, and so on
- tls: encrypted stream communication
- url: URL resolution and parsing
- util: various utility functions
- vm: sandbox to run Node code in
- zlib: bindings to Gzip/Gunzip, Deflate/Inflate, and DeflateRaw/InflateRaw

These modules are documented in the official Node API documentation and can all be used within NW.js. Take care that Chromium already defines a crypto global, so when using the crypto module in the webkit context you should assign it to a variable named something like crypt rather than crypto:

var crypt = require('crypto');

The following example shows how we would read a file and use its contents using Node's modules:

var fs = require('fs');
fs.readFile(__dirname + '/file.txt', function (error, contents) {
  if (error) return console.error(error);
  console.log(contents);
});

3rd party JavaScript modules

Soon after Node itself was started, Isaac Schlueter, who was a friend of creator Ryan Dahl, started working on a package manager for Node. As Node's popularity reached new highs, a lot of packages were added to the npm registry, and it soon became the fastest-growing package registry. At the time of this writing, there are over 169,000 packages on the registry and nearly two billion downloads each month. The npm registry is also slowly evolving from being "only" a package manager for Node into a package manager for all things JavaScript. Most of these packages can also be used inside NW.js applications. Your application's dependencies are defined in your package.json file, in the dependencies (or devDependencies) section:

{
  "name": "my-cool-application",
  "version": "1.0.0",
  "dependencies": {
    "lodash": "^3.1.2"
  },
  "devDependencies": {
    "uglify-js": "^2.4.3"
  }
}

The dependencies field holds all the modules that are required to run your application, while the devDependencies field holds only the modules required while developing it.
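Once such a dependency is installed (installation is covered next), using it from the Node context is an ordinary require call. Here is a minimal sketch that combines the internal fs module with the lodash dependency declared above; the file name and its contents are illustrative:

var fs = require('fs');
var _ = require('lodash');

// Read a JSON file with an array of task objects and keep only the
// completed ones, using lodash's object-predicate shorthand.
fs.readFile(__dirname + '/tasks.json', 'utf8', function (error, contents) {
  if (error) return console.error(error);
  var tasks = JSON.parse(contents);
  var completed = _.filter(tasks, { completed: true });
  console.log(completed);
});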
Installing a module is fairly easy, and the best way to do it is with the npm install command:

npm install lodash --save

The install command directly downloads the latest version into your node_modules/ folder. The --save flag means that this dependency is also written directly into your package.json file. You can also define a specific version to download by using the following notation:

npm install lodash@1.*

or even:

npm install lodash@1.1

How does Node's require() work?

You need to deal with two different contexts in NW.js, and it is really important to always know which context you are currently in, as it changes the way the require() function works. When you load a module using Node's require() function, that module runs in the Node context. This means you have the same globals as you would have in a pure Node script, but you can't access the browser globals, for example document or window. If you write JavaScript code inside a <script> tag in your HTML, or include a script in your HTML using <script src="">, then this code runs in the webkit context, where you have access to all the browser globals.

In the webkit context

The require() function is a module loading system defined by the CommonJS Modules 1.0 standard and implemented directly in Node core. To offer the same smooth experience, you get a modified require() method that works in webkit, too. Whenever you want to include a certain module from the webkit context, for example directly from an inline script in your index.html file, you need to specify the path from the root of your project. Let's assume the following folder structure:

- app/
  - app.js
  - foo.js
  - bar.js
- index.html

If you want to include the app/app.js file directly in your index.html, you need to include it like this:

<script type="text/javascript">
  var app = require('./app/app.js');
</script>

If you need to use a module from npm, you can simply require() it and NW.js will figure out where the corresponding node_modules/ folder is located.

In the Node context

In Node, when you use relative paths, require() will always try to locate the module relative to the file you are requiring it from. If we take the example from above, we could require the foo.js module from app.js like this:

var foo = require('./foo');

About the Author

Max Gfeller is a passionate web developer and JavaScript enthusiast. He is making awesome things at Cylon and can be found on Twitter @mgefeller.


Overview of TDD

Packt
06 Nov 2015
11 min read
In this article by Ravi Gupta, Harmeet Singh, and Hetal Prajapati, authors of the book Test-Driven JavaScript Development, we explain how testing is one of the most important phases in the development of any project. In the traditional software development model, testing is usually executed after the code for the functionality is written. Test-driven development (TDD) makes a big difference by writing tests before the actual code. You are going to learn TDD for JavaScript and see how this approach can be utilized in your projects. In this article, you are going to learn the following:

- Complexity of web pages
- Understanding TDD
- Benefits of TDD and common myths

(For more resources related to this topic, see here.)

Complexity of web pages

When Tim Berners-Lee wrote the first ever web browser around 1990, it was only meant to render HTML, with neither CSS nor JavaScript. Who knew that the WWW would become one of the most powerful communication media? Since then, a number of technologies and tools have emerged that help us write and run the code we need. We do a lot these days with the help of the Internet. We shop, read, learn, share, and collaborate... well, a few words are not going to suffice to explain what we do on the Internet, are they? Over time, our needs have grown to a very complex level, and so has the complexity of the code written for websites. It's not plain HTML anymore, not just some CSS styles, not a few basic JavaScript tweaks. That time has passed. Pick any site you visit daily, view the source by opening the browser's developer tools, and look at the site's code. What do you see? Too much code? Too many styles? Too many scripts? The JavaScript and CSS code is too large to keep inline, so we need to keep it in separate files, sometimes even separate folders, to keep it organized. Now, what happens before you publish all the code live? You test it. You test each line and see if it works fine. Well, that's a programmer's job. Zero defects: that's what every organization tries to achieve. With that goal in focus, testing comes into the picture, and more importantly, a development style that is essentially test driven. As the title of this article says, we're going to keep our focus on test-driven JavaScript development.

Understanding Test-driven development

TDD, short for Test-driven development, is a process for software development. Kent Beck, who is known for the development of TDD, refers to it as "rediscovery." Kent's answer to a question on Quora can be found at https://www.quora.com/Why-does-Kent-Beck-refer-to-the-rediscovery-of-test-driven-development:

"The original description of TDD was in an ancient book about programming. It said you take the input tape, manually type in the output tape you expect, then program until the actual output tape matches the expected output. After I'd written the first xUnit framework in Smalltalk I remembered reading this and tried it out. That was the origin of TDD for me. When describing TDD to older programmers, I often hear, 'Of course. How else could you program?' Therefore I refer to my role as 'rediscovering' TDD."

If you go looking for references to TDD, you will even find a few from 1968. It is not a new technique, though it did not get much attention for a long time. Recently, interest in TDD has been growing, and as a result there are a number of tools on the Web. For example, Jasmine, Mocha, DalekJS, JsUnit, QUnit, and Karma are among the popular tools and frameworks.
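To give a flavor of what tests written with such tools look like, here is a minimal Jasmine-style spec. It is a sketch, not an example from the book: the add function, written after the expectations, exists only to make the two tests pass.

describe('add', function () {
  it('returns the sum of two numbers', function () {
    expect(add(2, 3)).toEqual(5);
  });

  it('treats a missing second argument as zero', function () {
    expect(add(2)).toEqual(2);
  });
});

// The minimal implementation written afterwards to make the tests pass.
function add(a, b) {
  return a + (b || 0);
}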
More specifically, test-driven JavaScript development is getting popular these days. Test-driven development is a software development process that requires a developer to write tests before the production code. A developer writes a test, expects a behavior, and writes code to make the test pass. It is needless to mention that the test will always fail at the start.

The need for testing

To err is human. As developers, it is not easy to find defects in our own code, and we often think our code is perfect. But there is always a chance that a defect is present. Every organization or individual wants to deliver the best software they can. This is one major reason why every piece of software is well tested before its release. Testing helps to detect and correct defects. There are a number of reasons why testing is needed. They are as follows:

- To check that the software functions as per the requirements
- There will not be just one device or one platform running your software
- The end user will perform actions that you, as a programmer, never expected

A study conducted by the National Institute of Standards and Technology (NIST) in 2002 reported that software bugs cost the U.S. economy around $60 billion annually. With better testing, more than one-third of that cost could be avoided. The earlier a defect is found, the cheaper it is to fix. A defect found post-release can cost 10-100 times more to fix than one detected and fixed earlier in development. The report of the NIST study can be found at http://www.nist.gov/director/planning/upload/report02-3.pdf. If we draw a curve for the cost of fixing a defect over time, it grows roughly exponentially as the project matures. Sometimes, it is not possible to fix a defect without making changes in the architecture. In those cases the cost is sometimes so high that developing the software from scratch seems like a better option.

Benefits of TDD and common myths

Every methodology has its own benefits and myths. The following sections analyze the key benefits and the most common myths of TDD.

Benefits

TDD has its own advantages over regular development approaches. There are a number of benefits that help in deciding to use TDD over the traditional approach:

- Automated testing: If you have looked at the code of a real website, you know it is not easy to maintain and test all the scripts manually and keep them working. A tester may miss a few checks, but automated tests won't. Manual testing is error prone and slow.
- Lower cost of overall development: With TDD, the amount of debugging is significantly decreased. You develop some code and run the tests; if they fail, redoing the development is significantly faster than debugging and fixing later. TDD aims at detecting defects and correcting them at an early stage, which costs much less than detecting and correcting them at a later stage or post-release. Debugging becomes far less frequent, and a significant amount of time is saved. With the help of tools/test runners such as Karma, JSTestDriver, and so on, manually running every JavaScript test in a browser is not needed, which saves significant time in validation and verification while development goes on.
- Increased productivity: Apart from the time and financial benefits, TDD helps to increase productivity, since the developer becomes more focused and tends to write quality code that passes and fulfills the requirement.
- Clean, maintainable, and flexible code: Since tests are written first, production code is often very neat and simple. When a new piece of code is added, all the tests can be run at once to see if anything failed with the change. Since we try to keep our tests atomic, and our methods also address a single goal, the code automatically becomes clean. At the end of the application development, there will be thousands of test cases guaranteeing that every piece of logic can be tested. The same test cases also act as documentation for users who are new to the development of the system, since the tests act as examples of how the code works.
- Improved quality and reduced bugs: Complex code invites bugs. When developers change anything in neat and simple code, they tend to introduce few or no bugs at all. They tend to focus on the purpose and write code to fulfill the requirement.
- Keeps technical debt to a minimum: This is one of the major benefits of TDD. Not writing unit tests and documentation is a big contributor to the technical debt of a software project. Since TDD encourages you to write tests first, and if they are well written they act as documentation, you keep this technical debt to a minimum. As Wikipedia says, technical debt can be defined as tasks to be performed before a unit can be called complete. If the debt is not repaid, interest adds up and makes it harder to make changes at a later stage. More about technical debt can be found at https://en.wikipedia.org/wiki/Technical_debt.

Myths

Along with the benefits, TDD has some myths as well. Let's look at a few of them:

- Complete code coverage: TDD enforces writing tests first, and developers write the minimum amount of code to pass the tests, so code coverage approaches 100%. But that does not guarantee that nothing is missed and the code is bug free. Code coverage tools do not cover all the paths; there can be infinite possibilities in loops. Of course it is neither possible nor feasible to check all the paths, but a developer is expected to take care of the major and critical ones. A developer is expected to cover business logic, flow, and process code most of the time. There is no need to test integration parts, setter-getter methods for properties, configuration, UI, and so on; mocking and stubbing are to be used for integrations.
- No need to debug the code: Though test-first development makes one think that debugging is not needed, that is not always true. You need to know the state of the system when a test fails. That will help you correct the code and write further.
- No need for QA: TDD cannot always cover everything. QA plays a very important role in testing. UI defects and integration defects are more likely to be caught by QA. Even though developers are excellent, there are chances of errors. QA will try every kind of input and unexpected behavior that even a programmer did not cover with test cases. They will always try to crash the system with random inputs and discover defects.
- I can code faster without tests and still validate for zero defects: This may hold true for very small software and websites, where the code is small and writing test cases may increase the overall time of development and delivery of the product. But for bigger products, TDD helps a lot to identify defects at a very early stage and gives a chance to correct them at a very low cost. As discussed earlier for the cost of fixing defects across phases and testing types, the cost of correcting a defect increases with time.
Ultimately, whether TDD is required for a project or not depends on the context.

- TDD ensures a good design and architecture: TDD encourages developers to write quality code, but it is not a replacement for good design practices. TDD alone will not guarantee a stable and scalable architecture; design should still be done by following the standard practices.
- You need to write all tests first: Another myth says that you need to write all the tests first and then the actual production code. In practice, an iterative approach is used: write some tests first, then some code, run the tests, fix the code, run the tests, write more tests, and so on. With TDD, you always test parts of the software as you keep developing.

There are many myths, and covering all of them is not possible. The point is, TDD offers developers a better opportunity to deliver quality code. TDD helps organizations deliver close-to-zero-defect products.

Summary

In this article, you learned what TDD is. You learned about the benefits and myths of TDD.

Resources for Article:

Further resources on this subject:
- Understanding outside-in [article]
- Jenkins Continuous Integration [article]
- Understanding TDD [article]


E-commerce with MEAN

Packt
05 Nov 2015
8 min read
These days, e-commerce platforms are widely available. However, as common as they might be, there are instances where, after investing a significant amount of time in learning a specific tool, you realize that it cannot fit your unique e-commerce needs as promised. Hence, a great advantage of building your own application with an agile framework is that you can quickly meet your immediate and future needs with a system that you fully understand. Adrian Mejia Rosario, the author of the book Building an E-Commerce Application with MEAN, shows us how the MEAN stack (MongoDB, ExpressJS, AngularJS, and NodeJS) is a killer full-stack JavaScript combination. It provides agile development without compromising on performance and scalability. It is ideal for building responsive applications with a large user base, such as e-commerce applications. Let's have a look at a project using MEAN.

(For more resources related to this topic, see here.)

Understanding the project structure

Applications built with the angular-fullstack generator have many files and directories. Some code runs in the client, some executes in the backend, and another portion is only needed during development, such as the test suites. It's important to understand the layout to keep the code organized. The Yeoman generators are time savers! They are created and maintained by the community following current best practices. The generator creates many directories and a lot of boilerplate code to get you started. The number of unknown files in there might be overwhelming at first. Reviewing the directory structure created, we see that there are three main directories: client, e2e, and server:

- The client folder contains the AngularJS files and assets.
- The server directory contains the NodeJS files, which handle ExpressJS and MongoDB.
- Finally, the e2e files contain the AngularJS end-to-end tests.

File structure

This is the overview of the file structure of this project:

meanshop
├── client
│   ├── app          - App-specific components
│   ├── assets       - Custom assets: fonts, images, etc.
│   └── components   - Non-app-specific/reusable components
├── e2e              - Protractor end-to-end tests
└── server
    ├── api          - App's server API
    ├── auth         - Authentication handlers
    ├── components   - App-wide/reusable components
    ├── config       - App configuration
    │   ├── local.env.js - Environment variables
    │   └── environment  - Node environment configuration
    └── views        - Server-rendered views

Components

You might already be familiar with a number of tools used in this project. If that's not the case, you can read the brief descriptions here.

Testing

AngularJS comes with a default test runner called Karma, and we are going to leverage its default choices:

- Karma: A JavaScript unit test runner.
- Jasmine: A BDD framework to test JavaScript code. It is executed with Karma.
- Protractor: End-to-end tests for AngularJS. These are the highest level of testing; they run in the browser and simulate user interactions with the app.

Tools

The following are some of the tools/libraries that we are going to use in order to increase our productivity:

- GruntJS: A tool that serves to automate repetitive tasks, such as CSS/JS minification, compilation, unit testing, and JS linting.
- Yeoman (yo): A CLI tool to scaffold web projects. It automates directory and file creation through generators and also provides command lines for common tasks.
- Travis CI: A continuous integration tool that runs your test suites every time you commit to the repository.
- EditorConfig: An IDE plugin that loads its configuration from a .editorconfig file. For example, you can set indent_size = 2 and indent with spaces or tabs, and so on. It's a time saver and helps maintain consistency across multiple IDEs/teams.
- SocketIO: A library that enables real-time bidirectional communication between the server and the client.
- Bootstrap: A frontend framework for web development. We are going to use it to build the theme throughout this project.
- AngularJS full-stack: A generator for Yeoman that provides useful command lines to quickly generate server/client code and deploy it to Heroku or OpenShift.
- BabelJS: A JS-to-JS compiler that allows you to use features from the next generation of JavaScript (ECMAScript 6) today, without waiting for browser support.
- Git: A distributed code versioning control system.

Package managers

We have package managers for our third-party backend and frontend modules. They are as follows:

- NPM: The default package manager for NodeJS.
- Bower: The frontend package manager that can be used to handle versions and dependencies of libraries and assets used in a web project. The file bower.json contains the packages and versions to install, and the file .bowerrc contains the path where those packages are to be installed. The default directory is ./bower_components.

Bower packages

If you have followed the exact steps to scaffold our app, you will have the following frontend components installed: angular, angular-cookies, angular-mocks, angular-resource, angular-sanitize, angular-scenario, angular-ui-router, angular-socket-io, angular-bootstrap, bootstrap, es5-shim, font-awesome, json3, jquery, and lodash.

Previewing the final e-commerce app

Let's take a pause from the terminal. In any project, before starting to code, we need to spend some time planning and visualizing what we are aiming for. That's exactly what we are going to do: draw some wireframes that walk us through the app. Our e-commerce app, MEANshop, will have three main sections:

- Homepage
- Marketplace
- Back-office

Homepage

The home page will contain featured products, navigation, menus, and basic information, as you can see in the following image:

Figure 2 - Wireframe of the homepage

Marketplace

This section will show all the products, categories, and search results.

Figure 3 - Wireframe of the products page

Back-office

You need to be a registered user to access the back-office section, as shown in the following figure:

Figure 4 - Wireframe of the login page

After you log in, it will present you with different options depending on your role. If you are a seller, you can create new products, such as the following:

Figure 5 - Wireframe of the product creation page

If you are an admin, you can do everything that a seller does (create products), plus you can manage all the users and delete/edit products.

Understanding requirements for e-commerce applications

There's no better way to learn new concepts and technologies than by developing something useful with them. This is why we are building a real-time e-commerce application from scratch. However, there are many kinds of e-commerce apps, so in the following sections we will delimit what we are going to do.

Minimum viable product for an e-commerce site

Even the largest applications that we see today started small and grew their way up.
The minimum viable product (MVP) is strictly the minimum that an application needs in order to work. In the e-commerce example, that is:

- Add products with title, price, description, photo, and quantity.
- A guest checkout page for products.
- One payment integration (for example, PayPal).

This is strictly the minimum required to get an e-commerce site working. We are going to start with these, but by no means will we stop there. We will keep adding features as we go and build a framework that will allow us to extend the functionality with high quality.

Defining the requirements

We are going to capture the requirements for the e-commerce application with user stories. A user story is a brief description of a feature told from the perspective of a user, expressing a desire and a benefit in the following format:

As a <role>, I want <desire> [so that <benefit>]

User stories and many other concepts were introduced with the Agile Manifesto. Learn more at https://en.wikipedia.org/wiki/Agile_software_development.

Here are the features that we are planning to develop through this book, captured as user stories:

- As a seller, I want to create products.
- As a user, I want to see all published products and their details when I click on them.
- As a user, I want to search for a product so that I can find what I'm looking for quickly.
- As a user, I want to have a category navigation menu so that I can narrow down the search results.
- As a user, I want to have real-time information so that I know immediately if a product just got sold out or became available.
- As a user, I want to check out products as a guest user so that I can quickly purchase an item without registering.
- As a user, I want to create an account so that I can save my shipping addresses, see my purchase history, and sell products.
- As an admin, I want to manage user roles so that I can create new admins and sellers, and remove seller permissions.
- As an admin, I want to manage all the products so that I can ban them if they are not appropriate.
- As an admin, I want to see a summary of the activities and order statuses.

All these stories might seem verbose, but they are useful for capturing requirements in a consistent way. They are also handy for developing test cases against (a small end-to-end sketch follows at the end of this article).

Summary

Now that we have a gist of an e-commerce app with MEAN, let's build a full-fledged e-commerce project with Building an E-Commerce Application with MEAN.

Resources for Article:

Further resources on this subject:
- Introduction to Couchbase [article]
- Protecting Your Bitcoins [article]
- DynamoDB Best Practices [article]
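As that end-to-end sketch, here is how the "search for a product" story above might drive a Protractor test. This is illustrative only: the model name, CSS selector, and repeater expression are hypothetical and would have to match the real templates.

describe('product search', function () {
  it('finds published products by name', function () {
    browser.get('/');
    // Type a search term and submit (hypothetical bindings/selectors).
    element(by.model('searchTerm')).sendKeys('t-shirt');
    element(by.css('.search-button')).click();
    // Expect at least one product in the repeated results list.
    var results = element.all(by.repeater('product in products'));
    expect(results.count()).toBeGreaterThan(0);
  });
});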


Architecture of Backbone

Packt
04 Nov 2015
18 min read
In this article by Abiee Echamea, author of the book Mastering Backbone.js, you will see that one of the best things about Backbone is the freedom to build applications with the libraries of your choice: no batteries included. Backbone is not a framework but a library. Building applications with it can be challenging, as no structure is provided. The developer is responsible for code organization and for wiring the pieces of code across the application; it's a big responsibility. Bad decisions about code organization can lead to buggy and unmaintainable applications that nobody wants to see. In this article, you will learn the following topics:

- Delegating the right responsibilities to Backbone objects
- Splitting the application into small and maintainable scripts

(For more resources related to this topic, see here.)

The big picture

We can split an application into two big logical parts. The first is an infrastructure part, or root application, which is responsible for providing common components and utilities to the whole system. It has handlers to show error messages, activate menu items, manage breadcrumbs, and so on. It also owns common views such as dialog layouts or the loading progress bar.

The root application is the main entry point to the system. It bootstraps the common objects, sets the global configuration, instantiates routers, attaches general services to a global application, renders the main application layout at the body element, sets up third-party plugins, starts the Backbone history, and instantiates, renders, and initializes components such as a header or breadcrumb. However, the root application itself does nothing; it is just the infrastructure that provides services to the other parts, which we can call subapplications or modules.

Subapplications are small applications that run business-value code. This is where the real work happens. Subapplications are focused on a specific domain area, for example invoices, mailboxes, or chats, and should be decoupled from the other applications. Each subapplication has its own router, entities, and views. To decouple subapplications from the root application, communication is made through a message bus implemented with Backbone.Events or a plugin such as Backbone.Radio, so that services are requested from the application by triggering events instead of calling methods on an object.

Figure 1.1 shows a component diagram of the application. As you can see, the root application depends on the routers of the subapplications, because Backbone.history requires all routers to be instantiated before its start method is called, and the root application does this. Once Backbone.history is started, the browser's URL is processed and a route handler in a subapplication is triggered; this is the entry point for subapplications. Additionally, a default route can be defined in the root application for any route that is not handled by the subapplications.

Figure 1.1: Logical organization of a Backbone application

When you build Backbone applications in this way, you know exactly which object has which responsibility, so debugging and improving the application is easier. Remember, divide and conquer. Doing this also makes your code more testable, improving its robustness.
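A minimal sketch of such an event-driven bus using Backbone.Events; the App object and the event names mirror the ones used later in this article, but the handler body is illustrative:

// A global message bus shared by the root application and the subapplications.
var App = _.extend({}, Backbone.Events);

// The root application offers a service by listening on the bus...
App.on('server:error', function (response) {
  // ...for example, it could render a friendly error dialog here.
  console.error('Server error:', response);
});

// A subapplication requests the service by triggering an event,
// without holding a reference to any root application object.
App.trigger('server:error', {status: 500});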
Responsibilities of the Backbone objects

One of the biggest issues with the Backbone documentation is that it gives no clues about how to use its objects. Developers have to figure out the responsibilities of each object across the application, and even with some Backbone experience this is not an easy task. The next sections describe the best uses for each Backbone object. In this way, you will have a clearer idea about the scope of responsibilities in Backbone, and this will be the starting point for designing our application architecture. Keep in mind that Backbone is a library of foundation objects, so you will need to bring your own objects and structure to make an awesome Backbone application.

Models

This is where the general business logic lives. Specific business logic should be placed elsewhere. General business logic covers all the rules that are so generic they can be used in multiple use cases, while specific business logic is a use case itself. Let's imagine a shopping cart. A model can be an item in the cart. The logic behind this model can include calculating the total by multiplying the unit price by the quantity, or setting a new quantity. In this scenario, assume that the shop has a business rule that a customer can buy the same product only three times. This is a specific business rule, because it is specific to this business; how many stores do you know with this rule? Such business rules belong elsewhere and should be kept out of models. It is also a good idea to validate the model data before sending requests to the server. Backbone helps us with the validate method for this, so it is reasonable to put validation logic here too. Models often synchronize data with the server, so direct calls to servers, such as AJAX calls, should be encapsulated at the model level. Models are the most basic pieces of information and logic; keep this in mind.

Collections

Consider collections as data repositories, similar to a database. Collections are often used to fetch data from the server and render their contents as lists or tables. It is not usual to see business logic here. Resource servers have different ways to deal with lists of resources. For instance, while some servers accept a skip parameter for pagination, others have a page parameter for the same purpose. Another case is responses: one server may respond with a plain array, while another prefers sending an object with a data, list, or some other key under which an array of objects is placed. There is no standard way. Collections can deal with these issues, making server requests transparent to the rest of the application.

Views

Views have the responsibility of handling the Document Object Model (DOM). Views work closely with the template engines, rendering the templates and putting the results into the DOM. Views listen for low-level events using the jQuery API and transform them into domain events. They abstract the user interactions, transforming the user's actions into data structures for the application. For example, clicking on a save button in a form view will create a plain object with the information from the inputs and trigger a domain event such as save:contact with this object attached. Then a domain-specific object can apply domain logic to the data and show a result. Business logic in views should be avoided, but basic form validations are allowed, such as accepting only numbers; complex validations should be done in the model.
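Here is a minimal sketch of a view doing exactly that: turning a low-level DOM click into the save:contact domain event with a plain data object attached. The form fields are illustrative:

var ContactFormView = Backbone.View.extend({
  events: {
    'click .save-button': 'onSaveClick'  // low-level DOM event
  },

  onSaveClick: function (event) {
    event.preventDefault();
    // Transform the user interaction into a plain data structure...
    var data = {
      name: this.$('#name').val(),
      phone: this.$('#phone').val()
    };
    // ...and trigger a domain event for a controller/mediator to handle.
    this.trigger('save:contact', data);
  }
});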
Routers

Routers have a simple responsibility: listening for URL changes in the browser and transforming them into a call to a handler. A router knows which handler to call for a given URL, decodes the URL parameters, and passes them to the handler. The root application bootstraps the infrastructure, but routers decide which subapplication will be executed. In this way, routers are a kind of entry point.

Domain objects

It is possible to develop Backbone applications using only the Backbone objects described in the previous section, but for a medium-to-large application this is not sufficient. We need to introduce a new kind of object with well-delimited responsibilities that uses and coordinates the Backbone foundation objects.

Subapplication facade

This object is the public interface of a subapplication. Any interaction with the subapplication should be done through its methods; direct calls to internal objects of the subapplication are discouraged. Typically, methods on this controller are called from the router, but they can be called from anywhere. The main responsibility of this object is simplifying the subapplication internals, so its job is to fetch data from the server through models or collections, and if an error occurs during the process, to show an error message to the user. Once the data is loaded into a model or collection, it creates a subapplication controller that knows which views should be rendered and has the handlers to deal with their events. In short, the subapplication facade transforms the URL request into a Backbone data object, shows the right error messages, creates a subapplication controller, and delegates control to it.

The subapplication controller or mediator

This object acts as an air traffic controller for the views, models, and collections. Given a Backbone data object, it will instantiate and render the appropriate views and then coordinate them. However, the coordination task is not easy in complex layouts. For loose coupling reasons, a view cannot call the methods or events of other views directly. Instead, a view triggers an event, and the controller handles the event and orchestrates the views' behavior if necessary. Note how the views are isolated, handling just their own portion of the DOM and triggering events when they need to communicate something. Business logic for simple use cases can be implemented here, but for more complex interactions another strategy is needed. This object implements the mediator pattern, allowing the other basic objects, such as views and models, to stay simple and loosely coupled.

The logic workflow

The application starts by bootstrapping the common components, then initializes all the routers available for the subapplications and starts Backbone.history; see Figure 1.2. After initialization, the URL in the browser triggers a route for a subapplication, and a route handler instantiates a subapplication facade object and calls the method that knows how to handle the request. The facade creates a Backbone data object, such as a collection, and fetches the data from the server by calling its fetch method. If an error occurs while fetching the data, the subapplication facade asks the root application to show the error, for example, a 500 Internal Server Error.
Figure 1.2: Abstract architecture for subapplications

Once the data is in a model or collection, the subapplication facade instantiates the subapplication object that knows the business rules for the use case and passes the model or collection to it. Then it renders one or more views with the information from the model or collection and places the results in the DOM. The views listen for DOM events, for example click, and transform them into higher-level events to be consumed by the application object. The subapplication object listens for events on models and views and coordinates them when an event is triggered. When the business rules are not too complex, they can be implemented on this application object, such as deleting a model. Models and views can be kept in sync with Backbone events, or with a binding library such as Backbone.Stickit. In the next section, we will describe this process step by step, with code examples for a better understanding of the concepts explained.

Route handling

The entry point for a subapplication is given by its routes, which ideally share the same namespace. For instance, a contacts subapplication can have these routes:

- contacts: Lists all the available contacts
- contacts/page/:page: Paginates the contacts collection
- contacts/new: Shows a form to create a new contact
- contacts/view/:id: Shows a contact given its ID
- contacts/edit/:id: Shows a form to edit a contact

Note how all the routes start with the contacts prefix. It is good practice to use the same prefix for all of a subapplication's routes; this way, the user knows where he/she is in the application, and you have a clean separation of responsibilities. Use the same prefix for all URLs in one subapplication and avoid mixing in routes from other subapplications. When the user points the browser to one of these routes, a route handler is triggered. The handler function parses the URL request and delegates the request to the subapplication object, as follows:

var ContactsRouter = Backbone.Router.extend({
  routes: {
    "contacts": "showContactList",
    "contacts/page/:page": "showContactList",
    "contacts/new": "createContact",
    "contacts/view/:id": "showContact",
    "contacts/edit/:id": "editContact"
  },

  showContactList: function(page) {
    page = page || 1;
    page = page > 0 ? page : 1;
    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showContactList(page);
  },

  createContact: function() {
    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showNewContactForm();
  },

  showContact: function(contactId) {
    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showContactById(contactId);
  },

  editContact: function(contactId) {
    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showContactEditorById(contactId);
  }
});

The validation of the URL parameters should be done in the router, as shown in the showContactList method. Once the validation is done, ContactsRouter instantiates an application object, ContactsApp, which is a facade for the contacts subapplication; finally, ContactsRouter calls an API method to handle the user request. The router doesn't know anything about business logic; it just knows how to decode the URL requests and which object to call in order to handle them. Here, the region object points to an existing DOM node and, passed to the application, tells it where it should be rendered.
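The Region object used above is not part of core Backbone. Its only contract in this article is a show method that renders a view into a given element, so a minimal implementation could look like the following sketch (real implementations usually also clean up the previous view to avoid leaked event listeners):

var Region = function(options) {
  this.el = options.el;
};

Region.prototype.show = function(view) {
  // Remove the previously shown view, if any, to avoid zombie views.
  if (this.currentView) {
    this.currentView.remove();
  }
  this.currentView = view;
  view.render();
  // Attach the rendered view to the region's DOM node.
  Backbone.$(this.el).html(view.el);
};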
The subapplication facade

A subapplication is composed of smaller pieces that handle specific use cases. In the case of the contacts app, a use case can be to see a contact, create a new contact, or edit a contact. The implementation of these use cases is separated into different objects that handle the views, events, and business logic for a specific use case. The facade basically fetches the data from the server, handles the connection errors, and creates the objects needed for the use case, as shown here:

function ContactsApp(options) {
  this.region = options.region;

  this.showContactList = function(page) {
    App.trigger("loading:start");
    new ContactCollection().fetch({
      success: _.bind(function(collection, response, options) {
        this._showList(collection);
        App.trigger("loading:stop");
      }, this),
      error: function(collection, response, options) {
        App.trigger("loading:stop");
        App.trigger("server:error", response);
      }
    });
  };

  this._showList = function(contacts) {
    var contactList = new ContactList({region: this.region});
    contactList.showList(contacts);
  };

  this.showNewContactForm = function() {
    this._showEditor(new Contact());
  };

  this.showContactEditorById = function(contactId) {
    new Contact({id: contactId}).fetch({
      success: _.bind(function(model, response, options) {
        this._showEditor(model);
        App.trigger("loading:stop");
      }, this),
      error: function(model, response, options) {
        App.trigger("loading:stop");
        App.trigger("server:error", response);
      }
    });
  };

  this._showEditor = function(contact) {
    var contactEditor = new ContactEditor({region: this.region});
    contactEditor.showEditor(contact);
  };

  this.showContactById = function(contactId) {
    new Contact({id: contactId}).fetch({
      success: _.bind(function(model, response, options) {
        this._showViewer(model);
        App.trigger("loading:stop");
      }, this),
      error: function(model, response, options) {
        App.trigger("loading:stop");
        App.trigger("server:error", response);
      }
    });
  };

  this._showViewer = function(contact) {
    var contactViewer = new ContactViewer({region: this.region});
    contactViewer.showContact(contact);
  };
}

The simplest handler is showNewContactForm, which is called when the user wants to create a new contact. It creates a new Contact object and passes it to the _showEditor method, which will render an editor for a blank contact. The handler doesn't need to know how to do this, because the ContactEditor application will do the job. The other handlers follow the same pattern, triggering an event so that the root application shows a loading widget to the user while the data is fetched from the server. Once the server responds successfully, another method is called to handle the result. If an error occurs during the operation, an event is triggered so that the root application shows a friendly error to the user. Handlers receive an object and create an application object that renders a set of views and handles the user interactions. Imagine the created object handling a form to save a contact: when the user clicks on the save button, it handles the save process, perhaps showing a message such as "Are you sure you want to save the changes?", and takes the right action.
The subapplication mediator

The responsibility of the subapplication mediator object is to render the required layout and views to be shown to the user. It knows which views need to be rendered and in which order, so it instantiates the views with the models if needed and puts the results into the DOM. After rendering the necessary views, it listens for user interactions as Backbone events triggered from the views; methods on the object handle the interactions as described in the use cases. The mediator pattern is applied to this object to coordinate efforts between the views. For example, imagine that we have a form with contact data. As the user types into the edit form, another view renders a preview business card for the contact; in this case, the form view triggers changes to the application object, and the application object tells the business card view to use the new set of data each time. As you can see, the views are decoupled, and this is the objective of the application object. The following snippet shows the application that displays a list of contacts. It creates a ContactListView view, which knows how to render a collection of contacts, and passes it the contacts collection to be rendered:

var ContactList = function(options) {
  _.extend(this, Backbone.Events);
  this.region = options.region;

  this.showList = function(contacts) {
    var contactList = new ContactListView({
      collection: contacts
    });
    this.region.show(contactList);
    this.listenTo(contactList, "item:contact:delete", this._deleteContact);
  };

  this._deleteContact = function(contact) {
    if (confirm('Are you sure?')) {
      contact.collection.remove(contact);
    }
  };

  this.close = function() {
    this.stopListening();
  };
};

The ContactListView view is responsible for transforming this into DOM nodes and responding to collection events, such as adding a new contact or removing one. Once the view is initialized, it is rendered in the region specified previously. When the view is finally in the DOM, the application listens for the "item:contact:delete" event, which is triggered if the user clicks on the delete button rendered for each contact. To see a contact, a ContactViewer application is responsible for managing the use case, as follows:

var ContactViewer = function(options) {
  _.extend(this, Backbone.Events);
  this.region = options.region;

  this.showContact = function(contact) {
    var contactView = new ContactView({model: contact});
    this.region.show(contactView);
    this.listenTo(contactView, "contact:delete", this._deleteContact);
  };

  this._deleteContact = function(contact) {
    if (confirm("Are you sure?")) {
      contact.destroy({
        success: function() {
          App.router.navigate("/contacts", true);
        },
        error: function() {
          alert("Something went wrong");
        }
      });
    }
  };
};

It's the same situation as with the contact list: it creates a view that manages the DOM interactions, renders it in the specified region, and listens for events. From the details view of a contact, users can delete it. As with the list, a _deleteContact method handles the event, but the difference is that when a contact is deleted, the application is redirected to the list of contacts, which is the expected behavior. You can see how the handler uses the root application infrastructure by calling the navigate method of the global App.router. The handlers for the forms to create or edit contacts are very similar, so the same ContactEditor can be used for both cases.
This object shows a form to the user and waits for the save action, as shown in the following code:

var ContactEditor = function(options) {
  _.extend(this, Backbone.Events);
  this.region = options.region;

  this.showEditor = function(contact) {
    var contactForm = new ContactForm({model: contact});
    this.region.show(contactForm);
    this.listenTo(contactForm, "contact:save", this._saveContact);
  };

  this._saveContact = function(contact) {
    contact.save(null, {
      success: function() {
        alert("Successfully saved");
        App.router.navigate("/contacts");
      },
      error: function() {
        alert("Something went wrong");
      }
    });
  };
};

In this case, the model can have modifications in its data. In simple layouts, the views and model can work nicely with model-view data bindings, so no extra code is needed. Here we assume that the model is updated as the user enters information in the form, for example with Backbone.Stickit. When the save button is clicked, a "contact:save" event is triggered, and the application responds with the _saveContact method. See how the method issues a save call on the standard Backbone model and waits for the result. On successful requests, a message is displayed and the user is redirected to the contact list. On errors, a message tells the user that the application found a problem while saving the contact. The implementation details of the views are outside the scope of this article, but you can get a sense of the work done by this object from the snippets in this section.

Summary

In this article, we started by describing, in a general way, how a Backbone application works. We described two main parts: a root application and subapplications. The root application provides common infrastructure to the other, smaller and more focused applications that we call subapplications. Subapplications are loosely coupled with the other subapplications and should own resources such as views, controllers, routers, and so on. A subapplication manages a small part of the system and no more. Communication between the subapplications and the root application happens through an event-driven bus, such as Backbone.Events or Backbone.Radio. Users interact with the application through the views that a subapplication renders. A subapplication mediator orchestrates the interaction between the views, models, and collections. It also handles business logic, such as saving or deleting a resource.

Resources for Article:

Further resources on this subject:
- Object-Oriented JavaScript with Backbone Classes [article]
- Building a Simple Blog [article]
- Marionette View Types and Their Use [article]


REST Services with Finagle and Finch

Packt
03 Nov 2015
9 min read
In this article by Jos Dirksen, the author of RESTful Web Services with Scala, we'll only be talking about Finch. Note, though, that most of the concepts provided by Finch are based on the underlying Finagle ideas. Finch just provides a very nice REST-based set of functions to make working with Finagle easy and intuitive.

(For more resources related to this topic, see here.)

Finagle and Finch are two different frameworks that work closely together. Finagle is an RPC framework, created by Twitter, which you can use to easily create different types of services. On the website (https://github.com/twitter/finagle), the team behind Finagle explains it like this:

"Finagle is an extensible RPC system for the JVM, used to construct high-concurrency servers. Finagle implements uniform client and server APIs for several protocols, and is designed for high performance and concurrency. Most of Finagle's code is protocol agnostic, simplifying the implementation of new protocols."

So, while Finagle provides the plumbing required to create highly scalable services, it doesn't provide direct support for specific protocols. This is where Finch comes in. Finch (https://github.com/finagle/finch) provides an HTTP REST layer on top of Finagle. On their website, you can find a nice quote that summarizes what Finch aims to do:

"Finch is a thin layer of purely functional basic blocks atop of Finagle for building composable REST APIs. Its mission is to provide the developers simple and robust REST API primitives being as close as possible to the bare metal Finagle API."

Your first Finagle and Finch REST service

Let's start by building a minimal Finch REST service. The first thing we need to do is to make sure we have the correct dependencies. To use Finch, all you have to do is add the following dependency to your SBT file:

"com.github.finagle" %% "finch-core" % "0.7.0"

With this dependency added, we can start coding our very first Finch service. The next code fragment shows a minimal Finch service, which just responds with a Hello, Finch! message:

package org.restwithscala.chapter2.gettingstarted

import io.finch.route._
import com.twitter.finagle.Httpx

object HelloFinch extends App {
  Httpx.serve(":8080", (Get / "hello" /> "Hello, Finch!").toService)
  println("Press <enter> to exit.")
  Console.in.read.toChar
}

When this service receives a GET request on the URL path hello, it responds with a Hello, Finch! message. Finch does this by creating a service (using the toService function) from a route (more on what a route is in the next section) and using the Httpx.serve function to host the created service. When you run this example, you'll see output like the following:

[info] Loading project definition from /Users/jos/dev/git/rest-with-scala/project
[info] Set current project to rest-with-scala (in build file:/Users/jos/dev/git/rest-with-scala/)
[info] Running org.restwithscala.chapter2.gettingstarted.HelloFinch
Jun 26, 2015 9:38:00 AM com.twitter.finagle.Init$$anonfun$1 apply$mcV$sp
INFO: Finagle version 6.25.0 (rev=78909170b7cc97044481274e297805d770465110) built at 20150423-135046
Press <enter> to exit.

At this point, we have an HTTP server running on port 8080. When we make a call to http://localhost:8080/hello, this server will respond with the Hello, Finch! message.
To test this service, you can make an HTTP request in Postman or any other REST client. If you don't want to use a GUI to make the requests, you can also use the following curl command:

curl 'http://localhost:8080/hello'

HTTP verb and URL matching

An important part of every REST framework is the ability to easily match HTTP verbs and the various path segments of the URL. In this section, we'll look at the tools Finch provides us with. Let's look at the code required to do this (the full source code for this example can be found at https://github.com/josdirksen/rest-with-scala/blob/master/chapter-02/src/main/scala/org/restwithscala/chapter2/steps/FinchStep1.scala):

package org.restwithscala.chapter2.steps

import com.twitter.finagle.Httpx
import io.finch.request._
import io.finch.route._
import io.finch.{Endpoint => _}

object FinchStep1 extends App {

  // handle a single post using a RequestReader
  val taskCreateAPI = Post / "tasks" /> (
    for {
      bodyContent <- body
    } yield s"created task with: $bodyContent")

  // Use matchers and extractors to determine which route to call
  // For more examples see the source file.
  val taskAPI =
    Get / "tasks" /> "Get a list of all the tasks" |
    Get / "tasks" / long /> (id => s"Get a single task with id: $id") |
    Put / "tasks" / long /> (id => s"Update an existing task with id $id to ") |
    Delete / "tasks" / long /> (id => s"Delete an existing task with $id")

  // simple server that combines the two routes and creates a service
  val server = Httpx.serve(":8080", (taskAPI :+: taskCreateAPI).toService)

  println("Press <enter> to exit.")
  Console.in.read.toChar
  server.close()
}

In this code fragment, we created a number of Router instances that process the requests we send from Postman. Let's start by looking at one of the routes of the taskAPI router: Get / "tasks" / long /> (id => s"Get a single task with id: $id"). The following explains the various parts of the route:

- Get: While writing routers, usually the first thing you do is determine which HTTP verb you want to match. In this case, this route will only match the GET verb. Besides the Get matcher, Finch also provides the following matchers: Post, Patch, Delete, Head, Options, Put, Connect, and Trace.
- "tasks": The next part of the route is a matcher that matches a URL path segment. In this case, we match the following URL: http://localhost:8080/tasks. Finch uses an implicit conversion to convert this String object to a Finch Matcher object. Finch also has two wildcard Matchers: * and **. The * matcher allows any value for a single path segment, and the ** matcher allows any value for multiple path segments.
- long: The next part of the route is called an Extractor. With an extractor, you turn part of the URL into a value, which you can use to create the response (for example, to retrieve an object from the database using the extracted ID). The long extractor, as the name implies, converts the matching path segment to a long value. Finch also provides an int, string, and boolean extractor.
- long => B: The last part of the route is used to create the response message. Finch provides different ways of creating the response, which we'll show in the other parts of this article. In this case, we need to provide Finch with a function that transforms the extracted long value and returns a value Finch can convert to a response (more on this later). In this example, we just return a String.
If you've looked closely at the source code, you've probably noticed that Finch uses custom operators to combine the various parts of a route. Let's look a bit closer at those. With Finch, we get the following operators (also called combinators in Finch terms):

/ or andThen: With this combinator, you sequentially combine various matchers and extractors together. Whenever the first part matches, the next one is called. For instance: Get / "path" / long.

| or orElse: This combinator allows you to combine two routers (or parts thereof) together as long as they are of the same type. So, we could do (Get | Post) to create a matcher that matches the GET and POST HTTP verbs. In the code sample, we've also used this to combine all the routes that returned a simple String into the taskAPI router.

/> or map: With this combinator, we pass the request and any extracted values from the path to a function for further processing. The result of the function that is called is returned as the HTTP response. As you'll see in the rest of the article, there are different ways of processing the HTTP request and creating a response.

:+:: The final combinator allows you to combine two routers of different types. In the example, we have two routers: taskAPI, which returns a simple String, and taskCreateAPI, which uses a RequestReader (through the body function) to create the response. We can't combine these with | since the results are created using two different approaches, so we use the :+: combinator.

We just return simple Strings whenever we get a request. In the next section, we'll look at how you can use RequestReader to convert the incoming HTTP requests to case classes and use those to create an HTTP response. When you run this service, you'll see output like the following:

[info] Loading project definition from /Users/jos/dev/git/rest-with-scala/project
[info] Set current project to rest-with-scala (in build file:/Users/jos/dev/git/rest-with-scala/)
[info] Running org.restwithscala.chapter2.steps.FinchStep1
Jun 26, 2015 10:19:11 AM com.twitter.finagle.Init$$anonfun$1 apply$mcV$sp
INFO: Finagle version 6.25.0 (rev=78909170b7cc97044481274e297805d770465110) built at 20150423-135046
Press <enter> to exit.

Once the server is started, you can once again use Postman (or any other REST client) to make requests to this service (example requests can be found at https://github.com/josdirksen/rest-with-scala/tree/master/common). And once again, you don't have to use a GUI to make the requests. You can test the service with curl as follows:

# Create task
curl 'http://localhost:8080/tasks' -H 'Content-Type: text/plain;charset=UTF-8' --data-binary $'{\ntaskdata\n}'

# Update task
curl 'http://localhost:8080/tasks/1' -X PUT -H 'Content-Type: text/plain;charset=UTF-8' --data-binary $'{\ntaskdata\n}'

# Get all tasks
curl 'http://localhost:8080/tasks'

# Get single task
curl 'http://localhost:8080/tasks/1'

Summary

This article only showed a couple of the features Finch provides, but it should give you a good head start toward working with Finch.

Resources for Article:

Further resources on this subject:
RESTful Java Web Services Design [article]
Creating a RESTful API [article]
Scalability, Limitations, and Effects [article]

Upgrading from Magento 1

Packt
03 Nov 2015
4 min read
Magento 2 Development Cookbook by Bart Delvaux provides you with a wide range of techniques to modify and extend the functionality of your online store. It contains easy-to-understand recipes starting with the basics and moving on to cover advanced topics. Many recipes work with code examples that can be downloaded from the book's website.

Why Magento 2

Solve common problems encountered while extending your Magento 2 store to fit your business needs.
Use the exciting and enhanced features of Magento 2, such as customizable security permissions, intelligent filtered search options, and easy third-party integration, among others.
Learn to build and maintain a Magento 2 shop via a visual-based page editor and customize the look and feel using Magento 2 offerings on the go.

What this article covers

This article covers preparing an upgrade from Magento 1.

Preparing an upgrade from Magento 1

The differences between Magento 1 and Magento 2 are big. The code has a whole new structure with a lot of improvements, but there is one big disadvantage: what do you do if you want to upgrade your Magento 1 shop to Magento 2? Magento created an upgrade tool that migrates the data of a Magento 1 database to the right structure for a Magento 2 database. The custom modules in your Magento 1 shop will not work in Magento 2. It is possible that some of your modules will have a Magento 2 version and, depending on the module, the module author will have a migration tool to migrate the data that is in the module.

Getting ready

Before we get started, make sure you have an empty (without sample data) Magento 2 installation with the same version as the migration tool that is available at: https://github.com/magento/data-migration-tool-ce

How to do it

In your Magento 2 installation (with the same version as the migration tool), run the following commands:

composer config repositories.data-migration-tool git https://github.com/magento/data-migration-tool-ce
composer require magento/data-migration-tool:dev-master

Install Magento 2 with an empty database by running the installer. Make sure you configure it with the right time zone and currencies. When these steps are done, you can test the tool by running the following command:

php vendor/magento/data-migration-tool/bin/migrate

This command will print the usage of the command. The next thing is creating the configuration files. Example configuration files are in the following folder: vendor/magento/data-migration-tool/etc/<version>. We can create a copy of this folder where we can set our custom configuration values. For a Magento 1.9 installation, we have to run the following cp command:

cp -R vendor/magento/data-migration-tool/etc/ce-to-ce/1.9.1.0/ vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration

Open the vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml.dist file and search for the source/database and destination/database tags.
Change the values of these database settings so that they match your own database settings, as in the following code:

<source>
  <database host="localhost" name="magento1" user="root"/>
</source>
<destination>
  <database host="localhost" name="magento2_migration" user="root"/>
</destination>

Rename that file to config.xml with the following command:

mv vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml.dist vendor/magento/data-migration-tool/etc/ce-to-ce/packt-migration/config.xml

How it works

By adding a composer dependency, we installed the data migration tool for Magento 2 in the codebase. This migration tool is a PHP command-line script that will handle the migration steps from a Magento 1 shop. In the etc folder of the migration module, there is an example configuration of an empty Magento 1.9 shop. If you want to migrate an existing Magento 1 shop, you have to customize these configuration files so that they match your preferred state. In the next recipe, we will learn how we can use the script to start the migration.

Who this book is for

This book is packed with a wide range of techniques to modify and extend the functionality of your online store. It contains easy-to-understand recipes starting with the basics and moving on to cover advanced topics. Many recipes work with code examples that can be downloaded from the book's website.

Summary

In this article, we learned how to prepare an upgrade from Magento 1. Read Magento 2 Development Cookbook to gain detailed knowledge of Magento 2 workflows, explore use cases for advanced features, craft well-thought-out orchestrations, troubleshoot unexpected behavior, and extend Magento 2 through customizations. Other related titles are:

Magento: Beginner's Guide - Second Edition
Mastering Magento
Magento: Beginner's Guide
Mastering Magento Theme Design

Resources for Article:

Further resources on this subject:
Creating a Responsive Magento Theme with Bootstrap 3 [article]
Social Media and Magento [article]
Optimizing Magento Performance — Using HHVM [article]
Relational Databases with SQLAlchemy

Packt
02 Nov 2015
28 min read
In this article by Matthew Copperwaite, author of the book Learning Flask Framework, he talks about how relational databases are the bedrock upon which almost every modern web application is built. Learning to think about your application in terms of tables and relationships is one of the keys to a clean, well-designed project. We will be using SQLAlchemy, a powerful object relational mapper that allows us to abstract away the complexities of multiple database engines, to work with the database directly from within Python.

In this article, we shall:

Present a brief overview of the benefits of using a relational database
Introduce SQLAlchemy, the Python SQL Toolkit and Object Relational Mapper
Configure our Flask application to use SQLAlchemy
Write a model class to represent blog entries
Learn how to save and retrieve blog entries from the database
Perform queries: sorting, filtering, and aggregation
Create schema migrations using Alembic

Why use a relational database?

Our application's database is much more than a simple record of things that we need to save for future retrieval. If all we needed to do was save and retrieve data, we could easily use flat text files. The fact is, though, that we want to be able to perform interesting queries on our data. What's more, we want to do this efficiently and without reinventing the wheel. While non-relational databases (sometimes known as NoSQL databases) are very popular and have their place in the world of the web, relational databases long ago solved the common problems of filtering, sorting, aggregating, and joining tabular data. Relational databases allow us to define sets of data in a structured way that maintains the consistency of our data. Using relational databases also gives us, the developers, the freedom to focus on the parts of our app that matter.

In addition to efficiently performing ad hoc queries, a relational database server will also do the following:

Ensure that our data conforms to the rules set forth in the schema
Allow multiple people to access the database concurrently, while at the same time guaranteeing the consistency of the underlying data
Ensure that data, once saved, is not lost even in the event of an application crash

Relational databases and SQL, the programming language used with relational databases, are topics worthy of an entire book. Because this book is devoted to teaching you how to build apps with Flask, I will show you how to use a tool that has been widely adopted by the Python community for working with databases, namely, SQLAlchemy. SQLAlchemy abstracts away many of the complications of writing SQL queries, but there is no substitute for a deep understanding of SQL and the relational model. For that reason, if you are new to SQL, I would recommend that you check out the colorful book Learn SQL the Hard Way by Zed Shaw, available online for free at http://sql.learncodethehardway.org/.

Introducing SQLAlchemy

SQLAlchemy is an extremely powerful library for working with relational databases in Python. Instead of writing SQL queries by hand, we can use normal Python objects to represent database tables and execute queries. There are a number of benefits to this approach, which are listed as follows:

Your application can be developed entirely in Python.
Subtle differences between database engines are abstracted away.
This allows you to use a lightweight database such as SQLite for local development and testing, and then switch to a database designed for high loads (such as PostgreSQL) in production.
Database errors are less common because there are now two layers between your application and the database server: the Python interpreter itself (which will catch the obvious syntax errors), and SQLAlchemy, which has well-defined APIs and its own layer of error-checking.
Your database code may become more efficient, thanks to SQLAlchemy's unit-of-work model, which helps reduce unnecessary round-trips to the database. SQLAlchemy also has facilities for efficiently pre-fetching related objects, known as eager loading.
Object Relational Mapping (ORM) makes your code more maintainable, an aspiration known as don't repeat yourself (DRY). Suppose you add a column to a model. With SQLAlchemy, it will be available whenever you use that model. If, on the other hand, you had hand-written SQL queries strewn throughout your app, you would need to update each query, one at a time, to ensure that you were including the new column.
SQLAlchemy can help you avoid SQL injection vulnerabilities.
Excellent library support: There are a multitude of useful libraries that can work directly with your SQLAlchemy models to provide things like maintenance interfaces and RESTful APIs.

I hope you're excited after reading this list. If all the items in this list don't make sense to you right now, don't worry. Now that we have discussed some of the benefits of using SQLAlchemy, let's install it and start coding.

If you'd like to learn more about SQLAlchemy, there is an article devoted entirely to its design in The Architecture of Open-Source Applications, available online for free at http://aosabook.org/en/sqlalchemy.html.

Installing SQLAlchemy

We will use pip to install SQLAlchemy into the blog app's virtualenv. To activate your virtualenv, change directories and source the activate script as follows:

$ cd ~/projects/blog
$ source bin/activate
(blog) $ pip install sqlalchemy
Downloading/unpacking sqlalchemy
…
Successfully installed sqlalchemy
Cleaning up...

You can check if your installation succeeded by opening a Python interpreter and checking the SQLAlchemy version; note that your exact version number is likely to differ.

$ python
>>> import sqlalchemy
>>> sqlalchemy.__version__
'0.9.0b2'

Using SQLAlchemy in our Flask app

SQLAlchemy works very well with Flask on its own, but the author of Flask has released a special Flask extension named Flask-SQLAlchemy that provides helpers for many common tasks, and can save us from having to re-invent the wheel later on. Let's use pip to install this extension:

(blog) $ pip install flask-sqlalchemy
…
Successfully installed flask-sqlalchemy

Flask provides a standard interface for developers who are interested in building extensions. As the framework has grown in popularity, the number of high-quality extensions has increased. If you'd like to take a look at some of the more popular extensions, there is a curated list available on the Flask project website at http://flask.pocoo.org/extensions/.

Choosing a database engine

SQLAlchemy supports a multitude of popular database dialects, including SQLite, MySQL, and PostgreSQL. Depending on the database you would like to use, you may need to install an additional Python package containing a database driver. Listed next are several popular databases supported by SQLAlchemy and the corresponding pip-installable driver packages.
Some databases have multiple driver options, so the most popular one is listed first:

SQLite: not needed; part of the Python standard library since version 2.5
MySQL: MySQL-python, PyMySQL (pure Python), OurSQL
PostgreSQL: psycopg2
Firebird: fdb
Microsoft SQL Server: pymssql, PyODBC
Oracle: cx-Oracle

SQLite comes as standard with Python and does not require a separate server process, so it is perfect for getting up and running quickly. For simplicity in the examples that follow, I will demonstrate how to configure the blog app for use with SQLite. If you have a different database in mind that you would like to use for the blog project, feel free to use pip to install the necessary driver package at this time.

Connecting to the database

Using your favorite text editor, open the config.py module for our blog project (~/projects/blog/app/config.py). We are going to add an SQLAlchemy-specific setting to instruct Flask-SQLAlchemy how to connect to our database. The new line is the SQLALCHEMY_DATABASE_URI setting in the following:

class Configuration(object):
    APPLICATION_DIR = current_directory
    DEBUG = True
    SQLALCHEMY_DATABASE_URI = 'sqlite:///%s/blog.db' % APPLICATION_DIR

The SQLALCHEMY_DATABASE_URI comprises the following parts:

dialect+driver://username:password@host:port/database

Because SQLite databases are stored in local files, the only information we need to provide is the path to the database file. On the other hand, if you wanted to connect to PostgreSQL running locally, your URI might look something like this:

postgresql://postgres:secretpassword@localhost:5432/blog_db

If you're having trouble connecting to your database, try consulting the SQLAlchemy documentation on database URIs: http://docs.sqlalchemy.org/en/rel_0_9/core/engines.html

Now that we've specified how to connect to the database, let's create the object responsible for actually managing our database connections. This object is provided by the Flask-SQLAlchemy extension and is conveniently named SQLAlchemy. Open app.py and make the following additions:

from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy

from config import Configuration

app = Flask(__name__)
app.config.from_object(Configuration)
db = SQLAlchemy(app)

These changes instruct our Flask app, and in turn SQLAlchemy, how to communicate with our application's database. The next step will be to create a table for storing blog entries and, to do so, we will create our first model.

Creating the Entry model

A model is the data representation of a table of data that we want to store in the database. These models have attributes called columns that represent the data items in the data. So, if we were creating a Person model, we might have columns for storing the first and last name, date of birth, home address, hair color, and so on. Since we are interested in creating a model to represent blog entries, we will have columns for things like the title and body content. Note that we don't say a People model or an Entries model – models are singular even though they commonly represent many different objects.

With SQLAlchemy, creating a model is as easy as defining a class and specifying a number of attributes assigned to that class. Let's start with a very basic model for our blog entries.
Create a new file named models.py in the blog project's app/ directory and enter the following code:

import datetime, re

from app import db

def slugify(s):
    return re.sub('[^\w]+', '-', s).lower()

class Entry(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(100))
    slug = db.Column(db.String(100), unique=True)
    body = db.Column(db.Text)
    created_timestamp = db.Column(db.DateTime, default=datetime.datetime.now)
    modified_timestamp = db.Column(
        db.DateTime,
        default=datetime.datetime.now,
        onupdate=datetime.datetime.now)

    def __init__(self, *args, **kwargs):
        super(Entry, self).__init__(*args, **kwargs)  # Call parent constructor.
        self.generate_slug()

    def generate_slug(self):
        self.slug = ''
        if self.title:
            self.slug = slugify(self.title)

    def __repr__(self):
        return '<Entry: %s>' % self.title

There is a lot going on, so let's start with the imports and work our way down. We begin by importing the standard library datetime and re modules. We will be using datetime to get the current date and time, and re to do some string manipulation. The next import statement brings in the db object that we created in app.py. As you recall, the db object is an instance of the SQLAlchemy class, which is a part of the Flask-SQLAlchemy extension. The db object provides access to the classes that we need to construct our Entry model, which is just a few lines ahead.

Before the Entry model, we define a helper function, slugify, which we will use to give our blog entries some nice URLs. The slugify function takes a string such as A post about Flask and uses a regular expression to turn it into a string that is human-readable in a URL, returning a-post-about-flask.

Next is the Entry model. Our Entry model is a normal class that extends db.Model. By extending db.Model, our Entry class will inherit a variety of helpers which we'll use to query the database. The attributes of the Entry model are a simple mapping of the names and data that we wish to store in the database, and are listed as follows:

id: This is the primary key for our database table. This value is set for us automatically by the database when we create a new blog entry, usually an auto-incrementing number for each new entry. While we will not explicitly set this value, a primary key comes in handy when you want to refer one model to another.
title: The title for a blog entry, stored as a String column with a maximum length of 100.
slug: The URL-friendly representation of the title, stored as a String column with a maximum length of 100. This column also specifies unique=True, so that no two entries can share the same slug.
body: The actual content of the post, stored in a Text column. This differs from the String type of the Title and Slug, as you can store as much text as you like in this field.
created_timestamp: The time a blog entry was created, stored in a DateTime column. We instruct SQLAlchemy to automatically populate this column with the current time by default when an entry is first saved.
modified_timestamp: The time a blog entry was last updated. SQLAlchemy will automatically update this column with the current time whenever we save an entry.

For short strings such as titles or names of things, the String column is appropriate, but when the text may be especially long it is better to use a Text column, as we did for the entry body. We've overridden the constructor for the class (__init__) so that, when a new model is created, it automatically sets the slug for us based on the title.
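As a quick, illustrative check of the slug behavior (this snippet is not from the book, but it uses only the slugify helper and Entry constructor defined above, run from the app directory with the virtualenv active):

from models import Entry, slugify

# Runs of non-word characters are collapsed into hyphens
print(slugify('A post about Flask'))  # a-post-about-flask

# Because __init__ calls generate_slug(), the slug is set immediately
entry = Entry(title='Hello, SQLAlchemy!')
print(entry.slug)                     # hello-sqlalchemy-

Note the trailing hyphen in the second example: the comma and exclamation mark are both non-word characters, so they are folded into hyphens; trimming them is left as an exercise.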
The last piece is the __repr__ method, which is used to generate a helpful representation of instances of our Entry class. The specific meaning of __repr__ is not important, but it allows you to reference the object that the program is working with when debugging.

A final bit of code needs to be added to main.py, the entry point to our application, to ensure that the models are imported. Add the following changes to main.py:

from app import app, db
import models
import views

if __name__ == '__main__':
    app.run()

Creating the Entry table

In order to start working with the Entry model, we first need to create a table for it in our database. Luckily, Flask-SQLAlchemy comes with a nice helper for doing just this. Create a new sub-folder named scripts in the blog project's app directory. Then create a file named create_db.py:

(blog) $ cd app/
(blog) $ mkdir scripts
(blog) $ touch scripts/create_db.py

Add the following code to the create_db.py module. This function will automatically look at all the code that we have written and create a new table in our database for the Entry model based on our models:

from main import db

if __name__ == '__main__':
    db.create_all()

Execute the script from inside the app/ directory. Make sure the virtualenv is active. If everything goes successfully, you should see no output.

(blog) $ python create_db.py
(blog) $

If you encounter errors while creating the database tables, make sure you are in the app directory, with the virtualenv activated, when you run the script. Next, ensure that there are no typos in your SQLALCHEMY_DATABASE_URI setting.

Working with the Entry model

Let's experiment with our new Entry model by saving a few blog entries. We will be doing this from the Python interactive shell. At this stage, let's install IPython, a sophisticated shell with features like tab-completion (which the default Python shell lacks):

(blog) $ pip install ipython

Now, check that we are in the app directory, and let's start the shell and create a couple of entries as follows:

(blog) $ ipython

In []: from models import *  # First things first, import our Entry model and db object.
In []: db  # What is db?
Out[]: <SQLAlchemy engine='sqlite:////home/charles/projects/blog/app/blog.db'>

If you are familiar with the normal Python shell but not IPython, things may look a little different at first. The main thing to be aware of is that In[] refers to the code you type in, and Out[] is the output of the commands you put into the shell.

IPython has a neat feature that allows you to print detailed information about an object. This is done by typing in the object's name followed by a question mark (?). Introspecting the Entry model provides a bit of information, including the argument signature and the string representing that object (known as the docstring) of the constructor:

In []: Entry?  # What is Entry and how do we create it?
Type: _BoundDeclarativeMeta
String Form:<class 'models.Entry'>
File: /home/charles/projects/blog/app/models.py
Docstring: <no docstring>
Constructor information:
Definition: Entry(self, *args, **kwargs)

We can create Entry objects by passing column values in as keyword arguments. In the preceding example, it uses **kwargs; this is a shortcut for taking a dict object and using it as the values for defining the object, as shown next:

In []: first_entry = Entry(title='First entry', body='This is the body of my first entry.')

In order to save our first entry, we will add it to the database session.
The session is simply an object that represents our actions on the database. Even after being added to the session, the entry will not be saved to the database yet. In order to save the entry to the database, we need to commit our session:

In []: db.session.add(first_entry)
In []: first_entry.id is None  # No primary key, the entry has not been saved.
Out[]: True
In []: db.session.commit()
In []: first_entry.id
Out[]: 1
In []: first_entry.created_timestamp
Out[]: datetime.datetime(2014, 1, 25, 9, 49, 53, 1337)

As you can see from the preceding code examples, once we commit the session, a unique id will be assigned to our first entry and the created_timestamp will be set to the current time. Congratulations, you've created your first blog entry! Try adding a few more on your own. You can add multiple entry objects to the same session before committing, so give that a try as well. At any point while you are experimenting, feel free to delete the blog.db file and re-run the create_db.py script to start over with a fresh database.

Making changes to an existing entry

In order to make changes to an existing Entry, simply make your edits and then commit. Let's retrieve our Entry using the id that was returned to us earlier, make some changes, and commit it. SQLAlchemy will know that it needs to be updated. Here is how you might make edits to the first entry:

In []: first_entry = Entry.query.get(1)
In []: first_entry.body = 'This is the first entry, and I have made some edits.'
In []: db.session.commit()

And just like that, your changes are saved.

Deleting an entry

Deleting an entry is just as easy as creating one. Instead of calling db.session.add, we will call db.session.delete and pass in the Entry instance that we wish to remove:

In []: bad_entry = Entry(title='bad entry', body='This is a lousy entry.')
In []: db.session.add(bad_entry)
In []: db.session.commit()  # Save the bad entry to the database.
In []: db.session.delete(bad_entry)
In []: db.session.commit()  # The bad entry is now deleted from the database.

Retrieving blog entries

While creating, updating, and deleting are fairly straightforward operations, the real fun starts when we look at ways to retrieve our entries. We'll start with the basics, and then work our way up to more interesting queries. We will use a special attribute on our model class to make queries: Entry.query. This attribute exposes a variety of APIs for working with the collection of entries in the database. Let's simply retrieve a list of all the entries in the Entry table:

In []: entries = Entry.query.all()
In []: entries  # What are our entries?
Out[]: [<Entry u'First entry'>, <Entry u'Second entry'>, <Entry u'Third entry'>, <Entry u'Fourth entry'>]

As you can see, in this example the query returns a list of Entry instances that we created. When no explicit ordering is specified, the entries are returned to us in an arbitrary order chosen by the database.
Let's specify that we want the entries returned to us in alphabetical order by title:

In []: Entry.query.order_by(Entry.title.asc()).all()
Out []: [<Entry u'First entry'>, <Entry u'Fourth entry'>, <Entry u'Second entry'>, <Entry u'Third entry'>]

Shown next is how you would list your entries in reverse-chronological order, based on when they were last updated:

In []: newest_to_oldest = Entry.query.order_by(Entry.modified_timestamp.desc()).all()
Out []: [<Entry: Fourth entry>, <Entry: Third entry>, <Entry: Second entry>, <Entry: First entry>]

Filtering the list of entries

It is very useful to be able to retrieve the entire collection of blog entries, but what if we want to filter the list? We could always retrieve the entire collection and then filter it in Python using a loop, but that would be very inefficient. Instead, we will rely on the database to do the filtering for us, and simply specify the conditions for which entries should be returned. In the following example, we will specify that we want to filter by entries where the title equals 'First entry':

In []: Entry.query.filter(Entry.title == 'First entry').all()
Out[]: [<Entry u'First entry'>]

If this seems somewhat magical to you, it's because it really is! SQLAlchemy uses operator overloading to convert expressions like <Model>.<column> == <some value> into an abstracted object called BinaryExpression. When you are ready to execute your query, these data structures are then translated into SQL. A BinaryExpression is simply an object that represents the logical comparison, and is produced by overriding the standard methods that are typically called on an object when comparing values in Python.

In order to retrieve a single entry, you have two options: .first() and .one(). Their differences and similarities are summarized next, by number of matching rows:

1 row: first() returns the object; one() returns the object.
0 rows: first() returns None; one() raises sqlalchemy.orm.exc.NoResultFound.
2+ rows: first() returns the first object (based on either explicit ordering or the ordering chosen by the database); one() raises sqlalchemy.orm.exc.MultipleResultsFound.

Let's try the same query as before, but instead of calling .all(), we will call .first() to retrieve a single Entry instance:

In []: Entry.query.filter(Entry.title == 'First entry').first()
Out[]: <Entry u'First entry'>

Notice how previously .all() returned a list containing the object, whereas .first() returned just the object itself.

Special lookups

In the previous example we tested for equality, but there are many other types of lookups possible. In the following list, I have listed some that you may find useful. A complete list can be found in the SQLAlchemy documentation.

Entry.title == 'The title': Entries where the title is "The title", case-sensitive.
Entry.title != 'The title': Entries where the title is not "The title".
Entry.created_timestamp < datetime.date(2014, 1, 25): Entries created before January 25, 2014. For less than or equal, use <=.
Entry.created_timestamp > datetime.date(2014, 1, 25): Entries created after January 25, 2014. For greater than or equal, use >=.
Entry.body.contains('Python'): Entries where the body contains the word "Python", case-sensitive.
Entry.title.endswith('Python'): Entries where the title ends with the string "Python", case-sensitive. Note that this will also match titles that end with the word "CPython", for example.
Entry.title.startswith('Python'): Entries where the title starts with the string "Python", case-sensitive.
Note that this will also match titles like "Pythonistas".
Entry.body.ilike('%python%'): Entries where the body contains the word "python" anywhere in the text, case-insensitive. The "%" character is a wild-card.
Entry.title.in_(['Title one', 'Title two']): Entries where the title is in the given list, either 'Title one' or 'Title two'.

Combining expressions

The expressions listed previously can be combined using bitwise operators to produce arbitrarily complex expressions. Let's say we want to retrieve all blog entries that have the word Python or Flask in the title. To accomplish this, we will create two contains expressions, then combine them using Python's bitwise OR operator, which is a single pipe | character, unlike a lot of other languages that use a double pipe || character:

Entry.query.filter(Entry.title.contains('Python') | Entry.title.contains('Flask'))

Using bitwise operators, we can come up with some pretty complex expressions. Try to figure out what the following example is asking for:

Entry.query.filter(
    (Entry.title.contains('Python') | Entry.title.contains('Flask')) &
    (Entry.created_timestamp > (datetime.date.today() - datetime.timedelta(days=30)))
)

As you probably guessed, this query returns all entries where the title contains either Python or Flask, and which were created within the last 30 days. We are using Python's bitwise OR and AND operators to combine the sub-expressions. For any query you produce, you can view the generated SQL by printing the query as follows:

In []: query = Entry.query.filter(
    (Entry.title.contains('Python') | Entry.title.contains('Flask')) &
    (Entry.created_timestamp > (datetime.date.today() - datetime.timedelta(days=30)))
)
In []: print str(query)
SELECT entry.id AS entry_id, ...
FROM entry
WHERE (
    (entry.title LIKE '%%' || :title_1 || '%%')
    OR
    (entry.title LIKE '%%' || :title_2 || '%%')
) AND entry.created_timestamp > :created_timestamp_1

Negation

There is one more piece to discuss, which is negation. If we wanted to get a list of all blog entries which did not contain Python or Flask in the title, how would we do that? SQLAlchemy provides two ways to create these types of expressions: using either Python's unary negation operator (~) or by calling db.not_(). This is how you would construct this query with SQLAlchemy:

Using unary negation:

In []: Entry.query.filter(~(Entry.title.contains('Python') | Entry.title.contains('Flask')))

Using db.not_():

In []: Entry.query.filter(db.not_(Entry.title.contains('Python') | Entry.title.contains('Flask')))

Operator precedence

Not all operations are considered equal by the Python interpreter. This is like in math class, where we learned that expressions like 2 + 3 * 4 are equal to 14 and not 20, because the multiplication operation occurs first. In Python, bitwise operators all have a higher precedence than things like equality tests, so this means that, when you are building your query expression, you have to pay attention to the parentheses. Let's look at some example Python expressions and their results:

(Entry.title == 'Python' | Entry.title == 'Flask'): Wrong! SQLAlchemy throws an error because the first thing to be evaluated is actually 'Python' | Entry.title!
(Entry.title == 'Python') | (Entry.title == 'Flask'): Right. Returns entries where the title is either "Python" or "Flask".
~Entry.title == 'Python': Wrong! SQLAlchemy will turn this into a valid SQL query, but the results will not be meaningful.
~(Entry.title == 'Python'): Right.
Returns entries where the title is not equal to "Python".

If you find yourself struggling with operator precedence, it's a safe bet to put parentheses around any comparison that uses ==, !=, <, <=, >, and >=.

Making changes to the schema

The final topic we will discuss in this article is how to make modifications to an existing model definition. From the project specification, we know we would like to be able to save drafts of our blog entries. Right now, we don't have any way to tell whether an entry is a draft or not, so we will need to add a column that lets us store the status of our entry. Unfortunately, while db.create_all() works perfectly for creating tables, it will not automatically modify an existing table; to do this we need to use migrations.

Adding Flask-Migrate to our project

We will use Flask-Migrate to help us automatically update our database whenever we change the schema. In the blog virtualenv, install Flask-Migrate using pip:

(blog) $ pip install flask-migrate

The author of SQLAlchemy has a project called alembic; Flask-Migrate makes use of this and integrates it with Flask directly, making things easier.

Next, we will add a Migrate helper to our app. We will also create a script manager for our app. The script manager allows us to execute special commands within the context of our app, directly from the command line. We will be using the script manager to execute the migrate command. Open app.py and make the following additions:

from flask import Flask
from flask.ext.migrate import Migrate, MigrateCommand
from flask.ext.script import Manager
from flask.ext.sqlalchemy import SQLAlchemy

from config import Configuration

app = Flask(__name__)
app.config.from_object(Configuration)
db = SQLAlchemy(app)
migrate = Migrate(app, db)

manager = Manager(app)
manager.add_command('db', MigrateCommand)

In order to use the manager, we will add a new file named manage.py along with app.py. Add the following code to manage.py:

from app import manager
from main import *

if __name__ == '__main__':
    manager.run()

This looks very similar to main.py, the key difference being that, instead of calling app.run(), we are calling manager.run(). Django has a similar, although auto-generated, manage.py file that serves a similar function.

Creating the initial migration

Before we can start changing our schema, we need to create a record of its current state. To do this, run the following commands from inside your blog's app directory. The first command will create a migrations directory inside the app folder which will track the changes we make to our schema. The second command, db migrate, will create a snapshot of our current schema so that future changes can be compared to it.

(blog) $ python manage.py db init
Creating directory /home/charles/projects/blog/app/migrations ... done
...
(blog) $ python manage.py db migrate
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
Generating /home/charles/projects/blog/app/migrations/versions/535133f91f00_.py ... done

Finally, we will run db upgrade to run the migration, which will indicate to the migration system that everything is up to date:

(blog) $ python manage.py db upgrade
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.migration] Running upgrade None -> 535133f91f00, empty message

Adding a status column

Now that we have a snapshot of our current schema, we can start making changes.
We will be adding a new column named status, which will store an integer value corresponding to a particular status. Although there are only two statuses at the moment (PUBLIC and DRAFT), using an integer instead of a Boolean gives us the option to easily add more statuses in the future. Open models.py and make the following additions to the Entry model:

class Entry(db.Model):
    STATUS_PUBLIC = 0
    STATUS_DRAFT = 1

    id = db.Column(db.Integer, primary_key=True)
    title = db.Column(db.String(100))
    slug = db.Column(db.String(100), unique=True)
    body = db.Column(db.Text)
    status = db.Column(db.SmallInteger, default=STATUS_PUBLIC)
    created_timestamp = db.Column(db.DateTime, default=datetime.datetime.now)
    ...

From the command line, we will once again run db migrate to generate the migration script. You can see from the command's output that it found our new column:

(blog) $ python manage.py db migrate
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.autogenerate.compare] Detected added column 'entry.status'
Generating /home/charles/projects/blog/app/migrations/versions/2c8e81936cad_.py ... done

Because we have blog entries in the database, we need to make a small modification to the auto-generated migration to ensure the statuses for the existing entries are initialized to the proper value. To do this, open up the migration file (mine is migrations/versions/2c8e81936cad_.py) and change the following line:

op.add_column('entry', sa.Column('status', sa.SmallInteger(), nullable=True))

Replacing nullable=True with server_default='0' tells the migration script not to set the column to null by default, but instead to use 0:

op.add_column('entry', sa.Column('status', sa.SmallInteger(), server_default='0'))

Finally, run db upgrade to run the migration and create the status column:

(blog) $ python manage.py db upgrade
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.migration] Running upgrade 535133f91f00 -> 2c8e81936cad, empty message

Congratulations, your Entry model now has a status field!

Summary

By now, you should be familiar with using SQLAlchemy to work with a relational database. We covered the benefits of using a relational database and an ORM, configured a Flask application to connect to a relational database, and created SQLAlchemy models. All this allowed us to create relationships between our data and perform queries. To top it off, we also used a migration tool to handle future database schema changes.

We will set aside the interactive interpreter and start creating views to display blog entries in the web browser. We will put all our SQLAlchemy knowledge to work by creating interesting lists of blog entries, as well as a simple search feature. We will build a set of templates to make the blogging site visually appealing, and learn how to use the Jinja2 templating language to eliminate repetitive HTML coding.

Resources for Article:

Further resources on this subject:
Man, Do I Like Templates! [article]
Snap – The Code Snippet Sharing Application [article]
Deploying on your own server [article]

Making a simple Web based SSH client using Node.js and Socket.io

Jakub Mandula
28 Oct 2015
7 min read
If you are reading this post, you probably know what SSH stands for. But just for the sake of formality, here we go: SSH stands for Secure Shell. It is a network protocol for secure access to the shell on a remote computer. You can do much more over SSH besides commanding your computer. Here you can find further information: http://en.wikipedia.org/wiki/Secure_Shell.

In this post, we are going to create a very simple web terminal. And when I say simple, I mean it! However much you like colors, it will not support them, because the parsing is just beyond the scope of this post. If you want a good client-side terminal library, use term.js. It is made by the same guy who wrote pty.js, which we will be using. It is able to handle pretty much all key events and COLORS!!!!

Installation

I am going to assume you already have your node and npm installed. First we will install all of the npm packages we will be using:

npm install express pty.js socket.io

Express is a super cool web framework for Node. We are going to use it to serve our static files. I know it is a bit overkill, but I like Express.

pty.js is where the magic will be happening. It forks processes into virtual pseudo terminals and provides bindings for communication.

Socket.io is what we will use to transmit the data from the web browser to the server and back. It uses modern WebSockets, but provides fallbacks for backward compatibility. Anytime you want to create a real-time application, Socket.io is the way to go.

Planning

First things first, we need to think about what we want the program to do. We want the program to create an instance of a shell on the server (remote machine) and send all of the text to the browser. Back in the browser, we want to capture any user events and send them back to the server shell.

The WebSSH server

This is the code that will power the terminal forwarding. Open a new file named server.js and start by importing all of the libraries:

var express = require('express');
var https = require('https');
var http = require('http');
var fs = require('fs');
var pty = require('pty.js');

Set up express:

// Setup the express app
var app = express();
// Static file serving
app.use("/", express.static("./"));

Next we are going to create the server.

// Creating an HTTP server
var server = http.createServer(app).listen(8080)

If you want to use HTTPS, which you probably will, you need to generate a key and certificate and import them as shown.

var options = {
  key: fs.readFileSync('keys/key.pem'),
  cert: fs.readFileSync('keys/cert.pem')
};

Then use the options object to create the actual server. Notice that this time we are using the https package.

// Create an HTTPS server
var server = https.createServer(options, app).listen(8080)

CAUTION: Even if you use HTTPS, do not use this example program on the Internet. You are not authenticating the client in any way and are thus providing a free open gate to your computer. Please make sure you only use this on your private network, protected by a firewall!!!

Now bind the socket.io instance to the server:

var io = require('socket.io')(server);

After this, we can set up the place where the magic happens.
// When a new socket connects
io.on('connection', function(socket){
  // Create terminal
  var term = pty.spawn('sh', [], {
    name: 'xterm-color',
    cols: 80,
    rows: 30,
    cwd: process.env.HOME,
    env: process.env
  });
  // Listen on the terminal for output and send it to the client
  term.on('data', function(data){
    socket.emit('output', data);
  });
  // Listen on the client and send any input to the terminal
  socket.on('input', function(data){
    term.write(data);
  });
  // When socket disconnects, destroy the terminal
  socket.on("disconnect", function(){
    term.destroy();
    console.log("bye");
  });
});

In this block, all we do is wait for new connections. When we get one, we spawn a new virtual terminal and start to pump the data from the terminal to the socket and vice versa. After the socket disconnects, we make sure to destroy the terminal.

If you have noticed, I am using the simple sh shell. I did this mainly because I don't have a fancy prompt on it. Because we are not adding any parsing logic, my bash prompt would show up like this:

]0;piman@mothership: ~ _[01;32m✓ [33mpiman_[0m ↣ _[1;34m[~]_[37m$[0m - Eww!

But you may use any shell you like. This is all that we need on the server side. Save the file and close it.

Client side

The client side is going to be just a very simple HTML file. Start with very simple HTML markup:

<!doctype html>
<html>
<head>
  <title>SSH Client</title>
  <script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/socket.io/1.3.5/socket.io.min.js"></script>
  <script type="text/javascript" src="//cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js"></script>
  <style>
    body {
      margin: 0;
      padding: 0;
    }
    .terminal {
      font-family: monospace;
      color: white;
      background: black;
    }
  </style>
</head>
<body>
  <h1>SSH</h1>
  <div class="terminal"></div>
  <script>
  </script>
</body>
</html>

I am downloading the client-side libraries jquery and socket.io from cdnjs. All of the client code will be written in the script tag below the terminal div. Surprisingly, the code is very simple:

// Connect to the socket.io server
var socket = io.connect('http://localhost:8080');

// Wait for data from the server
socket.on('output', function (data) {
  // Insert some line breaks where they belong
  data = data.replace("\n", "<br>");
  data = data.replace("\r", "<br>");
  // Append the data to our terminal
  $('.terminal').append(data);
});

// Listen for user input and pass it to the server
$(document).on("keypress", function(e){
  var char = String.fromCharCode(e.which);
  socket.emit("input", char);
});

Notice that we do not have to explicitly append the text the client types to the terminal, mainly because the server echoes it back anyway.

Now we are done! Run the server and open up the URL in your browser.

node server.js

You should see a small prompt and be able to start typing commands. You can now explore your machine from the browser! Remember that our web terminal does not support the Tab, Ctrl, Backspace or Esc characters. Implementing this is your homework.

Conclusion

I hope you found this tutorial useful. You can apply the knowledge in any real-time application where communication with the server is critical. All the code is available here. Please note that if you'd like to use a browser terminal, I strongly recommend term.js. It supports colors and styles and all the basic keys, including Tab, Backspace, etc. I use it in my PiDashboard project. It is much cleaner and less tedious than the example I have here. I can't wait to see what amazing apps you will invent based on this.
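If you do go the term.js route, the wiring stays almost identical to the jQuery version above. Here is a rough sketch of what the swap might look like (hypothetical usage based on term.js's documented API; check its README for the exact constructor options before relying on it):

// Hypothetical term.js client: replaces the .terminal div and keypress handler
var socket = io.connect('http://localhost:8080');

var term = new Terminal({ cols: 80, rows: 30 });
term.open(document.body); // Render the terminal into the page

// term.js emits 'data' for keystrokes, including Tab, Backspace and Esc
term.on('data', function (data) {
  socket.emit('input', data);
});

// Server output can be written directly; escape codes become colors and styles
socket.on('output', function (data) {
  term.write(data);
});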
About the Author Jakub Mandula is a student interested in anything to do with technology, computers, mathematics or science.

PrimeFaces Theme Development: Icons

Packt
26 Oct 2015
21 min read
In this article by Andy Bailey and Sudheer Jonna, the authors of the book PrimeFaces Theme Development, we'll cover icons, which add a lot of value to an application, based on the principle that a picture is worth a thousand words. Equally important is the fact that they can, when well designed, please the eye and serve as memory joggers for your user. We humans strongly associate symbols with actions. For example, a save button with a disc icon is more evocative. The association becomes even stronger when we use the same icon for the same action in menus and button bars. It is also possible to use icons in place of text labels.

It is important to keep in mind, when designing the user interface of your application, that the navigational and action elements (such as buttons) should not be so intrusive that the application becomes cluttered with the things that can be done. The user wants to be able to see the information that they want to see and use input dialogs to add more. What they don't want is to be distracted by links, lots of link and button text, and glaring visuals.

In this article, we will cover the following topics:

The standard theme icon set
Creating a set of icons of our own
Adding new icons to a theme
Using custom icons in a commandButton component
Using custom icons in a menu component
The FontAwesome icons as an alternative to the ThemeRoller icons

Introducing the standard theme icon set

jQuery UI provides a big set of standard icons that can be applied by just adding icon class names to HTML elements. The full list of icons is available at its official site, which can be viewed by visiting http://api.jqueryui.com/theming/icons/. They are also listed in published icon cheat sheets such as http://www.petefreitag.com/cheatsheets/jqueryui-icons/.

Icon class names follow this syntax when added to HTML elements:

.ui-icon-{icon type}-{icon sub description}-{direction}

For example, the following span element will display an icon of a triangle pointing to the south:

<span class="ui-icon ui-icon-triangle-1-s"></span>

Other icons such as ui-icon-triangle-1-n, ui-icon-triangle-1-e, and ui-icon-triangle-1-w represent icons of triangles pointing to the north, east, and west respectively. The direction element is optional, and it is available only for a few icons such as a triangle, an arrow, and so on. These theme icons are integrated in a number of jQuery UI-based widgets such as buttons, menus, dialogs, date picker components, and so on.

The aforementioned standard set of icons is available in ThemeRoller as one image sprite instead of a separate image for each icon. That is, ThemeRoller is designed to use image sprite technology for icons. The different image sprites that vary in color (based on the widget state) are available in the images folder of each downloaded theme.

An image sprite is a collection of images put into a single image. A web page with many images may take a long time to load and generates multiple server requests. For a high-performance application, this idea reduces the number of server requests and bandwidth. Also, it centralizes the image locations so that all the icons can be found in one place. The basic image sprite for the PrimeFaces Aristo theme is a single image containing every standard icon in the theme's state colors.
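Mechanically, a sprite works by giving every icon the same background image and shifting the visible window with background-position. The following minimal, standalone CSS illustrates the idea (it is not taken from the theme; the class names and sprite.png file are invented for the example, assuming 16 by 16 pixel icons laid out horizontally):

.icon {
  width: 16px;
  height: 16px;
  display: inline-block;
  background-image: url('sprite.png'); /* one image shared by all icons */
}
.icon-save  { background-position: 0 0; }      /* first 16x16 cell */
.icon-print { background-position: -16px 0; }  /* second cell, shifted left */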
The basic image sprite for the PrimeFaces Aristo theme looks like this: The image sprite's look and feel will vary based on the screen area of the widget and its components such as the header and content and widget states such as hover, active, highlight, and error styles. Let us now consider a JSF/PF-based example, where we can add a standard set of icons for UI components such as the commandButton and menu bar. First, we will create a new folder in web pages called chapter6. Then, we will create a new JSF template client called standardThemeIcons.xhtml and add a link to it in the chaptersTemplate.xhtml template file. When adding a submenu, use Chapter 6 for the label name and for the menu item, use Standard Icon Set as its value. In the title section, replace the text title with the respective topic of this article, which is Standard Icons: <ui:define name="title">   Standard Icons </ui:define> In the content section, replace the text content with the code for commandButton and menu components. Let's start with the commandButton components. The set of commandButton components uses the standard theme icon set with the help of the icon attribute, as follows: <h:panelGroup style="margin-left:830px">   <h3 style="margin-top: 0">Buttons</h3>   <p:commandButton value="Edit" icon="ui-icon-pencil"     type="button" />   <p:commandButton value="Bookmark" icon="ui-icon-bookmark"     type="button" />   <p:commandButton value="Next" icon="ui-icon-circle-arrow-e"     type="button" />   <p:commandButton value="Previous" icon="ui-icon-circle-arrow-w"     type="button" /> </h:panelGroup> The generated HTML for the first commandButton that is used to display the standard icon will be as follows: <button id="mainForm:j_idt15" name="mainForm:j_idt15" class="ui-   button ui-widget ui-state-default ui-corner-all ui-button-text-   icon-left" type="button" role="button" aria-disabled="false">   <span class="ui-button-icon-left ui-icon ui-c   ui-icon-     pencil"></span>   <span class="ui-button-text ui-c">Edit</span> </button> The PrimeFaces commandButton renderer appends the icon position CSS class based on the icon position (left or right) to the HTML button element, apart from the icon CSS class in one child span element and text CSS class in another child span element. This way, it displays the icon on commandButton based on the icon position property. By default, the position of the icon is left. Now, we will move on to the menu components. A menu component uses the standard theme icon set with the help of the menu item icon attribute. Add the following code snippets of the menu component to your page: <h3>Menu</h3> <p:menu style="margin-left:500px">   <p:submenu label="File">     <p:menuitem value="New" url="#" icon="ui-icon-plus" />     <p:menuitem value="Delete" url="#" icon="ui-icon-close" />     <p:menuitem value="Refresh" url="#" icon="ui-icon-refresh" />     <p:menuitem value="Print" url="#" icon="ui-icon-print" />   </p:submenu>   <p:submenu label="Navigations">     <p:menuitem value="Home" url="http://www.primefaces.org"       icon="ui-icon home" />     <p:menuitem value="Admin" url="#" icon="ui-icon-person" />     <p:menuitem value="Contact Us" url="#" icon="ui-icon-       contact" />   </p:submenu> </p:menu> You may have observed from the preceding code snippets that each icon from ThemeRoller starts with ui-icon for consistency. 
Now, run the application and navigate your way to the newly created page; you should see the standard ThemeRoller icons applied to the buttons and menu items. For further information, you can use the PrimeFaces showcase (http://www.primefaces.org/showcase/), where you can see the default icons used for components, how standard theme icons are applied with the help of the icon attribute, and so on.

Creating a set of icons of our own

In this section, we are going to discuss how to create our own icons for a PrimeFaces web application. Instead of separate images, you should use image sprites, considering the impact on application performance. Most of the time, we might be interested in adding custom icons to UI components apart from the regular standard icon set. Generally, in order to create our own custom icons, we need to provide CSS classes with the background-image property, which refers to an image in the theme's images folder. For example, the following commandButton components will use a custom icon:

<p:commandButton value="With Icon" icon="disk"/>
<p:commandButton icon="disk"/>

The disk icon is created by adding the .disk CSS class with the background-image property. In order to display the image, you need to provide the correct relative path to the image from the web application, as follows:

.disk {
  background-image: url('disk.png') !important;
}

However, as discussed earlier, we are going to use image sprite technology instead of a separate image for each icon to optimize web performance. Before creating an image sprite, you need to select all the required images and convert those images (PNG, JPG, and so on) to the icon format with a size almost equal to that of the ThemeRoller icons. In this article, we used the Paint.NET tool to convert images to the ICO format with a size of 16 by 16 pixels.

Paint.NET is a free raster graphics editor for Microsoft Windows, developed on the .NET framework. It is a good replacement for the Microsoft Paint program, with support for layers, blending, transparency, and plugins. If the ICO format is not available, then you have to add the file type plugin to the Paint.NET installation directory. The conversion is just a two-step process:

The image (PNG, JPG, and so on) needs to be saved with the Icons (*.ico) option from the Save as type dropdown.
Then, select 16 by 16 dimensions with the supported bit system (8-bit, 32-bit, and so on).

All the PrimeFaces theme icons are designed to have the same dimensions. There are many online and offline tools available that can be used to create an image sprite. I used Instant Sprite, an open source CSS sprite generator tool, to create the image sprite in this article. You can have a look at the official site for this CSS generator tool by visiting http://instantsprite.com/.

Let's go through the following step-by-step process to create an image sprite using the Instant Sprite tool:

First, either select multiple icons from your computer, or drag and drop icons onto the tool page.
In the Thumbnails section, just drag and drop the images to change their order in the sprite.
Change the offset (in pixels), direction (horizontal, vertical, or diagonal), and type (.png or .gif) values in the Options section.
In the Sprite section, right-click on the image to save it on your computer. You can also open the image in a new window or save it as a base64 string.
In the Usage section, you will find the generated sprite CSS classes and HTML.
Once the image is created, you will be able to see it in the preview section before finalizing it. Now, let's start creating the image sprite for the button bar and menu components, which are going to be used in later sections. First, download or copy the required individual icons to your computer. Then, select all those files and drag and drop them in a particular order, as follows: We can also configure a few options, such as an offset of 10 px for icon padding, a horizontal direction to display the icons in a row, and finally the PNG image type: The image sprite is generated in the sprite section, as follows: Right-click on the image to save it on your computer. Now, we have created a custom image sprite from the set of icons. Once the image sprite has been created, change the sprite name to ui-custom-icons and copy the generated CSS styles for later use. In the generated HTML, note that each div element is given the ui-icon class to display the icon with a width of 16 px and a height of 16 px.

Adding the new icons to your theme

In order to apply the custom icons to your web page, we first need to copy the generated image sprite file and then add the CSS classes generated in the previous section. The generated sprite file has to be added to the images folder of the primefaces-moodyblue2 custom theme. Let's name the file ui-custom-icons: After this, copy the generated CSS rules from the previous section. The first CSS class (ui-icon) contains the image sprite to display custom icons, using the background URL property, along with dimensions, such as the width and height properties, for each icon. But since we are going to add the image reference in the widget state style classes, you need to remove the background-image URL property from the ui-icon class. Hence, the ui-icon class contains only the width and height dimensions:

.ui-icon {
  width: 16px;
  height: 16px;
}

Later, modify the icon-specific CSS class names to the following format, where each icon has its own icon name:

.ui-icon-{icon name}

The following CSS classes refer to individual icons with the help of the background-position property. After modification, the positioning CSS classes will look like this:

.ui-icon-edit { background-position: 0 0; }
.ui-icon-bookmark { background-position: -26px 0; }
.ui-icon-next { background-position: -52px 0; }
.ui-icon-previous { background-position: -78px 0; }
.ui-icon-new { background-position: -104px 0; }
.ui-icon-delete { background-position: -130px 0; }
.ui-icon-refresh { background-position: -156px 0; }
.ui-icon-print { background-position: -182px 0; }
.ui-icon-home { background-position: -208px 0; }
.ui-icon-admin { background-position: -234px 0; }
.ui-icon-contactus { background-position: -260px 0; }

Apart from the preceding CSS classes, we have to add the component state CSS classes. Widget states such as hover, focus, highlight, active, and error need to refer to different image sprites in order to display the component state behavior for user interactions. For demonstration purposes, we created only one image sprite and used it for all the CSS classes. But in real-world development, the image will vary based on the widget state.
The following widget state classes refer to the image sprite for the different widget states:

.ui-icon, .ui-widget-content .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}
.ui-widget-header .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}
.ui-state-default .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}
.ui-state-hover .ui-icon, .ui-state-focus .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}
.ui-state-active .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}
.ui-state-highlight .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}
.ui-state-error .ui-icon, .ui-state-error-text .ui-icon {
  background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");
}

In the JSF ecosystem, image references in the theme.css file must be converted to an expression that JSF resource loading can understand. So, in the preceding CSS classes, all the image URLs initially appear in the following form:

background-image: url("images/ui-custom-icons.png");

The preceding expression, when modified, looks like this:

background-image: url("#{resource['primefaces-moodyblue2:images/ui-custom-icons.png']}");

We need to make sure that the default state classes are commented out in the theme.css file (of the moodyblue2 theme) to display the custom icons. By default, the custom theme classes (the state classes and icon classes available under custom states and images and custom icons positioning) are commented out in the source code of the GitHub project. So, we need to uncomment these sections and comment out the default theme classes (the state classes and icon classes available under states and images and positioning). That is, either the default or the custom style classes, but not both, should be active in the theme.css file. Alternatively, you can see all these changes in the moodyblue3 theme as well; the custom icons appear on the Custom Icons screen by just changing the current theme to moodyblue3.

Using custom icons in the commandButton components

After applying the new icons to the theme, you are ready to use them on the PrimeFaces components. In this section, we will add custom icons to command buttons. Let's add a link named Custom Icons to the chaptersTemplate.xhtml file. The title of this page is also Custom Icons. The following code snippets show how custom icons are added to command buttons using the icon attribute:

<h3 style="margin-top: 0">Buttons</h3>
<p:commandButton value="Edit" icon="ui-icon-edit" type="button" />
<p:commandButton value="Bookmark" icon="ui-icon-bookmark" type="button" />
<p:commandButton value="Next" icon="ui-icon-next" type="button" />
<p:commandButton value="Previous" icon="ui-icon-previous" type="button" />

Now, run the application and navigate to the newly created page. You should see the custom icons applied to the command buttons, as shown in the following screenshot: The commandButton component also supports the iconPos attribute if you wish to display the icon on either the left or the right side. The default value is left.

Using custom icons in a menu component

In this section, we are going to add custom icons to a menu component. The menuitem tag supports the icon attribute to attach a custom icon.
The following code snippets show how custom icons are added to the menu component:

<h3>Menu</h3>
<p:menu style="margin-left:500px">
  <p:submenu label="File">
    <p:menuitem value="New" url="#" icon="ui-icon-new" />
    <p:menuitem value="Delete" url="#" icon="ui-icon-delete" />
    <p:menuitem value="Refresh" url="#" icon="ui-icon-refresh" />
    <p:menuitem value="Print" url="#" icon="ui-icon-print" />
  </p:submenu>
  <p:submenu label="Navigations">
    <p:menuitem value="Home" url="http://www.primefaces.org" icon="ui-icon-home" />
    <p:menuitem value="Admin" url="#" icon="ui-icon-admin" />
    <p:menuitem value="Contact Us" url="#" icon="ui-icon-contactus" />
  </p:submenu>
</p:menu>

Now, run the application and navigate to the newly created page. You will see the custom icons applied to the menu component, as shown in the following screenshot: Thus, you can apply custom icons to any PrimeFaces component that supports the icon attribute.

The FontAwesome icons as an alternative to the ThemeRoller icons

In addition to the default ThemeRoller icon set, the PrimeFaces team provides and supports an alternative set of icons, the FontAwesome iconic font and CSS framework. Originally, it was designed for the Twitter Bootstrap frontend framework; currently, it works well with all frameworks. The official site for the FontAwesome toolkit is http://fortawesome.github.io/Font-Awesome/. The features that make FontAwesome a powerful iconic font and CSS toolkit are as follows:

One font, 519 icons: In a single collection, FontAwesome is a pictographic language of web-related actions
No JavaScript required: It has minimal compatibility issues because FontAwesome doesn't require JavaScript
Infinite scalability: SVG (short for Scalable Vector Graphics) icons look awesome at any size
Free to use: It is completely free and can be used commercially
CSS control: It's easy to style the icon color, size, shadow, and so on
Perfect on retina displays: It looks gorgeous on high-resolution displays
It can be easily integrated with all frameworks
Desktop-friendly
Compatible with screen readers

FontAwesome is an extension to Bootstrap, providing various icons based on scalable vector graphics. This FontAwesome feature is available from the PrimeFaces 5.2 release onwards. These icons can be customized in terms of size, color, drop shadow, and so on with the power of CSS. The full list of icons is available at both the official FontAwesome site (http://fortawesome.github.io/Font-Awesome/icons/) and the PrimeFaces showcase (http://www.primefaces.org/showcase/ui/misc/fa.xhtml). In order to enable this feature, we have to set the primefaces.FONT_AWESOME context parameter in web.xml to true, as follows:

<context-param>
  <param-name>primefaces.FONT_AWESOME</param-name>
  <param-value>true</param-value>
</context-param>

The usage is as simple as using the standard ThemeRoller icons. PrimeFaces components such as buttons or menu items provide an icon attribute, which accepts an icon from the FontAwesome icon set. Remember that the icons should be prefixed by fa in a component. The general syntax of the FontAwesome icons is as follows:

fa fa-[name]-[shape]-[o]-[direction]

Here, [name] is the name of the icon, [shape] is the optional shape of the icon's background (either circle or square), [o] is the optional outlined version of the icon, and [direction] is the direction in which certain icons point.
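To make the naming pattern concrete, here are a few illustrative combinations applied to menu items. These particular class names exist in FontAwesome 4.x, but double-check them against the FontAwesome version bundled with your PrimeFaces release:

<p:menuitem value="Status" url="#" icon="fa fa-circle" />             <!-- name only -->
<p:menuitem value="Status" url="#" icon="fa fa-circle-o" />           <!-- outlined version -->
<p:menuitem value="Point" url="#" icon="fa fa-hand-o-right" />        <!-- outline plus direction -->
<p:menuitem value="Back" url="#" icon="fa fa-arrow-circle-o-left" />  <!-- name, circle shape, outline, direction -->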
Now, we first create a new navigation link named FontAwesome under chapter6 inside the chaptersTemplate.xhtml template file. Then, we create a JSF template client called fontawesome.xhtml, which demonstrates the FontAwesome feature with the help of buttons and a menu. This page has been added as a menu item to the top-level menu bar. In the content section, replace the text content with the following code snippets. The following set of buttons displays the FontAwesome icons with the help of the icon attribute. You may have observed that the fa-fw style class is used to render icons at a fixed width. This is useful when variable widths throw off alignment:

<h3 style="margin-top: 0">Buttons</h3>
<p:commandButton value="Edit" icon="fa fa-fw fa-edit" type="button" />
<p:commandButton value="Bookmark" icon="fa fa-fw fa-bookmark" type="button" />
<p:commandButton value="Next" icon="fa fa-fw fa-arrow-right" type="button" />
<p:commandButton value="Previous" icon="fa fa-fw fa-arrow-left" type="button" />

After this, apply the FontAwesome icons to navigation lists, such as the menu component, to display the icons just to the left of the component's text content, as follows:

<h3>Menu</h3>
<p:menu style="margin-left:500px">
  <p:submenu label="File">
    <p:menuitem value="New" url="#" icon="fa fa-plus" />
    <p:menuitem value="Delete" url="#" icon="fa fa-close" />
    <p:menuitem value="Refresh" url="#" icon="fa fa-refresh" />
    <p:menuitem value="Print" url="#" icon="fa fa-print" />
  </p:submenu>
  <p:submenu label="Navigations">
    <p:menuitem value="Home" url="http://www.primefaces.org" icon="fa fa-home" />
    <p:menuitem value="Admin" url="#" icon="fa fa-user" />
    <p:menuitem value="Contact Us" url="#" icon="fa fa-picture-o" />
  </p:submenu>
</p:menu>

Now, run the application and navigate to the newly created page. You should see the FontAwesome icons applied to buttons and menu items, as shown in the following screenshot: Note that the 40 shiny new icons of FontAwesome are available only in the PrimeFaces Elite 5.2.2 release and the community PrimeFaces 5.3 release, because PrimeFaces was upgraded to FontAwesome 4.3 in its 5.2.2 release.

Summary

In this article, we explored the standard theme icon set and how to use it on various PrimeFaces components. We also learned how to create our own set of icons using the image sprite technique. We saw how to create image sprites using open source online tools and add them to a PrimeFaces theme. Finally, we had a look at the FontAwesome CSS framework, which was introduced as an alternative to the standard ThemeRoller icons. To demonstrate best practice, we learned how to use icons on commandButton and menu components. Now that you've come to the end of this article, you should be comfortable using web icons for PrimeFaces components in different ways.

Resources for Article:

Further resources on this subject:
Introducing Primefaces [article]
Setting Up Primefaces [article]
Components Of Primefaces Extensions [article]


Guidelines for Creating Responsive Forms

Packt
23 Oct 2015
12 min read
In this article by Chelsea Myers, the author of the book Responsive Web Design Patterns, we cover the guidelines for creating responsive forms. Online forms are already modular. Because of this, they aren't hard to scale down for smaller screens. The little boxes and labels can naturally shift around between different screen sizes since they are all individual elements. However, form elements are naturally tiny and very close together. Small elements that you are supposed to click and fill in, whether on a desktop or mobile device, pose obstacles for the user. If you developed a form for your website, you more than likely want people to fill it out and submit it. Maybe the form is a survey, a sign-up for a newsletter, or a contact form. Regardless of the type of form, online forms have a purpose: get people to fill them out! Getting people to do this can be difficult at any screen size. But when users are accessing your site through a tiny screen, they face even more challenges. As designers and developers, it is our job to make this process as easy and accessible as possible. Here are some guidelines to follow when creating a responsive form:

Give all inputs breathing room.
Use proper values for the input's type attribute.
Increase the hit states for all your inputs.
Stack radio inputs and checkboxes on small screens.

Together, we will go over each of these guidelines and see how to apply them. (For more resources related to this topic, see here.)

The responsive form pattern

Before we get started, let's look at the markup for the form we will be using. We want to include a sample of the different input options we can have. Our form will be very basic and requires simple information from the users such as their name, e-mail, age, favorite color, and favorite animal.

HTML:

<form>
<!-- text input -->
<label class="form-title" for="name">Name:</label>
<input type="text" name="name" id="name" />
<!-- email input -->
<label class="form-title" for="email">Email:</label>
<input type="email" name="email" id="email" />
<!-- radio boxes -->
<label class="form-title">Favorite Color</label>
<input type="radio" name="radio" id="red" value="Red" /><label>Red</label>
<input type="radio" name="radio" id="blue" value="Blue" /><label>Blue</label>
<input type="radio" name="radio" id="green" value="Green" /><label>Green</label>
<!-- checkboxes -->
<label class="form-title" for="checkbox">Favorite Animal</label>
<input type="checkbox" name="checkbox" id="dog" value="Dog" /><label>Dog</label>
<input type="checkbox" name="checkbox" id="cat" value="Cat" /><label>Cat</label>
<input type="checkbox" name="checkbox" id="other" value="Other" /><label>Other</label>
<!-- drop down selection -->
<label class="form-title" for="select">Age:</label>
<select name="select" id="select">
  <option value="age-group-1">1-17</option>
  <option value="age-group-2">18-50</option>
  <option value="age-group-3">&gt;50</option>
</select>
<!-- textarea -->
<label class="form-title" for="textarea">Tell us more:</label>
<textarea cols="50" rows="8" name="textarea" id="textarea"></textarea>
<!-- submit button -->
<input type="submit" value="Submit" />
</form>

With no styles applied, our form looks like the following screenshot: Several of the form elements sit right next to each other, making the form hard to read and almost impossible to fill out. Everything seems tiny and squished together. We can do better than this. We want our forms to be legible and easy to fill out. Let's go through the guidelines and make this eyesore of a form more approachable.
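One preliminary tweak before we start, which is not part of the original pattern but is worth adding if your base styles don't already include it: border-box sizing keeps the padding and borders we will add in guideline #3 from pushing inputs wider than the widths we set.

/* optional base rule, assuming your stylesheet doesn't set it already */
*,
*:before,
*:after {
  box-sizing: border-box;
}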
#1 Give all inputs breathing room

In the preceding screenshot, we can't see where one form element ends and the next begins. They are displaying inline, and therefore sit on the same line. We don't want this. We want to give all our form elements their own line to live on, and not share any space to the right of each element. To do this, we add display: block to all our inputs, selects, and textareas. We also apply display: block to our form labels using the class .form-title. We will go more into why the titles have their own class in the fourth guideline.

CSS:

input[type="text"],
input[type="email"],
textarea,
select {
  display: block;
  margin-bottom: 10px;
}
.form-title {
  display: block;
  font-weight: bold;
}

As mentioned, we are applying display: block to the text and e-mail inputs. We are also applying it to the textarea and select elements. Just having our form elements display on their own line is not enough. We also give everything a margin-bottom of 10px to create some breathing room between the elements. Next, we apply display: block to all the form titles and make them bold to add more visual separation.

#2 Use proper values for the input's type attribute

Technically, if you are collecting a password from a user, you are just asking for text. E-mail, search queries, and even phone numbers are just text too. So, why would we use anything other than <input type="text" … />? You may not notice the difference between these form elements on your desktop computer, but the change is the biggest on mobile devices. To show you, we have two screenshots of what the keyboard looks like on an iPhone while filling out the text input and the e-mail input: In the left image, we are focused on the text input for entering your name. The keyboard here is normal and nothing special. In the right image, we are focused on the e-mail input and can see the difference in the keyboard. As the red arrow points out, the @ key and the . key are now present when typing in the e-mail input. We need both of those to enter a valid e-mail address, so the device brings up a special keyboard with those characters. We are not doing anything special other than making sure the input has type="email" for this to happen. This works because type="email" is a new HTML5 attribute value. HTML5 will also validate that the text entered is in a correct e-mail format (which used to be done with JavaScript). Here are some other HTML5 type attribute values from the W3C's third HTML 5.1 Editor's Draft (http://www.w3.org/html/wg/drafts/html/master/semantics.html#attr-input-type-keywords):

color
date
datetime
email
month
number
range
search
tel
time
url
week

#3 Increase the hit states for all your inputs

It would be really frustrating for the user if they could not easily select an option or click a text input to enter information. Making users struggle isn't going to increase your chance of getting them to actually complete the form. Form elements are naturally very small and not large enough for our fingers to easily tap. Because of this, we should increase the size of our form inputs. Making form inputs at least 44 x 44 px large is a current standard in our industry. This is not a random number, either. Apple suggests this size as the minimum in their iOS Human Interface Guidelines, as seen in the following quote:

"Make it easy for people to interact with content and controls by giving each interactive element ample spacing. Give tappable controls a hit target of about 44 x 44 points."
As you can see, this does not apply only to form elements. Apple's suggestion covers all clickable items. Now, this number may change along with our devices' resolutions in the future. Maybe it will go up or down depending on the size and precision of our future technology. For now though, it is a good place to start. We need to make sure that our inputs are big enough to tap with a finger. In the future, you can always test your form inputs on a touchscreen to make sure they are large enough. For our form, we can apply this minimum size by increasing the height and/or padding of our inputs.

CSS:

input[type="text"],
input[type="email"],
textarea,
select {
  display: block;
  margin-bottom: 10px;
  font-size: 1em;
  padding: 5px;
  min-height: 2.75em;
  width: 100%;
  max-width: 300px;
}

The first two styles are from the first guideline. After this, we are increasing the font-size attribute of the inputs, giving the inputs more padding, and setting a min-height attribute for each input. Finally, we are making the inputs wider by setting the width to 100%, but also applying a max-width attribute so the inputs do not get unnecessarily wide. We want to increase the size of our submit button as well. We definitely don't want our users to miss clicking this:

input[type="submit"] {
  min-height: 3em;
  padding: 0 2.75em;
  font-size: 1em;
  border: none;
  background: mediumseagreen;
  color: white;
}

Here, we are also giving the submit button a min-height attribute, some padding, and an increased font-size attribute. We are stripping the browser's native border style from the button with border: none. We also want to make this button very obvious, so we apply a mediumseagreen color as the background and a white text color. If you view the form so far in the browser, or look at the image, you will see all the form elements are bigger now, except for the radio inputs and checkboxes. Those elements are still squished together. To make our radio and checkboxes bigger in our example, we will make the option text bigger. Doesn't it make sense that if you want to select red as your favorite color, you should be able to click on the word "red" too, and not just the box next to the word? In the HTML for the radio inputs and the checkboxes, we have markup that looks like this:

<input type="radio" name="radio" id="red" value="Red" /><label>Red</label>
<input type="checkbox" name="checkbox" id="dog" value="Dog" /><label>Dog</label>

To make the option text clickable, all we have to do is set the for attribute on the label to match the id attribute of the input. We will wrap the radio and checkbox inputs inside their labels so that we can easily stack them for guideline #4. We will also give the labels a class of choice to help style them:

<label class="choice" for="red"><input type="radio" name="radio" id="red" value="Red" />Red</label>
<label class="choice" for="dog"><input type="checkbox" name="checkbox" id="dog" value="Dog" />Dog</label>

Now, the option text and the actual input are both clickable. After doing this, we can apply some more styles to make selecting a radio or checkbox option even easier:

label input {
  margin-left: 10px;
}
.choice {
  margin-right: 15px;
  padding: 5px 0;
}
.choice + .form-title {
  margin-top: 10px;
}

With label input, we are giving the input and the label text a little more space between each other. Then, using the .choice class, we are spreading out each option with margin-right: 15px and making the hit states bigger with padding: 5px 0.
Finally, with .choice + .form-title, we are giving any .form-title element that comes after an element with a class of .choice more breathing room. This goes back to responsive form guideline #1. There is only one last thing we need to do. On small screens, we want to stack the radio and checkbox inputs. On large screens, we want to keep them inline. To do this, we will add display: block to the .choice class. We will then use a media query to change it back:

@media screen and (min-width: 600px) {
  .choice {
    display: inline;
  }
}

With each input on its own line for smaller screens, they are easier to select. But we don't need to take up all that vertical space on wider screens. With this, our form is done. You can see the finished form in the following screenshot: Much better, wouldn't you say? No longer are all the inputs tiny and mushed together. The form is easy to read, tap, and start entering information into. Filling in forms is not considered a fun thing to do, especially on a tiny screen with big thumbs. But there are ways in which we can make the experience easier and a little more visually pleasing.

Summary

A classic user experience challenge is to design a form that encourages completion. When it comes to facts, figures, and forms, it can be hard to retain the user's attention. This does not mean it is impossible. Having a responsive website does make styling tables and forms a little more complex. But what is the alternative? Nonresponsive websites make you pinch and zoom endlessly to fill out a form or view a table. Having a responsive website gives you the opportunity to make this task easier. It takes a little more code, but in the end, your users will greatly benefit from it. With this article, we have wrapped up the guidelines for creating responsive forms.

Resources for Article:

Further resources on this subject:
Securing and Authenticating Web API [article]
Understanding CRM Extendibility Architecture [article]
CSS3 – Selectors and nth Rules [article]

Securing and Authenticating Web API

Packt
21 Oct 2015
9 min read
In this article by Rajesh Gunasundaram, author of ASP.NET Web API Security Essentials, we will cover how to secure a Web API using forms authentication and Windows authentication. You will also get to learn the advantages and disadvantages of using forms and Windows authentication in the Web API. In this article, we will cover the following topics:

The working of forms authentication
Implementing forms authentication in the Web API
Discussing the advantages and disadvantages of using the integrated Windows authentication mechanism
Configuring Windows authentication
Enabling Windows authentication in Katana
Discussing Hawk authentication

(For more resources related to this topic, see here.)

The working of forms authentication

In forms authentication, the user credentials are submitted to the server using an HTML form. This can be used in the ASP.NET Web API only if it is consumed from a web application. Forms authentication is built into ASP.NET and uses the ASP.NET membership provider to manage user accounts. Forms authentication requires a browser client to pass the user credentials to the server. It sends the user credentials in the request and uses HTTP cookies for the authentication. Let's list the process of forms authentication step by step:

The browser tries to access a restricted action that requires an authenticated request.
If the browser sends an unauthenticated request, the server responds with an HTTP status 302 Found and triggers a URL redirection to the login page.
To send an authenticated request, the user enters the username and password and submits the form.
If the credentials are valid, the server responds with an HTTP 302 status code that redirects the browser to the originally requested URI, with the authentication cookie in the response.
Any request from the browser will now include the authentication cookie, and the server will grant access to any restricted resource.

The following image illustrates the workflow of forms authentication:

Fig 1 – Illustrates the workflow of forms authentication

Implementing forms authentication in the Web API

To send the credentials to the server, we need an HTML form to submit them. Let's use the HTML form or view of an ASP.NET MVC application. The steps to implement forms authentication in an ASP.NET MVC application are as follows:

Create New Project from the Start page in Visual Studio.
Select the Visual C# installed template named Web.
Choose ASP.NET Web Application from the middle panel.
Name the project Chapter06.FormsAuthentication and click OK.

Fig 2 – We have named the ASP.NET Web Application as Chapter06.FormsAuthentication

Select the MVC template in the New ASP.NET Project dialog.
Tick Web API under Add folders and core references and press OK, leaving Authentication set to Individual User Accounts.
Fig 3 – Select MVC template and check Web API in add folders and core references

In the Models folder, add a class named Contact.cs with the following code:

namespace Chapter06.FormsAuthentication.Models
{
    public class Contact
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
        public string Mobile { get; set; }
    }
}

Add a Web API controller named ContactsController with the following code snippet:

namespace Chapter06.FormsAuthentication.Api
{
    public class ContactsController : ApiController
    {
        IEnumerable<Contact> contacts = new List<Contact>
        {
            new Contact { Id = 1, Name = "Steve", Email = "steve@gmail.com", Mobile = "+1(234)35434" },
            new Contact { Id = 2, Name = "Matt", Email = "matt@gmail.com", Mobile = "+1(234)5654" },
            new Contact { Id = 3, Name = "Mark", Email = "mark@gmail.com", Mobile = "+1(234)56789" }
        };

        [Authorize]
        // GET: api/Contacts
        public IEnumerable<Contact> Get()
        {
            return contacts;
        }
    }
}

As you can see in the preceding code, we decorated the Get() action in ContactsController with the [Authorize] attribute. So, this Web API action can only be accessed by an authenticated request. An unauthenticated request to this action will make the browser redirect to the login page and enable the user to either register or log in. Once the user is logged in, any request that tries to access this action will be allowed, as it is authenticated. This is because the browser automatically sends the session cookie along with the request, and forms authentication uses this cookie to authenticate the request. It is very important to secure the website using SSL, as forms authentication sends unencrypted credentials.

Discussing the advantages and disadvantages of using the integrated Windows authentication mechanism

First, let's see the advantages of Windows authentication. Windows authentication is built into Internet Information Services (IIS). It doesn't send the user credentials along with the request. This authentication mechanism is best suited for intranet applications and doesn't need the user to enter their credentials. However, with all these advantages, there are a few disadvantages in the Windows authentication mechanism. It requires Kerberos, which works based on tickets, or NTLM; these are Microsoft security protocols that must be supported by the client. The client's PC must be in an Active Directory domain. Windows authentication is not suitable for internet applications, as the client may not necessarily be on the same domain.

Configuring Windows authentication

Let's implement Windows authentication in an ASP.NET MVC application, as follows:

Create New Project from the Start page in Visual Studio.
Select the Visual C# installed template named Web.
Choose ASP.NET Web Application from the middle panel.
Name the project Chapter06.WindowsAuthentication and click OK.

Fig 4 – We have named the ASP.NET Web Application as Chapter06.WindowsAuthentication

Change the Authentication mode to Windows Authentication.

Fig 5 – Select Windows Authentication in Change Authentication window

Select the MVC template in the New ASP.NET Project dialog.
Tick Web API under Add folders and core references and click OK.
Fig 6 – Select MVC template and check Web API in add folders and core references

Under the Models folder, add a class named Contact.cs with the following code (the namespace here matches the Windows authentication project):

namespace Chapter06.WindowsAuthentication.Models
{
    public class Contact
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
        public string Mobile { get; set; }
    }
}

Add a Web API controller named ContactsController with the following code:

namespace Chapter06.WindowsAuthentication.Api
{
    public class ContactsController : ApiController
    {
        IEnumerable<Contact> contacts = new List<Contact>
        {
            new Contact { Id = 1, Name = "Steve", Email = "steve@gmail.com", Mobile = "+1(234)35434" },
            new Contact { Id = 2, Name = "Matt", Email = "matt@gmail.com", Mobile = "+1(234)5654" },
            new Contact { Id = 3, Name = "Mark", Email = "mark@gmail.com", Mobile = "+1(234)56789" }
        };

        [Authorize]
        // GET: api/Contacts
        public IEnumerable<Contact> Get()
        {
            return contacts;
        }
    }
}

The Get() action in ContactsController is decorated with the [Authorize] attribute. However, in Windows authentication, any request is considered an authenticated request if the client resides in the same domain. So, no explicit login process is required to send an authenticated request that calls the Get() action. Note that the Windows authentication is configured in the Web.config file:

<system.web>
  <authentication mode="Windows" />
</system.web>

Enabling Windows authentication in Katana

The following steps create a console application and enable Windows authentication in Katana:

Create New Project from the Start page in Visual Studio.
Select the Visual C# installed template named Windows Desktop.
Select Console Application from the middle panel.
Name the project Chapter06.WindowsAuthenticationKatana and click OK:

Fig 7 – We have named the Console Application as Chapter06.WindowsAuthenticationKatana

Install the NuGet package named Microsoft.Owin.SelfHost from the NuGet Package Manager:

Fig 8 – Install NuGet Package named Microsoft.Owin.SelfHost

Add a Startup class with the following code snippet (the using directives for System.Net and Owin are required):

using System.Net;
using Owin;

namespace Chapter06.WindowsAuthenticationKatana
{
    class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var listener = (HttpListener)app.Properties["System.Net.HttpListener"];
            listener.AuthenticationSchemes = AuthenticationSchemes.IntegratedWindowsAuthentication;
            app.Run(context =>
            {
                context.Response.ContentType = "text/plain";
                return context.Response.WriteAsync("Hello Packt Readers!");
            });
        }
    }
}

Add the following code to the Main function in Program.cs (WebApp comes from the Microsoft.Owin.Hosting namespace):

using (WebApp.Start<Startup>("http://localhost:8001"))
{
    Console.WriteLine("Press any Key to quit Web App.");
    Console.ReadKey();
}

Now run the application and open http://localhost:8001/ in the browser:

Fig 9 – Open the Web App in a browser

If you capture the request using Fiddler, you will notice an Authorization Negotiate entry in the header of the request. Try calling http://localhost:8001/ in Fiddler and you will get a 401 Unauthorized response with WWW-Authenticate headers, indicating that the server supports the Negotiate protocol, which uses either Kerberos or NTLM, as follows:

HTTP/1.1 401 Unauthorized
Cache-Control: private
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/8.0
WWW-Authenticate: Negotiate
WWW-Authenticate: NTLM
X-Powered-By: ASP.NET
Date: Tue, 01 Sep 2015 19:35:51 IST
Content-Length: 6062
Proxy-Support: Session-Based-Authentication

Discussing Hawk authentication

Hawk authentication is a message authentication code-based HTTP authentication scheme that facilitates the
partial cryptographic verification of HTTP messages. Hawk authentication requires a symmetric key to be shared between the client and server. Instead of sending the username and password to the server to authenticate the request, Hawk authentication uses these credentials to generate a message authentication code, which is passed to the server in the request for authentication. Hawk authentication is mainly implemented in scenarios where you need to pass the username and password over an unsecured layer and no SSL is implemented on the server. In such cases, Hawk authentication protects the username and password and passes a message authentication code instead. For example, if you are building a small product where you control both the server and the client, and implementing SSL is too expensive for such a small project, then Hawk is a good option to secure the communication between your server and client.

Summary

Voila! We just secured our Web API using forms- and Windows-based authentication. In this article, you learned how forms authentication works and how it is implemented in the Web API. You also learned about configuring Windows authentication and got to know the advantages and disadvantages of using Windows authentication. Then you learned about implementing the Windows authentication mechanism in Katana. Finally, we had an introduction to Hawk authentication and the scenarios in which to use it.

Resources for Article:

Further resources on this subject:
Working with ASP.NET Web API [article]
Creating an Application using ASP.NET MVC, AngularJS and ServiceStack [article]
Enhancements to ASP.NET [article]


Gamification with Moodle LMS

Packt
19 Oct 2015
11 min read
In this article by Natalie Denmeade, author of the book Gamification with Moodle, we describe how teachers can use gamification design in their course development within the Moodle Learning Management System (LMS) to increase the motivation and engagement of learners. (For more resources related to this topic, see here.) Gamification is a design process that re-frames goals to be more appealing and achievable by using game design principles. The goal of this process is to keep learners engaged and motivated in a way that is not always present in traditional courses. When implemented in elegant solutions, learners may be unaware of the subtle game elements being used. A gamification strategy can be considered successful if learners are more engaged and feel challenged and confident to keep progressing, which has implications for the way teachers consider their course evaluation processes. It is important to note that gamification in education is more about how the person feels at certain points in their learning journey than about the end product, which may or may not look like a game.

Gamification and Moodle

After following the tutorials in this book, teachers will gain the basic skills to start applying gamification design techniques in their Moodle courses. They can take learners on a journey of risk, choice, surprise, delight, and transformation. Taking an activity and reframing it to be more appealing and achievable sounds like the job description of any teacher or coach! Therefore, many teachers are already doing this! Understanding games and play better can help teachers be more effective in using a wider range of game elements to aid retention and completions in their courses. In this book you will find hints and tips on how to apply proven strategies to online course development, including the research into a growth mindset from Carol Dweck in her book Mindset. You will see how the game elements used in Foursquare (badges), Twitter (likes), and LinkedIn (progress bar) can also be applied to Moodle course design. In addition, you will use the core features available in Moodle, which were designed to encourage learner participation as they collaborate, tag, share, vote, network, and generate learning content for each other. Finally, you will explore new features and plugins which offer dozens of ways for teachers to use game elements in Moodle, such as badges, labels, rubrics, group assignments, custom grading scales, forums, and conditional activities. A benefit of using Moodle as a gamification LMS is that it was developed on social constructivist principles. As these are learner-centric principles, it is easy to use common Moodle features to apply gamification through the implementation of game components, mechanics, and dynamics. These have been described by Kevin Werbach (in the Coursera MOOC on Gamification) as:

Game dynamics are the grammar (the hidden elements): constraints, emotions, narrative, progression, relationships
Game mechanics are the verbs: the action is driven forward by challenges, chance, competition/cooperation, feedback, resource acquisition, rewards, transactions, turns, win states
Game components are the nouns: achievements, avatars, badges, boss fights, collections, combat, content unlocking, gifting, leaderboards, levels, points, quests, teams, virtual goods

Most of these game elements are not new ideas to teachers. It could be argued that school is already gamified through the use of grades and feedback.
In fact, it would be impossible to find a classroom that is not using some game elements. This book will help you identify which elements will be most effective in your current context. Teachers are encouraged to start with a few and gradually expand their repertoire. As with professional game design, merely using game elements will not ensure that learners are motivated and engaged. The measure of success of a gamification strategy is that learners continue to build resilience and autonomy in their own learning. When implemented well, the potential benefits of using a gamification design process in Moodle are to:

Provide a manageable set of subtasks and tasks by hiding and revealing content
Make assessment criteria visible, predictable, and in plain English using marking guidelines and rubrics
Increase ownership of learning paths through choice and activity restrictions
Build individual and group identity through workplace simulations and role play
Offer freedom to fail and try again without negative repercussions
Increase the enjoyment of both teacher and learners

When teachers follow the step-by-step guide provided in this book, they will create a basic Moodle course that acts as a flexible framework ready for learning content. This approach is ideal for busy teachers who want to respond to the changing needs and situations in the classroom. The dynamic approach keeps teachers in control of adding and changing content without involving a technology support team.

Onboarding tips

Using focused examples, the book describes how to use Moodle to implement an activity loop that identifies a desired behaviour and wraps motivations and feedback around that action. For example, a desired action may be for each learner to update their Moodle profile information with their interests and an avatar. Various motivational strategies could be put in place to prompt (or force) the learners to complete this task, including:

Ask learners to share their avatars, with a link to their profile, in a forum with ratings. Everyone else is doing it and they will feel left out if they don't get a like or a comment (creating a social norm). They might get rated as having the best avatar.
Update the forum type so that learners can't see other avatars until they make a post.
Add a theme (for example, Lego-inspired avatars) so that creating an avatar is a chance to be creative and play. Choosing how they represent themselves in an online space is an opportunity for autonomy.
Set the conditional release so learners cannot see the next activity until this activity is marked as complete (for example, post at least 3 comments on other avatars).

The value in this process is that learners have started building connections between new classmates. This activity loop is designed to appeal to diverse motivations and achieve multiple goals:

Encourage learners to create an online persona and choose their level of anonymity
Invite learners to look at each other's profiles and speed up the process of getting to know each other
Introduce learners to the idea of forum posting and rating in a low-risk (non-assessable) way
Take the workload off the teacher to assess each activity directly
Enforce compliance through software options, which saves admin time and creates an expectation of work standards for learners

Feedback options

Games celebrate small and large successes, and so should Moodle courses. There are a number of ways to do this in Moodle, including simply automating feedback with a label, which is revealed once a milestone is reached.
These milestones could be the completion of an activity or topic, or a level being reached in the course total. Feedback can be provided through symbols of achievement. Learners of all ages are highly motivated by this. Nearly all human cultures use symbols, icons, medals, and badges to indicate status and achievements, such as a black belt in karate, the Victoria Cross and Order of Australia medals, the OBE, sporting trophies, Gold Logies, feathers, and tattoos. Symbols of achievement can be issued through open badges. Moodle offers a simple way to issue badges in line with the Open Badges Infrastructure (OBI) standard. The learner can take full ownership of a badge by exporting it to their online backpack. Higher education institutes are finding evidence that open badges are a highly effective way to increase motivation for mature learners. Kaplan University found that the implementation of badges increased student engagement by 17 percent. As well as learners being more willing to attempt harder tasks, grades increased by up to 9 percent. Class attendance and discussion board posts increased over those of the non-badged counterparts. Using open badges as a motivation strategy enables feedback to be regularly provided along the way from peers, automated reporting, and the teacher. For advanced Moodlers, the book describes how rubrics can be used for "levelling up" and how the Moodle gradebook can be configured as an exponential point-scoring system to indicate progress.

Social game elements

Implementing social game elements is a powerful way to increase motivation and participation. A gamification experiment with thousands of MOOC participants measured the participation of learners in three groups: plain, game, and social. Students in the game condition had a 22.5 percent higher test score in the final test compared to students in the plain condition. Students in the social condition showed an even stronger increase of almost 40 percent compared to students in the plain condition (see A Playful Game Changer: Fostering Student Retention in Online Education with Social Gamification, Krause et al., 2014). Moodle has a number of components that can be used to encourage collaborative learning. The online gaming world has created spaces where players communicate outside of the game in forums, wikis, and YouTube channels, and where players make cheat guides about the games and happily share their knowledge with beginners. In Moodle, we can imitate these collaborative spaces gamers use to teach each other, and make the most of the natural leaders and influencers in the class. Moodle activities can be used to encourage communication between learners and allow delegation and skill-sharing. For example, the teacher may quickly explain and train the most experienced in the group to perform a certain task, and then showcase their work to others as an example. A learner could create blog posts, which become an online version of an exercise book. The learner chooses the sharing level so that classmates only, or the whole world, can view what is shared and leave comments. The process of delegating instruction by connecting leader/learners to lagger/learners in a particular area allows finish lines to be at different points. Rather than spending the last few weeks marking every learner's individual work, the teacher can now focus their attention on the few people who have lagged behind and need support to meet the deadlines.
It's worth taking the time to learn how to configure a Moodle course. This gives you the ability to set up a system that is scalable and adaptable to each learner. The options in Moodle can be used to allow learners to create their own paths within the boundaries set by a teacher. Therefore, rather than creating personalised learning paths for every student, set up a suite of tools with which learners create their own learning paths. Learning how to configure Moodle activities will reduce administration tasks through automatic reports, assessments, and the conditional release of activities. The Moodle activities will automatically create data on learner participation and competence to assist in identifying struggling learners. The inbuilt reports available in the Moodle LMS help teachers get to know their learners faster. In addition, the reports also create evidence for formative assessment, which saves hours of marking time. Through this release from repetitive tasks, teachers can spend more time on the creative and rewarding aspects of teaching. Rather than waiting for a game design company to create an awesome educational game for a subject area, get started by using the same techniques in your classroom. This creative process is rewarding for both teachers and learners because it can be constantly adapted to their unique needs.

Summary

Moodle provides a flexible gamification platform because teachers are directly in control of modifying and adding a sequence of activities, without having to go through an administrator. Although it may not look as good as a video game (made with an extensive budget), learners will appreciate the effort and personalisation. The gamification framework does require some preparation. However, once implemented, it picks up a momentum of its own and the teacher has a reduced workload in the long run. Purchase the book and enjoy a journey into gamification in education with Moodle!

Resources for Article:

Further resources on this subject:
Virtually Everything for Everyone [article]
Moodle for Online Communities [article]
State of Play of BuddyPress Themes [article]


Testing Your Site and Monitoring Your Progress

Packt
15 Oct 2015
27 min read
In this article by Michael David, author of the book WordPress Search Engine Optimization, you will learn about Google Analytics and Google Webmaster/Search Console. Once you have built your website and started promoting it, you'll want to monitor your progress to ensure that your hard work is yielding high rankings, search engine visibility, and web traffic. In this article, we'll cover a range of tools with which you will monitor the quality of your website, learn how search spiders interact with your site, measure your rankings in search engines for various keywords, and analyze how your visitors behave when they are on your site. With this information, you can gauge your progress and make adjustments to your strategy. (For more resources related to this topic, see here.) Obviously, you'll want to know where your web traffic is coming from, what search terms are being used to find your website, and where you rank in the search engines for each of these terms. This information will allow you to see what you still need to work on in terms of building links to the pages on your website. There are five main tools you will use to analyze your site and evaluate your traffic and rankings, and in this article, we will cover each in turn. They are Google Analytics, Google Search Console (formerly Webmaster Tools), HTML Validator, Bing Webmaster, and Link Assistant's Rank Tracker. As an alternative to Bing Webmaster, you may also want to employ Majestic SEO to check your backlinks.

Google Analytics

Google Analytics monitors and analyzes your website's traffic. With this tool, you can see how many website visitors you have had, whether they found your site through the search engines or clicked through from another website, how these visitors behaved once they were on your site, and much more. You can even connect your Google AdSense account to one or more of the domains you are monitoring in Google Analytics to get information about which pages and keywords generate the most income. While there are other analytics services available, none match the scope and scale of what Google Analytics offers.

Setting up Google Analytics for your website

To set up Google Analytics for your website, perform the following steps:

To sign up for Google Analytics, visit http://www.google.com/analytics/. On the top-right corner of the page, you'll see a button that says Sign In to Google Analytics. You'll need a Google account before signing in.
Click Sign up on the next page if you haven't already, and on the page that follows, enter your website's URL and time zone. If you have more than one website, just start with one; you can add the other sites later. The account name you use is not important, but you can add all of your sites to one account, so you might want to use something generic like your name or your business name.
Select the time zone country and time zone, and then click on Continue. Enter your name and country, and click on Continue again. Then you will need to accept the Google Analytics terms of service.
After you have accepted the terms, you will be given a snippet of HTML code that you'll need to insert into the pages of your website.
The snippet will look like the following:

<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','//www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-xxxxxx-x', 'auto');
ga('send', 'pageview');
</script>

The code must be placed just before the closing head tag in each page on your website. There are several ways to install the code. Check first to see if your WordPress template offers a field to insert the code in the template admin area; most modern templates offer this feature. If not, you can insert the code manually just before the closing </head> tag, which you will find in the file called header.php within your WordPress template files (a simplified sketch of this placement appears at the end of this section). There is yet a third way to do this if you don't want to tinker with your WordPress code: you can download and install one of the many WordPress analytics plugins. Two sound choices for an analytics plugin would be Google Analyticator or Google Analytics by Yoast; both plugins are available at WordPress.org. If you have more than one website, you can add more websites within your analytics account. Personally, I like to keep one website per analytics account just to keep things neat, but some like to have all their websites under one analytics account. To add a second or third website to an analytics account, navigate to Admin on the top menu bar, pull down the menu under Account in the left column, and then click Create a New Account. When you are done adding websites, click Home on the top menu bar to navigate to the main analytics page. From here, you can select a web property. This is the screen that greets you when you log in to Google Analytics, the Audience Overview report. It offers an effective quick look at how your website is performing.

Using Google Analytics

Once you have installed Google Analytics on your websites, you'll need to wait a few weeks, or perhaps longer, for Analytics to collect enough data from your websites to be useful. Remember that with website analytics, as with any type of statistical analysis, larger sets of data reveal more accurate and useful information. It may not take that long if you have a high-traffic site, but if you have a fairly new site that doesn't get a lot of traffic, it will take some time, so come back in a few weeks to a month to check how your sites are doing.
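As promised above, here is a minimal sketch of the manual placement inside a theme's header.php. It is illustrative only: real theme headers contain much more markup, and the file varies from theme to theme:

<!DOCTYPE html>
<html <?php language_attributes(); ?>>
<head>
<meta charset="<?php bloginfo( 'charset' ); ?>" />
<?php wp_head(); ?>
<!-- paste the full Google Analytics snippet from above here,
     just before the closing head tag -->
</head>
<body <?php body_class(); ?>>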
Your main report areas in Analytics are Audience (who your visitors are), Acquisition (how your visitors found your site), Behavior (what your visitors did on your site), and Conversions (whether they made a purchase or completed a contact form). Within any of the main report areas are dozens of available reports. Google Analytics can offer you tremendous detail on almost any imaginable metric related to your website. Most of your time, however, is best spent with a few key reports.

Understanding key analytics reports

Your key analytics reports are the Audience Overview, Acquisition Overview, and Behavior Overview. You access each of these reports from the left menu; each corresponding overview report is the first link in its list. The Audience Overview report is your default report, discussed earlier.

The Acquisition Overview report drills down into how your visitors are finding your site, whether through organic search, pay-per-click programs, social media, or referrals from other websites. These different pathways by which customers find your site are referred to as channels or mediums on other reports, but the terms mean essentially the same thing. The Acquisition Overview report also shows some very basic conversion data, although the reports in the Conversions section offer much more meaningful conversion data.

Why is the Acquisition Overview report important? It shows us the relative strength of our inbound channels. In the sample report, organic search delivers the highest number of users, which tells us that we are getting most of our users from the organic search results: our organic campaign is running strong. Referrals generated 384 visits, which means we've got good links delivering traffic. Our 369 direct visitors don't come from other websites; they start from a fresh browser page where users type our URL directly or follow a bookmark they've saved. That is a welcome figure, because it means we've got strong brand and name recognition. In the conversions column on the right side of the table, we can see that our direct traffic generated a measurable conversion rate, another positive sign that our brand recognition and reputation are strong.

The Behavior Overview report tells us how users behave once they visit our site. Are they bouncing immediately or viewing several pages? What content are they viewing the most? Some information is repeated on the Behavior Overview report, such as Pageviews and Bounce Rate. What is important here is the table under the graph, which shows you the relative popularity of your pages. This data matters because it shows you what content is generating the most user interest. In the sample report, the page titled /add-sidebar-wordpress generated over 1,000 pageviews, more than the home page, which is indicated in Analytics by a single slash (/). This table shows you your most popular content; these pages are your greatest success. And remember, you can click on any individual page to see metrics for that page alone. One way to maximize your site earnings is to focus on improving the performance of the pages that are already earning money. Chances are, you earn at least 80 percent of your income from the top 20 percent of the pages on your site. Focus on improving the rankings for those pages in order to get the best return for your efforts.
With any analytics report, the statistics shown are for the past month by default, but you can adjust the time period by clicking on the down arrow next to the dates in the upper right-hand corner.

Setting up automated analytics reports

You can also have Google Analytics e-mail you daily, weekly, or monthly reports. To enable this feature, navigate to the report that you'd like to receive and click the Email link just under the report title. In the pop-up that appears, the Frequency field lets you determine how often the report is sent, or you can simply send the report one time.

Google Webmasters/Search Console

Now we are going to go into a bit more detail and show you how to use Google Search Console (formerly Google Webmaster Tools) to obtain information that you can use to improve your website.

Understanding your website's search queries

The Search Console shows you data on what search queries users entered in Google search to find their way to your website. This is valuable: it teaches you which query terms are effectively delivering customers. To see the report, expand Search Traffic on the left navigation menu and select Search Analytics. Examine the top queries for which Search Console shows you are getting traffic, and add them to your list of keywords to work on if they are not already there. You will probably find it most beneficial to focus on keywords that currently rank between #4 and #10 in Google, to try to move them up into one of the top three spots.

You'll also want to check for crawl errors on each of your sites while you work in the Search Console. A crawl error is an error that Google's spider encounters when trying to find pages on your site. A dead link, that is, a link to a page on your site that no longer exists, is a common example of a crawl error. Crawl errors are detrimental to rankings because they send a poor-quality signal to search engines. Remember that Google wants to deliver a great experience to users of its search engine and properties; dead links and lost pages do not deliver a great experience.

To see the crawl error data, expand the Crawl link on the left navigation bar and then click Crawl Errors. The crawl error report will show you any pages that Google attempted to read but could not reach. Not all entries on this report are bad news. If you remove a piece of content that Google crawled in the past, Google will keep trying for months to find that piece of content again, so a removed page will generate a crawl error. Crawl errors also get generated if you have a sitemap file with page entries that no longer exist, and even inbound links from third-party websites to pages that don't exist will generate crawl errors.

Other errors might be the result of a typo or another mistake that requires going into a specific page on your website to fix. For example, perhaps you made a mistake typing in a URL when linking from one page to another, resulting in a 404 error. To fix that, you need to edit the page the erroneous URL was linked from and correct the URL. Other errors might require editing your template files to fix multiple errors simultaneously. For example, if you see that Googlebot gets a 404 (page not found) error every time it attempts to crawl the comments feed for a post, then the template file probably doesn't have the right format for creating those URLs.
Once you correct the template file, all of the crawl errors related to that problem will be fixed. There are other things you can do in Google Webmaster Tools; for example, you can check to see how many links to your site Google is detecting.

Checking your website's code with an HTML validator

HyperText Markup Language (HTML) is a coding standard with a reasonable degree of complexity. HTML standards develop over time, and valid HTML code displays more accurately in a wider range of browsers. Sites with substantial amounts of HTML coding errors can potentially be punished by search engines. For this reason, you should periodically check your website for HTML coding errors. There are two principal tools that web professionals use to check the quality of their websites' code: the W3C HTML Validator and the CSE HTML Validator.

The W3C HTML Validator (http://validator.w3.org) is the less robust of the two validators, but it is free. Both validators work in the same way: they examine the code on your website and issue a report advising you of any errors. The CSE HTML Validator (http://htmlvalidator.com) is not a free tool, but you can get a free trial of the software that is good for 30 days or 200 validations, whichever comes first. This software catches errors in HTML, XHTML, CSS, PHP, and JavaScript. It also checks your links to ensure that they are all valid, and it includes an accessibility checker, a spell checker, and an SEO checker. With all of these functions, there is a good chance that if your website has any problems, the CSE HTML Validator will be able to find them.

After downloading and installing the demo version of CSE HTML Validator, you will be given the option to go to a page that contains two video demos. It is a good idea to watch these videos, or at least the HTML validation video, before trying to use the program. They are not very long, and watching them will reduce the time it takes you to learn the program.

To validate a file that is on your hard drive, first open the file in the editor, then click the down arrow next to the Validate button on the task bar. You will see several options. Selecting Full will give you not only errors but also messages containing suggestions; Errors only will show you actual errors alone; and Errors and warnings only will also flag things that could be errors but might not be. You can experiment with the different options to see which one you like best. After you select a validation option, a box will appear at the bottom of the screen listing all of the errors, as well as warnings and messages, depending on the option you chose.

You might be surprised at how many errors there are, especially if you are using WordPress to create the code for your site; the code is often not as clean as you might expect it to be. However, not all of the errors you see will be things that you need to worry about or correct. Yes, it is better to have a website with perfect coding, but one of the advantages of using WordPress is that you don't have to know how to code to build a website. If you do not know anything about HTML coding, you may do more harm than good by trying to fix minor errors, such as the omission of a slash at the end of a tag; most of these errors will not cause problems anyhow. Look through the errors and fix the ones you know how to fix.
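To make this concrete, here is a hypothetical fragment of the kind a validator typically flags, an unclosed paragraph and an image with no alt attribute, followed by the corrected markup (the file path and text are invented for illustration):

<!-- As flagged: the <p> is never closed and the <img> lacks an alt attribute -->
<p>Our latest product review
<img src="/images/widget.jpg">

<!-- Corrected markup -->
<p>Our latest product review</p>
<img src="/images/widget.jpg" alt="The widget under review">

Fixes of this size are safe to make by hand even if you are not a coder; anything more involved is where the caution above applies.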
If you are completely mystified by what you see, don't worry about it too much unless you are having a problem with the way your website loads or displays. If the errors are causing problems, you'll either have to learn a bit about coding or hire someone who knows what they're doing to fix your website.

If you want to be able to check the code of an entire website at once, you'll need to buy the Pro version of CSE HTML Validator, which includes the batch wizard; this feature is not available in the Standard or Lite versions of the software. To use the batch wizard, click on Tools, then Batch Wizard. A new window will pop up, allowing you to choose the files you want to check. Click on the button with the large green plus sign and select the appropriate option to add files or URLs to your list. You can add files individually, add an entire folder, or even add a URL. To check an entire site, you can add the root file for your domain from your hard drive, or you can add the URL for the root domain. Once you have added your target, click on it. Now click on Target in the main menu bar, then on Properties. Click on the Follow Links tab in the box that pops up, then check the box in front of Follow and validate links. Click on the OK button and then on the Process button to start validating.

Checking your inbound link count with Bing Webmaster

Bing Webmaster gives you information about backlinks, crawl errors, and search traffic to your websites. To get the maximum value from this tool, you'll need to authenticate each of your websites to prove that you own them. To get started, go to https://www.bing.com/webmaster/ and sign up with your Microsoft account. As part of the sign-up process, you'll get an HTML file to install in the root directory of your WordPress installation. This file validates that you are the owner of the website and entitled to see the data that Bing collects.

One core use for Bing Webmaster is that it presents a highly accurate picture of your inbound link counts. If you recall, Google does not present accurate inbound link counts to users, so Bing Webmaster is the most authoritative picture from a search engine that you'll have of how many backlinks your site enjoys. To see your inbound links, simply log in and navigate to the Dashboard; at the lower right you'll find the Inbound Links table. The table shows you the inbound links to your website for each page of content. This helpful feature lets you determine which articles are garnering the most interest from other webmasters. High link counts are always good, but you also want to make sure you are getting high-quality links from websites in the same general category as your site.

Bing offers an additional feature: keyword research. Expand the Diagnostics & Tools entry on the navigation bar on the left and click Keyword Research. The search traffic section will give you valuable information about the search terms you should be targeting for your site, and it lets you see which of the terms you are already targeting are getting traffic. Just as you did with the keywords shown in Google Analytics and Google Webmaster Tools, you want to find keywords that are getting traffic but are not currently ranked in the top one to three positions in the search engine results pages. Send more links to these pages with the keyword phrase you want to target as anchor text, in order to move these pages up in the rankings.
Monitoring ranking positions and movement with Rank Tracker

Rank Tracker is a paid tool; we've included it because it is tremendously valuable and noteworthy. It is a software tool that can monitor your website's rankings for thousands of terms in hundreds of different search engines, and it maintains historical ranking data that helps you gauge your progress as you work. It is used by many SEO professionals. There is a free version, although the free version does not allow you to save your work (and thus does not let you keep historical information). To really harness the power of this software, you'll need the paid version. You can download either version from http://www.link-assistant.com/rank-tracker/.

After you install the program, you will be prompted to enter the URL of your website. On the next screen, the program will ask you to input the keywords you wish to track. If you have them all listed in a spreadsheet somewhere, you can just copy the column that holds your keywords and paste them all into the tool. Rank Tracker will then ask you which search engines you are targeting and will check the rank of each keyword in all of the search engines you select. It takes only a few minutes to update hundreds of keywords, so you can find out where you are ranking in very little time. For monitoring progress on large numbers of keywords across several different search engines, Rank Tracker is a real time-saver.

Once your rankings have been updated, you can sort them by rank to see which ones will benefit the most from additional link building. Remember that more than half of the people who search for something in Google click on the first result. If you are not in the top three to five results, you will get very little traffic from people searching for your targeted keyword. For this reason, you will usually get the best results by working on the keywords that rank between #4 and #10 in the search engine results pages.

If you want more data to play with, you can click on Update KEI to find out the keyword effectiveness index for each keyword. This tool gathers data from Google on how many searches per month each keyword receives and how much competition there is for it, and then calculates the KEI score from that data. As a general rule, the higher the KEI number, the easier it will be to rank for a given keyword. Keep in mind, however, that if you are already ranking for a keyword, it will usually not be that difficult to move up in the rankings, even if the KEI is low. To make it easier to spot keywords that will be easy to rank for, they are color-coded in the Rank Tracker tool: the easiest are green, the hardest are red, and yellow and orange fall in between.

In addition to checking your rankings on keywords you are targeting, you can use Rank Tracker to find more keywords to target. If you navigate to Tools and then Get Keyword Suggestions, a window will pop up that lets you choose from a range of different methods of finding keywords. It is recommended that you start with the Google AdWords Keyword Tool. After you choose the method, you'll be asked to enter keywords that are relevant to the content of your website. You will only get about 100 results no matter how many keywords you enter, so it's best to work on just one at a time. On the next page, you will be presented with a list of keywords.
Select the ones you want to add and click on Next. When the tool is done updating the data for the selected keywords, click on the Finish button to add them to your project. You can repeat this process as many times as you want to find new keywords for your website, and you can experiment with the other fifteen tools as well for more variety. When you find keywords that look promising, put them on a list and plan on writing posts to target them in the future. Look for keywords that have at least 100 searches per month with a relatively low amount of competition.

Rank Tracker not only allows you to check your current rankings but also keeps track of your historical data. Every time you update, a new point is added to the progress graph, so you can see your progress over time. This allows you to see whether what you are doing is working. If you see that your rank is dropping for a certain keyword, you can use that information to figure out whether something you changed had the opposite effect to what you intended. If you are doing SEO for clients, you'll find the reports in Rank Tracker to be extremely useful (if not mandatory). You can create a monthly report for each client that shows how many keywords are ranked #1, as well as how many are in the top 10, 20, or 100. The report also shows the number of keywords that moved up and the number that moved down. You can use these reports to show your clients how much progress you are making and to demonstrate your value to them.

If you want to take advantage of the historical data, you'll have to purchase the paid version of Rank Tracker. The free version does not support saving your projects, so you won't have data from your past rankings to compare against to see whether you are moving up or down in the search engine results. You also won't be able to generate reports that tell you what has changed since the last time you updated your rankings.

Monitoring backlinks with Majestic SEO

If you want to see how many backlinks you have pointing to your site, along with a range of additional data, the king of free backlink tools is the powerful Majestic SEO backlink checker. To use this tool, go to https://majestic.com/ and enter your domain in the box at the top of the page. The tool will generate a report that shows how many URLs are indexed for your domain, how many total backlinks you have, and how many unique domains link to your website. For heavy-duty link reconnaissance, you'll want the paid upgrade.

Underneath the site info, you can see the stats for each page on your site. The tool shows the number of backlinks for each page, as well as the number of domains linking to each page. You can only see ten results per page, so you'll have to click through numerous pages to see all of the results if you have a large site.

Majestic SEO offers a few details that you won't get from Google or Bing Webmaster. It calculates and reports the number of separate C-class subnets upon which your backlinks appear. As we have learned, links from sites on separate C-class subnets are more valuable because search engines perceive them as truly non-duplicate links. Majestic SEO also reports the number of .edu and .gov domains upon which your links appear. This extra information gives you a clearer picture of how your link building efforts are progressing. Majestic offers another feature of note: it crawls the web more deeply, so you'll see higher link counts.
Majestic is particularly useful when doing link cleanup, because it scrapes low-value sites that Google and Bing don't bother indexing. Among its more robust features, Majestic SEO shows you the number of backlinks from .edu and .gov domains, as well as the number of separate C-class subnets upon which your inbound links appear. It offers one more special feature: it records and graphs your backlink acquisition over time.

Majestic SEO is not a search engine, so it will show you a count of inbound links without any regard to the quality of the pages on which your links appear. Put another way, Bing Webmaster will only show you links that appear on indexed pages; lower-value pages, such as pages with duplicate content or pages in low-value link directories, tend not to appear in search engine indexes. As such, Majestic SEO reports higher link counts than Bing Webmaster or Google.

Summary

In this article, we learned how to monitor your progress through the use of free and paid tools. We learned how to set up and employ Google Analytics to measure and monitor where your website visitors come from and how they behave on your site. We learned how to set up and use Google Search Console to detect crawling errors on your site and to see how Googlebot interacts with it. We discovered two HTML validation tools that you can use to ensure that your website's code meets current HTML standards. Finally, we learned how to measure and monitor your backlink efforts with Bing Webmaster and Majestic SEO. With the tools and techniques in this article, you can ensure that your optimization efforts are effective and remain on track.

CSS3 – Selectors and nth Rules

Packt
14 Oct 2015
9 min read
In this article by Ben Frain, the author of Responsive Web Design with HTML5 and CSS3, Second Edition, we'll look in detail at pseudo-classes such as :last-child and nth-child, the nth rules, and nth-based selection in responsive web design.

CSS3 structural pseudo-classes

CSS3 gives us more power to select elements based upon where they sit in the structure of the DOM. Let's consider a common design treatment: we're working on the navigation bar for a larger viewport and we want all but the last link over on the left. Historically, we would have needed to solve this problem by adding a class name to the last link so that we could select it, like this:

<nav class="nav-Wrapper">
  <a href="/home" class="nav-Link">Home</a>
  <a href="/About" class="nav-Link">About</a>
  <a href="/Films" class="nav-Link">Films</a>
  <a href="/Forum" class="nav-Link">Forum</a>
  <a href="/Contact-Us" class="nav-Link nav-LinkLast">Contact Us</a>
</nav>

This in itself can be problematic. For example, sometimes just getting a content management system to add a class to a final list item can be frustratingly difficult. Thankfully, in those eventualities, it's no longer a concern. We can solve this problem and many more with CSS3 structural pseudo-classes.

The :last-child selector

CSS 2.1 already had a selector applicable for the first item in a list:

div:first-child {
  /* Styles */
}

However, CSS3 adds a selector that can also match the last:

div:last-child {
  /* Styles */
}

Let's look at how that selector could fix our prior problem:

@media (min-width: 60rem) {
  .nav-Wrapper {
    display: flex;
  }
  .nav-Link:last-child {
    margin-left: auto;
  }
}

There are also useful selectors for when something is the only item (:only-child) and the only item of a type (:only-of-type).

The nth-child selectors

The nth-child selectors let us solve even more difficult problems. With the same markup as before, let's consider how nth-child selectors allow us to select any link(s) within the list. Firstly, what about selecting every other list item? We could select the odd ones like this:

.nav-Link:nth-child(odd) {
  /* Styles */
}

Or, if you wanted to select the even ones:

.nav-Link:nth-child(even) {
  /* Styles */
}

Understanding what nth rules do

For the uninitiated, nth-based selectors can look pretty intimidating. However, once you've mastered the logic and syntax, you'll be amazed what you can do with them. Let's take a look. CSS3 gives us incredible flexibility with a few nth-based rules:

nth-child(n)
nth-last-child(n)
nth-of-type(n)
nth-last-of-type(n)

We've seen that we can use (odd) or (even) values already in an nth-based expression, but the (n) parameter can be used in a couple of other ways:

As an integer; for example, :nth-child(2) would select the
second item
As a numeric expression; for example, :nth-child(3n+1) would start at 1 and then select every third element

The integer-based usage is easy enough to understand: just enter the element number you want to select. The numeric expression version of the selector is the part that can be a little baffling for mere mortals. If math is easy for you, I apologize for this next section. For everyone else, let's break it down.

Breaking down the math

Let's consider 10 spans on a page:

<span></span>
<span></span>
<span></span>
<span></span>
<span></span>
<span></span>
<span></span>
<span></span>
<span></span>
<span></span>

By default they will be styled like this:

span {
  height: 2rem;
  width: 2rem;
  background-color: blue;
  display: inline-block;
}

As you might imagine, this gives us 10 squares in a line. OK, let's look at how we can select different ones with nth-based selections. For practicality, when considering the expression within the parentheses, I start from the right. So, for example, if I want to figure out what (2n+3) will select, I start with the right-most number (the three here indicates the third item from the left) and know it will select every second element from that point on. So we add this rule:

span:nth-child(2n+3) {
  color: #f90;
  border-radius: 50%;
}

As a result, our nth selector targets the third item and then every subsequent second one after that too (if there were 100 items, it would continue selecting every second one).

How about selecting everything from the second item onwards? Well, although you could write :nth-child(1n+2), you don't actually need the first number 1, as unless otherwise stated, n is equal to 1. We can therefore just write :nth-child(n+2). Likewise, if we wanted to select every third element, rather than write :nth-child(3n+3), we can just write :nth-child(3n), as every third item would begin at the third item anyway without needing to state it explicitly.

The expression can also use negative numbers; for example, :nth-child(3n-2) starts at -2 and then selects every third item. You can also change the direction. By default, once the first part of the selection is found, the subsequent ones go down the elements in the DOM (and therefore from left to right in our example). However, you can reverse that with a minus. For example:

span:nth-child(-2n+3) {
  background-color: #f90;
  border-radius: 50%;
}

This example finds the third item again, but then goes in the opposite direction to select every second element (up the DOM tree and therefore from right to left in our example). Hopefully, the nth-based expressions are making perfect sense now?

The nth-child and nth-last-child selectors differ in that the nth-last-child variant works from the opposite end of the document tree. For example, :nth-last-child(-n+3) starts at 3 from the end and then selects all the items after it.

Finally, let's consider :nth-of-type and :nth-last-of-type. While the previous examples count any children regardless of type (always remember that the nth-child selector targets all children at the same DOM level, regardless of classes), :nth-of-type and :nth-last-of-type let you be specific about the type of item you want to select.
Consider the following markup:

<span class="span-class"></span>
<span class="span-class"></span>
<span class="span-class"></span>
<span class="span-class"></span>
<span class="span-class"></span>
<div class="span-class"></div>
<div class="span-class"></div>
<div class="span-class"></div>
<div class="span-class"></div>
<div class="span-class"></div>

If we used the selector:

.span-class:nth-of-type(-2n+3) {
  background-color: #f90;
  border-radius: 50%;
}

Even though all the elements have the same span-class, we will only actually be targeting the span elements (as they are the first type selected). We will see how CSS4 selectors can solve this issue shortly.

CSS3 doesn't count like JavaScript and jQuery!

If you're used to using JavaScript and jQuery, you'll know that they count from 0 upwards (zero-index based). For example, when selecting an element in JavaScript or jQuery, an integer value of 1 would actually be the second element. CSS3, however, starts at 1, so a value of 1 is the first item it matches.

nth-based selection in responsive web designs

Just to close out this little section, I want to illustrate a real-life responsive web design problem and how we can use nth-based selection to solve it. Consider how a horizontal scrolling panel might look in a situation where horizontal scrolling isn't possible. So, using the same markup, let's turn the top 10 grossing films of 2014 into a grid. For some viewports the grid will be only two items wide; as the viewport increases we show three items, and at larger sizes still we show four. Here is the problem, though: regardless of the viewport size, we want to prevent any items on the bottom row from having a border on the bottom. With four items per row, that pesky border sits below the bottom two items, and that's what we need to remove. I also want a robust solution, so that if there were another item on the bottom row, its border would be removed too.

Now, because there is a different number of items on each row at different viewports, we will also need to change the nth-based selection at different viewports. For the sake of brevity, I'll show you the selection that matches four items per row (the largest of the viewports); a sketch of the amended selections for the other viewports follows at the end of this article.

@media (min-width: 55rem) {
  .Item {
    width: 25%;
  }
  /* Get me every fourth item and of those, only ones that are in the last four items */
  .Item:nth-child(4n+1):nth-last-child(-n+4),
  /* Now get me every one after that same collection too. */
  .Item:nth-child(4n+1):nth-last-child(-n+4) ~ .Item {
    border-bottom: 0;
  }
}

You'll notice here that we are chaining the nth-based pseudo-class selectors. It's important to understand that the first doesn't filter the selection for the next; rather, the element has to match each of the selections. For our preceding example, the first element has to be the first item of a row of four and also be one of the last four items. Nice! Thanks to nth-based selections, we have a defensive set of rules to remove the bottom border regardless of the viewport size or the number of items we are showing.

To learn how to build websites with a responsive and mobile-first methodology, allowing a website to display effortlessly on all devices, take a look at Responsive Web Design with HTML5 and CSS3, Second Edition.
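As promised above, here is a sketch of how the same defensive pattern could look at the narrower viewports. The breakpoint values here are illustrative assumptions (the book's accompanying code sample defines the real ones); only two things change per breakpoint: the item width, and the 4 in the nth expressions swapped for the number of items per row.

/* Two items per row (illustrative breakpoint) */
@media (max-width: 39.99rem) {
  .Item { width: 50%; }
  /* First item of the final row of two, plus everything after it */
  .Item:nth-child(2n+1):nth-last-child(-n+2),
  .Item:nth-child(2n+1):nth-last-child(-n+2) ~ .Item {
    border-bottom: 0;
  }
}

/* Three items per row (illustrative breakpoint) */
@media (min-width: 40rem) and (max-width: 54.99rem) {
  .Item { width: 33.333%; }
  /* First item of the final row of three, plus everything after it */
  .Item:nth-child(3n+1):nth-last-child(-n+3),
  .Item:nth-child(3n+1):nth-last-child(-n+3) ~ .Item {
    border-bottom: 0;
  }
}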

How to build a cross-platform desktop application with Node.js and Electron

Mika Turunen
14 Oct 2015
9 min read
Do you want to make a desktop application, but you have only mastered web development so far? Or maybe you feel overwhelmed by all of the different APIs that different desktop platforms have to offer? Or maybe you want to write a beautiful application in HTML5 and JavaScript and have it working on the desktop? Maybe you want to port an existing web application to the desktop? Luckily for us, there are a number of alternatives, and we are going to look into Node.js and Electron to help us get our HTML5 and JavaScript running on the desktop side with no hiccups.

What are the different parts in an Electron application?

Commonly, all of the different components in Electron run either in the main process (backend) or the rendering process (frontend). The main process can communicate with different parts of the operating system if there's a need for that, and the rendering process mainly focuses on showing the content, pretty much like any HTML5 application you find on the Internet. The processes communicate with each other through IPC (inter-process communication), which in Node.js terms is just a super simple event emitter and nothing else: you can send events and listen for events. The complete source code for this post is available in the accompanying repository.

Let's start working on it

You need to have Node.js installed; you can install it from https://nodejs.org/. Now that you have Node.js installed, you can start focusing on creating the application. First of all, create an empty directory where you will be placing your code.

# Open up your favourite terminal, command-line tool or any other alternative, as we'll be running quite a few commands
# Create the directory
mkdir /some/location/that/works/in/your/system
# Go into the directory
cd /some/location/that/works/in/your/system
# Now we need to initialize it for our Electron and Node work
npm init

NPM will start asking you questions about the application we are about to make. You can just hit Enter and not answer any of them if you feel like it; we can fill them in manually once we know a bit more about our application. Now we should have a directory structure with the following file in it:

package.json

And that's it, nothing else. We'll continue by creating two new files in your favorite text editor or IDE. Leave both files empty:

main.js
index.html

Drop both files into the same directory as package.json for easier handling for now. main.js will be our main process file, the connecting layer to the underlying desktop operating system for our Electron application. At this point we need to install Electron as a dependency for our application, which is really easy. Just write:

npm install --save electron-prebuilt

Alternatively, if you cloned/downloaded the associated GitHub repository, you can just go into the directory and write:

npm install

This will install all dependencies from package.json, including electron-prebuilt. Now we have Electron's prebuilt binaries installed as a direct dependency for our application, and we can run the application on our platform. It's wise to manually update the package.json file that the npm init command generated for us. Open up the package.json file and modify the scripts block to look like this (or, if it's missing, create it):

"main": "main.js",
"scripts": {
  "start": "electron ."
},

The whole package.json file should be roughly something like this (taken from the tutorial repo linked earlier):

{
  "name": "",
  "version": "1.0.0",
  "description": "",
  "main": "main.js",
  "scripts": {
    "start": "electron ."
  },
  "repository": { },
  "keywords": [ ],
  "author": "",
  "license": "MIT",
  "bugs": { },
  "homepage": "",
  "dependencies": {
    "electron-prebuilt": "^0.25.3"
  }
}

The main property in the file points to main.js, and the scripts section's start property runs the command electron ., which essentially tells Electron to digest the current directory as an application. Electron hardwires the main property as the main process for the application. This means that main.js is now our main process, just like we wanted.

Main and rendering process

We need to write the main process JavaScript and the rendering process HTML to get our application to start. Let's start with the main process, main.js. You can also find all of the code below in the tutorial repository. The code has been peppered with a good amount of comments to give a deeper understanding of what is going on and what the different parts do in the context of Electron.

// Loads the Electron-specific app module that is not commonly available for Node or io.js
var app = require("app");

// Inter-process communication -- used to communicate from the main process (this)
// to the actual rendering process (index.html) -- not really used in this example
var ipc = require("ipc");

// Loads the Electron-specific module for browser window handling
var BrowserWindow = require("browser-window");

// Report crashes to our server.
var crashReporter = require("crash-reporter");

// Keep a global reference to the window object; if you don't, the window will
// be closed automatically when the JavaScript object is garbage collected
var mainWindow = null;

// Quit when all windows are closed.
app.on("window-all-closed", function() {
  // OS X specific check
  if (process.platform != "darwin") {
    app.quit();
  }
});

// This event will be called when Electron has finished initialization and is ready for creating browser windows.
app.on("ready", function() {
  crashReporter.start();

  // Create the browser window (where the application's visual parts will be)
  mainWindow = new BrowserWindow({
    width: 800,
    height: 600
  });

  // Build the file path to index.html and load it
  mainWindow.loadUrl("file://" + __dirname + "/index.html");

  // Emitted when the window is closed.
  // The function just dereferences mainWindow so garbage collection can pick it up
  mainWindow.on("closed", function() {
    mainWindow = null;
  });
});

You can now start the application, but it'll just show an empty window since we have nothing to render in the rendering process. Let's fix that by populating our index.html with some content:

<!DOCTYPE html>
<html>
  <head>
    <title>Hello Tutorial!</title>
  </head>
  <body>
    <h2>Tutorial</h2>
    We are using node.js <script>document.write(process.version)</script>
    and Electron <script>document.write(process.versions["electron"])</script>.
  </body>
</html>

Because this is an Electron application, the page can reach straight into the Node.js/io.js runtime of our setup: the line document.write(process.version) is actually a call to the Node.js process object. This is one of the great things about Electron: we are essentially bridging the gap between desktop applications and HTML5 applications. Before we run the application, one quick aside: main.js requires the ipc module but never actually uses it, so let's sketch what it's for.
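IPC is how the two processes talk to each other. The following is a minimal sketch, assuming the pre-1.0 ipc API that the electron-prebuilt version pinned above ships with (newer Electron releases split this module into ipcMain and ipcRenderer), and the ping/pong channel names are made up for illustration. In main.js, the ipc variable we already required could listen and reply like this:

// Listen for a "ping" event from the rendering process and reply on the
// WebContents that sent it (pre-1.0 API; channel names are illustrative)
ipc.on("ping", function(event, message) {
  console.log("Renderer says: " + message);
  event.sender.send("pong", "hello from the main process");
});

And inside a script tag in index.html, the rendering process could start the exchange:

var ipc = require("ipc");
// In the pre-1.0 renderer API the handler receives the payload directly
ipc.on("pong", function(message) {
  document.write("<p>" + message + "</p>");
});
ipc.send("ping", "hello from the page");

Nothing in the rest of this tutorial depends on this, so feel free to skip it for now.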
Now, to run the application:

npm start

There is a huge list of different desktop environment integrations you can do with Electron, and you can read more about them in the Electron documentation at http://electron.atom.io/. Obviously this is still far from a complete application, but it should give you an understanding of how to work with Electron, how it behaves, and what you can do with it. You can start using your favorite JavaScript/CSS frontend framework in index.html to build a great-looking GUI for your new desktop application, and you can also use all Node.js-specific NPM modules in the backend along with the desktop environment integration. Maybe we'll look into writing a great-looking GUI for our application, with some additional desktop environment integration, in another post.

Packaging and distributing Electron applications

Applications can be packaged into distributable, operating-system-specific containers (a Windows .exe, for example) that allow them to run on other machines. The packaging process is fairly simple and well documented in Electron's official documentation; it is out of scope for this post, but worth a look when you want to ship your application.

Electron and its current use

Electron is still really fresh and right out of GitHub's knowing hands, but it has already been adopted by quite a few companies, and a number of applications have been built on top of it.

Companies using Electron:

Slack
Microsoft
GitHub

Applications built with Electron or using Electron:

Visual Studio Code - Microsoft's code editor
Hearthdash - A card tracking application for Hearthstone
Monu - Process monitoring app
Kart - Frontend for RetroArch
Friends - P2P chat powered by the web

Final words on Electron

It's obvious that Electron is still taking its first baby steps, but it's hard to deny that more and more user interfaces will be written in web technologies such as HTML5, and Electron is one of the great starting points for that. It'll be interesting to see how the gap between desktop and web applications develops as time goes on, and people like you and me will be playing a key role in the development of future applications. With the help of technologies like Electron, desktop application development just got that much easier.

For more Node.js content, look no further than our dedicated page!

About the author

Mika Turunen is a software professional hailing from frozen, cold Finland. He spends a good part of his day playing with emerging web and cloud technologies, but he also has a big knack for games and game development. His hobbies include game collecting, game development, and games in general. When he's not playing with technology, he is spending time with his two cats and growing his beard.