
How-To Tutorials - Web Development

Developing a Basic Site with Node.js and Express

Packt
17 Feb 2016
21 min read
In this article, we will continue with the Express framework. It's one of the most popular frameworks available and certainly a pioneering one. Express is still widely used, and many developers use it as a starting point.

Getting acquainted with Express

Express (http://expressjs.com/) is a web application framework for Node.js. It is built on top of Connect (http://www.senchalabs.org/connect/), which means that it implements middleware architecture. In the previous chapter, when exploring Node.js, we discovered the benefit of such a design decision: the framework acts as a plugin system. Thus, we can say that Express is suitable not only for simple but also for complex applications, because of its architecture. We may use only some of the popular types of middleware or add a lot of features and still keep the application modular.

In general, most projects in Node.js perform two functions: run a server that listens on a specific port, and process incoming requests. Express is a wrapper for these two functionalities. The following is basic code that runs the server:

```javascript
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');
```

This is an example extracted from the official documentation of Node.js. As shown, we use the native http module and run a server on port 1337. There is also a request handler function, which simply sends the Hello World string to the browser. Now, let's implement the same thing with the Express framework, using the following code:

```javascript
var express = require('express');
var app = express();
app.get("/", function(req, res, next) {
  res.send("Hello world");
}).listen(1337);
console.log('Server running at http://127.0.0.1:1337/');
```

It's pretty much the same thing. However, we don't need to specify the response headers or add a new line at the end of the string, because the framework does it for us. In addition, we have a bunch of middleware available, which will help us process the requests easily. Express is like a toolbox: we have a lot of tools for the boring stuff, allowing us to focus on the application's logic and content. That's what Express is built for: saving the developer's time by providing ready-to-use functionality.

Installing Express

There are two ways to install Express. We'll start with the simple one and then proceed to the more advanced technique. One approach generates a template, which we may use to start writing the business logic directly. In some cases, this can save us time. On the other hand, if we are developing a custom application, we need custom settings, and the boilerplate we get with the generator may not work for us.

Using package.json

Express is like every other module: it has its own place in the package registry. If we want to use it, we need to add the framework to the package.json file. The ecosystem of Node.js is built on top of the Node Package Manager (npm). It uses the JSON file to find out what we need and installs it in the current directory.
So, the content of our package.json file looks like the following code:

```json
{
  "name": "projectname",
  "description": "description",
  "version": "0.0.1",
  "dependencies": {
    "express": "3.x"
  }
}
```

These are the required fields that we have to add. To be more accurate, the mandatory fields are name and version. However, it is always good to add a description to our module, particularly if we want to publish our work in the registry, where such information is extremely important; otherwise, other developers will not know what our library does. Of course, there are a bunch of other fields, such as contributors, keywords, or development dependencies, but we will stick to these limited options so that we can focus on Express.

Once we have our package.json file placed in the project's folder, we have to call npm install in the console. The package manager will create a node_modules folder and store Express and its dependencies there. At the end of the command's execution, npm prints the installed version on the first line, followed by the modules that Express depends on. Now, we are ready to use Express. If we type require('express'), Node.js will start looking for that library inside the local node_modules directory. Since we are not using absolute paths, this is normal behavior. If we skip running the npm install command, we will be prompted with Error: Cannot find module 'express'.

Using a command-line tool

There is a command-line instrument called express-generator. Once we run npm install -g express-generator, we can use it like every other command in our terminal. If you use the framework in several projects, you will notice that some things are repeated. We can copy and paste them from one application to another, and this is perfectly fine. We may even end up with our own boilerplate and can always start from there. The command-line version of Express does the same thing. It accepts a few arguments and, based on them, creates a skeleton for us. This can be very handy in some cases and will definitely save some time. Let's have a look at the available arguments:

- -h, --help: This outputs usage information.
- -V, --version: This shows the version of Express.
- -e, --ejs: This argument adds EJS template engine support. Normally, we need a library to deal with our templates, as writing pure HTML is not very practical. The default engine is set to Jade.
- -H, --hogan: This argument enables Hogan (another template engine).
- -c, --css: If we want to use a CSS preprocessor, this option lets us use LESS (short for Leaner CSS) or Stylus. The default is plain CSS.
- -f, --force: This forces Express to operate on a non-empty directory.

Let's try to generate an Express application skeleton with LESS as a CSS preprocessor. We use the following command:

```
express --css less myapp
```

A new myapp folder is created with the generated file structure.
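The original article shows that structure in a screenshot, which isn't reproduced here. As a rough sketch — the exact layout depends on the express-generator version — it looks like this:

```
myapp/
├── app.js
├── package.json
├── bin/
│   └── www
├── public/
│   ├── images/
│   ├── javascripts/
│   └── stylesheets/
│       └── style.less
├── routes/
│   ├── index.js
│   └── users.js
└── views/
    ├── error.jade
    ├── index.jade
    └── layout.jade
```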
We still need to install the dependencies, so cd myapp && npm install is required. We will skip the explanation of the generated directories for now and move on to the created app.js file. It starts with initializing the module dependencies, as follows:

```javascript
var express = require('express');
var path = require('path');
var favicon = require('static-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');
var routes = require('./routes/index');
var users = require('./routes/users');
var app = express();
```

Our framework is express, and path is a native Node.js module. The middleware modules are favicon, logger, cookieParser, and bodyParser. The routes and users modules are custom-made, placed in the project's local folders. As in the Model-View-Controller (MVC) pattern, these are the controllers for our application. Immediately after, an app variable is created; this represents the Express application, and we use this variable to configure it. The script continues by setting some key-value pairs. The next code snippet defines the path to our views and the default template engine:

```javascript
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');
```

The framework uses the methods set and get to define internal properties. In fact, we may use these methods to define our own variables. If the value is a Boolean, we can replace set and get with enable and disable. For example, see the following code:

```javascript
app.set('color', 'red');
app.get('color'); // red
app.enable('isAvailable');
```

The next code adds middleware to the framework. We can see the code as follows:

```javascript
app.use(favicon());
app.use(logger('dev'));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded());
app.use(cookieParser());
app.use(require('less-middleware')({ src: path.join(__dirname, 'public') }));
app.use(express.static(path.join(__dirname, 'public')));
```

The first middleware serves the favicon of our application. The second is responsible for the output in the console; if we remove it, we will not get information about the incoming requests to our server. The following is a simple output produced by logger:

```
GET / 200 554ms - 170b
GET /stylesheets/style.css 200 18ms - 110b
```

The json and urlencoded middleware are related to the data sent along with the request. We need them because they convert the information into an easy-to-use format. There is also a middleware for the cookies; it populates the request object so that we later have access to the required data. The generated app uses LESS as a CSS preprocessor, and we need to configure it by setting the directory containing the .less files. Eventually, we define our static resources, which should be delivered by the server. These are just a few lines, but we've configured the whole application. We may remove or replace some of the modules, and the others will continue working. The next code in the file maps two defined routes to two different handlers, as follows:

```javascript
app.use('/', routes);
app.use('/users', users);
```

If the user tries to open a missing page, Express still processes the request by forwarding it to the error handler, as follows:

```javascript
app.use(function(req, res, next) {
  var err = new Error('Not Found');
  err.status = 404;
  next(err);
});
```

The framework suggests two types of error handling: one for the development environment and another for the production server. The difference is that the second one hides the stack trace of the error, which should be visible only to the developers of the application.
As we can see in the following code, we are checking the value of the env property and handling the error differently:

```javascript
// development error handler
if (app.get('env') === 'development') {
  app.use(function(err, req, res, next) {
    res.status(err.status || 500);
    res.render('error', {
      message: err.message,
      error: err
    });
  });
}

// production error handler
app.use(function(err, req, res, next) {
  res.status(err.status || 500);
  res.render('error', {
    message: err.message,
    error: {}
  });
});
```

At the end, the app.js file exports the created Express instance, as follows:

```javascript
module.exports = app;
```

To run the application, we need to execute node ./bin/www. The code requires app.js and starts the server, which by default listens on port 3000:

```javascript
#!/usr/bin/env node
var debug = require('debug')('my-application');
var app = require('../app');

app.set('port', process.env.PORT || 3000);

var server = app.listen(app.get('port'), function() {
  debug('Express server listening on port ' + server.address().port);
});
```

The process.env object provides access to variables defined in the current environment. If there is no PORT setting, Express uses 3000 as the value. The required debug module uses a similar approach to find out whether it has to show messages in the console.

Managing routes

The input of our application is the routes. The user visits our page at a specific URL, and we have to map this URL to specific logic. In the context of Express, this can be done easily, as follows:

```javascript
var controller = function(req, res, next) {
  res.send("response");
}
app.get('/example/url', controller);
```

We even have control over the HTTP method, that is, we are able to catch POST, PUT, or DELETE requests. This is very handy if we want to keep the same address path but apply different logic. For example, see the following code:

```javascript
var getUsers = function(req, res, next) {
  // ...
}
var createUser = function(req, res, next) {
  // ...
}
app.get('/users', getUsers);
app.post('/users', createUser);
```

The path is still the same, /users, but if we make a POST request to that URL, the application will try to create a new user. Otherwise, if the method is GET, it will return a list of all the registered members. There is also a method, app.all, which we can use to handle all the method types at once. We can see this method in the following code snippet:

```javascript
app.all('/', serverHomePage);
```

There is something interesting about routing in Express: we may pass not just one but many handlers. This means that we can create a chain of functions that correspond to one URL. For example, if we need to know whether the user is logged in, we can add another method that validates the current user and attaches a variable to the request object, as follows:

```javascript
var isUserLogged = function(req, res, next) {
  req.userLogged = Validator.isCurrentUserLogged();
  next();
}
var getUser = function(req, res, next) {
  if(req.userLogged) {
    res.send("You are logged in. Hello!");
  } else {
    res.send("Please log in first.");
  }
}
app.get('/user', isUserLogged, getUser);
```

The Validator class checks the current user's session. The idea is simple: we add another handler, which acts as additional middleware. After performing the necessary actions, we call the next function, which passes the flow to the next handler, getUser. Because the request and response objects are the same for all the middleware, we have access to the userLogged variable. This is what makes Express really flexible.
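Pulling those pieces together, here is a minimal, self-contained sketch of such a handler chain that you can run directly; the hard-coded flag stands in for the hypothetical Validator class from the snippet above:

```javascript
var express = require('express');
var app = express();

// Stand-in for Validator.isCurrentUserLogged(); always "logged in" here.
var isUserLogged = function(req, res, next) {
  req.userLogged = true;
  next(); // pass control to the next handler in the chain
};

var getUser = function(req, res, next) {
  res.send(req.userLogged ? "You are logged in. Hello!" : "Please log in first.");
};

// Both handlers run, in order, for GET /user.
app.get('/user', isUserLogged, getUser);
app.listen(3000);
```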
There are a lot of great features available, but they are optional. At the end of this chapter, we will make a simple website that implements the same logic.

Handling dynamic URLs and HTML forms

The Express framework also supports dynamic URLs. Let's say we have a separate page for every user in our system. The addresses of those pages look like the following:

/user/45/profile

Here, 45 is the unique number of the user in our database. It's, of course, normal to use one route handler for this functionality; we can't really define different functions for every user. The problem is solved by using the following syntax:

```javascript
var getUser = function(req, res, next) {
  res.send("Show user with id = " + req.params.id);
}
app.get('/user/:id/profile', getUser);
```

The route is actually like a regular expression with variables inside. Later, each variable is accessible in the req.params object. We can have more than one variable. Here is a slightly more complex example:

```javascript
var getUser = function(req, res, next) {
  var userId = req.params.id;
  var actionToPerform = req.params.action;
  res.send("User (" + userId + "): " + actionToPerform);
}
app.get('/user/:id/profile/:action', getUser);
```

If we open http://localhost:3000/user/451/profile/edit, we see User (451): edit as a response. This is how we can get a nice-looking, SEO-friendly URL. Of course, sometimes we need to pass data via GET or POST parameters. We may have a request like http://localhost:3000/user?action=edit. To parse it easily, we can use the native url module, which has a few helper functions for parsing URLs:

```javascript
var getUser = function(req, res, next) {
  var url = require('url');
  var url_parts = url.parse(req.url, true);
  var query = url_parts.query;
  res.send("User: " + query.action);
}
app.get('/user', getUser);
```

Once the module parses the given URL, our GET parameters are stored in the .query object. The POST variables are a bit different; we need middleware to handle them. Thankfully, Express has one, which is as follows:

```javascript
app.use(express.bodyParser());
var getUser = function(req, res, next) {
  res.send("User: " + req.body.action);
}
app.post('/user', getUser);
```

The express.bodyParser() middleware populates the req.body object with the POST data. Of course, we have to change the HTTP method from .get to .post or .all. If we want to read cookies in Express, we may use the cookieParser middleware. Like the body parser, it should also be installed and added to the package.json file. The following example sets up the middleware and demonstrates its usage:

```javascript
var cookieParser = require('cookie-parser');
app.use(cookieParser('optional secret string'));
app.get('/', function(req, res, next){
  var prop = req.cookies.propName;
});
```

Returning a response

Our server accepts requests, does some stuff, and finally sends the response to the client's browser. This can be HTML, JSON, XML, or binary data, among others. As we know, by default, every middleware in Express accepts two objects, request and response. The response object has methods that we can use to send an answer to the client. Every response should have a proper content type or length. Express simplifies the process by providing functions to set the HTTP headers and send content to the browser. In most cases, we will use the .send method, as follows:

```javascript
res.send("simple text");
```

When we pass a string, the framework sets the Content-Type header to text/html. It's good to know that if we pass an object or array, the content type is application/json.
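For instance, here is a small, runnable sketch of a JSON endpoint; the route path and the user object are made up for illustration:

```javascript
var express = require('express');
var app = express();

app.get('/api/user', function(req, res) {
  // Passing an object makes Express serialize it and set
  // Content-Type: application/json automatically.
  res.send({ id: 451, name: 'admin', loggedIn: true });
});

app.listen(3000);
```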
If we develop an API, the response status code is probably going to be important to us. With Express, we are able to set it as in the following code snippet:

```javascript
res.send(404, 'Sorry, we cannot find that!');
```

It's even possible to respond with a file from our hard disk. Without the framework, we would need to read the file, set the correct HTTP headers, and send the content. However, Express offers the .sendfile method, which wraps all these operations, as follows:

```javascript
res.sendfile(__dirname + "/images/photo.jpg");
```

Again, the content type is set automatically; this time, it is based on the filename's extension. When building websites or applications with a user interface, we normally need to serve HTML. Sure, we can write it manually in JavaScript, but it's good practice to use a template engine. This means we save everything in external files, and the engine reads the markup from there, populates it with some data, and, at the end, provides ready-to-show content. In Express, the whole process is summarized in one method, .render. However, to work properly, we have to instruct the framework regarding which template engine to use. We already talked about this at the beginning of this chapter. The following two lines of code set the path to our views and the template engine:

```javascript
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');
```

Let's say we have the following template (/views/index.jade):

```
h1= title
p Welcome to #{title}
```

Express provides a method to serve templates. It accepts the path to the template, the data to be applied, and a callback. To render the previous template, we should use the following code:

```javascript
res.render("index", {title: "Page title here"});
```

The HTML produced looks as follows:

```html
<h1>Page title here</h1><p>Welcome to Page title here</p>
```

If we pass a third parameter, a function, we will have access to the generated HTML; however, it will not be sent as a response to the browser.

The example: a logging system

We've seen the main features of Express. Now, let's build something real. The next few pages present a simple website where users can read content only if they are logged in. Let's start by setting up the application. We are going to use Express' command-line instrument, which should be installed with npm install -g express-generator. We create a new folder for the example, navigate to it via the terminal, and execute express --css less site. A new directory, site, will be created. If we go there and run npm install, Express will download all the required dependencies. As we saw earlier, by default, we have two routes and two controllers. To simplify the example, we will use only the first one: app.use('/', routes). Let's change the views/index.jade file content to the following:

```
doctype html
html
  head
    title= title
    link(rel='stylesheet', href='/stylesheets/style.css')
  body
    h1= title
    hr
    p That's a simple application using Express.
```

Now, if we run node ./bin/www and open http://127.0.0.1:3000, we will see the page. Jade uses indentation to parse our template, so we should not mix tabs and spaces; otherwise, we will get an error. Next, we need to protect our content. We check whether the current user has a session created; if not, a login form is shown. It's the perfect time to create a new middleware. To use sessions in Express, we install an additional module: express-session.
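Assuming a reasonably recent npm, one way is to let the package manager download the module and record the dependency in package.json in one step:

```
npm install express-session --save
```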
Alternatively, we can do it manually: open the package.json file and add the following line to the dependencies:

```json
"express-session": "~1.0.0"
```

Once we do that, a quick run of npm install will bring the module into our application. All we have to do is use it. The following code goes into app.js:

```javascript
var session = require('express-session');
app.use(session({ secret: 'app', cookie: { maxAge: 60000 }}));
var verifyUser = function(req, res, next) {
  if(req.session.loggedIn) {
    next();
  } else {
    res.send("show login form");
  }
}
app.use('/', verifyUser, routes);
```

Note that we changed the original app.use('/', routes) line. The session middleware is initialized and added to Express. The verifyUser function is called before the page rendering. It uses the req.session object and checks whether there is a loggedIn variable defined and whether its value is true. If we run the script again, we will see that the show login form text is shown for every request. This is because no code sets the session the way we want yet. We need a form where users can type their username and password. We will process the result of the form, and if the credentials are correct, the loggedIn variable will be set to true. Let's create a new Jade template, /views/login.jade:

```
doctype html
html
  head
    title= title
    link(rel='stylesheet', href='/stylesheets/style.css')
  body
    h1= title
    hr
    form(method='post')
      label Username:
      br
      input(type='text', name='username')
      br
      label Password:
      br
      input(type='password', name='password')
      br
      input(type='submit')
```

Instead of sending just text with res.send("show login form");, we should render the new template, as follows:

```javascript
res.render("login", {title: "Please log in."});
```

We chose POST as the method for the form, so we need to add the middleware that populates the req.body object with the user's data, as follows:

```javascript
app.use(bodyParser());
```

We process the submitted username and password as follows:

```javascript
var verifyUser = function(req, res, next) {
  if(req.session.loggedIn) {
    next();
  } else {
    var username = "admin", password = "admin";
    if(req.body.username === username &&
       req.body.password === password) {
      req.session.loggedIn = true;
      res.redirect('/');
    } else {
      res.render("login", {title: "Please log in."});
    }
  }
}
```

The valid credentials are set to admin/admin. In a real application, we might query a database or get this information from another place. It's not really a good idea to place the username and password in the code; however, for our little experiment, it is fine. The previous code checks whether the submitted data matches our predefined values. If everything is correct, it sets the session, after which the user is forwarded to the home page. Once you log in, you should be able to log out. Let's add a link for that just after the content on the index page (views/index.jade):

```
a(href='/logout') logout
```

Once users click on this link, they will be forwarded to a new page. We just need to create a handler for the new route, remove the session, and forward them to the index page, where the login form is shown again. Here is what our logout handler looks like:

```javascript
// in app.js
var logout = function(req, res, next) {
  req.session.loggedIn = false;
  res.redirect('/');
}
app.all('/logout', logout);
```

Setting loggedIn to false is enough to make the session invalid. The redirect sends users to the same content page they came from; however, this time, the content is hidden and the login form pops up.

Summary

In this article, we learned about one of the most widely used Node.js frameworks, Express.
We discussed its fundamentals, how to set it up, and its main characteristics. The middleware architecture, which we mentioned in the previous chapter, is the base of the library and gives us the power to write complex but, at the same time, flexible applications. The example we used was a simple one: we required a valid session to provide page access. However, it illustrates the usage of the body parser middleware and the process of registering new routes. We also updated the Jade templates and saw the results in the browser.

For more information on Node.js, refer to the following URLs:

- https://www.packtpub.com/web-development/instant-nodejs-starter-instant
- https://www.packtpub.com/web-development/learning-nodejs-net-developers
- https://www.packtpub.com/web-development/nodejs-essentials

Further resources on this subject:

- Writing a Blog Application with Node.js and AngularJS
- Testing in Node and Hapi
- Learning Node.js for Mobile Application Development

Understanding PHP Basics

Packt
17 Feb 2016
27 min read
In this article by Antonio Lopez Zapata, the author of the book Learning PHP 7, you will learn not only the syntax of the language, but also its grammatical rules, that is, when and why to use each element of the language. Luckily for you, some languages come from the same root. For example, Spanish and French are Romance languages, as they both evolved from spoken Latin; this means that these two languages share a lot of rules, and learning Spanish if you already know French is much easier.

Programming languages are much the same. If you already know another programming language, it will be very easy for you to go through this chapter. If it is your first time though, you will need to understand all the grammatical rules from scratch, so it might take some more time. But fear not! We are here to help you in this endeavor. In this chapter, you will learn about these topics:

- PHP in web applications
- Control structures
- Functions

PHP in web applications

Even though the main purpose of this chapter is to show you the basics of PHP, doing so in a reference-manual way is not interesting enough. If we were to copy-paste what the official documentation says, you might as well go there and read it yourself. Instead, let's not forget the main purpose of this book and your main goal: to write web applications with PHP. We will show you how you can apply everything you are learning as soon as possible, before you get too bored. In order to do that, we will go through the journey of building an online bookstore. At the very beginning, you might not see the usefulness of it, but that is just because we still haven't seen all that PHP can do.

Getting information from the user

Let's start by building a home page. On this page, we are going to figure out whether the user is looking for a book or just browsing. How do we find this out? The easiest way right now is to inspect the URL that the user used to access our application and extract some information from there. Save this content as your index.php file:

```php
<?php
$looking = isset($_GET['title']) || isset($_GET['author']);
?>
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Bookstore</title>
</head>
<body>
    <p>You lookin'? <?php echo (int) $looking; ?></p>
    <p>The book you are looking for is</p>
    <ul>
        <li><b>Title</b>: <?php echo $_GET['title']; ?></li>
        <li><b>Author</b>: <?php echo $_GET['author']; ?></li>
    </ul>
</body>
</html>
```

Now, access http://localhost:8000/?author=Harper Lee&title=To Kill a Mockingbird. You will see that the page prints some of the information that you passed in the URL. For each request, PHP stores all the parameters that come from the query string in an array called $_GET. Each key of the array is the name of a parameter, and its associated value is the value of the parameter. So, $_GET contains two entries: $_GET['author'] contains Harper Lee, and $_GET['title'] contains To Kill a Mockingbird.

On the first line of PHP code, we assign a Boolean value to the $looking variable. If either $_GET['title'] or $_GET['author'] exists, this variable will be true; otherwise, false. Just after that, we close the PHP tag and start printing some HTML, but as you can see, we are actually mixing HTML with PHP code. Another interesting line is the one that prints the content of $looking: before printing it, we cast the value. Casting means forcing PHP to transform one type of value into another. Casting a Boolean to an integer means that the resultant value will be 1 if the Boolean is true, or 0 if the Boolean is false. As $looking is true, since $_GET contains valid keys, the page shows 1. If we try to access the same page without sending any information, as in http://localhost:8000, the page shows 0 instead. Depending on the settings of your PHP configuration, you will also see two notice messages complaining that you are trying to access keys of the array that do not exist.

Casting versus type juggling

We already know that when PHP needs a specific type of variable, it will try to transform it, which is called type juggling. But PHP is quite flexible, so sometimes you have to be the one specifying the type that you need. When printing something with echo, PHP tries to transform everything it gets into strings. Since the string version of the false Boolean is an empty string, this would not be useful for our application. Casting the Boolean to an integer first assures that we will see a value, even if it is just "0".
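A quick, self-contained way to see casting versus juggling in action (the values here are arbitrary examples):

```php
<?php
$flag = false;
echo $flag;        // prints nothing: echo juggles false into ""
echo (int) $flag;  // prints 0: explicit cast to integer

var_dump((int) true);    // int(1)
var_dump((bool) "0");    // bool(false): the string "0" casts to false
var_dump((bool) "zero"); // bool(true): any other non-empty string is true
```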
HTML forms

HTML forms are one of the most popular ways to collect information from users. They consist of a series of fields, called inputs in the HTML world, and a final submit button. In HTML, the form tag contains two attributes: action, which points to where the form will be submitted, and method, which specifies the HTTP method the form will use — GET or POST. Let's see how it works. Save the following content as login.html and go to http://localhost:8000/login.html:

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Bookstore - Login</title>
</head>
<body>
    <p>Enter your details to login:</p>
    <form action="authenticate.php" method="post">
        <label>Username</label>
        <input type="text" name="username" />
        <label>Password</label>
        <input type="password" name="password" />
        <input type="submit" value="Login"/>
    </form>
</body>
</html>
```

This form contains two fields, one for the username and one for the password. You can see that they are identified by the name attribute. If you try to submit this form, the browser will show you a Page Not Found message, as it is trying to access http://localhost:8000/authenticate.php and the web server cannot find it. Let's create it, then:

```php
<?php
$submitted = !empty($_POST);
?>
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Bookstore</title>
</head>
<body>
    <p>Form submitted? <?php echo (int) $submitted; ?></p>
    <p>Your login info is</p>
    <ul>
        <li><b>username</b>: <?php echo $_POST['username']; ?></li>
        <li><b>password</b>: <?php echo $_POST['password']; ?></li>
    </ul>
</body>
</html>
```

As with $_GET, $_POST is an array that contains the parameters received by POST. In this piece of code, we first ask whether that array is not empty — note the ! operator. Afterwards, we just display the information received, just as in index.php. Note that the keys of the $_POST array are the values of the name attribute of each input field.

Control structures

So far, our files have been executed line by line. Because of that, we get notices in some scenarios, such as when the array does not contain what we are looking for. Would it not be nice if we could choose which lines to execute? Control structures to the rescue! A control structure is like a traffic diversion sign: it directs the execution flow depending on some predefined conditions. There are different control structures, but we can categorize them into conditionals and loops.
A conditional allows us to choose whether or not to execute a statement. A loop will execute a statement as many times as you need. Let's take a look at each of them.

Conditionals

A conditional evaluates a Boolean expression, that is, something that returns a value. If the expression is true, it will execute everything inside its block of code. A block of code is a group of statements enclosed by {}. Let's see how it works:

```php
<?php
echo "Before the conditional.";
if (4 > 3) {
    echo "Inside the conditional.";
}
if (3 > 4) {
    echo "This will not be printed.";
}
echo "After the conditional.";
```

In this piece of code, we are using two conditionals. A conditional is defined by the keyword if, followed by a Boolean expression in parentheses and a block of code. If the expression is true, the block is executed; otherwise, it is skipped.

You can increase the power of conditionals by adding the keyword else. This tells PHP to execute a block of code if the previous conditions were not satisfied. Let's see an example:

```php
if (2 > 3) {
    echo "Inside the conditional.";
} else {
    echo "Inside the else.";
}
```

This will execute the code inside else, as the condition of if was not satisfied. Finally, you can also add an elseif keyword followed by another condition and block of code to keep asking PHP for more conditions. You can add as many elseif clauses as you need after if. If you add else, it has to be the last one in the chain of conditions. Also keep in mind that as soon as PHP finds a condition that resolves to true, it stops evaluating the rest of the conditions:

```php
<?php
if (4 > 5) {
    echo "Not printed";
} elseif (4 > 4) {
    echo "Not printed";
} elseif (4 == 4) {
    echo "Printed.";
} elseif (4 > 2) {
    echo "Not evaluated.";
} else {
    echo "Not evaluated.";
}
if (4 == 4) {
    echo "Printed";
}
```

In this last example, the first condition that evaluates to true is 4 == 4. After that, PHP does not evaluate any more conditions until a new if starts. With this knowledge, let's try to clean up our application a bit, executing statements only when needed. Copy this code to your index.php file:

```php
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Bookstore</title>
</head>
<body>
    <p>
    <?php
    if (isset($_COOKIE['username'])) {
        echo "You are " . $_COOKIE['username'];
    } else {
        echo "You are not authenticated.";
    }
    ?>
    </p>
    <?php if (isset($_GET['title']) && isset($_GET['author'])) { ?>
        <p>The book you are looking for is</p>
        <ul>
            <li><b>Title</b>: <?php echo $_GET['title']; ?></li>
            <li><b>Author</b>: <?php echo $_GET['author']; ?></li>
        </ul>
    <?php } else { ?>
        <p>You are not looking for a book?</p>
    <?php } ?>
</body>
</html>
```

In this new code, we mix conditionals and HTML code in two different ways. The first one opens a PHP tag and adds an if-else clause that prints whether we are authenticated or not with echo; no HTML is mixed inside the conditional, which keeps it clear. The second way — the conditional around the book details — shows an uglier solution, but one that is sometimes necessary. When you have to print a lot of HTML code, echo is not that handy, and it is better to close the PHP tag, print all the HTML you need, and then open the tag again. You can do that even inside the code block of an if clause, as you can see in the code.

Mixing PHP and HTML

If you feel like the last file we edited looks rather ugly, you are right. Mixing PHP and HTML is confusing, and you have to avoid it by all means.
Let's edit our authenticate.php file too, as it tries to access $_POST entries that might not be there. The new content of the file is as follows:

```php
<?php
$submitted = isset($_POST['username']) && isset($_POST['password']);
if ($submitted) {
    setcookie('username', $_POST['username']);
}
?>
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Bookstore</title>
</head>
<body>
    <?php if ($submitted): ?>
        <p>Your login info is</p>
        <ul>
            <li><b>username</b>: <?php echo $_POST['username']; ?></li>
            <li><b>password</b>: <?php echo $_POST['password']; ?></li>
        </ul>
    <?php else: ?>
        <p>You did not submit anything.</p>
    <?php endif; ?>
</body>
</html>
```

This code also contains conditionals, which we already know. We set a variable to know whether we've submitted a login or not, and set the cookie if we have. However, the if/else/endif lines show a new way of combining conditionals with HTML. This alternative syntax tries to be more readable when working with HTML code, avoiding the use of {} and instead using : and endif. Both syntaxes are correct, and you should use the one that you consider more readable in each case.

Switch-case

Another control structure similar to if-else is switch-case. This structure evaluates only one expression and executes a block depending on its value. Let's see an example:

```php
<?php
switch ($title) {
    case 'Harry Potter':
        echo "Nice story, a bit too long.";
        break;
    case 'Lord of the Rings':
        echo "A classic!";
        break;
    default:
        echo "Dunno that one.";
        break;
}
```

The switch-case takes an expression; in this case, a variable. It then defines a series of cases. When a case matches the current value of the expression, PHP executes the code inside it. As soon as PHP finds a break, it exits the switch-case. If none of the cases are suitable for the expression and there is a default case, PHP will execute it, but this is optional. You also need to know that breaks are mandatory if you want to exit the switch-case after a match; if you do not specify any, PHP will keep on executing statements, even if it encounters a new case. Let's see a similar example, but without breaks:

```php
<?php
$title = 'Twilight';
switch ($title) {
    case 'Harry Potter':
        echo "Nice story, a bit too long.";
    case 'Twilight':
        echo 'Uh...';
    case 'Lord of the Rings':
        echo "A classic!";
    default:
        echo "Dunno that one.";
}
```

If you test this code in your browser, you will see that it prints "Uh...A classic!Dunno that one.". PHP found that the second case is valid, so it executed its content; but as there are no breaks, it kept on executing until the end. This might be the desired behavior sometimes, but not usually, so we need to be careful when using it!

Loops

Loops are control structures that allow you to execute certain statements several times — as many times as you need. You might use them in several different scenarios, but the most common one is when interacting with arrays. For example, imagine you have an array with elements, but you do not know what is in it. You want to print all its elements, so you loop through all of them. There are four types of loops. Each of them has its own use cases, but in general, you can transform one type of loop into another. Let's see them closely.

While

While is the simplest of the loops. It executes a block of code until the expression to evaluate returns false. Let's see an example:

```php
<?php
$i = 0;
while ($i < 4) {
    echo $i . " ";
    $i++;
}
```

Here, we define a variable with the value 0. Then, we have a while clause in which the expression to evaluate is $i < 4.
This loop will execute the content of the block of code until that expression is false. As you can see, inside the loop we increment the value of $i by 1 each time, so after 4 iterations, the loop will end. Check out the output of that script, and you will see "0 1 2 3". The last value printed is 3, so by that time, $i was 3. After that, we increased its value to 4, so when the while evaluated whether $i < 4, the result was false.

Whiles and infinite loops

One of the most common problems with while loops is creating an infinite loop. If the code inside the while does not update any of the variables considered in the while expression, so that it can be false at some point, PHP will never exit the loop!

For

This is the most complex of the four loops. For defines an initialization expression, an exit condition, and an end-of-iteration expression. When PHP first encounters the loop, it executes what is defined as the initialization expression. Then, it evaluates the exit condition, and if it resolves to true, it enters the loop. After executing everything inside the loop, it executes the end-of-iteration expression. Once this is done, it evaluates the exit condition again, going through the loop code and the end-of-iteration expression, until the condition evaluates to false. As always, an example will help clarify this:

```php
<?php
for ($i = 1; $i < 10; $i++) {
    echo $i . " ";
}
```

The initialization expression is $i = 1 and is executed only the first time. The exit condition is $i < 10, and it is evaluated at the beginning of each iteration. The end-of-iteration expression is $i++, which is executed at the end of each iteration. This example prints the numbers from 1 to 9. Another more common usage of the for loop is with arrays:

```php
<?php
$names = ['Harry', 'Ron', 'Hermione'];
for ($i = 0; $i < count($names); $i++) {
    echo $names[$i] . " ";
}
```

In this example, we have an array of names. As it is defined as a list, its keys will be 0, 1, and 2. The loop initializes the $i variable to 0, and it will iterate while the value of $i is less than the number of elements in the array, 3. In the first iteration, $i is 0; in the second, it will be 1; and in the third, it will be 2. When $i is 3, the loop will not be entered, as the exit condition evaluates to false. On each iteration, we print the content of position $i of the array; hence, the result of this code is all three names in the array.

Be careful with exit conditions

It is very common to set an exit condition that is not exactly what we need, especially with arrays. Remember that arrays start at 0 if they are a list, so an array of 3 elements will have the entries 0, 1, and 2. Defining the exit condition as $i <= count($array) will cause an error in your code, since when $i is 3, it also satisfies the exit condition, and the loop will try to access key 3, which does not exist.
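Of the four loop types mentioned above, do-while is the only one not shown in this excerpt. For completeness, here is a minimal sketch of it, relying on standard PHP behavior: the block runs once before the condition is first checked:

```php
<?php
$i = 0;
do {
    echo $i . " ";
    $i++;
} while ($i < 4);
// Prints "0 1 2 3 ", like the while example above.

// Unlike while, the body always executes at least once,
// even if the condition is false from the start:
do {
    echo "runs once";
} while (false);
```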
Foreach

The last, but not least, type of loop is foreach. This loop is exclusive to arrays, and it allows you to iterate an array entirely, even if you do not know its keys. There are two options for the syntax, as you can see in these examples:

```php
<?php
$names = ['Harry', 'Ron', 'Hermione'];
foreach ($names as $name) {
    echo $name . " ";
}
foreach ($names as $key => $name) {
    echo $key . " -> " . $name . " ";
}
```

The foreach loop accepts an array; in this case, $names. It specifies a variable, which will contain the value of each entry of the array. You can see that we do not need to specify any end condition, as PHP knows when the array has been fully iterated. Optionally, you can specify a variable that will contain the key of each iteration, as in the second loop. Foreach loops are also useful with maps, where the keys are not necessarily numeric. PHP iterates the array in the same order in which the content was inserted into it.

Let's use some loops in our application. We want to show the available books on our home page. We have the list of books in an array, so we will have to iterate all of them with a foreach loop, printing some information from each one. Append the following code to the body tag in index.php:

```php
<?php
endif;
$books = [
    [
        'title' => 'To Kill A Mockingbird',
        'author' => 'Harper Lee',
        'available' => true,
        'pages' => 336,
        'isbn' => 9780061120084
    ],
    [
        'title' => '1984',
        'author' => 'George Orwell',
        'available' => true,
        'pages' => 267,
        'isbn' => 9780547249643
    ],
    [
        'title' => 'One Hundred Years Of Solitude',
        'author' => 'Gabriel Garcia Marquez',
        'available' => false,
        'pages' => 457,
        'isbn' => 9785267006323
    ],
];
?>
<ul>
    <?php foreach ($books as $book): ?>
        <li>
            <i><?php echo $book['title']; ?></i> - <?php echo $book['author']; ?>
            <?php if (!$book['available']): ?>
                <b>Not available</b>
            <?php endif; ?>
        </li>
    <?php endforeach; ?>
</ul>
```

This code shows a foreach loop using the : notation, which is better when mixing it with HTML. It iterates over the whole $books array and, for each book, prints some information as an HTML list. Also note that we have a conditional inside the loop, which is perfectly fine. Of course, this conditional will be executed for each entry in the array, so you should keep the block of code in your loops as simple as possible.

Functions

A function is a reusable block of code that, given an input, performs some actions and optionally returns a result. You already know several predefined functions, such as empty, in_array, or var_dump. These functions come with PHP, so you do not have to reinvent the wheel, but you can create your own very easily. You can define functions when you identify portions of your application that have to be executed several times, or just to encapsulate some functionality.

Function declaration

Declaring a function means writing it down so that it can be used later. A function has a name, takes arguments, and has a block of code. Optionally, it can define what kind of value it returns. The name of the function has to follow the same rules as variable names; that is, it has to start with a letter or underscore and can contain any letters, numbers, or underscores. It cannot be a reserved word. Let's see a simple example:

```php
function addNumbers($a, $b) {
    $sum = $a + $b;
    return $sum;
}
$result = addNumbers(2, 3);
```

Here, the function's name is addNumbers, and it takes two arguments: $a and $b. The block of code defines a new variable, $sum, that is the sum of both arguments, and then returns its content with return. In order to use this function, you just need to call it by its name, sending all the required arguments, as shown in the last line.

PHP does not support overloaded functions. Overloading refers to the ability to declare two or more functions with the same name but different arguments. As you can see, you can declare arguments without knowing their types, so PHP would not be able to decide which function to use.

Another important thing to note is variable scope. We declare a $sum variable inside the block of code, so once the function ends, the variable is not accessible any more.
This means that the scope of variables declared inside the function is just the function itself. Furthermore, if you had a $sum variable declared outside the function, it would not be affected at all, since the function cannot access that variable unless we send it as an argument.

Function arguments

A function gets information from outside via arguments. You can define any number of arguments — including 0. These arguments need at least a name so that they can be used inside the function, and there cannot be two arguments with the same name. When invoking the function, you need to send the arguments in the same order in which they were declared. A function may contain optional arguments; that is, you are not forced to provide a value for them. When declaring the function, you need to provide a default value for those arguments, so that if the caller does not provide a value, the function uses the default one:

```php
function addNumbers($a, $b, $printResult = false) {
    $sum = $a + $b;
    if ($printResult) {
        echo 'The result is ' . $sum;
    }
    return $sum;
}

$sum1 = addNumbers(1, 2);
$sum2 = addNumbers(3, 4, false);
$sum3 = addNumbers(5, 6, true); // it will print the result
```

This new function takes two mandatory arguments and an optional one. The default value is false, and it is used as a normal value inside the function. The function will print the result of the sum if the caller provides true as the third argument, which happens only in the third invocation. For the first two calls, $printResult is set to false.

The arguments that the function receives are just copies of the values that the caller provided. This means that if you modify these arguments inside the function, it will not affect the original values. This feature is known as passing arguments by value. Let's see an example:

```php
function modify($a) {
    $a = 3;
}
$a = 2;
modify($a);
var_dump($a); // prints 2
```

We declare the $a variable with the value 2, and then we call the modify function, sending $a. The modify function sets its $a argument to 3. However, this does not affect the original value of $a, which remains 2, as you can see in the var_dump output. If what you want is to actually change the value of the original variable used in the invocation, you need to pass the argument by reference. To do that, you add & in front of the argument when declaring the function:

```php
function modify(&$a) {
    $a = 3;
}
```

Now, after invoking the modify function, $a will always be 3.

Arguments by value versus by reference

PHP allows you to pass arguments by reference and, in fact, some native functions of PHP do so — remember the array sorting functions: they did not return the sorted array; instead, they sorted the array provided. But using arguments by reference is a way of confusing developers. Usually, when someone uses a function, they expect a result, and they do not want the arguments they provided to be modified. So, try to avoid it; people will be grateful!

The return statement

You can have as many return statements as you want inside your function, but PHP will exit the function as soon as it finds one. This means that if you have two consecutive return statements, the second one will never be executed. Still, having multiple return statements can be useful if they are inside conditionals. Add this function inside your functions.php file:

```php
function loginMessage() {
    if (isset($_COOKIE['username'])) {
        return "You are " . $_COOKIE['username'];
    } else {
        return "You are not authenticated.";
    }
}
```
Let's use it in your index.php file by replacing the corresponding content — note that, to save some trees, most of the unchanged code is replaced with //...:

```php
//...
<body>
    <p><?php echo loginMessage(); ?></p>
    <?php if (isset($_GET['title']) && isset($_GET['author'])): ?>
//...
```

Additionally, you can omit the return statement if you do not want the function to return anything. In this case, the function will end once it reaches the end of the block of code.

Type hinting and return types

With the release of PHP 7, the language allows developers to be more specific about what functions get and return. You can — always optionally — specify the type of argument that the function needs (type hinting) and the type of result the function will return (the return type). Let's first see an example:

```php
<?php
declare(strict_types=1);

function addNumbers(int $a, int $b, bool $printSum): int {
    $sum = $a + $b;
    if ($printSum) {
        echo 'The sum is ' . $sum;
    }
    return $sum;
}

addNumbers(1, 2, true);
addNumbers(1, '2', true); // it fails when strict_types is 1
addNumbers(1, 'something', true); // it always fails
```

This function states that the first two arguments need to be integers, the third a Boolean, and that the result will be an integer. Now, you know that PHP has type juggling, so it can usually transform a value of one type into its equivalent value of another type; for example, the string '2' can be used as the integer 2. To stop PHP from using type juggling with the arguments and results of functions, you can declare the strict_types directive, as shown in the first line. This directive has to be declared at the top of each file where you want to enforce this behavior. The three invocations work as follows:

- The first invocation sends two integers and a Boolean, which is what the function expects. So, regardless of the value of strict_types, it will always work.
- The second invocation sends an integer, a string, and a Boolean. The string has a valid integer value, so if PHP were allowed to use type juggling, the invocation would resolve just fine. But in this example, it fails because of the declaration at the top of the file.
- The third invocation will always fail, as the 'something' string cannot be transformed into a valid integer.

Let's try to use a function within our project. In our index.php file, we have a foreach loop that iterates the books and prints them. The code inside the loop is kind of hard to understand, as it mixes HTML with PHP, and there is a conditional too. Let's try to abstract the logic inside the loop into a function. First, create the new functions.php file with the following content:

```php
<?php
function printableTitle(array $book): string {
    $result = '<i>' . $book['title'] . '</i> - ' . $book['author'];
    if (!$book['available']) {
        $result .= ' <b>Not available</b>';
    }
    return $result;
}
```

This file will contain our functions. The first one, printableTitle, takes an array representing a book and builds a string with a nice HTML representation of the book. The code is the same as before, just encapsulated in a function. Now, index.php will have to include the functions.php file and then use the function inside the loop. Let's see how this is done:

```php
<?php require_once 'functions.php' ?>
<!DOCTYPE html>
<html lang="en">
//...
<ul>
    <?php foreach ($books as $book): ?>
        <li><?php echo printableTitle($book); ?></li>
    <?php endforeach; ?>
</ul>
//...
```

Well, now our loop looks way cleaner, right?
Also, if we need to print the title of the book somewhere else, we can reuse the function instead of duplicating code!

Summary

In this article, we went through all the basics of procedural PHP while writing simple examples in order to practice them. You now know how to use variables and arrays with control structures and functions, and how to get information from HTTP requests, among other things.

Further resources on this subject:

- Getting started with Modernizr using PHP IDE
- PHP 5 Social Networking: Implementing Public Messages
- Working with JSON in PHP jQuery

Writing a Blog Application with Node.js and AngularJS

Packt
16 Feb 2016
35 min read
In this article, we are going to build a blog application using Node.js and AngularJS. Our system will support adding, editing, and removing articles, so there will be a control panel. A MongoDB or MySQL database will handle the storing of the information, and the Express framework will be used as the site's base. It will deliver the JavaScript, CSS, and HTML to the end user, and will provide an API to access the database. We will use AngularJS to build the user interface and control the client-side logic in the administration page.

This article will cover the following topics:

- AngularJS fundamentals
- Choosing and initializing a database
- Implementing the client-side part of an application with AngularJS

Exploring AngularJS

AngularJS is an open source, client-side JavaScript framework developed by Google. It's full of features and is really well documented. It has almost become a standard framework in the development of single-page applications. The official site of AngularJS, http://angularjs.org, provides well-structured documentation. As the framework is widely used, there is also a lot of material in the form of articles and video tutorials. As a JavaScript library, it collaborates pretty well with Node.js. In this article, we will build a simple blog with a control panel.

Before we start developing our application, let's first take a look at the framework. AngularJS gives us very good control over the data on our page. We don't have to think about selecting elements from the DOM and filling them with values. Thankfully, due to the available data binding, we may update the data in the JavaScript part and see the change in the HTML part. This is also true for the reverse: once we change something in the HTML part, we get the new values in the JavaScript part. The framework has a powerful dependency injector, and there are predefined classes for performing AJAX requests and managing routes. You could also read Mastering Web Development with AngularJS by Peter Bacon Darwin and Pawel Kozlowski, published by Packt Publishing.

Bootstrapping AngularJS applications

To bootstrap an AngularJS application, we need to add the ng-app attribute to one of our HTML tags. It is important that we pick the right one. Having ng-app somewhere means that all the child nodes will be processed by the framework. It's common practice to put that attribute on the <html> tag. In the following code, we have a simple HTML page containing ng-app:

```html
<html ng-app>
<head>
    <script src="angular.min.js"></script>
</head>
<body>
    ...
</body>
</html>
```

Very often, we will apply a value to the attribute: a module name. We will do this while developing the control panel of our blog application. Having the freedom to place ng-app wherever we want means that we can decide which part of our markup will be controlled by AngularJS. That's good, because if we have a giant HTML file, we really don't want to spend resources parsing the whole document. Of course, we may bootstrap our logic manually, and this is needed when we have more than one AngularJS application on the page.

Using directives and controllers

In AngularJS, we can implement the Model-View-Controller pattern. The controller acts as glue between the data (the model) and the user interface (the view). In the context of the framework, the controller is just a simple function.
Using directives and controllers

In AngularJS, we can implement the Model-View-Controller pattern. The controller acts as the glue between the data (model) and the user interface (view). In the context of the framework, the controller is just a simple function. For example, the following HTML code illustrates this:

<html ng-app>
  <head>
    <script src="angular.min.js"></script>
    <script src="HeaderController.js"></script>
  </head>
  <body>
    <header ng-controller="HeaderController">
      <h1>{{title}}</h1>
    </header>
  </body>
</html>

In the <head> of the page, we add the minified version of the library and HeaderController.js, a file that will host the code of our controller. We also set an ng-controller attribute in the HTML markup. The definition of the controller is as follows:

function HeaderController($scope) {
  $scope.title = "Hello world";
}

Every controller has its own area of influence. That area is called the scope. In our case, HeaderController defines the {{title}} variable. Thanks to AngularJS's wonderful dependency-injection system, the $scope argument is automatically initialized and passed to our function. The ng-controller attribute is called a directive, that is, an attribute that has meaning to AngularJS. There are a lot of directives that we can use, and that's maybe one of the strongest points of the framework. We can implement complex logic directly inside our templates, for example, data binding, filtering, or modularity.

Data binding

Data binding is the process of automatically updating the view once the model is changed. As we mentioned earlier, we can change a variable in the JavaScript part of the application and the HTML part will be automatically updated. We don't have to create a reference to a DOM element or attach event listeners. Everything is handled by the framework. Let's continue and elaborate on the previous example, as follows:

<header ng-controller="HeaderController">
  <h1>{{title}}</h1>
  <a href="#" ng-click="updateTitle()">change title</a>
</header>

A link is added and it contains the ng-click directive. The updateTitle function is defined in the controller, as seen in the following code snippet:

function HeaderController($scope) {
  $scope.title = "Hello world";
  $scope.updateTitle = function() {
    $scope.title = "That's a new title.";
  }
}

We don't care about the DOM element or where the {{title}} variable is. We just change a property of $scope and everything works. There are, of course, situations where we will have <input> fields and want to bind their values. In that case, the ng-model directive can be used, as follows:

<header ng-controller="HeaderController">
  <h1>{{title}}</h1>
  <a href="#" ng-click="updateTitle()">change title</a>
  <input type="text" ng-model="title" />
</header>

The data in the input field is bound to the same title variable. This time, we don't have to edit the controller. AngularJS automatically changes the content of the h1 tag.

Encapsulating logic with modules

It's great that we have controllers. However, it's not a good practice to place everything into globally defined functions. That's why it is good to use the module system. The following code shows how a module is defined:

angular.module('HeaderModule', []);

The first parameter is the name of the module and the second one is an array with the module's dependencies. By dependencies, we mean other modules, services, or something custom that we can use inside the module. The module's name should also be set as the value of the ng-app directive.
The code so far could be translated to the following code snippet:

angular.module('HeaderModule', [])
.controller('HeaderController', function($scope) {
  $scope.title = "Hello world";
  $scope.updateTitle = function() {
    $scope.title = "That's a new title.";
  }
});

The first line defines a module. We can chain the different methods of the module, and one of them is the controller method. By following this approach, that is, putting our code inside a module, we encapsulate logic. This is a sign of good architecture. And of course, with a module, we have access to different features such as filters, custom directives, and custom services.

Preparing data with filters

Filters are very handy when we want to prepare our data before it is displayed to the user. Let's say, for example, that we need to show our title in uppercase once it reaches a length of more than 20 characters:

angular.module('HeaderModule', [])
.filter('customuppercase', function() {
  return function(input) {
    if (input.length > 20) {
      return input.toUpperCase();
    } else {
      return input;
    }
  };
})
.controller('HeaderController', function($scope) {
  $scope.title = "Hello world";
  $scope.updateTitle = function() {
    $scope.title = "That's a new title.";
  }
});

That's the definition of a custom filter called customuppercase. It receives the input and performs a simple check. What it returns is what the user sees at the end. Here is how this filter could be used in HTML:

<h1>{{title | customuppercase}}</h1>

Of course, we may add more than one filter per variable. There are also predefined filters, for example, to limit the length of a value, to convert JavaScript to JSON, or to format a date.

Dependency injection

Dependency management can be very tough sometimes. We may split everything into different modules and components that have nicely written APIs and are very well documented. However, very soon we may realize that we need to create a lot of objects. Dependency injection solves this problem by providing what we need, on the fly. We already saw this in action: the $scope parameter passed to our controller is actually created by the injector of AngularJS. To get something as a dependency, we need to define it somewhere and let the framework know about it. We do this as follows:

angular.module('HeaderModule', [])
.factory("Data", function() {
  return {
    getTitle: function() {
      return "A better title.";
    }
  }
})
.controller('HeaderController', function($scope, Data) {
  $scope.title = Data.getTitle();
  $scope.updateTitle = function() {
    $scope.title = "That's a new title.";
  }
});

The Module class has a method called factory. It registers a new service that can later be used as a dependency. The function returns an object with only one method, getTitle. Of course, the name of the service should match the name of the controller's parameter. Otherwise, AngularJS will not be able to find the dependency's source.

The model in the context of AngularJS

In the well-known Model-View-Controller pattern, the model is the part that stores the data in the application. AngularJS doesn't have a specific workflow to define models. The $scope variable could be considered a model. We keep the data in properties attached to the current scope. Later, we can use the ng-model directive and bind a property to a DOM element, as we saw in the previous sections. The framework may not provide the usual form of a model, but it's made like that so that we can write our own implementation. The fact that AngularJS works with plain JavaScript objects makes this task easily doable.
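For instance, here is a minimal sketch of such a hand-rolled model, registered as a service (the TitleModel name is an illustrative placeholder, not part of the original application):

angular.module('HeaderModule', [])
.factory('TitleModel', function() {
  // A plain object acting as the model; controllers read and
  // write through its methods instead of touching $scope directly.
  var title = "Hello world";
  return {
    get: function() { return title; },
    set: function(value) { title = value; }
  };
})
.controller('HeaderController', function($scope, TitleModel) {
  $scope.title = TitleModel.get();
  $scope.updateTitle = function() {
    TitleModel.set("That's a new title.");
    $scope.title = TitleModel.get();
  };
});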
Final words on AngularJS

AngularJS is one of the leading frameworks, not only because it is made by Google, but also because it's really flexible. We could use just a small piece of it or build a solid architecture using its giant collection of features.

Selecting and initializing the database

To build a blog application, we need a database that will store the published articles. In most cases, the choice of the database depends on the current project. There are factors such as performance and scalability, and we should keep them in mind. In order to have a better look at the possible solutions, we will look at two of the most popular databases: MongoDB and MySQL. The first one is a NoSQL type of database. According to the Wikipedia entry on NoSQL databases (http://en.wikipedia.org/wiki/NoSQL):

"A NoSQL or Not Only SQL database provides a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases."

In other words, it's simpler than a SQL database, and very often stores information in the key-value form. Usually, such solutions are used when handling and storing large amounts of data. NoSQL is also a very popular approach when we need a flexible schema or when we want to use JSON. It really depends on what kind of system we are building. In some cases, MySQL could be a better choice, while in some other cases, MongoDB. In our example blog, we're going to use both. In order to do this, we will need a layer that connects to the database server and accepts queries. To make things a bit more interesting, we will create a module that has only one API, but can switch between the two database models.

Using NoSQL with MongoDB

Let's start with MongoDB. Before we start storing information, we need a MongoDB server running. It can be downloaded from the official page of the database, https://www.mongodb.org/downloads. We are not going to handle the communication with the database manually. There is a driver specifically developed for Node.js. It's called mongodb and we should include it in our package.json file. After a successful installation via npm install, the driver will be available in our scripts. We can check this as follows:

"dependencies": {
  "mongodb": "1.3.20"
}

We will stick to the Model-View-Controller architecture and keep the database-related operations in a model called Articles. We can see this as follows:

var crypto = require("crypto"),
    type = "mongodb",
    client = require('mongodb').MongoClient,
    mongodb_host = "127.0.0.1",
    mongodb_port = "27017",
    collection;

module.exports = function() {
  if (type == "mongodb") {
    return {
      add: function(data, callback) { ... },
      update: function(data, callback) { ... },
      get: function(callback) { ... },
      remove: function(id, callback) { ... }
    }
  } else {
    return {
      add: function(data, callback) { ... },
      update: function(data, callback) { ... },
      get: function(callback) { ... },
      remove: function(id, callback) { ... }
    }
  }
}

The module starts by defining a few dependencies and settings for the MongoDB connection. The first line requires the crypto module, which we will use to generate unique IDs for every article. The type variable defines which database is currently in use, and the third line initializes the MongoDB driver, which we will use to communicate with the database server.
After that, we set the host and port for the connection and, at the end, a global collection variable, which will keep a reference to the collection with the articles. In MongoDB, collections are similar to the tables in MySQL. The next logical step is to establish a database connection and perform the needed operations, as follows:

connection = 'mongodb://';
connection += mongodb_host + ':' + mongodb_port;
connection += '/blog-application';
client.connect(connection, function(err, database) {
  if (err) {
    throw new Error("Can't connect");
  } else {
    console.log("Connection to MongoDB server successful.");
    collection = database.collection('articles');
  }
});

We pass the host and the port, and the driver does everything else. Of course, it is a good practice to handle the error (if any) and throw an exception. In our case, this is especially needed because without the information in the database, the frontend has nothing to show. The rest of the module contains methods to add, edit, retrieve, and delete records:

return {
  add: function(data, callback) {
    var date = new Date();
    data.id = crypto.randomBytes(20).toString('hex');
    // getMonth() is zero-based, so we add 1 to store a human-readable month
    data.date = date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate();
    collection.insert(data, {}, callback || function() {});
  },
  update: function(data, callback) {
    // query by the same id property that the add method generates
    collection.update(
      {id: data.id},
      data,
      {},
      callback || function() {}
    );
  },
  get: function(callback) {
    // normalize the callback so it receives only the rows,
    // matching the MySQL variant of this method
    collection.find({}).toArray(function(err, rows) {
      callback(rows);
    });
  },
  remove: function(id, callback) {
    collection.findAndModify(
      {id: id},
      [],
      {},
      {remove: true},
      callback
    );
  }
}

The add and update methods accept the data parameter. That's a simple JavaScript object. For example, see the following code:

{
  title: "Blog post title",
  text: "Article's text here ..."
}

The records are identified by an automatically generated unique id. The update method needs it in order to find out which record to edit. All the methods also have a callback. That's important, because the module is meant to be used as a black box, that is, we should be able to create an instance of it, operate with the data, and at the end continue with the rest of the application's logic.
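Because the module hides the selected database behind one API, a consumer never needs to know which engine is active. A minimal, hypothetical usage sketch (the article values and the callback handling are placeholders, not project code):

var articles = require("./models/Articles")();

// Insert a record; both database variants invoke the callback
// with an error as the first argument when something goes wrong.
articles.add({ title: "First post", text: "Hello from the blog." }, function(err) {
  if (err) {
    console.error("Could not store the article.");
  } else {
    console.log("Article stored.");
  }
});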
Using MySQL

We're going to use an SQL type of database with MySQL. We will add a few more lines of code to the already working Articles.js model. The idea is to have a class that supports the two databases as two different options. At the end, we should be able to switch from one to the other by simply changing the value of a variable. Similar to MongoDB, we first need to install the database to be able to use it. The official download page is http://www.mysql.com/downloads. MySQL requires another Node.js module, which should again be added to the package.json file. We can see the module as follows:

"dependencies": {
  "mongodb": "1.3.20",
  "mysql": "2.0.0"
}

Similar to the MongoDB solution, we first need to connect to the server. To do so, we need to know the values of the host, username, and password fields and, because the data is organized in databases, the name of the database. In MySQL, we put our data into different databases. So, the following code defines the needed variables:

var mysql = require('mysql'),
    mysql_host = "127.0.0.1",
    mysql_user = "root",
    mysql_password = "",
    mysql_database = "blog_application",
    connection;

The previous example leaves the password field empty, but we should set the proper value for our system. The MySQL database requires us to define a table and its fields before we start saving data. So, the following code is a short dump of the table used in this article:

CREATE TABLE IF NOT EXISTS `articles` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `title` longtext NOT NULL,
  `text` longtext NOT NULL,
  `date` varchar(100) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;

Once we have a database and its table set, we can continue with the database connection, as follows:

connection = mysql.createConnection({
  host: mysql_host,
  user: mysql_user,
  password: mysql_password
});
connection.connect(function(err) {
  if (err) {
    throw new Error("Can't connect to MySQL.");
  } else {
    connection.query("USE " + mysql_database, function(err, rows, fields) {
      if (err) {
        throw new Error("Missing database.");
      } else {
        console.log("Successfully selected database.");
      }
    })
  }
});

The driver provides a method to connect to the server and execute queries. The first executed query selects the database. If everything is OK, you should see Successfully selected database as an output in your console. Half of the job is done. What we should do now is replicate the methods returned by the first MongoDB implementation. We need to do this because, when we switch to MySQL, the code using the class would otherwise stop working. By replicating them, we mean that they should have the same names and should accept the same arguments. If we do everything correctly, at the end our application will support two types of databases, and all we have to do is change the value of the type variable:

return {
  add: function(data, callback) {
    var date = new Date();
    var query = "";
    query += "INSERT INTO articles (title, text, date) VALUES (";
    query += connection.escape(data.title) + ", ";
    query += connection.escape(data.text) + ", ";
    // getMonth() is zero-based, so we add 1 to store a human-readable month
    query += "'" + date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate() + "'";
    query += ")";
    connection.query(query, callback);
  },
  update: function(data, callback) {
    var query = "UPDATE articles SET ";
    query += "title=" + connection.escape(data.title) + ", ";
    query += "text=" + connection.escape(data.text) + " ";
    query += "WHERE id=" + connection.escape(data.id);
    connection.query(query, callback);
  },
  get: function(callback) {
    var query = "SELECT * FROM articles ORDER BY id DESC";
    connection.query(query, function(err, rows, fields) {
      if (err) {
        throw new Error("Error getting.");
      } else {
        callback(rows);
      }
    });
  },
  remove: function(id, callback) {
    var query = "DELETE FROM articles WHERE id=" + connection.escape(id);
    connection.query(query, callback);
  }
}

The code is a little longer than the MongoDB variant. That's because we needed to construct MySQL queries from the passed data. Keep in mind that we have to escape every piece of information that comes into the module, including the id used in the WHERE clauses. That's why we use connection.escape(). With these lines of code, our model is completed. Now we can add, edit, remove, or get data.
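As a side note on escaping: the mysql driver can also take care of it for us through placeholder values. A small sketch of the same insert, assuming the driver's ? placeholder support (an alternative style, not the code used in this article; formattedDate stands in for the date string built above):

// Each ? (or the object after SET ?) is escaped by the driver itself.
connection.query(
  'INSERT INTO articles SET ?',
  { title: data.title, text: data.text, date: formattedDate },
  callback
);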
Let's continue with the part that shows the articles to our users.

Developing the client side with AngularJS

Let's assume that there is some data in the database and we are ready to present it to the users. So far, we have only developed the model, which is the class that takes care of access to the information. To simplify the process, we will use Express here. We need to first update the package.json file and include the framework, as follows:

"dependencies": {
  "express": "3.4.6",
  "jade": "0.35.0",
  "mongodb": "1.3.20",
  "mysql": "2.0.0"
}

We are also adding Jade, because we are going to use it as a template language. Writing markup in plain HTML is not very efficient nowadays. By using a template engine, we can split the data and the HTML markup, which makes our application much better structured. Jade's syntax is fairly similar to HTML. We can write tags without the need to close them:

body
  p(class="paragraph", data-id="12") Sample text here
  footer
    a(href="#") my site

The preceding code snippet is transformed to the following code snippet:

<body>
  <p data-id="12" class="paragraph">Sample text here</p>
  <footer><a href="#">my site</a></footer>
</body>

Jade relies on the indentation in the content to distinguish the tags.

Let's start with the project structure. We placed our already written class, Articles.js, inside the models directory. The public directory will contain CSS styles and all the necessary client-side JavaScript: the AngularJS library, the AngularJS router module, and our custom code. We will skip some of the explanations about the following code. Our index.js file looks as follows:

var express = require('express');
var app = express();
var articles = require("./models/Articles")();
app.set('views', __dirname + '/views');
app.set('view engine', 'jade');
app.use(express.static(__dirname + '/public'));
app.use(function(req, res, next) {
  req.articles = articles;
  next();
});
app.get('/api/get', require("./controllers/api/get"));
app.get('/', require("./controllers/index"));
app.listen(3000);
console.log('Listening on port 3000');

At the beginning, we require the Express framework and our model. Maybe it's better to initialize the model inside the controller, but in our case this is not necessary. Just after that, we set up some basic options for Express and define our own middleware. It has only one job to do, and that is to attach the model to the request object. We are doing this because the request object is passed to all the route handlers. In our case, these handlers are actually the controllers. So, Articles.js becomes accessible everywhere via the req.articles property. At the end of the script, we placed two routes. The second one catches the usual requests that come from the users. The first one, /api/get, is a bit more interesting. We want to build our frontend on top of AngularJS. So, the data that is stored in the database should not be rendered in the Node.js part, but on the client side, where we use Google's framework. To make this possible, we will create routes/controllers to get, add, edit, and delete records. Everything will be controlled by HTTP requests performed by AngularJS. In other words, we need an API. Before we start using AngularJS, let's take a look at the /controllers/api/get.js controller:

module.exports = function(req, res, next) {
  req.articles.get(function(rows) {
    res.send(rows);
  });
}

The main job is done by our model and the response is handled by Express. It's nice, because if we pass a JavaScript object, as we did (rows is actually an array of objects), the framework sets the response headers automatically. To test the result, we could run the application with node index.js and open http://localhost:3000/api/get. If we don't have any records in the database, we will get an empty array. Otherwise, the stored articles will be returned. So, that's the URL that we should hit from within the AngularJS controller in order to get the information.
The code of the /controllers/index.js controller is also just a few lines. We can see it as follows:

module.exports = function(req, res, next) {
  res.render("list", { app: "" });
}

It simply renders the list view, which is stored in the list.jade file. That file should be saved in the /views directory. But before we see its code, we will check another file, which acts as a base for all the pages. Jade has a nice feature called blocks. We may define different partials and combine them into one template. The following is our layout.jade file:

doctype html
html(ng-app="#{app}")
  head
    title Blog
    link(rel='stylesheet', href='/style.css')
    script(src='/angular.min.js')
    script(src='/angular-route.min.js')
  body
    block content

There is only one variable passed to this template, which is #{app}. We will need it later to initialize the administration's module. The angular.min.js and angular-route.min.js files should be downloaded from the official AngularJS site and placed in the /public directory. The body of the page contains a block placeholder called content, which we will later fill with the list of the articles. The following is the list.jade file:

extends layout

block content
  .container(ng-controller="BlogCtrl")
    section.articles
      article(ng-repeat="article in articles")
        h2 {{article.title}}
        br
        small published on {{article.date}}
        p {{article.text}}
  script(src='/blog.js')

The two lines at the beginning combine both templates into one page. The Express framework transforms the Jade template into HTML and serves it to the user's browser. From there, the client-side JavaScript takes control. We are using the ng-controller directive to say that the div element will be controlled by an AngularJS controller called BlogCtrl. The same controller should have a variable, articles, filled with the information from the database. ng-repeat goes through the array and displays the content to the users. The blog.js file holds the code of the controller:

function BlogCtrl($scope, $http) {
  $scope.articles = [
    { title: "", text: "Loading ..." }
  ];
  $http({method: 'GET', url: '/api/get'})
  .success(function(data, status, headers, config) {
    $scope.articles = data;
  })
  .error(function(data, status, headers, config) {
    console.error("Error getting articles.");
  });
}

The controller has two dependencies. The first one, $scope, points to the current view. Whatever we assign as a property there is available as a variable in our HTML markup. Initially, we add only one element, which doesn't have a title but has text; it is shown to indicate that we are still loading the articles from the database. The second dependency, $http, provides an API for making HTTP requests. So, all we have to do is query /api/get, fetch the data, and pass it to the $scope dependency. The rest is done by AngularJS and its magical two-way data binding. To make the application a little more interesting, we will add a search field, as follows:

// views/list.jade
header
  .search
    input(type="text", placeholder="type a filter here", ng-model="filterText")
  h1 Blog
  hr

The ng-model directive binds the value of the input field to a variable inside our $scope dependency. However, this time, we don't have to edit our controller; we can simply apply the same variable as a filter to ng-repeat:

article(ng-repeat="article in articles | filter:filterText")

As a result, the articles shown will be filtered based on the user's input. Two simple additions, but something really valuable is on the page. The filters of AngularJS can be very powerful.
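For instance, a small variation (not part of the original code) narrows the match to the title field only, by passing an object predicate to the filter:

article(ng-repeat="article in articles | filter:{title: filterText}")

Each property of the object predicate is matched against the corresponding field of every article, so the body text of the articles is ignored during the search.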
Implementing a control panel

The control panel is the place where we will manage the articles of the blog. Several things should be done in the backend before continuing with the user interface. They are as follows:

app.set("username", "admin");
app.set("password", "pass");
app.use(express.cookieParser('blog-application'));
app.use(express.session());

The previous lines of code should be added to /index.js. Our administration should be protected, so the first two lines define our credentials. We are using Express as data storage, simply creating key-value pairs. Later, if we need the username, we can get it with app.get("username"). The next two lines enable session support. We need that because of the login process. Earlier, we added a middleware that attaches the articles to the request object. We will do the same with the current user's status, as follows:

app.use(function(req, res, next) {
  if ((req.session && req.session.admin === true) ||
      (req.body &&
       req.body.username === app.get("username") &&
       req.body.password === app.get("password"))) {
    req.logged = true;
    req.session.admin = true;
  };
  next();
});

Our if statement is a little long, but it tells us whether the user is logged in or not. The first part checks whether there is a session created, and the second one checks whether the user has submitted a form with the correct username and password. If either of these expressions is true, we attach a logged variable to the request object and create a session that stays valid during the following requests. There is only one thing more that we need in the main application's file: a few routes that will handle the control panel operations. In the following code, we are defining them along with the needed route handler:

var protect = function(req, res, next) {
  if (req.logged) {
    next();
  } else {
    res.send(401, 'No Access.');
  }
}
app.post('/api/add', protect, require("./controllers/api/add"));
app.post('/api/edit', protect, require("./controllers/api/edit"));
app.post('/api/delete', protect, require("./controllers/api/delete"));
app.all('/admin', require("./controllers/admin"));

The three routes that start with /api will use the Articles.js model to add, edit, and remove articles from the database. These operations should be protected, so we add a middleware function that takes care of this. If the req.logged variable is not set, it simply responds with a 401 Unauthorized status code. The last route, /admin, is a little different, because it shows a login form instead. The following is the controller that creates new articles:

module.exports = function(req, res, next) {
  req.articles.add(req.body, function() {
    res.send({success: true});
  });
}

We transfer most of the logic to the frontend, so again, there are just a few lines. What is interesting here is that we pass req.body directly to the model. It actually contains the data submitted by the user.
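Note that req.body is populated by body-parsing middleware, so the snippets in this article assume that a line similar to the following is registered in /index.js before the routes (in Express 3.x, bodyParser covers both urlencoded and JSON payloads):

// Express 3.x: parses request bodies into req.body
app.use(express.bodyParser());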
The following code is how the req.articles.add method looks for the MongoDB implementation:

add: function(data, callback) {
  data.id = crypto.randomBytes(20).toString('hex');
  collection.insert(data, {}, callback || function() {});
}

And the MySQL implementation is as follows:

add: function(data, callback) {
  var date = new Date();
  var query = "";
  query += "INSERT INTO articles (title, text, date) VALUES (";
  query += connection.escape(data.title) + ", ";
  query += connection.escape(data.text) + ", ";
  query += "'" + date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate() + "'";
  query += ")";
  connection.query(query, callback);
}

In both cases, we need title and text properties in the passed data object. Thankfully, due to Express' bodyParser middleware, this is exactly what we have in the req.body object, so we can directly forward it to the model. The other route handlers are almost the same:

// api/edit.js
module.exports = function(req, res, next) {
  req.articles.update(req.body, function() {
    res.send({success: true});
  });
}

What has changed is the method of the Articles.js class: it is not add but update. The same technique is applied in the route that deletes an article. We can see it as follows:

// api/delete.js
module.exports = function(req, res, next) {
  req.articles.remove(req.body.id, function() {
    res.send({success: true});
  });
}

What we need for deletion is not the whole body of the request, but only the unique ID of the record. Every API method sends {success: true} as a response. While we are dealing with API requests, we should always return a response, even if something goes wrong. The last thing in the Node.js part that we have to cover is the controller responsible for the user interface of the administration panel, that is, the ./controllers/admin.js file:

module.exports = function(req, res, next) {
  if (req.logged) {
    res.render("admin", { app: "admin" });
  } else {
    res.render("login", { app: "" });
  }
}

There are two templates that can be rendered: /views/admin.jade and /views/login.jade. Based on the logged flag, which we set in /index.js, the script decides which one to show. If the user is not logged in, then a login form is sent to the browser, as follows:

extends layout

block content
  .container
    header
      h1 Administration
      hr
    section.articles
      article
        form(method="post", action="/admin")
          span Username:
          br
          input(type="text", name="username")
          br
          span Password:
          br
          input(type="password", name="password")
          br
          br
          input(type="submit", value="login")

There is no AngularJS code here. All we have is the good old HTML form, which submits its data via POST to the same URL, /admin. If the username and password are correct, the logged variable is set to true and the controller renders the other template:

extends layout

block content
  .container
    header
      h1 Administration
      hr
      a(href="/") Public
      span |
      a(href="#/") List
      span |
      a(href="#/add") Add
    section(ng-view)
  script(src='/admin.js')

The control panel needs several views to handle all the operations. AngularJS has a great router module, which works with hash-based URLs, that is, URLs such as /admin#/add. The same module requires a placeholder for the different partials. In our case, this is a section tag. The ng-view attribute tells the framework that this is the element prepared for that logic. At the end of the template, we add an external file, which keeps the whole client-side JavaScript code that is needed by the control panel.
While the client-side part of the application needs only the loading of the articles, the control panel requires a lot more functionality, so it is good to use the modular system of AngularJS here. We need the routes and views to change, so the ngRoute module is needed as a dependency. This module is not included in the main angular.min.js build; it is placed in the angular-route.min.js file. The following code shows how our module starts:

var admin = angular.module('admin', ['ngRoute']);
admin.config(['$routeProvider',
  function($routeProvider) {
    $routeProvider
    .when('/', {})
    .when('/add', {})
    .when('/edit/:id', {})
    .when('/delete/:id', {})
    .otherwise({
      redirectTo: '/'
    });
  }
]);

We configured the router by mapping URLs to specific routes. At the moment, the routes are just empty objects, but we will fix that shortly. Every controller will need to make HTTP requests to the Node.js part of the application, so it would be nice to have such a service and use it all over our code. We can see an example as follows:

admin.factory('API', function($http) {
  var request = function(method, url) {
    return function(callback, data) {
      $http({
        method: method,
        url: url,
        data: data
      })
      .success(callback)
      .error(function(data, status, headers, config) {
        console.error("Error requesting '" + url + "'.");
      });
    }
  }
  return {
    get: request('GET', '/api/get'),
    add: request('POST', '/api/add'),
    edit: request('POST', '/api/edit'),
    remove: request('POST', '/api/delete')
  }
});

One of the best things about AngularJS is that it works with plain JavaScript objects. There are no unnecessary abstractions and no extending or inheriting of special classes. We are using the .factory method to create a simple JavaScript object. It has four methods that can be called: get, add, edit, and remove. Each one of them calls a function that is produced by the helper method request. The service has only one dependency, $http. We already know this module; it handles HTTP requests nicely. The URLs that we are going to query are the same ones that we defined in the Node.js part.

Now, let's create a controller that will show the articles currently stored in the database. First, we should replace the empty route object .when('/', {}) with the following object:

.when('/', {
  controller: 'ListCtrl',
  template: '\
    <article ng-repeat="article in articles">\
      <hr />\
      <strong>{{article.title}}</strong><br />\
      (<a href="#/edit/{{article.id}}">edit</a>)\
      (<a href="#/delete/{{article.id}}">remove</a>)\
    </article>\
  '
})

The object has to contain a controller and a template. The template is nothing more than a few lines of HTML markup. It looks a bit like the template used to show the articles on the client side. The difference is the links used to edit and delete. JavaScript doesn't allow new lines in string definitions; the backslashes at the end of the lines prevent the syntax errors that would otherwise be thrown by the browser. The following is the code for the controller. It is defined, again, in the module:

admin.controller('ListCtrl', function($scope, API) {
  API.get(function(articles) {
    $scope.articles = articles;
  });
});

And here is the beauty of the AngularJS dependency injection: our custom-defined service API is automatically initialized and passed to the controller. The .get method fetches the articles from the database. Later, we send the information to the current $scope dependency and the two-way data binding does the rest. The articles are shown on the page.
The work with AngularJS is so easy that we can combine the controllers for adding and editing in one place. Let's store the route object in an external variable, as follows:

var AddEditRoute = {
  controller: 'AddEditCtrl',
  template: '\
    <hr />\
    <article>\
      <form>\
        <span>Title</span><br />\
        <input type="text" ng-model="article.title"/><br />\
        <span>Text</span><br />\
        <textarea rows="7" ng-model="article.text"></textarea>\
        <br /><br />\
        <button ng-click="save()">save</button>\
      </form>\
    </article>\
  '
};

And later, assign it to both of the routes, as follows:

.when('/add', AddEditRoute)
.when('/edit/:id', AddEditRoute)

The template is just a form with the necessary fields and a button that calls the save method in the controller. Notice that we bound the input field and the text area to variables inside the $scope dependency. This comes in handy because we don't need to access the DOM to get the values. We can see this as follows:

admin.controller(
  'AddEditCtrl',
  function($scope, API, $location, $routeParams) {
    var editMode = $routeParams.id ? true : false;
    if (editMode) {
      API.get(function(articles) {
        articles.forEach(function(article) {
          if (article.id == $routeParams.id) {
            $scope.article = article;
          }
        });
      });
    }
    $scope.save = function() {
      API[editMode ? 'edit' : 'add'](function() {
        $location.path('/');
      }, $scope.article);
    }
  })

The controller receives four dependencies. We already know about $scope and API. The $location dependency is used when we want to change the current route or, in other words, to forward the user to another view. The $routeParams dependency is needed to fetch parameters from the URL. In our case, /edit/:id is a route with a variable inside; inside the code, the id is available in $routeParams.id. The adding and editing of articles uses the same form. So, with a simple check, we know what the user is currently doing. If the user is in edit mode, then we fetch the article based on the provided id and fill the form. Otherwise, the fields are empty and a new record will be created.

The deletion of an article can be done by using a similar approach, that is, by adding a route object and defining a new controller. We can see the deletion as follows:

.when('/delete/:id', {
  controller: 'RemoveCtrl',
  template: ' '
})

We don't need a template in this case. Once the article is deleted from the database, we will forward the user to the list page. We have to call the remove method of the API. Here is how the RemoveCtrl controller looks:

admin.controller(
  'RemoveCtrl',
  function($scope, $location, $routeParams, API) {
    API.remove(function() {
      $location.path('/');
    }, $routeParams);
  }
);

The preceding code uses the same dependencies as the previous controller. This time, we simply forward the $routeParams dependency to the API. And because it is a plain JavaScript object, everything works as expected.

Summary

In this article, we built a simple blog by writing the backend of the application in Node.js. The module for database communication, which we wrote, can work with either the MongoDB or the MySQL database and store articles. The client-side part and the control panel of the blog were developed with AngularJS. We then defined a custom service using the built-in HTTP and routing mechanisms. Node.js works well with AngularJS, mainly because both are written in JavaScript. We found out that AngularJS is built to support the developer. It removes all those boring tasks such as DOM element referencing, attaching event listeners, and so on. It's a great choice for the modern client-side coding stack.


Types, Variables, and Function Techniques

Packt
16 Feb 2016
39 min read
This article is an introduction to the syntax used in the TypeScript language to apply strong typing to JavaScript. It is intended for readers who have not used TypeScript before, and covers the transition from standard JavaScript to TypeScript. We will cover the following topics in this article:

Basic types and type syntax: strings, numbers, and booleans
Inferred typing and duck-typing
Arrays and enums
The any type and explicit casting
Functions and anonymous functions
Optional and default function parameters
Argument arrays
Function callbacks and function signatures
Function scoping rules and overloads

(For more resources related to this topic, see here.)

Basic types

JavaScript variables can hold a number of data types, including numbers, strings, arrays, objects, functions, and more. The type of an object in JavaScript is determined by its assignment, so if a variable has been assigned a string value, it will be of type string. This can, however, introduce a number of problems in our code.

JavaScript is not strongly typed

JavaScript objects and variables can be changed or reassigned on the fly. As an example of this, consider the following JavaScript code:

var myString = "test";
var myNumber = 1;
var myBoolean = true;

We start by defining three variables, named myString, myNumber, and myBoolean. The myString variable is set to a string value of "test", and as such will be of type string. Similarly, myNumber is set to the value of 1, and is therefore of type number, and myBoolean is set to true, making it of type boolean. Now let's start assigning these variables to each other, as follows:

myString = myNumber;
myBoolean = myString;
myNumber = myBoolean;

We start by setting the value of myString to the value of myNumber (which is the numeric value of 1). We then set the value of myBoolean to the value of myString (which would now be the numeric value of 1). Finally, we set the value of myNumber to the value of myBoolean. What is happening here is that even though we started out with three different types of variables (a string, a number, and a boolean), we are able to reassign any of these variables to one of the other types. We can assign a number to a string, a string to a boolean, or a boolean to a number.

While this type of assignment in JavaScript is legal, it shows that the JavaScript language is not strongly typed. This can lead to unwanted behaviour in our code. Parts of our code may be relying on the fact that a particular variable is holding a string, and if we inadvertently assign a number to this variable, our code may start to break in unexpected ways.

TypeScript is strongly typed

TypeScript, on the other hand, is a strongly typed language. Once you have declared a variable to be of type string, you can only assign string values to it. All further code that uses this variable must treat it as though it has a type of string. This helps to ensure that the code we write will behave as expected. While strong typing may not seem to be of much use with simple strings and numbers, it certainly becomes important when we apply the same rules to objects, groups of objects, function definitions, and classes. If you have written a function that expects a string as the first parameter and a number as the second, you cannot be blamed if someone calls your function with a boolean as the first parameter and something else as the second.
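As a quick, hypothetical illustration of the problem, plain JavaScript will happily run the following and silently produce a useless result instead of an error:

function repeat(text, count) {
  // Intended usage: repeat("ab", 2) returns "abab"
  var result = "";
  for (var i = 0; i < count; i++) {
    result += text;
  }
  return result;
}

console.log(repeat("ab", 2));  // "abab", as intended
console.log(repeat(2, "ab"));  // "", the loop never runs, and nothing warns us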
JavaScript programmers have always relied heavily on documentation to understand how to call functions, and the order and type of the correct function parameters. But what if we could take all of this documentation and include it within the IDE? Then, as we write our code, our compiler could automatically point out that we were using objects and functions in the wrong way. Surely this would make us more efficient, more productive programmers, allowing us to generate code with fewer errors?

TypeScript does exactly that. It introduces a very simple syntax to define the type of a variable or a function parameter, ensuring that we are using these objects, variables, and functions in the correct manner. If we break any of these rules, the TypeScript compiler will automatically generate errors, pointing us to the lines of code that are in error. This is how TypeScript got its name: it is JavaScript with strong typing, hence TypeScript. Let's take a look at this very simple language syntax that enables the "Type" in TypeScript.

Type syntax

The TypeScript syntax for declaring the type of a variable is to include a colon (:) after the variable name, and then indicate its type. Consider the following TypeScript code:

var myString : string = "test";
var myNumber : number = 1;
var myBoolean : boolean = true;

This code snippet is the TypeScript equivalent of our preceding JavaScript code. We can now see an example of the TypeScript syntax for declaring a type for the myString variable. By including a colon and then the keyword string (: string), we are telling the compiler that the myString variable is of type string. Similarly, the myNumber variable is of type number, and the myBoolean variable is of type boolean. TypeScript has introduced the string, number, and boolean keywords for each of these basic JavaScript types.

If we attempt to assign a value to a variable that is not of the same type, the TypeScript compiler will generate a compile-time error. Given the variables declared in the preceding code, the following TypeScript code will generate some compile errors:

myString = myNumber;
myBoolean = myString;
myNumber = myBoolean;

TypeScript build errors when assigning incorrect types

The TypeScript compiler is generating compile errors because we are attempting to mix these basic types. The first error is generated because we cannot assign a number value to a variable of type string. Similarly, the second compile error indicates that we cannot assign a string value to a variable of type boolean. Again, the third error is generated because we cannot assign a boolean value to a variable of type number. The strong typing syntax that the TypeScript language introduces means that we need to ensure that the types on the left-hand side of an assignment operator (=) are the same as the types on the right-hand side of the assignment operator. To fix the preceding TypeScript code and remove the compile errors, we would need to do something similar to the following:

myString = myNumber.toString();
myBoolean = (myString === "test");
if (myBoolean) {
  myNumber = 1;
}

Our first line of code has been changed to call the .toString() function on the myNumber variable (which is of type number), in order to return a value that is of type string. This line of code, then, does not generate a compile error because both sides of the equal sign are of the same type.
Our second line of code has also been changed so that the right-hand side of the assignment operator returns the result of a comparison, myString === "test", which will return a value of type boolean. The compiler will therefore allow this code, because both sides of the assignment resolve to a value of type boolean. The last line of our code snippet has been changed to only assign the value 1 (which is of type number) to the myNumber variable if the value of the myBoolean variable is true. Anders Hejlsberg describes this feature as "syntactic sugar". With a little sugar on top of comparable JavaScript code, TypeScript has enabled our code to conform to strong typing rules. Whenever you break these strong typing rules, the compiler will generate errors for your offending code.

Inferred typing

TypeScript also uses a technique called inferred typing in cases where you do not explicitly specify the type of your variable. In other words, TypeScript will find the first usage of a variable within your code, figure out what type the variable is first initialized to, and then assume the same type for this variable in the rest of your code block. As an example of this, consider the following code:

var myString = "this is a string";
var myNumber = 1;
myNumber = myString;

We start by declaring a variable named myString and assigning a string value to it. TypeScript identifies that this variable has been assigned a value of type string and will, therefore, infer any further usages of this variable to be of type string. Our second variable, named myNumber, has a number assigned to it. Again, TypeScript infers the type of this variable to be of type number. If we then attempt to assign the myString variable (of type string) to the myNumber variable (of type number) in the last line of code, TypeScript will generate a familiar error message:

error TS2011: Build: Cannot convert 'string' to 'number'

This error is generated because of TypeScript's inferred typing rules.

Duck-typing

TypeScript also uses a method called duck-typing for more complex variable types. Duck-typing means that if it looks like a duck and quacks like a duck, then it probably is a duck. Consider the following TypeScript code:

var complexType = { name: "myName", id: 1 };
complexType = { id: 2, name: "anotherName" };

We start with a variable named complexType that has been assigned a simple JavaScript object with a name and an id property. On our second line of code, we can see that we are reassigning the value of this complexType variable to another object that also has an id and a name property. The compiler will use duck-typing in this instance to figure out whether this assignment is valid. In other words, if an object has the same set of properties as another object, then they are considered to be of the same type. To further illustrate this point, let's see how the compiler reacts if we attempt to assign an object to our complexType variable that does not conform to this duck-typing:

var complexType = { name: "myName", id: 1 };
complexType = { id: 2 };
complexType = { name: "anotherName" };
complexType = { address: "address" };

The first line of this code snippet defines our complexType variable and assigns to it an object that contains both an id and a name property. From this point, TypeScript will use this inferred type on any value we attempt to assign to the complexType variable. On our second line of code, we are attempting to assign a value that has an id property but not the name property.
On the third line of code, we again attempt to assign a value that has a name property but does not have an id property. On the last line of our code snippet, we have completely missed the mark. Compiling this code will generate the following errors:

error TS2012: Build: Cannot convert '{ id: number; }' to '{ name: string; id: number; }'
error TS2012: Build: Cannot convert '{ name: string; }' to '{ name: string; id: number; }'
error TS2012: Build: Cannot convert '{ address: string; }' to '{ name: string; id: number; }'

As we can see from the error messages, TypeScript is using duck-typing to ensure type safety. In each message, the compiler gives us clues as to what is wrong with the offending code, by explicitly stating what it is expecting. The complexType variable has both an id and a name property. To assign a value to the complexType variable, then, this value will need to have both an id and a name property. Working through each of these errors, TypeScript explicitly states what is wrong with each line of code. Note that the following code will not generate any error messages:

var complexType = { name: "myName", id: 1 };
complexType = { name: "name", id: 2, address: "address" };

Again, our first line of code defines the complexType variable, as we have seen previously, with an id and a name property. Now, look at the second line of this example. The object we are using actually has three properties: name, id, and address. Even though we have added a new address property, the compiler will only check whether our new object has both an id and a name. Because our new object has these properties, and will therefore match the original type of the variable, TypeScript will allow this assignment through duck-typing. Inferred typing and duck-typing are powerful features of the TypeScript language, bringing strong typing to our code without the need for explicit typing, that is, a colon (:) and then the type specifier.

Arrays

Besides the base JavaScript types of string, number, and boolean, TypeScript has two other data types: arrays and enums. Let's look at the syntax for defining arrays. An array is simply marked with the [] notation, similar to JavaScript, and each array can be strongly typed to hold a specific type, as seen in the code below:

var arrayOfNumbers: number[] = [1, 2, 3];
arrayOfNumbers = [3, 4, 5];
arrayOfNumbers = ["one", "two", "three"];

On the first line of this code snippet, we define an array named arrayOfNumbers, and further specify that each element of this array must be of type number. The second line then reassigns this array to hold some different numerical values. The last line of this snippet, however, will generate the following error message:

error TS2012: Build: Cannot convert 'string[]' to 'number[]'

This error message warns us that the arrayOfNumbers variable is strongly typed to only accept values of type number. Our code tries to assign an array of strings to this array of numbers, and is therefore generating a compile error.
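The same rule applies to the array's methods, not only to whole-array assignments. A small hypothetical sketch:

var names: string[] = ["Alice", "Bob"];
names.push("Carol");           // fine: a string is expected
names.push(42);                // compile error: a number is not a string
var first: string = names[0];  // elements come back strongly typed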
The any type

All this type checking is well and good, but JavaScript is flexible enough to allow variables to be mixed and matched. The following code snippet is actually valid JavaScript code:

var item1 = { id: 1, name: "item 1" };
item1 = { id: 2 };

Our first line of code assigns an object with an id property and a name property to the variable item1. The second line then reassigns this variable to an object that has an id property but not a name property. Unfortunately, as we have seen previously, TypeScript will generate a compile-time error for the preceding code:

error TS2012: Build: Cannot convert '{ id: number; }' to '{ id: number; name: string; }'

TypeScript introduces the any type for such occasions. Specifying that an object has a type of any, in essence, relaxes the compiler's strict type checking. The following code shows how to use the any type:

var item1 : any = { id: 1, name: "item 1" };
item1 = { id: 2 };

Note how our first line of code has changed. We specify the type of the variable item1 to be of type : any so that our code will compile without errors. Without the type specifier of : any, the second line of code would normally generate an error.

Explicit casting

As with any strongly typed language, there comes a time when you need to explicitly specify the type of an object. An object can be cast to the type of another by using the < > syntax. This is not a cast in the strictest sense of the word; it is more of an assertion that is used at compile time by the TypeScript compiler. Any explicit casting that you use will be compiled away in the resultant JavaScript and will not affect the code at runtime. Let's modify our previous code snippet to use explicit casting:

var item1 = <any>{ id: 1, name: "item 1" };
item1 = { id: 2 };

Note that on the first line of this snippet, we have now replaced the : any type specifier on the left-hand side of the assignment with an explicit cast of <any> on the right-hand side. This snippet of code tells the compiler to explicitly cast, or to explicitly treat, the { id: 1, name: "item 1" } object on the right-hand side as a type of any. So the item1 variable, therefore, also has the type of any (due to TypeScript's inferred typing rules). This then allows us to assign an object with only the { id: 2 } property to the variable item1 on the second line of code. This technique of using the < > syntax on the right-hand side of an assignment is called explicit casting.

While the any type is a necessary feature of the TypeScript language, its usage should really be limited as much as possible. It is a language shortcut that is necessary to ensure compatibility with JavaScript, but over-use of the any type will quickly lead to coding errors that will be difficult to find. Rather than using the type any, try to figure out the correct type of the object you are using, and then use this type instead. We use an acronym within our programming teams: S.F.I.A.T. (pronounced sviat or sveat), which stands for Simply Find an Interface for the Any Type. While this may sound silly, it brings home the point that the any type should always be replaced with an interface, so simply find it. Just remember that by actively trying to define what an object's type should be, we are building strongly typed code, and therefore protecting ourselves from future coding errors and bugs.
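As a hypothetical example of S.F.I.A.T. in action, the item1 variable from the preceding snippets can be described by an interface with an optional property, which keeps the flexibility without falling back to any:

interface IItem {
  id: number;
  name?: string;  // the ? marks the property as optional
}

var item1: IItem = { id: 1, name: "item 1" };
item1 = { id: 2 };  // compiles, because name is optional on IItem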
Enums

Enums are a special type that has been borrowed from other languages such as C#, and provide a solution to the problem of special numbers. An enum associates a human-readable name with a specific number. Consider the following code:

enum DoorState {
  Open,
  Closed,
  Ajar
}

In this code snippet, we have defined an enum called DoorState to represent the state of a door. Valid values for this door state are Open, Closed, or Ajar. Under the hood (in the generated JavaScript), TypeScript will assign a numeric value to each of these human-readable enum values. In this example, the DoorState.Open enum value will equate to a numeric value of 0. Likewise, the enum value DoorState.Closed will equate to the numeric value of 1, and the DoorState.Ajar enum value will equate to 2. Let's have a quick look at how we would use these enum values:

window.onload = () => {
  var myDoor = DoorState.Open;
  console.log("My door state is " + myDoor.toString());
};

The first line within the window.onload function creates a variable named myDoor and sets its value to DoorState.Open. The second line simply logs the value of myDoor to the console. The output of this console.log function would be:

My door state is 0

This clearly shows that the TypeScript compiler has substituted the enum value of DoorState.Open with the numeric value 0. Now let's use this enum in a slightly different way:

window.onload = () => {
  var openDoor = DoorState["Closed"];
  console.log("My door state is " + openDoor.toString());
};

This code snippet uses a string value of "Closed" to look up the enum type, and assigns the resulting enum value to the openDoor variable. The output of this code would be:

My door state is 1

This sample clearly shows that the enum value of DoorState.Closed is the same as the enum value of DoorState["Closed"], because both variants resolve to the numeric value of 1. Finally, let's have a look at what happens when we reference an enum using an array-type syntax:

window.onload = () => {
  var ajarDoor = DoorState[2];
  console.log("My door state is " + ajarDoor.toString());
};

Here, we assign the variable ajarDoor to an enum value based on the index value of 2 in the DoorState enum. The output of this code, though, is surprising:

My door state is Ajar

You may have been expecting the output to be simply 2, but here we are getting the string "Ajar", which is a string representation of our original enum name. This is actually a neat little trick, allowing us to access a string representation of our enum value. The reason that this is possible is down to the JavaScript that has been generated by the TypeScript compiler. Let's have a look, then, at the closure that the TypeScript compiler has generated:

var DoorState;
(function (DoorState) {
  DoorState[DoorState["Open"] = 0] = "Open";
  DoorState[DoorState["Closed"] = 1] = "Closed";
  DoorState[DoorState["Ajar"] = 2] = "Ajar";
})(DoorState || (DoorState = {}));

This strange-looking syntax is building an object that has a specific internal structure. It is this internal structure that allows us to use the enum in the various ways that we have just explored. If we interrogate this structure while debugging our JavaScript, we will see the internal structure of the DoorState object as follows:

DoorState {...}
  [0]: "Open"
  [1]: "Closed"
  [2]: "Ajar"
  Ajar: 2
  Closed: 1
  Open: 0

The DoorState object has a property called "0", which has a string value of "Open". Unfortunately, in JavaScript the number 0 is not a valid property name, so we cannot access this property by simply using DoorState.0. Instead, we must access this property using either DoorState[0] or DoorState["0"]. The DoorState object also has a property named Open, which is set to the numeric value 0. The word Open IS a valid property name in JavaScript, so we can access this property using DoorState["Open"] or simply DoorState.Open, which equate to the same property in JavaScript. While the underlying JavaScript can be a little confusing, all we need to remember about enums is that they are a handy way of defining an easily remembered, human-readable name for a special number.
Using human-readable enums, instead of just scattering various special numbers around in our code, also makes the intent of the code clearer. Using an application wide value named DoorState.Open or DoorState.Closed is far simpler than remembering to set a value to 0 for Open, 1 for Closed, and 2 for Ajar. As well as making our code more readable, and more maintainable, using enums also protects our code base whenever these special numeric values change, because they are all defined in one place. One last note on enums: we can set the numeric value manually, if needs be:

enum DoorState {
    Open = 3,
    Closed = 7,
    Ajar = 10
}

Here, we have overridden the default values of the enum to set DoorState.Open to 3, DoorState.Closed to 7, and DoorState.Ajar to 10.

Const enums

With the release of TypeScript 1.4, we are also able to define const enums as follows:

const enum DoorStateConst {
    Open,
    Closed,
    Ajar
}

var myState = DoorStateConst.Open;

These types of enums have been introduced largely for performance reasons, and the resultant JavaScript will not contain the full closure definition for the DoorStateConst enum as we saw previously. Let's have a quick look at the JavaScript that is generated from this DoorStateConst enum:

var myState = 0 /* Open */;

Note how we do not have a full JavaScript closure for the DoorStateConst at all. The compiler has simply resolved the DoorStateConst.Open enum to its internal value of 0, and removed the const enum definition entirely. With const enums, we therefore cannot reference the internal string value of an enum, as we did in our previous code sample. Consider the following example:

// generates an error
console.log(DoorStateConst[0]);
// valid usage
console.log(DoorStateConst["Open"]);

The first console.log statement will now generate a compile time error, as we do not have the full closure available with the property of [0] for our const enum. The second usage of this const enum is valid, however, and will generate the following JavaScript:

console.log(0 /* "Open" */);

When using const enums, just keep in mind that the compiler will strip away all enum definitions and simply substitute the numeric value of the enum directly into our JavaScript code.

Functions

JavaScript defines functions using the function keyword, a set of parentheses, and then a set of curly braces. A typical JavaScript function would be written as follows:

function addNumbers(a, b) {
    return a + b;
}

var result = addNumbers(1, 2);
var result2 = addNumbers("1", "2");

This code snippet is fairly self-explanatory; we have defined a function named addNumbers that takes two variables and returns their sum. We then invoke this function, passing in the values of 1 and 2. The value of the variable result would then be 1 + 2, which is 3. Now have a look at the last line of code. Here, we are invoking the addNumbers function, passing in two strings as arguments, instead of numbers. The value of the variable result2 would then be a string, "12". This string value seems like it may not be the desired result, as the name of the function is addNumbers.
Copying the preceding code into a TypeScript file would not generate any errors, but let's insert some type rules to the preceding JavaScript to make it more robust:

function addNumbers(a: number, b: number): number {
    return a + b;
};

var result = addNumbers(1, 2);
var result2 = addNumbers("1", "2");

In this TypeScript code, we have added a :number type to both of the parameters of the addNumbers function (a and b), and we have also added a :number type just after the ( ) parentheses. Placing a type descriptor here means that the return type of the function itself is strongly typed to return a value of type number. In TypeScript, the last line of code, however, will cause a compilation error:

error TS2082: Build: Supplied parameters do not match any signature of call target:

This error message is generated because we have explicitly stated that the function should accept only numbers for both of the arguments a and b, but in our offending code, we are passing two strings. The TypeScript compiler, therefore, cannot match the signature of a function named addNumbers that accepts two arguments of type string.

Anonymous functions

The JavaScript language also has the concept of anonymous functions. These are functions that are defined on the fly and don't specify a function name. Consider the following JavaScript code:

var addVar = function(a, b) {
    return a + b;
};

var result = addVar(1, 2);

This code snippet defines a function that has no name and adds two values. Because the function does not have a name, it is known as an anonymous function. This anonymous function is then assigned to a variable named addVar. The addVar variable can then be invoked as a function with two parameters, and the return value will be the result of executing the anonymous function. In this case, the variable result will have a value of 3. Let's now rewrite the preceding JavaScript function in TypeScript, and add some type syntax, in order to ensure that the function only accepts two arguments of type number, and returns a value of type number:

var addVar = function(a: number, b: number): number {
    return a + b;
}

var result = addVar(1, 2);
var result2 = addVar("1", "2");

In this code snippet, we have created an anonymous function that accepts only arguments of type number for the parameters a and b, and also returns a value of type number. The types for both the a and b parameters, as well as the return type of the function, are now using the :number syntax. This is another example of the simple "syntactic sugar" that TypeScript injects into the language. If we compile this code, TypeScript will reject the code on the last line, where we try to call our anonymous function with two string parameters:

error TS2082: Build: Supplied parameters do not match any signature of call target:

Optional parameters

When we call a JavaScript function that is expecting parameters, and we do not supply these parameters, then the value of the parameter within the function will be undefined. As an example of this, consider the following JavaScript code:

var concatStrings = function(a, b, c) {
    return a + b + c;
}

console.log(concatStrings("a", "b", "c"));
console.log(concatStrings("a", "b"));

Here, we have defined a function called concatStrings that takes three parameters, a, b, and c, and simply returns the sum of these values. If we call this function with all three parameters, as seen in the second-to-last line of this snippet, we will end up with the string "abc" logged to the console.
If, however, we only supply two parameters, as seen in the last line of this snippet, the string "abundefined" will be logged to the console. Again, if we call a function and do not supply a parameter, then this parameter, c in our case, will be simply undefined. TypeScript introduces the question mark ? syntax to indicate optional parameters. Consider the following TypeScript function definition:

var concatStrings = function(a: string, b: string, c?: string) {
    return a + b + c;
}

console.log(concatStrings("a", "b", "c"));
console.log(concatStrings("a", "b"));
console.log(concatStrings("a"));

This is a strongly typed version of the original concatStrings JavaScript function that we were using previously. Note the addition of the ? character in the syntax for the third parameter: c?: string. This indicates that the third parameter is optional, and therefore, all of the preceding code will compile cleanly, except for the last line. The last line will generate an error:

error TS2081: Build: Supplied parameters do not match any signature of call target.

This error is generated because we are attempting to call the concatStrings function with only a single parameter. Our function definition, though, requires at least two parameters, with only the third parameter being optional. The optional parameters must be the last parameters in the function definition. You can have as many optional parameters as you want, as long as non-optional parameters precede the optional parameters.

Default parameters

A subtle variant on the optional parameter function definition allows us to specify the value of a parameter if it is not passed in as an argument from the calling code. Let's modify our preceding function definition to use a default parameter:

var concatStrings = function(a: string, b: string, c: string = "c") {
    return a + b + c;
}

console.log(concatStrings("a", "b", "c"));
console.log(concatStrings("a", "b"));

This function definition has now dropped the ? optional parameter syntax, but instead has assigned a value of "c" to the last parameter: c:string = "c". By using default parameters, if we do not supply a value for the final parameter named c, the concatStrings function will substitute the default value of "c" instead. The argument c, therefore, will not be undefined. The output of the last two lines of code will both be "abc". Note that using the default parameter syntax will automatically make the parameter optional.

The arguments variable

The JavaScript language allows a function to be called with a variable number of arguments. Every JavaScript function has access to a special variable, named arguments, that can be used to retrieve all arguments that have been passed into the function. As an example of this, consider the following JavaScript code:

function testParams() {
    if (arguments.length > 0) {
        for (var i = 0; i < arguments.length; i++) {
            console.log("Argument " + i + " = " + arguments[i]);
        }
    }
}

testParams(1, 2, 3, 4);
testParams("first argument");

In this code snippet, we have defined a function named testParams that does not have any named parameters. Note, though, that we can use the special variable, named arguments, to test whether the function was called with any arguments. In our sample, we can simply loop through the arguments array, and log the value of each argument to the console, by using an array indexer: arguments[i].
The output of the console.log calls is as follows:

Argument 0 = 1
Argument 1 = 2
Argument 2 = 3
Argument 3 = 4
Argument 0 = first argument

So, how do we express a variable number of function parameters in TypeScript? The answer is to use what are called rest parameters, or the three dots (...) syntax. Here is the equivalent testParams function, expressed in TypeScript:

function testParams(...argArray: number[]) {
    if (argArray.length > 0) {
        for (var i = 0; i < argArray.length; i++) {
            console.log("argArray " + i + " = " + argArray[i]);
            console.log("arguments " + i + " = " + arguments[i]);
        }
    }
}

testParams(1);
testParams(1, 2, 3, 4);
testParams("one", "two");

Note the use of the ...argArray: number[] syntax for our testParams function. This syntax is telling the TypeScript compiler that the function can accept any number of arguments. This means that our usages of this function, that is, calling the function with either testParams(1) or testParams(1,2,3,4), will both compile correctly. In this version of the testParams function, we have added two console.log lines, just to show that the arguments array can be accessed by either the named rest parameter, argArray[i], or through the normal JavaScript array, arguments[i]. The last line in this sample will, however, generate a compile error, as we have defined the rest parameter to only accept numbers, and we are attempting to call the function with strings.

The subtle difference between using argArray and arguments is the inferred type of the argument. Since we have explicitly specified that argArray is of type number, TypeScript will treat any item of the argArray array as a number. However, the internal arguments array does not have an inferred type, and so will be treated as the any type. We can also combine normal parameters along with rest parameters in a function definition, as long as the rest parameters are the last to be defined in the parameter list, as follows:

function testParamsTs2(arg1: string, arg2: number, ...argArray: number[]) {
}

Here, we have two normal parameters named arg1 and arg2 and then an argArray rest parameter. Mistakenly placing the rest parameter at the beginning of the parameter list will generate a compile error.

Function callbacks

One of the most powerful features of JavaScript, and in fact the technology that Node was built on, is the concept of callback functions. A callback function is a function that is passed into another function. Remember that JavaScript is not strongly typed, so a variable can also be a function. This is best illustrated by having a look at some JavaScript code:

function myCallBack(text) {
    console.log("inside myCallback " + text);
}

function callingFunction(initialText, callback) {
    console.log("inside CallingFunction");
    callback(initialText);
}

callingFunction("myText", myCallBack);

Here, we have a function named myCallBack that takes a parameter and logs its value to the console. We then define a function named callingFunction that takes two parameters: initialText and callback. The first line of this function simply logs "inside CallingFunction" to the console. The second line of the callingFunction is the interesting bit. It assumes that the callback argument is in fact a function, and invokes it. It also passes the initialText variable to the callback function. If we run this code, we will get two messages logged to the console, as follows:

inside CallingFunction
inside myCallback myText

But what happens if we do not pass a function as a callback?
There is nothing in the preceding code that signals to us that the second parameter of callingFunction must be a function. If we inadvertently called the callingFunction function with a string, instead of a function as the second parameter, as follows:

callingFunction("myText", "this is not a function");

We would get a JavaScript runtime error:

0x800a138a - JavaScript runtime error: Function expected

Defensive minded programmers, however, would first check whether the callback parameter was in fact a function before invoking it, as follows:

function callingFunction(initialText, callback) {
    console.log("inside CallingFunction");
    if (typeof callback === "function") {
        callback(initialText);
    } else {
        console.log(callback + " is not a function");
    }
}

callingFunction("myText", "this is not a function");

Note the third line of this code snippet, where we check the type of the callback variable before invoking it. If it is not a function, we then log a message to the console. On the last line of this snippet, we are executing the callingFunction, but this time passing a string as the second parameter. The output of the code snippet would be:

inside CallingFunction
this is not a function is not a function

When using function callbacks, then, JavaScript programmers need to do two things; firstly, understand which parameters are in fact callbacks, and secondly, code around the invalid use of callback functions.

Function signatures

The TypeScript "syntactic sugar" that enforces strong typing is not only intended for variables and types, but for function signatures as well. What if we could document our JavaScript callback functions in code, and then warn users of our code when they are passing the wrong type of parameter to our functions? TypeScript does this through function signatures. A function signature introduces a fat arrow syntax, () =>, to define what the function should look like. Let's re-write the preceding JavaScript sample in TypeScript:

function myCallBack(text: string) {
    console.log("inside myCallback " + text);
}

function callingFunction(initialText: string, callback: (text: string) => void) {
    callback(initialText);
}

callingFunction("myText", myCallBack);
callingFunction("myText", "this is not a function");

Our first function definition, myCallBack, now strongly types the text parameter to be of type string. Our callingFunction function has two parameters; initialText, which is of type string, and callback, which now has the new function signature syntax. Let's look at this function signature more closely:

callback: (text: string) => void

What this function definition is saying is that the callback argument is typed (by the : syntax) to be a function, using the fat arrow syntax () =>. Additionally, this function takes a parameter named text that is of type string. To the right of the fat arrow syntax, we can see a new TypeScript basic type, called void. Void is a keyword to denote that a function does not return a value. So, the callingFunction function will only accept, as its second argument, a function that takes a single string parameter and returns nothing.
Compiling the preceding code will correctly highlight an error in the last line of the code snippet, where we are passing a string as the second parameter, instead of a callback function:

error TS2082: Build: Supplied parameters do not match any signature of call target: Type '(text: string) => void' requires a call signature, but type 'String' lacks one

Given the preceding function signature for the callback function, the following code would also generate compile time errors:

function myCallBackNumber(arg1: number) {
    console.log("arg1 = " + arg1);
}

callingFunction("myText", myCallBackNumber);

Here, we are defining a function named myCallBackNumber, that takes a number as its only parameter. When we attempt to compile this code, we will get an error message indicating that the callback parameter, which is our myCallBackNumber function, also does not have the correct function signature:

Call signatures of types 'typeof myCallBackNumber' and '(text: string) => void' are incompatible.

The function signature of myCallBackNumber would actually be (arg1: number) => void, instead of the required (text: string) => void, hence the error. In function signatures, the parameter name (arg1 or text) does not need to be the same. Only the number of parameters, their types, and the return type of the function need to be the same. This is a very powerful feature of TypeScript: defining in code what the signatures of functions should be, and warning users when they do not call a function with the correct parameters. As we saw in our introduction to TypeScript, this is most significant when we are working with third-party libraries. Before we are able to use third-party functions, classes, or objects in TypeScript, we need to define what their function signatures are. These function definitions are put into a special type of TypeScript file, called a declaration file, and saved with a .d.ts extension.

Function callbacks and scope

JavaScript uses lexical scoping rules to define the valid scope of a variable. This means that the value of a variable is defined by its location within the source code. Nested functions have access to variables that are defined in their parent scope. As an example of this, consider the following TypeScript code:

function testScope() {
    var testVariable = "myTestVariable";
    function print() {
        console.log(testVariable);
    }
}

console.log(testVariable);

This code snippet defines a function named testScope. The variable testVariable is defined within this function. The print function is a child function of testScope, so it has access to the testVariable variable. The last line of the code, however, will generate a compile error, because it is attempting to use the variable testVariable, which is lexically scoped to be valid only inside the body of the testScope function:

error TS2095: Build: Could not find symbol 'testVariable'.

Simple, right? A nested function has access to variables depending on its location within the source code. This is all well and good, but in large JavaScript projects, there are many different files and many areas of the code are designed to be re-usable. Let's take a look at how these scoping rules can become a problem. For this sample, we will use a typical callback scenario: using jQuery to execute an asynchronous call to fetch some data.
Consider the following TypeScript code:

var testVariable = "testValue";

function getData() {
    var testVariable_2 = "testValue_2";
    $.ajax(
        {
            url: "/sample_json.json",
            success: (data, status, jqXhr) => {
                console.log("success : testVariable is :" + testVariable);
                console.log("success : testVariable_2 is :" + testVariable_2);
            },
            error: (message, status, stack) => {
                alert("error " + message);
            }
        }
    );
}

getData();

In this code snippet, we are defining a variable named testVariable and setting its value. We then define a function called getData. The getData function sets another variable called testVariable_2, and then calls the jQuery $.ajax function. The $.ajax function is configured with three properties: url, success, and error. The url property is a simple string that points to a sample_json.json file in our project directory. The success property is an anonymous function callback, that simply logs the values of testVariable and testVariable_2 to the console. Finally, the error property is also an anonymous function callback, that simply pops up an alert. This code runs as expected, and the success function will log the following results to the console:

success : testVariable is :testValue
success : testVariable_2 is :testValue_2

So far so good. Now, let's assume that we are trying to refactor the preceding code, as we are doing quite a few similar $.ajax calls, and want to reuse the success callback function elsewhere. We can easily switch out this anonymous function, and create a named function for our success callback, as follows:

var testVariable = "testValue";

function getData() {
    var testVariable_2 = "testValue_2";
    $.ajax(
        {
            url: "/sample_json.json",
            success: successCallback,
            error: (message, status, stack) => {
                alert("error " + message);
            }
        }
    );
}

function successCallback(data, status, jqXhr) {
    console.log("success : testVariable is :" + testVariable);
    console.log("success : testVariable_2 is :" + testVariable_2);
}

getData();

In this sample, we have created a new function named successCallback with the same parameters as our previous anonymous function. We have also modified the $.ajax call to simply pass this function in, as a callback function for the success property: success: successCallback. If we were to compile this code now, TypeScript would generate an error, as follows:

error TS2095: Build: Could not find symbol 'testVariable_2'.

Since we have changed the lexical scope of our code, by creating a named function, the new successCallback function no longer has access to the variable testVariable_2. It is fairly easy to spot this sort of error in a trivial example, but in larger projects, and when using third-party libraries, these sorts of errors become more difficult to track down. It is, therefore, worth mentioning that when using callback functions, we need to understand this lexical scope. If your code expects a property to have a value, and it does not have one after a callback, then remember to have a look at the context of the calling code.

Function overloads

As JavaScript is a dynamic language, we can often call the same function with different argument types. Consider the following JavaScript code:

function add(x, y) {
    return x + y;
}

console.log("add(1,1)=" + add(1,1));
console.log("add('1','1')=" + add("1", "1"));
console.log("add(true,false)=" + add(true, false));

Here, we are defining a simple add function that returns the sum of its two parameters, x and y.
The last three lines of this code snippet simply log the result of the add function with different types: two numbers, two strings, and two boolean values. If we run this code, we will see the following output:

add(1,1)=2
add('1','1')=11
add(true,false)=1

TypeScript introduces a specific syntax to indicate multiple function signatures for the same function. If we were to replicate the preceding code in TypeScript, we would need to use the function overload syntax:

function add(arg1: string, arg2: string): string;
function add(arg1: number, arg2: number): number;
function add(arg1: boolean, arg2: boolean): boolean;
function add(arg1: any, arg2: any): any {
    return arg1 + arg2;
}

console.log("add(1,1)=" + add(1, 1));
console.log("add('1','1')=" + add("1", "1"));
console.log("add(true,false)=" + add(true, false));

The first line of this code snippet specifies a function overload signature for the add function that accepts two strings and returns a string. The second line specifies another function overload that uses numbers, and the third line uses booleans. The fourth line contains the actual body of the function and uses the type specifier of any. The last three lines of this snippet show how we would use these function signatures, and are similar to the JavaScript code that we have been using previously.

There are three points of interest in the preceding code snippet. Firstly, none of the function signatures on the first three lines of the snippet actually have a function body. Secondly, the final function definition uses the type specifier of any and eventually includes the function body. The function overload syntax must follow this structure, and the final function signature, which includes the body of the function, must use the any type specifier, as anything else will generate compile-time errors. The third point to note is that we are limiting the add function, by using these function overload signatures, to only accept two parameters that are of the same type. If we were to try to mix our types, for example, by calling the function with a boolean and a string, as follows:

console.log("add(true,'1')", add(true, "1"));

TypeScript would generate compile errors:

error TS2082: Build: Supplied parameters do not match any signature of call target:
error TS2087: Build: Could not select overload for 'call' expression.

This seems to contradict our final function definition though. In the original TypeScript sample, we had a function signature that accepted (arg1: any, arg2: any); so, in theory, this should be called when we try to add a boolean and a string. The TypeScript syntax for function overloads, however, does not allow this. Remember that the function overload syntax must include the use of the any type for the function body, as all overloads eventually call this function body. However, the inclusion of the function overloads above the function body indicates to the compiler that these are the only signatures that should be available to the calling code.

Summary

To learn more about TypeScript, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

Learning TypeScript (https://www.packtpub.com/web-development/learning-typescript)
TypeScript Essentials (https://www.packtpub.com/web-development/typescript-essentials)

Resources for Article:

Further resources on this subject:

Introduction to TypeScript [article]
Writing SOLID JavaScript code with TypeScript [article]
JavaScript Execution with Selenium [article]

Changing Views

Packt
15 Feb 2016
25 min read
In this article by Christopher Pitt, author of the book React Components, we will see how to change sections without reloading the page. We'll use this knowledge to create the public pages of the website our CMS is meant to control. (For more resources related to this topic, see here.)

Location, location, and location

Before we can learn about alternatives to reloading pages, let's take a look at how the browser manages reloads. You've probably encountered the window object. It's a global catch-all object for browser-based functionality and state. It's also the default this scope in any HTML page. We've even accessed window before. When we rendered to document.body or used document.querySelector, the window object was assumed. It's the same as if we were to call window.document.querySelector. Most of the time document is the only property we need. That doesn't mean it's the only property useful to us. Try the following in the console:

console.log(window.location);

You should see something similar to:

Location {
    hash: ""
    host: "127.0.0.1:3000"
    hostname: "127.0.0.1"
    href: "http://127.0.0.1:3000/examples/login.html"
    origin: "http://127.0.0.1:3000"
    pathname: "/examples/login.html"
    port: "3000"
    ...
}

If we were trying to work out which components to show based on the browser URL, this would be an excellent place to start. Not only can we read from this object, but we can also write to it:

<script>
    window.location.href = "http://material-ui.com";
</script>

Putting this in an HTML page or entering that line of JavaScript in the console will redirect the browser to material-ui.com. It's the same if you click on the link. And if it's to a different page (than the one the browser is pointing at), then it will cause a full page reload.

A bit of history

So how does this help us? We're trying to avoid full page reloads, after all. Let's experiment with this object. Let's see what happens when we add something like #page-admin to the URL. Adding #page-admin to the URL leads to the window.location.hash property being populated with that value. What's more, changing the hash value won't reload the page! It's the same as if we clicked on a link that had that hash in the href attribute. We can modify it without causing full page reloads, and each modification will store a new entry in the browser history. Using this trick, we can step through a number of different "states" without reloading the page, and be able to backtrack each with the browser's back button.

Using browser history

Let's put this trick to use in our CMS.
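Before we wire it into components, here is a minimal standalone sketch of the trick, assuming only a browser environment (the section name is a placeholder):

// Changing the hash adds a history entry without reloading the page.
window.location.hash = "#page-admin";

// The browser fires a hashchange event on back/forward or manual edits.
window.addEventListener("hashchange", () => {
    console.log("current section is " + window.location.hash.slice(1));
});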
First, let's add a couple functions to our Nav component:

export default (props) => {
    // ...define class names

    var redirect = (event, section) => {
        window.location.hash = `#${section}`;
        event.preventDefault();
    }

    return <div className={drawerClassNames}>
        <header className="demo-drawer-header">
            <img src="images/user.jpg"
                 className="demo-avatar" />
        </header>
        <nav className={navClassNames}>
            <a className="mdl-navigation__link"
               href="/examples/login.html"
               onClick={(e) => redirect(e, "login")}>
                <i className={buttonIconClassNames}
                   role="presentation">
                    lock
                </i>
                Login
            </a>
            <a className="mdl-navigation__link"
               href="/examples/page-admin.html"
               onClick={(e) => redirect(e, "page-admin")}>
                <i className={buttonIconClassNames}
                   role="presentation">
                    pages
                </i>
                Pages
            </a>
        </nav>
    </div>;
};

We add an onClick attribute to our navigation links. We've created a special function that will change window.location.hash and prevent the default full page reload behavior the links would otherwise have caused. This is a neat use of arrow functions, but we're ultimately creating new functions in each render call. Remember that this can be expensive, so it's best to move the function creation out of render. We'll replace this shortly. It's also interesting to see template strings in action. Instead of "#" + section, we can use `#${section}` to interpolate the section name. It's not as useful in small strings, but becomes increasingly useful in large ones. Clicking on the navigation links will now change the URL hash.
We can add to this behavior by rendering different components when the navigation links are clicked:

import React from "react";
import ReactDOM from "react-dom";
import Component from "src/component";
import Login from "src/login";
import Backend from "src/backend";
import PageAdmin from "src/page-admin";

class Nav extends Component {
    render() {
        // ...define class names

        return <div className={drawerClassNames}>
            <header className="demo-drawer-header">
                <img src="images/user.jpg"
                     className="demo-avatar" />
            </header>
            <nav className={navClassNames}>
                <a className="mdl-navigation__link"
                   href="/examples/login.html"
                   onClick={(e) => this.redirect(e, "login")}>
                    <i className={buttonIconClassNames}
                       role="presentation">
                        lock
                    </i>
                    Login
                </a>
                <a className="mdl-navigation__link"
                   href="/examples/page-admin.html"
                   onClick={(e) => this.redirect(e, "page-admin")}>
                    <i className={buttonIconClassNames}
                       role="presentation">
                        pages
                    </i>
                    Pages
                </a>
            </nav>
        </div>;
    }

    redirect(event, section) {
        window.location.hash = `#${section}`;

        var component = null;

        switch (section) {
            case "login":
                component = <Login />;
                break;
            case "page-admin":
                var backend = new Backend();
                component = <PageAdmin backend={backend} />;
                break;
        }

        var layoutClassNames = [
            "demo-layout",
            "mdl-layout",
            "mdl-js-layout",
            "mdl-layout--fixed-drawer"
        ].join(" ");

        ReactDOM.render(
            <div className={layoutClassNames}>
                <Nav />
                {component}
            </div>,
            document.querySelector(".react")
        );

        event.preventDefault();
    }
};

export default Nav;

We've had to convert the Nav function to a Nav class. We want to create the redirect method outside of render (as that is more efficient) and also isolate the choice of which component to render. Using a class also gives us a way to name and reference Nav, so we can create a new instance to overwrite it from within the redirect method. It's not ideal packaging this kind of code within a component, so we'll clean that up in a bit. We can now switch between different sections without full page reloads. There is one problem still to solve. When we use the browser back button, the components don't change to reflect the component that should be shown for each hash. We can solve this in a couple of ways.
The first approach we can try is checking the hash frequently:

componentDidMount() {
    var hash = window.location.hash;

    setInterval(() => {
        if (hash !== window.location.hash) {
            hash = window.location.hash;
            this.redirect(null, hash.slice(1), true);
        }
    }, 100);
}

redirect(event, section, respondingToHashChange = false) {
    if (!respondingToHashChange) {
        window.location.hash = `#${section}`;
    }

    var component = null;

    switch (section) {
        case "login":
            component = <Login />;
            break;
        case "page-admin":
            var backend = new Backend();
            component = <PageAdmin backend={backend} />;
            break;
    }

    var layoutClassNames = [
        "demo-layout",
        "mdl-layout",
        "mdl-js-layout",
        "mdl-layout--fixed-drawer"
    ].join(" ");

    ReactDOM.render(
        <div className={layoutClassNames}>
            <Nav />
            {component}
        </div>,
        document.querySelector(".react")
    );

    if (event) {
        event.preventDefault();
    }
}

Our redirect method has an extra parameter, to apply the new hash whenever we're not responding to a hash change. We've also wrapped the call to event.preventDefault in case we don't have a click event to work with. Other than those changes, the redirect method is the same. We've also added a componentDidMount method, in which we have a call to setInterval. We store the initial window.location.hash and check 10 times a second to see if it has changed. The hash value is #login or #page-admin, so we slice the first character off and pass the rest to the redirect method. Try clicking on the different navigation links, and then use the browser back button. The second option is to use the newish history API: the window.history.pushState method together with the window's popstate event. It's not very well supported in older browsers yet, so you need to be careful to handle those browsers, or be sure you don't need to support them. You can learn more about pushState and popstate at https://developer.mozilla.org/en-US/docs/Web/API/History_API.
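For completeness, here is a minimal sketch of that second option, assuming the same redirect(event, section, respondingToHashChange) method shown above; the redirectWithHistory name is our own placeholder, not from the original code:

componentDidMount() {
    // Re-render the correct section when the user presses back/forward.
    window.addEventListener("popstate", (event) => {
        if (event.state && event.state.section) {
            this.redirect(null, event.state.section, true);
        }
    });
}

redirectWithHistory(event, section) {
    // Store the section in the history entry rather than polling the hash.
    window.history.pushState({ section: section }, "", `#${section}`);
    this.redirect(event, section, true);
}

Either approach gets the back button working; the polling version trades elegance for broader browser support.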
Using a router

Our hash code is functional but invasive. We shouldn't be calling the render method from inside a component (at least not one we own). So instead, we're going to use a popular router to manage this stuff for us. Download it with the following:

$ npm install react-router --save

Then we need to join login.html and page-admin.html back into the same file:

<!DOCTYPE html>
<html>
    <head>
        <script src="/node_modules/babel-core/browser.js"></script>
        <script src="/node_modules/systemjs/dist/system.js"></script>
        <script src="https://storage.googleapis.com/code.getmdl.io/1.0.6/material.min.js"></script>
        <link rel="stylesheet" href="https://storage.googleapis.com/code.getmdl.io/1.0.6/material.indigo-pink.min.css" />
        <link rel="stylesheet" href="https://fonts.googleapis.com/icon?family=Material+Icons" />
        <link rel="stylesheet" href="admin.css" />
    </head>
    <body class="
        mdl-demo
        mdl-color--grey-100
        mdl-color-text--grey-700
        mdl-base">
        <div class="react"></div>
        <script>
            System.config({
                "transpiler": "babel",
                "map": {
                    "react": "/examples/react/react",
                    "react-dom": "/examples/react/react-dom",
                    "router": "/node_modules/react-router/umd/ReactRouter"
                },
                "baseURL": "../",
                "defaultJSExtensions": true
            });

            System.import("examples/admin");
        </script>
    </body>
</html>

Notice how we've added the ReactRouter file to the import map? We'll use that in admin.js. First, let's define our layout component:

var App = function(props) {
    var layoutClassNames = [
        "demo-layout",
        "mdl-layout",
        "mdl-js-layout",
        "mdl-layout--fixed-drawer"
    ].join(" ");

    return (
        <div className={layoutClassNames}>
            <Nav />
            {props.children}
        </div>
    );
};

This creates the page layout we've been using and allows a dynamic content component. Every React component has a this.props.children property (or props.children in the case of a stateless component), which is an array of nested components. For example, consider this component:

<App>
    <Login />
</App>

Inside the App component, this.props.children will be an array with a single item: an instance of Login. Next, we'll define handler components for the two sections we want to route:

var LoginHandler = function() {
    return <Login />;
};

var PageAdminHandler = function() {
    var backend = new Backend();
    return <PageAdmin backend={backend} />;
};

We don't really need to wrap Login in LoginHandler but I've chosen to do it to be consistent with PageAdminHandler. PageAdmin expects an instance of Backend, so we have to wrap it as we see in this example. Now we can define routes for our CMS:

ReactDOM.render(
    <Router history={browserHistory}>
        <Route path="/" component={App}>
            <IndexRoute component={LoginHandler} />
            <Route path="login" component={LoginHandler} />
            <Route path="page-admin" component={PageAdminHandler} />
        </Route>
    </Router>,
    document.querySelector(".react")
);

There's a single root route, for the path /. It creates an instance of App, so we always get the same layout. Then we nest a login route and a page-admin route. These create instances of their respective components. We also define an IndexRoute so that the login page will be displayed as a landing page.
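One thing the snippet above takes for granted is that Router, Route, IndexRoute, and browserHistory are in scope. Here is a sketch of the imports admin.js would need, given the SystemJS mapping we set up earlier (the src/nav path is an assumption based on the other src/ imports, so check it against your own layout):

import React from "react";
import ReactDOM from "react-dom";
import { Router, Route, IndexRoute, browserHistory } from "router";
import Nav from "src/nav";
import Login from "src/login";
import Backend from "src/backend";
import PageAdmin from "src/page-admin";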
We need to remove our custom history code from Nav:

import React from "react";
import ReactDOM from "react-dom";
import { Link } from "router";

export default (props) => {
    // ...define class names

    return <div className={drawerClassNames}>
        <header className="demo-drawer-header">
            <img src="images/user.jpg"
                 className="demo-avatar" />
        </header>
        <nav className={navClassNames}>
            <Link className="mdl-navigation__link" to="login">
                <i className={buttonIconClassNames}
                   role="presentation">
                    lock
                </i>
                Login
            </Link>
            <Link className="mdl-navigation__link" to="page-admin">
                <i className={buttonIconClassNames}
                   role="presentation">
                    pages
                </i>
                Pages
            </Link>
        </nav>
    </div>;
};

And since we no longer need a separate redirect method, we can convert the class back into a stateless component (function). Notice we've swapped anchor components for a new Link component. This interacts with the router to show the correct section when we click on the navigation links. We can also change the route paths without needing to update this component (unless we also change the route names).

Creating public pages

Now that we can easily switch between CMS sections, we can use the same trick to show the public pages of our website. Let's create a new HTML page just for these:

<!DOCTYPE html>
<html>
    <head>
        <script src="/node_modules/babel-core/browser.js"></script>
        <script src="/node_modules/systemjs/dist/system.js"></script>
    </head>
    <body>
        <div class="react"></div>
        <script>
            System.config({
                "transpiler": "babel",
                "map": {
                    "react": "/examples/react/react",
                    "react-dom": "/examples/react/react-dom",
                    "router": "/node_modules/react-router/umd/ReactRouter"
                },
                "baseURL": "../",
                "defaultJSExtensions": true
            });

            System.import("examples/index");
        </script>
    </body>
</html>

This is a reduced form of admin.html without the material design resources. I think we can ignore the appearance of these pages for the moment, while we focus on the navigation. The public pages are almost 100% static, so we can use stateless components for them. Let's begin with the layout component:

var App = function(props) {
    return (
        <div className="layout">
            <Nav pages={props.route.backend.all()} />
            {props.children}
        </div>
    );
};

This is similar to the App admin component, but it also has a reference to a Backend.
We define that when we render the components:

var backend = new Backend();

ReactDOM.render(
    <Router history={browserHistory}>
        <Route path="/" component={App} backend={backend}>
            <IndexRoute component={StaticPage} backend={backend} />
            <Route path="pages/:page" component={StaticPage} backend={backend} />
        </Route>
    </Router>,
    document.querySelector(".react")
);

For this to work, we also need to define a StaticPage:

var StaticPage = function(props) {
    var id = props.params.page || 1;
    var backend = props.route.backend;

    var pages = backend.all().filter(
        (page) => {
            return page.id == id;
        }
    );

    if (pages.length < 1) {
        return <div>not found</div>;
    }

    return (
        <div className="page">
            <h1>{pages[0].title}</h1>
            {pages[0].content}
        </div>
    );
};

This component is more interesting. We access the params property, which is a map of all the URL path parameters defined for this route. We have :page in the path (pages/:page), so when we go to pages/1, the params object is {"page":1}. We also pass a Backend to StaticPage, so we can fetch all pages and filter them by page.id. If no page.id is provided, we default to 1. After filtering, we check to see if there are any pages. If not, we return a simple not found message. Otherwise, we render the content of the first page in the array (since we expect the array to have a length of at least 1). We now have a page for the public pages of the website.

Summary

In this article, we learned about how the browser stores URL history and how we can manipulate it to load different sections without full page reloads.

Resources for Article:

Further resources on this subject:

Introduction to Akka [article]
An Introduction to ReactJs [article]
ECMAScript 6 Standard [article]

Testing in Node and Hapi

Packt
09 Feb 2016
22 min read
In this article by John Brett, the author of the book Getting Started with Hapi.js, we are going to explore the topic of testing in node and hapi. We will look at what is involved in writing a simple test using hapi's test runner, lab, how to test hapi applications, techniques to make testing easier, and finally how to achieve the all-important 100% code coverage. (For more resources related to this topic, see here.)

The benefits and importance of testing code

Technical debt is developmental work that must be done before a particular job is complete, or else it will make future changes much harder to implement later on. A codebase without tests is a clear indication of technical debt. Let's explore this statement in more detail.

Even very simple applications will generally comprise:

Features, which the end user interacts with
Shared services, such as authentication and authorization, that features interact with

These will all generally depend on some direct persistent storage or API. Finally, to implement most of these features and services, we will use libraries, frameworks, and modules regardless of language. So, even for simpler applications, we have already arrived at a few dependencies to manage, where a change that causes a break in one place could possibly break everything up the chain.

So let's take a common use case, in which a new version of one of your dependencies is released. This could be a new hapi version, a smaller library, your persistent storage engine, MySQL, MongoDB, or even an operating system or language version. SemVer, as mentioned previously, attempts to mitigate this somewhat, but you are taking someone at their word when they say that they have adhered to this correctly, and SemVer is not used everywhere. So, in the case of a break-causing change, will the current application work with this new dependency version? What will fail? What percentage of tests fail? What's the risk if we don't upgrade? Will support eventually be dropped, including security patches? Without a good automated test suite, these have to be answered by manual testing, which is a huge waste of developer time. Development progress stops here every time these tasks have to be done, meaning that these types of tasks are rarely done, building further technical debt. Apart from this, humans are proven to be poor at repetitive tasks, prone to error, and I know I personally don't enjoy testing manually, which makes me poor at it. I view repetitive manual testing like this as time wasted, as these questions could easily be answered by running a test suite against the new dependency so that developer time could be spent on something more productive.

Now, let's look at a worse and even more common example: a security exploit has been identified in one of your dependencies. As mentioned previously, if it's not easy to update, you won't do it often, so you could be on an outdated version that won't receive this security update. Now you have to jump multiple versions at once and scramble to test them manually. This usually means many quick fixes, which often just cause more bugs. In my experience, code changes under pressure are what deteriorate the structure and readability of a codebase, lead to a much higher number of bugs, and are a clear sign of poor planning. A good development team will, instead of looking at what is currently available, look ahead to what is in beta and will know ahead of time if they expect to run into issues.
The questions asked will be: Will our application break in the next version of Chrome? What about the next version of node? Hapi does this by running the full test suite against future versions of node in order to alert the node community of how planned changes will impact hapi and the node community as a whole. This is what we should all aim to do as developers.

A good test suite has even bigger advantages when working in a team or when adding new developers to a team. Most development teams start out small and grow, meaning all the knowledge of the initial development needs to be passed on to new developers joining the team. So, how do tests lead to a benefit here? For one, tests are great documentation for how parts of the application work for other members of a team. When trying to communicate a problem in an application, a failing test is a perfect illustration of what and where the problem is. When working as a team, for every code change from yourself or another member of the team, you're faced with the preceding problem of changing a dependency. Do we just test the code that was changed? What about the code that depends on the changed code? Is it going to be manual testing again? If this is the case, how much time in a week would be spent on manual testing versus development? Often, with changes, existing functionality can be broken along with new functionality, which is called regression. Having a good test suite highlights this and makes it much easier to prevent.

These are the questions and topics that need to be answered when discussing the importance of tests. Writing tests can also improve code quality. For one, identifying dead code is much easier when you have a good testing suite. If you find that you can only get 90% code coverage, what does the extra 10% do? Is it used at all if it's unreachable? Does it break other parts of the application if removed? Writing tests will often improve your skills in writing easily testable code. Software applications usually grow to be complex pretty quickly; it happens, but we always need to be active in dealing with this, or software complexity will win. A good test suite is one of the best tools we have to tackle this.

The preceding is not an exhaustive list of the importance or benefits of writing tests for your code, but hopefully it has convinced you of the importance of having a good testing suite. So, now that we know why we need to write good tests, let's look at hapi's test runner lab and assertion library code and how, along with some tools from hapi, they make the process of writing tests much easier and a more enjoyable experience.

Introducing hapi's testing utilities

The test runner in the hapi ecosystem is called lab. If you're not familiar with test runners, they are command-line interface tools for you to run your testing suite. Lab was inspired by a similar test tool called mocha, if you are familiar with it, and in fact initially began as a fork of the mocha codebase. But, as hapi's needs diverged from the original focus of mocha, lab was born. The assertion library commonly used in the hapi ecosystem is code. An assertion library forms the part of a test that performs the actual checks to judge whether a test case has passed or not, for example, checking that the value of a variable is true after an action has been taken.
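As a small taste of what an assertion looks like with code (a hedged one-liner; the full test structure is covered below):

const Code = require('code');

// Fails the enclosing test if the sum is anything other than 3.
Code.expect(1 + 2).to.equal(3);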
Let's look at our first test script; then, we can take a deeper look at lab and code, how they function under the hood, and some of the differences they have with other commonly used libraries, such as mocha and chai.

Installing lab and code

You can install lab and code the same as any other module on npm:

npm install lab code --save-dev

Note the --save-dev flag added to the install command here. Remember your package.json file, which describes an npm module? This adds the modules to the devDependencies section of your npm module. These are dependencies that are required for the development and testing of a module but are not required for using the module. The reason why these are separated is that when we run npm install in an application codebase, it only installs the dependencies and devDependencies of package.json in that directory. For all the modules installed, only their dependencies are installed, not their development dependencies. This is because we only want to download the dependencies required to run that application; we don't need to download all the development dependencies for every module. The npm install command installs all the dependencies and devDependencies of package.json in the current working directory, and only the dependencies of the other installed modules, not their devDependencies. To install the development dependencies of a particular module, navigate to the root directory of the module and run npm install.

After you have installed lab, you can then run it with the following:

./node_modules/lab/bin/lab test.js

This is quite long to type every time, but fortunately due to a handy feature of npm called npm scripts, we can shorten it. If you look at package.json generated by npm init in the first chapter, depending on your version of npm, you may see the following (some code removed for brevity):

...
"scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
},
...

Scripts are a list of commands related to the project; they can be for testing purposes, as we will see in this example; to start an application; for build steps; and to start extra servers, among many other options. They offer huge flexibility in how these are combined to manage scripts related to a module or application, and I could spend a chapter, or even a book, on just these, but they are outside the scope of this book, so let's just focus on what is important to us here. To get a list of available scripts for a module application, in the module directory, simply run:

$ npm run

To then run the listed scripts, such as test, you can just use:

$ npm run test

As you can see, this gives a very clean API for scripts and the documentation for each of them in the project's package.json. From this point on in this book, all code snippets will use npm scripts to test or run any examples. We should strive to use these in our projects to simplify and document commands related to applications and modules for ourselves and others. Let's now add the ability to run a test file to our package.json file. This just requires modifying the scripts section to be the following:

...
"scripts": {
    "test": "./node_modules/lab/bin/lab ./test/index.js"
},
...

It is common practice in node to place all tests in a project within the test directory.
A handy addition to note here is that when calling a command with npm run, the bin directory of every module in your node_modules directory is added to PATH when running these scripts, so we can actually shorten this script to:

...
"scripts": {
    "test": "lab ./test/index.js"
},
...

This type of module install is considered to be local, as the dependency is local to the application directory it is being run in. While I believe this is how we should all install our modules, it is worth pointing out that it is also possible to install a module globally. This means that when installing something like lab, it is immediately added to PATH and can be run from anywhere. We do this by adding a -g flag to the install, as follows:

$ npm install lab code -g

This may appear handier than having to add npm scripts or running commands locally outside of an npm script but should be avoided where possible. Often, installing globally requires sudo to run, meaning you are taking a script from the Internet and allowing it to have complete access to your system. Hopefully, the security concerns here are obvious. Other than that, different projects may use different versions of test runners, assertion libraries, or build tools, which can have unknown side effects and cause debugging headaches. The only time I would use globally installed modules is for command-line tools that I may use outside a particular project, for example, a node-based terminal IDE such as slap (https://www.npmjs.com/package/slap) or a process manager such as PM2 (https://www.npmjs.com/package/pm2), but never with sudo! Now that we are familiar with installing lab and code and the different ways of running it inside and outside of npm scripts, let's look at writing our first test script and take a more in-depth look at lab and code.

Our First Test Script

Let's take a look at what a simple test script in lab looks like using the code assertion library:

const Code = require('code');                       [1]
const Lab = require('lab');                         [1]
const lab = exports.lab = Lab.script();             [2]

lab.experiment('Testing example', () => {           [3]

    lab.test('fails here', (done) => {              [4]
        Code.expect(false).to.be.true();            [4]
        return done();                              [4]
    });                                             [4]

    lab.test('passes here', (done) => {             [4]
        Code.expect(true).to.be.true();             [4]
        return done();                              [4]
    });                                             [4]
});

This script, even though small, includes a number of new concepts, so let's go through it with reference to the numbers in the preceding code:

[1]: Here, we just include the code and lab modules, as we would any other node module.

[2]: As mentioned before, it is common convention to place all test files within the test directory of a project. However, there may be JavaScript files in there that aren't tests, and therefore should not be tested. To avoid this, we inform lab of which files are test scripts by calling Lab.script() and assigning the value to lab and exports.lab.

[3]: The lab.experiment() function (aliased lab.describe()) is just a way to group tests neatly. In test output, tests will have the experiment string prefixed to the message, for example, "Testing example fails here". This is optional, however.

[4]: These are the actual test cases. Here, we define the name of the test and pass a callback function with the parameter function done(). We see code in action here for managing our assertions. And finally, we call the done() function when finished with our test case.

Things to note here: lab tests are always asynchronous.
In every test, we have to call done() to finish the test; there is no counting of function parameters or checking whether synchronous functions have completed in order to ensure that a test is finished. Although this requires the boilerplate of calling the done() function at the end of every test, it means that all tests, synchronous or asynchronous, have a consistent structure.

In Chai, which was originally used for hapi, some of the assertions such as .ok, .true, and .false use properties instead of functions for assertions, while assertions like .equal() and .above() use functions. This type of inconsistency makes it easy to forget that an assertion should be a method call and hence to omit the (). This means that the assertion is never called and the test may pass as a false positive. Code's API is more consistent in that every assertion is a function call. Here is a comparison of the two:

Chai:
expect('hello').to.equal('hello');
expect(foo).to.exist;

Code:
expect('hello').to.equal('hello');
expect(foo).to.exist();

Notice the difference in the second exist() assertion. In Chai, you see the property form of the assertion, while in Code, you see the required function call. Through this, lab can make sure all assertions within a test case are called before done is complete, or it will fail the test.

So let's try running our first test script. As we have already updated our package.json script, we can run our test with the following command:

$ npm run test

This will run our two tests and report one passing and one failing. There are a couple of things to note from the output. Tests run are symbolized with a . or an X, depending on whether they pass or not. You can get a list of the full test titles by adding the -v or --verbose flag to our npm test script command. There are lots of flags to customize the running and output of lab, so I recommend using the full labels for each of these, for example, --verbose and --lint instead of -v and -L, in order to save you the time spent referring back to the documentation each time.

You may have noticed the No global variable leaks detected message at the bottom. Lab assumes that the global object won't be polluted and checks that no extra properties have been added after running tests. Lab can be configured to not run this check or to whitelist certain globals. Details of this are in the lab documentation available at https://github.com/hapijs/lab.

Testing approaches

This is one of the many known approaches to building a test suite, as is BDD (Behavior Driven Development), and like most test runners in node, lab is unopinionated about how you structure your tests. Details of how to structure your tests in a BDD style can again be found easily in the lab documentation.

Testing with hapi

As I mentioned before, testing is considered paramount in the hapi ecosystem, with every module in the ecosystem having to maintain 100% code coverage at all times, as with all module dependencies. Fortunately, hapi provides us with some tools to make the testing of hapi apps much easier through a module called Shot, which simulates network requests to a hapi server.
Let's take the example of a Hello World server and write a simple test for it:

const Code = require('code');
const Lab = require('lab');
const Hapi = require('hapi');

const lab = exports.lab = Lab.script();

lab.test('It will return Hello World', (done) => {

  const server = new Hapi.Server();
  server.connection();

  server.route({
    method: 'GET',
    path: '/',
    handler: function (request, reply) {
      return reply('Hello World\n');
    }
  });

  server.inject('/', (res) => {
    Code.expect(res.statusCode).to.equal(200);
    Code.expect(res.result).to.equal('Hello World\n');
    done();
  });
});

Now that we are more familiar with what a test script looks like, most of this will look familiar. However, you may have noticed that we never started our hapi server, meaning no port was ever assigned; thanks to the shot module (https://github.com/hapijs/shot), we can still make requests against it using the server.inject API. Not having to start a server means less setup and teardown before and after tests, and it means that a test suite can run quicker, as fewer resources are required. server.inject can still be used with the same API whether or not the server has been started.

Code coverage

As I mentioned earlier in the article, having 100% code coverage is paramount in the hapi ecosystem and, in my opinion, hugely important for any application to have. Without a code coverage target, writing tests can feel like an empty or unrewarding task where we don't know how many tests are enough or how much of our application or module has been covered. With any task, we should know what our goal is; testing is no different, and this is what code coverage gives us. Even with 100% coverage, things can still go wrong, but it means that at the very least, every line of code has been considered and has at least one test covering it. I've found from working on modules for hapi that trying to achieve 100% code coverage actually gamifies the process of writing tests, making it a more enjoyable experience overall.

Fortunately, lab has code coverage integrated, so we don't need to rely on an extra module to achieve this. It's as simple as adding the --coverage or -c flag to our test script command. Under the hood, lab will then build an abstract syntax tree so it can evaluate which lines are executed, thus producing our coverage, which will be added to the console output when we run tests. The code coverage tool will also highlight which lines are not covered by tests, which is extremely useful in identifying where to focus your testing effort.

It is also possible to enforce a minimum threshold as to the percentage of code coverage required to pass a suite of tests with lab, through the --threshold or -t flag followed by an integer. This is used for all the modules in the hapi ecosystem, and all thresholds are set to 100. Having a threshold of 100% for code coverage makes it much easier to manage changes to a codebase. When any update or pull request is submitted, the test suite is run against the changes, so we can know that all tests have passed and all code is covered before we even look at what has been changed in the proposed submission. There are services that even automate this process for us, such as TravisCI (https://travis-ci.org/). It's also worth knowing that the coverage report can be displayed in a number of formats; for a full list of these reporters with explanations, I suggest reading the lab documentation available at https://github.com/hapijs/lab.
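As a rough sketch of how these coverage flags fit into the npm script we defined earlier (the exact flag values here are a choice for illustration, not a requirement), enforcing full coverage could look like this:

"scripts": {
  "test": "lab --coverage --threshold 100 ./test/index.js"
},

With this in place, npm run test fails not only when a test fails but also whenever overall coverage drops below 100%.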
Let's now look at what's involved in getting 100% coverage for our previous example. First of all, we'll move our server code to a separate file, which we will place in the lib folder and call index.js. It's worth noting here that it's good testing practice, and also the typical module structure in the hapi ecosystem, to place all module code in a folder called lib and the associated tests for each file within lib in a folder called test, preferably with a one-to-one mapping like we have done here, where all the tests for lib/index.js are in test/index.js. When trying to find out how a feature within a module works, the one-to-one mapping makes it much easier to find the associated tests and see examples of it in use.

So, having separated our server from our tests, let's look at what our two files now look like; first, ./lib/index.js:

const Hapi = require('hapi');

const server = new Hapi.Server();
server.connection();

server.route({
  method: 'GET',
  path: '/',
  handler: function (request, reply) {
    return reply('Hello World\n');
  }
});

module.exports = server;

The main change here is that we export our server at the end for another file to acquire and start it if necessary. Our test file at ./test/index.js will now look like this:

const Code = require('code');
const Lab = require('lab');

const server = require('../lib/index.js');

const lab = exports.lab = Lab.script();

lab.test('It will return Hello World', (done) => {

  server.inject('/', (res) => {
    Code.expect(res.statusCode).to.equal(200);
    Code.expect(res.result).to.equal('Hello World\n');
    done();
  });
});

Finally, for us to test our code coverage, we update our npm test script to include the coverage flag --coverage or -c. The final version of this is in the second example of the source code of Chapter 4, Adding Tests and the Importance of 100% Coverage, which is supplied with this book. If you run this, you'll find that we actually already have 100% of the code covered with this one test.

An interesting exercise here would be to find out which versions of hapi this code functions correctly with. At the time of writing, this code was written for hapi version 11.x.x on node.js version 4.0.0. Will it work if run with hapi version 9 or 10? You can test this now by installing an older version with the help of the following command:

$ npm install hapi@10

This will give you an idea of how easy it can be to check whether your codebase works with different versions of libraries. If you have some time, it would be interesting to see how this example runs on different versions of node (Hint: it breaks on any version earlier than 4.0.0).

In this example, we got 100% code coverage with one test. Unfortunately, we are rarely this fortunate; as the complexity of our codebase increases, so will the complexity of our tests, which is where knowledge of writing testable code comes in. This is something that comes with practice, by writing tests while writing application or module code.

Linting

Also built into lab is linting support. Linting enforces a consistent code style, which can be specified through an .eslintrc or .jshintrc file. By default, lab will enforce the hapi style guide rules. The idea of linting is that all code will have the same structure, making it much easier to spot bugs and keep code tidy. As JavaScript is a very flexible language, linters are regularly used to forbid bad practices such as global or unused variables. To enable the lab linter, simply add the linter flag to the test command, which is --lint or -L.
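As a sketch building on the coverage script from earlier, enabling the linter alongside coverage might look like this:

"scripts": {
  "test": "lab --coverage --threshold 100 --lint ./test/index.js"
},

This way, a single npm run test checks behavior, coverage, and style in one pass.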
I generally stick with the default hapi style guide rules, as they are chosen to promote easy-to-read code that is easily testable and to forbid many bad practices. However, it's easy to customize the linting rules used; for this, I recommend referring to the lab documentation.

Summary

In this article, we covered testing in node and hapi and how testing and code coverage are paramount in the hapi ecosystem. We saw justification for their need in application development and where they can make us more productive developers. We also introduced the ecosystem's test runner and code assertion libraries, lab and code. We saw justification for their use, how to use them to write simple tests, and how to use the tools provided in lab and hapi to test hapi applications. We also learned about some of the extra features baked into lab, such as code coverage and linting. We looked at how to test the code coverage of an application and get it to 100%, and how the hapi ecosystem applies the hapi style guide to all modules using lab's linting integration.

Resources for Article:

Further resources on this subject:
Welcome to JavaScript in the full stack [article]
A Typical JavaScript Project [article]
An Introduction to Mastering JavaScript Promises and Its Implementation in Angular.js [article]
CSS Properties – Part 1

Packt
09 Feb 2016
13 min read
In this article, written by Joshua Johanan, Talha Khan, and Ricardo Zea, authors of the book Web Developer's Reference Guide, the authors state that "CSS properties are characteristics of an element in a markup language (HTML, SVG, XML, and so on) that control their style and/or presentation. These characteristics are part of a constantly evolving standard from the W3C." (For more resources related to this topic, see here.)

A basic example of a CSS property is border-radius:

input {
  border-radius: 100px;
}

There is an incredible number of CSS properties, and learning them all is virtually impossible. Adding more to this mix, there are CSS properties that need to be vendor prefixed (-webkit-, -moz-, -ms-, and so on), making this equation even more complex.

Vendor prefixes are short pieces of CSS that are added to the beginning of the CSS property (and sometimes, CSS values too). These pieces of code are directly related either to the company that makes the browser (the "vendor") or to the CSS engine of the browser. There are four major CSS prefixes: -webkit-, -moz-, -ms-, and -o-. They are explained here:

-webkit-: This references Safari's engine, Webkit (Google Chrome and Opera used this engine in the past as well)
-moz-: This stands for Mozilla, which creates Firefox
-ms-: This stands for Microsoft, which creates Internet Explorer
-o-: This stands for Opera, but only targets old versions of the browser

Google Chrome and Opera both support the -webkit- prefix. However, these two browsers do not use the Webkit engine anymore. Their engine is called Blink and is developed by Google.

A basic example of a prefixed CSS property is column-gap:

.column {
  -webkit-column-gap: 5px;
  -moz-column-gap: 5px;
  column-gap: 5px;
}

Knowing which CSS properties need to be prefixed is futile. That's why it's important to keep a constant eye on CanIUse.com. However, it's also important to automate the prefixing process with tools such as Autoprefixer or -prefix-free, or with mixins in preprocessors, and so on. Vendor prefixing isn't in the scope of this book, though, so the properties we'll discuss appear without any vendor prefixes. If you want to learn more about vendor prefixes, you can visit Mozilla Developer Network (MDN) at http://tiny.cc/mdn-vendor-prefixes.

Let's get the CSS properties reference rolling.

Animation

Unlike the old days of Flash, where creating animations required third-party applications and plugins, today we can accomplish practically the same things with a lot less overhead, better performance, and greater scalability, all through CSS only. Forget plugins and third-party software! All we need is a text editor, some imagination, and a bit of patience to wrap our heads around some of the animation concepts CSS brings to our plate.

Base markup and CSS

Before we dive into all the animation properties, we will use the following markup and animation structure as our base:

HTML:

<div class="element"></div>

CSS:

.element {
  width: 300px;
  height: 300px;
}

@keyframes fadingColors {
  0% {
    background: red;
  }
  100% {
    background: black;
  }
}

In the examples, we will only see the .element rule, since the HTML and the @keyframes fadingColors at-rule will remain the same. The @keyframes declaration block is a custom animation that can be applied to any element. When applied, the element's background will go from red to black. Ok, let's do this.
animation-name

The animation-name CSS property is the name of the @keyframes at-rule that we want to execute, and it looks like this:

animation-name: fadingColors;

Description

In the HTML and CSS base example, our @keyframes at-rule had an animation where the background color went from red to black. The name of that animation is fadingColors. So, we can call the animation like this:

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
}

This is a valid rule using the longhand. There are clearly no issues with it at all. The thing is that the animation won't run unless we add animation-duration to it.

animation-duration

The animation-duration CSS property defines the amount of time the animation will take to complete a cycle, and it looks like this:

animation-duration: 2s;

Description

We can specify the units either in seconds using s or in milliseconds using ms. Specifying a unit is required. Specifying a value of 0s means that the animation should actually never run. However, since we do want our animation to run, we will use the following lines of code:

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
}

As mentioned earlier, this will make a box go from its red background to black in 2 seconds, and then stop.

animation-iteration-count

The animation-iteration-count CSS property defines the number of times the animation should be played, and it looks like this:

animation-iteration-count: infinite;

Description

There are two kinds of values: infinite and a number, such as 1, 3, or 0.5. Negative numbers are not allowed. Add the following code to the prior example:

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
}

This will make a box go from its red background to black, start over again with the red background and go to black, infinitely.

animation-direction

The animation-direction CSS property defines the direction in which the animation should play after the cycle, and it looks like this:

animation-direction: alternate;

Description

There are four values: normal, reverse, alternate, and alternate-reverse.

normal: It makes the animation play forward. This is the default value.
reverse: It makes the animation play backward.
alternate: It makes the animation play forward in the first cycle, then backward in the next cycle, then forward again, and so on. In addition, timing functions are affected, so if we have ease-out, it gets replaced by ease-in when played in reverse. We'll look at these timing functions in a minute.
alternate-reverse: It's the same thing as alternate, but the animation starts backward, from the end.

In our current example, we have a continuous animation. However, the background color has a "hard stop" when going from black (end of the animation) to red (start of the animation). Let's create a more 'fluid' animation by making the black background fade into red and then red into black without any hard stops. Basically, we are trying to create a "pulse-like" effect:

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
  animation-direction: alternate;
}

animation-delay

The animation-delay CSS property allows us to define when exactly an animation should start. This means that as soon as the animation has been applied to an element, it will obey the delay before it starts running.
It looks like this:

animation-delay: 3s;

Description

We can specify the units either in seconds using s or in milliseconds using ms. Specifying a unit is required. Negative values are allowed. Take into consideration that using a negative value means that the animation should start right away, but it will start partway into the cycle, offset by the absolute amount of the negative value. Use negative values with caution.

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
  animation-direction: alternate;
  animation-delay: 3s;
}

This will make the animation start after 3 seconds have passed.

animation-fill-mode

The animation-fill-mode CSS property defines which values are applied to an element before and after the animation, that is, outside the time the animation is being executed. It looks like this:

animation-fill-mode: none;

Description

There are four values: none, forwards, backwards, and both.

none: No styles are applied before or after the animation.
forwards: The animated element will retain the styles of the last keyframe. This is the most used value.
backwards: The animated element will retain the styles of the first keyframe, and these styles will remain during the animation-delay period. This is very likely the least used value.
both: The animated element will retain the styles of the first keyframe before starting the animation and the styles of the last keyframe after the animation has finished. In many cases, this is almost the same as using forwards.

The prior values are better used in animations that have an end and stop. In our example, we're using a fading/pulsating animation, so the best value to use is none.

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
  animation-direction: alternate;
  animation-delay: 3s;
  animation-fill-mode: none;
}

animation-play-state

The animation-play-state CSS property defines whether an animation is running or paused, and it looks like this:

animation-play-state: running;

Description

There are two values: running and paused. These values are self-explanatory.

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
  animation-direction: alternate;
  animation-delay: 3s;
  animation-fill-mode: none;
  animation-play-state: running;
}

In this case, defining animation-play-state as running is redundant, but I'm listing it for the purposes of the example.

animation-timing-function

The animation-timing-function CSS property defines how an animation's speed should progress throughout its cycles, and it looks like this:

animation-timing-function: ease-out;

There are five predefined values, also known as easing functions, for the Bézier curve (we'll see what the Bézier curve is in a minute): ease, ease-in, ease-out, ease-in-out, and linear.

ease

The ease function sharply accelerates at the beginning and starts slowing down towards the middle of the cycle. Its syntax is as follows:

animation-timing-function: ease;

ease-in

The ease-in function starts slowly, accelerating until the animation ends sharply. Its syntax is as follows:

animation-timing-function: ease-in;

ease-out

The ease-out function starts quickly and gradually slows down towards the end:

animation-timing-function: ease-out;

ease-in-out

The ease-in-out function starts slowly and gets fast in the middle of the cycle.
It then starts slowing down towards the end. Its syntax is as follows:

animation-timing-function: ease-in-out;

linear

The linear function has constant speed; no accelerations of any kind happen. Its syntax is as follows:

animation-timing-function: linear;

Now, the easing functions are built on a curve named the Bézier curve and can be called using the cubic-bezier() function or the steps() function.

cubic-bezier()

The cubic-bezier() function allows us to create custom acceleration curves. Most use cases can benefit from the already defined easing functions we just mentioned (ease, ease-in, ease-out, ease-in-out, and linear), but if you're feeling adventurous, cubic-bezier() is your best bet. Here's what a Bézier curve looks like:

Parameters

The cubic-bezier() function takes four parameters, as follows:

animation-timing-function: cubic-bezier(x1, y1, x2, y2);

X and Y represent the x and y axes. The numbers 1 and 2 after each axis represent the control points: 1 represents the control point starting at the lower left, and 2 represents the control point at the upper right.

Description

Let's represent all five predefined easing functions with the cubic-bezier() function:

ease: animation-timing-function: cubic-bezier(.25, .1, .25, 1);
ease-in: animation-timing-function: cubic-bezier(.42, 0, 1, 1);
ease-out: animation-timing-function: cubic-bezier(0, 0, .58, 1);
ease-in-out: animation-timing-function: cubic-bezier(.42, 0, .58, 1);
linear: animation-timing-function: cubic-bezier(0, 0, 1, 1);

Not sure about you, but I prefer to use the predefined values. Now, we could start tweaking and testing each value to the decimal, save it, and wait for the live refresh to do its thing; however, that's too much time wasted testing, if you ask me. The amazing Lea Verou created the best web app to work with Bézier curves. You can find it at cubic-bezier.com. This is by far the easiest way to work with Bézier curves. I highly recommend this tool. The Bézier curve image shown earlier was taken from the cubic-bezier.com website.

Let's add animation-timing-function to our example:

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
  animation-direction: alternate;
  animation-delay: 3s;
  animation-fill-mode: none;
  animation-play-state: running;
  animation-timing-function: ease-out;
}

steps()

The steps() timing function isn't very widely used, but knowing how it works is a must if you're into CSS animations. It looks like this:

animation-timing-function: steps(6);

This function is very helpful when we want our animation to take a defined number of steps. After adding a steps() function to our current example, it looks like this:

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
  animation-direction: alternate;
  animation-delay: 3s;
  animation-fill-mode: none;
  animation-play-state: running;
  animation-timing-function: steps(6);
}

This makes the box take six steps to fade from red to black and vice versa.

Parameters

There are two optional parameters that we can use with the steps() function: start and end.

start: This will make the change happen at the beginning of each step, so the animation starts right away.
end: This will make the change happen at the end of each step. This is the default value if nothing is declared, and it gives the animation a short delay before it starts.
Description

After adding the start parameter to the CSS code, it looks like this:

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
  animation-direction: alternate;
  animation-delay: 3s;
  animation-fill-mode: none;
  animation-play-state: running;
  animation-timing-function: steps(6, start);
}

Granted, in our example it is not very noticeable. However, you can see it more clearly in this pen from Louis Lazarus, when hovering over the boxes, at http://tiny.cc/steps-timing-function. An image from Stephen Greig's article on Smashing Magazine, Understanding CSS Timing Functions, explains start and end from the steps() function.

Also, there are two predefined values for the steps() function: step-start and step-end.

step-start: This is the same as steps(1, start). It means that every change happens at the beginning of each interval.
step-end: This is the same as steps(1, end). It means that every change happens at the end of each interval.

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
  animation-direction: alternate;
  animation-delay: 3s;
  animation-fill-mode: none;
  animation-play-state: running;
  animation-timing-function: step-end;
}

animation

The animation CSS property is the shorthand for animation-name, animation-duration, animation-timing-function, animation-delay, animation-iteration-count, animation-direction, animation-fill-mode, and animation-play-state. It looks like this:

animation: fadingColors 2s;

Description

For a simple animation to work, we need at least two properties: name and duration. If you feel overwhelmed by all these properties, relax. Let me break them down for you in simple bits.

Using the animation longhand, the code would look like this:

CSS:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
}

Using the animation shorthand, which is the recommended syntax, the code would look like this:

CSS:

.element {
  width: 300px;
  height: 300px;
  animation: fadingColors 2s;
}

This will make a box go from its red background to black in 2 seconds, and then stop.

Final CSS code

Let's see how all the animation properties look in one final example, showing both the longhand and shorthand styles.

Longhand style:

.element {
  width: 300px;
  height: 300px;
  animation-name: fadingColors;
  animation-duration: 2s;
  animation-iteration-count: infinite;
  animation-direction: alternate;
  animation-delay: 3s;
  animation-fill-mode: none;
  animation-play-state: running;
  animation-timing-function: ease-out;
}

Shorthand style:

.element {
  width: 300px;
  height: 300px;
  animation: fadingColors 2s infinite alternate 3s none running ease-out;
}

When two time values appear in the shorthand, the first is always taken as animation-duration and the second as animation-delay. All other properties can appear in any order within the declaration. You can find a demo in CodePen at http://tiny.cc/animation.

Summary

In this article, we learned how to add animations to a web project, and we looked in detail at the different properties that can be used to animate it, along with their descriptions.

Resources for Article:

Further resources on this subject:
Using JavaScript with HTML [article]
Welcome to JavaScript in the full stack [article]
A Typical JavaScript Project [article]

Running Simpletest and PHPUnit

Packt
09 Feb 2016
4 min read
In this article by Matt Glaman, the author of the book Drupal 8 Development Cookbook, we will kick off with an introduction to getting a Drupal 8 site installed and running. (For more resources related to this topic, see here.)

Running Simpletest and PHPUnit

Drupal 8 ships with two testing suites. Previously, Drupal only supported Simpletest; now there are PHPUnit tests as well. According to the official change record, PHPUnit was added to provide testing without requiring a full Drupal bootstrap, which occurs with each Simpletest test. Read the change record here: https://www.drupal.org/node/2012184.

Getting ready

Currently, core comes with Composer dependencies prepackaged, and no extra steps need to be taken to run PHPUnit. This article will demonstrate how to run tests the same way that the QA testbot on Drupal.org does. The process of managing Composer dependencies may change, but this is currently postponed due to Drupal.org's testing and packaging infrastructure. Read more here: https://www.drupal.org/node/1475510.

How to do it…

First, enable the Simpletest module. Even though you might only want to run PHPUnit, this is a soft dependency for running the test runner script.

Open a command line terminal, navigate to your Drupal installation directory, and run the following to execute all available PHPUnit tests:

php core/scripts/run-tests.sh PHPUnit

Running Simpletest tests requires executing the same script; however, instead of passing PHPUnit as the argument, you must define the URL option and the tests option:

php core/scripts/run-tests.sh --url http://localhost --all

Review the test output!

How it works…

The run-tests.sh script has been shipped with Drupal since 2008, when it was named run-functional-tests.php. The command interacts with the test suites in Drupal to run all or specific tests and sets up other configuration items. We will highlight some of the useful options below:

--help: This displays the items covered in the following bullets
--list: This displays the available test groups that can be run
--url: This is required unless the Drupal site is accessible through http://localhost:80
--sqlite: This allows you to run Simpletest without having Drupal installed
--concurrency: This allows you to define how many tests run in parallel

There's more…

Is run-tests a shell script?

The run-tests.sh script isn't actually a shell script. It is a PHP script, which is why you must execute it with PHP. In fact, within core/scripts, each file is a PHP script meant to be executed from the command line. These scripts are not intended to be run through a web server, which is one of the reasons for the .sh extension. There are issues with discovering PHP across platforms that prevent providing a shebang line to allow executing the file as a normal bash or bat script. For more info, view this Drupal.org issue: https://www.drupal.org/node/655178.

Running Simpletest without Drupal installed

With Drupal 8, Simpletest can be run off SQLite and no longer requires an installed database. This can be accomplished by passing the sqlite and dburl options to the run-tests.sh script. It requires the PHP SQLite extension to be installed. Here is an example adapted from the DrupalCI test runner for Drupal core:

php core/scripts/run-tests.sh --sqlite /tmp/.ht.sqlite --die-on-fail --dburl sqlite://tmp/.ht.sqlite --all

Combined with the built-in PHP web server for debugging, you can run Simpletest without a full-fledged environment.
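As a rough sketch of that combination, using only the options shown above: the port is an illustrative assumption, the commands are run from the Drupal root, and some setups may additionally need a router script passed to the built-in server for clean URLs:

# Serve the codebase with PHP's built-in web server on an assumed port
php -S localhost:8888 &

# Run Simpletest off SQLite against that server, with no installed database
php core/scripts/run-tests.sh --sqlite /tmp/.ht.sqlite --dburl sqlite://tmp/.ht.sqlite --url http://localhost:8888 --all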
Running specific tests

Each example thus far has used the all option to run every Simpletest available. There are various ways to run specific tests:

--module: This allows you to run all the tests of a specific module
--class: This runs a specific class, identified by its full namespace path
--file: This runs tests from a specified file
--directory: This runs tests within a specified directory

Previously in Drupal, tests were grouped inside module.test files, which is where the file option derives from. Drupal 8 utilizes the PSR-4 autoloading method and requires one class per file.

DrupalCI

With Drupal 8 came a new initiative to upgrade the testing infrastructure on Drupal.org. The outcome was DrupalCI. DrupalCI is open source and can be downloaded and run locally. The project page for DrupalCI is https://www.drupal.org/project/drupalci.

The testbot utilizes Docker and can be downloaded locally to run tests. The project ships with a Vagrantfile to allow it to be run within a virtual machine or locally. Learn more on the testbot's project page: https://www.drupal.org/project/drupalci_testbot.

See also

PHPUnit manual: https://phpunit.de/manual/4.8/en/writing-tests-for-phpunit.html
Drupal PHPUnit handbook: https://drupal.org/phpunit
Simpletest from the command line: https://www.drupal.org/node/645286

Resources for Article:

Further resources on this subject:
Installing Drupal 8 [article]
What is Drupal? [article]
Drupal 7 Social Networking: Managing Users and Profiles [article]

Classes and Instances of Ember Object Model

Packt
05 Feb 2016
12 min read
In this article by Erik Hanchett, author of the book Ember.js Cookbook, we look at Ember.js, an open source JavaScript framework that will make you more productive. It uses common idioms and practices, making it simple to create amazing single-page applications. It also lets you create code in a modular way using the latest JavaScript features. Not only that, it also has a great set of APIs to get any task done. The Ember.js community is welcoming to newcomers and is ready to help you when required. (For more resources related to this topic, see here.)

Working with classes and instances

Creating and extending classes is a major feature of the Ember object model. In this recipe, we'll take a look at how creating and extending objects works.

How to do it

Let's begin by creating a very simple Ember class using extend(), as follows:

const Light = Ember.Object.extend({
  isOn: false
});

This defines a new Light class with an isOn property. Light inherits the properties and behavior from the Ember object, such as initializers, mixins, and computed properties.

Ember Twiddle Tip

At some point, you might need to test out small snippets of Ember code. An easy way to do this is to use a website called Ember Twiddle. From that website, you can create an Ember application and run it in the browser as if you were using Ember CLI. You can even save and share it. It is a tool similar to JSFiddle, only for Ember. Check it out at http://ember-twiddle.com.

Once you have defined a class, you'll need to be able to create an instance of it. You can do this by using the create() method. We'll go ahead and create an instance of Light:

const bulb = Light.create();

Accessing properties within the bulb instance

We can access the properties of the bulb object using the set and get accessor methods. Let's go ahead and get the isOn property of the Light class, as follows:

console.log(bulb.get('isOn'));

The preceding code will get the isOn property from the bulb instance. To change the isOn property, we can use the set accessor method:

bulb.set('isOn', true);

The isOn property will now be set to true instead of false.

Initializing the Ember object

The init method is invoked whenever a new instance is created. This is a great place to put any code that you may need for the new instance. In our example, we'll go ahead and add an alert message that displays the default setting for the isOn property:

const Light = Ember.Object.extend({
  init() {
    alert('The isOn property is defaulted to ' + this.get('isOn'));
  },
  isOn: false
});

As soon as the Light.create line of code is executed, the instance will be created and this message will pop up on the screen: The isOn property is defaulted to false.

Subclass

Be aware that you can create subclasses of your objects in Ember. You can override methods and access the parent class by using the _super() method. This is done by creating a new object that uses the Ember extend method on the parent class. Another important thing to realize is that if you're subclassing a framework class, such as Ember.Component, and you override the init method, you'll need to make sure that you call this._super(). If not, the component may not work properly.

Reopening classes

At any time, you can reopen a class and define new properties or methods in it. For this, use the reopen method. In our previous example, we had an isOn property.
Let's reopen the same class and add a color property using the reopen() method, as follows:

Light.reopen({
  color: 'yellow'
});

If required, you can add static methods or properties using reopenClass, as follows:

Light.reopenClass({
  wattage: 40
});

You can now access the static property:

Light.wattage

How it works

In the preceding examples, we created an Ember object using extend. This tells Ember to create a new Ember class. The extend method uses inheritance in the Ember.js framework: the Light class inherits all the methods and bindings of the Ember object. The create method also inherits from the Ember object class and returns a new instance of this class. The bulb object is the new instance of the Ember class that we created.

There's more

To use the previous examples, we can create our own module and have it imported into our project. To do this, create a new myObject.js file in the app folder, as follows:

// app/myObject.js
import Ember from 'ember';

export default function() {
  const Light = Ember.Object.extend({
    init() {
      alert('The isOn property is defaulted to ' + this.get('isOn'));
    },
    isOn: false
  });

  Light.reopen({
    color: 'yellow'
  });

  Light.reopenClass({
    wattage: 80
  });

  const bulb = Light.create();

  console.log(bulb.get('color'));
  console.log(Light.wattage);
}

This is the module that we can now import into any file of our Ember application. In the app folder, edit the app.js file. You'll need to add the following line at the top of the file:

// app/app.js
import myObject from './myObject';

At the bottom, before the export, add the following line:

myObject();

This will execute the myObject function that we created in the myObject.js file. After running the Ember server, you'll see the pop-up message stating that the isOn property is defaulted to false.

Working with computed properties

In this recipe, we'll take a look at computed properties and how they can be used to display data, even if that data changes as the application is running.

How to do it

Let's create a new Ember.Object and add a computed property to it. Begin by creating a new description computed property. This property will reflect the status of the isOn and color properties:

const Light = Ember.Object.extend({
  isOn: false,
  color: 'yellow',

  description: Ember.computed('isOn', 'color', function() {
    return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn');
  })
});

We can now create a new Light object and get the computed property description:

const bulb = Light.create();
bulb.get('description'); //The yellow light is set to false

The preceding example creates a computed property that depends on the isOn and color properties. When the description function is called, it returns a string describing the state of the light.

Computed properties will observe changes and dynamically update whenever they occur. To see this in action, we can change the preceding example and set the isOn property to true. Use the following code to accomplish this:

bulb.set('isOn', true);
bulb.get('description'); //The yellow light is set to true

The description has been automatically updated and will now display that the yellow light is set to true.

Chaining the Light object

Ember provides a nice feature that allows computed properties to be used within other computed properties. In the previous example, we created a description property that outputs some basic information about the Light object, as follows:
In the previous example, we created a description property that outputted some basic information about the Light object, as follows: Let's add another property that gives a full description: const Light = Ember.Object.extend({ isOn: false, color: 'yellow', age: null, description: Ember.computed('isOn','color',function() { return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn'); }), fullDescription: Ember.computed('description','age',function() { return this.get('description') + ' and the age is ' + this.get('age') }), }); The fullDescription function returns a string that concatenates the output from description with a new string that displays the age: const bulb = Light.create({age:22}); bulb.get('fullDescription'); //The yellow light is set to false and the age is 22 In this example, during instantiation of the Light object, we set the age to 22. We can overwrite any property if required. Alias The Ember.computed.alias method allows us to create a property that is an alias for another property or object. Any call to get or set will behave as if the changes were made to the original property, as shown in the following: const Light = Ember.Object.extend({ isOn: false, color: 'yellow', age: null, description: Ember.computed('isOn','color',function() { return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn'); }), fullDescription: Ember.computed('description','age',function() { return this.get('description') + ' and the age is ' + this.get('age') }), aliasDescription: Ember.computed.alias('fullDescription') }); const bulb = Light.create({age: 22}); bulb.get('aliasDescription'); //The yellow light is set to false and the age is 22. The aliasDescription will display the same text as fullDescription since it's just an alias of this object. If we made any changes later to any properties in the Light object, the alias would also observe these changes and be computed properly. How it works Computed properties are built on top of the observer pattern. Whenever an observation shows a state change, it recomputes the output. If no changes occur, then the result is cached. In other words, the computed properties are functions that get updated whenever any of their dependent value changes. You can use it in the same way that you would use a static property. They are common and useful throughout Ember and it's codebase. Keep in mind that a computed property will only update if it is in a template or function that is being used. If the function or template is not being called, then nothing will occur. This will help with performance. Working with Ember observers in Ember.js Observers are fundamental to the Ember object model. In the next recipe, we'll take our light example and add in an observer and see how it operates. How to do it To begin, we'll add a new isOnChanged observer. This will only trigger when the isOn property changes: const Light = Ember.Object.extend({ isOn: false, color: 'yellow', age: null, description: Ember.computed('isOn','color',function() { return 'The ' + this.get('color') + ' light is set to ' + this.get('isOn') }), fullDescription: Ember.computed('description','age',function() { return this.get('description') + ' and the age is ' + this.get('age') }), desc: Ember.computed.alias('description'), isOnChanged: Ember.observer('isOn',function() { console.log('isOn value changed') }) }); const bulb = Light.create({age: 22}); bulb.set('isOn',true); //console logs isOn value changed The Ember.observer isOnChanged monitors the isOn property. 
If any changes occur to this property, isOnChanged is invoked.

Computed Properties vs Observers

At first glance, it might seem that observers are the same as computed properties. In fact, they are very different. Computed properties can use the get and set methods and can be used in templates. Observers, on the other hand, just monitor property changes. They can neither be used in templates nor be accessed like properties. They also don't return any values. With that said, be careful not to overuse observers. For many instances, a computed property is the most appropriate solution.

Also, if required, you can add multiple properties to the observer. Just use the following code:

Light.reopen({
  isAnythingChanged: Ember.observer('isOn', 'color', function() {
    console.log('isOn or color value changed');
  })
});

const bulb = Light.create({ age: 22 });
bulb.set('isOn', true); // console logs isOn or color value changed
bulb.set('color', 'blue'); // console logs isOn or color value changed

The isAnythingChanged observer is invoked whenever the isOn or color property changes. The observer fires twice, as each property has changed.

Synchronous issues with the Light object and observers

It's very easy to get observers out of sync. For example, if a property that an observer watches changes, the observer will be invoked as expected. After being invoked, it might manipulate a property that hasn't been updated yet. This can cause synchronization issues, as everything happens at the same time. The following example shows this behavior:

Light.reopen({
  checkIsOn: Ember.observer('isOn', function() {
    console.log(this.get('fullDescription'));
  })
});

const bulb = Light.create({ age: 22 });
bulb.set('isOn', true);

When isOn is changed, it's not clear whether fullDescription, a computed property, has been updated yet or not. As observers work synchronously, it's difficult to tell what has been fired and changed. This can lead to unexpected behavior.

To counter this, it's best to use the Ember.run.once method. This method is a part of the Ember run loop, which is Ember's way of managing how code is executed. Reopen the Light object and you will see the following occurring:

Light.reopen({
  checkIsOn: Ember.observer('isOn', 'color', function() {
    Ember.run.once(this, 'checkChanged');
  }),

  checkChanged: Ember.observer('description', function() {
    console.log(this.get('description'));
  })
});

const bulb = Light.create({ age: 22 });
bulb.set('isOn', true);
bulb.set('color', 'blue');

The checkIsOn observer calls the checkChanged observer using Ember.run.once. This method is run only once per run loop. Normally, checkChanged would be fired twice; however, since it's called using Ember.run.once, it only outputs once.

How it works

Ember observers are mixins from the Ember.Observable class. They work by monitoring property changes. When any change occurs, they are triggered. Keep in mind that these are not the same as computed properties and cannot be used in templates or with getters or setters.

Summary

In this article, you learned about classes and instances. You also learned about computed properties and how they can be used to display data.

Resources for Article:

Further resources on this subject:
Introducing the Ember.JS framework [article]
Building Reusable Components [article]
Using JavaScript with HTML [article]

Going Mobile First

Packt
03 Feb 2016
16 min read
In this article by Silvio Moreto Pereira, author of the book Bootstrap By Example, we will focus on mobile design and how to change the page layout for different viewports, change the content, and more. In this article, you will learn the following:

Mobile first development
Debugging for any device
The Bootstrap grid system for different resolutions

(For more resources related to this topic, see here.)

Make it greater

Maybe you have asked yourself (or even searched for) the reason for the mobile first paradigm movement. It is simple and makes complete sense for speeding up your development pace. The main argument for the mobile first paradigm is that it is easier to make it greater than to shrink it. In other words, if you first make a desktop version of the web page (known as responsive design or mobile last) and then go on to adjust the website for mobile, there is a 99% probability that the layout will break at some point, and you will have to fix a lot of things in both the mobile and desktop versions. On the other hand, if you first create the mobile version, the website will naturally use (or show) less content than the desktop version. So, it will be easier to just add the content, place things in the right places, and create the full responsiveness stack. The following image tries to illustrate this concept. Going mobile last, you will get a degraded, warped, and crappy layout; going mobile first, you will get a progressively enhanced, future-friendly awesome web page. See what happens to the poor elephant in this metaphor.

Bootstrap and the mobile first design

At the beginning of Bootstrap, there was no concept of mobile first, so it was made to work for designing responsive web pages. However, by version 3 of the framework, the concept of mobile first was very solid in the community. For this, the whole code of the scaffolding system was rewritten to become mobile first from the start. The team decided to reformulate how to set up the grid instead of just adding mobile styles. This had a great impact on compatibility with versions older than 3, but was crucial for making the framework even more popular.

To ensure the proper rendering of the page, set the correct viewport in the <head> tag:

<meta name="viewport" content="width=device-width, initial-scale=1">

How to debug different viewports in the browser

Here, you will learn how to debug different viewports using the Google Chrome web browser. If you already know this, you can skip this section, although it might be useful to refresh the steps.

In the Google Chrome browser, open the Developer tools option. There are many ways to open this menu:

Right-click anywhere on the page and click on the last option, called Inspect.
Go to the settings (the sandwich button on the right-hand side of the address bar), click on More tools, and finally on Developer tools.
The shortcut to open it is Ctrl (cmd for OS X users) + Shift + I. F12 in Windows also works (Internet Explorer legacy…).

With Developer tools open, click on the mobile phone icon to the left of the magnifier, as shown in the following image.

It will change the display of the viewport to a certain device, and you can also set a specific network usage to limit the data bandwidth. Chrome will show a message telling you that, for proper visualization, you may need to reload the page to get the correct rendering. For the next image, we have activated the Device mode for an iPhone 5 device.
When we set this viewport, the problems start to appear because we did not make the web page with the mobile first methodology.

Bootstrap scaffolding for different devices

Now that we know more about mobile first development and its important role in Bootstrap starting from version 3, we will cover Bootstrap usage for different devices and viewports. For doing this, we must apply the column class for the specific viewport; for example, for medium displays, we use the .col-md-* class. The following list presents the different classes and resolutions that apply to each viewport:

Extra small devices (phones, < 544px / 34em): class prefix .col-xs-*; columns stay horizontal at all times; container and column widths are auto.
Small devices (tablets, ≥ 544px / 34em and < 768px / 48em): class prefix .col-sm-*; columns are collapsed to start and fit the column grid above the breakpoint; fixed container width of 544px or 34rem; column width of ~44px or 2.75rem.
Medium devices (desktops, ≥ 768px / 48em and < 900px / 62em): class prefix .col-md-*; columns collapse to start as well; fixed container width of 750px or 45rem; column width of ~62px or 3.86rem.
Large devices (desktops, ≥ 900px / 62em and < 1200px / 75em): class prefix .col-lg-*; columns collapse to start as well; fixed container width of 970px or 60rem; column width of ~81px or 5.06rem.
Extra large devices (desktops, ≥ 1200px / 75em): class prefix .col-xl-*; columns collapse to start as well; fixed container width of 1170px or 72.25rem; column width of ~97px or 6.06rem.

Every viewport uses the same 12-column grid.

Mobile and extra small devices

To exemplify the usage of Bootstrap scaffolding on mobile devices, we will take a predefined web page and adapt it to mobile devices. We will be using the Chrome mobile debug tool with the device set to iPhone 5.

You may have noticed that for extra small devices, Bootstrap just stacks each column, paying no regard to the different rows. In the layout, some of the Bootstrap rows may seem fine in this visualization, although the one in the following image is a bit strange, as the portion of code and the image are not on the same line, as they are supposed to be.

To fix this, we need to add the column class prefix for extra small devices, which is .col-xs-*, where * is the size of the column from 1 to 12. Add the .col-xs-5 and .col-xs-7 classes to the columns of this row. Refresh the page and you will now see the columns placed side by side:

<div class="row">
  <!-- row 3 -->
  <div class="col-md-3 col-xs-5">
    <pre>&lt;p&gt;I love programming!&lt;/p&gt;
      &lt;p&gt;This paragraph is on my landing page&lt;/p&gt;
      &lt;br/&gt;
      &lt;br/&gt;
      &lt;p&gt;Bootstrap by example&lt;/p&gt;
    </pre>
  </div>
  <div class="col-md-9 col-xs-7">
    <img src="imgs/center.png" class="img-responsive">
  </div>
</div>

Although the image of the web browser on the right is too small, it would be better if it were a vertically stretched image, such as a mobile phone. (What a coincidence!) To do this, we need to hide the browser image on extra small devices and display an image of a mobile phone instead. Add the new mobile image below the existing one as follows. You will see both images stacked vertically in the right column:

<img src="imgs/center.png" class="img-responsive">
<img src="imgs/mobile.png" class="img-responsive">

Then, we need to use the concept of availability classes present in Bootstrap. We need to hide the browser image and display the mobile image just for this kind of viewport, which is extra small.
For this, add the .hidden-xs class to the browser image and the .visible-xs class to the mobile image:

<div class="row">
  <!-- row 3 -->
  <div class="col-md-3 col-xs-5">
    <pre>&lt;p&gt;I love programming!&lt;/p&gt;
      &lt;p&gt;This paragraph is on my landing page&lt;/p&gt;
      &lt;br/&gt;
      &lt;br/&gt;
      &lt;p&gt;Bootstrap by example&lt;/p&gt;
    </pre>
  </div>
  <div class="col-md-9 col-xs-7">
    <img src="imgs/center.png" class="img-responsive hidden-xs">
    <img src="imgs/mobile.png" class="img-responsive visible-xs">
  </div>
</div>

Now this row looks nice! With this, the browser image is hidden on extra small devices and the mobile image is shown for the viewport in question. The following image shows the final display of this row.

Moving on, the next Bootstrap .row contains a testimonial surrounded by two images. It would be nicer if the testimonial appeared first and both images were displayed after it, splitting the same row, as shown in the following image. For this, we will repeat almost the same techniques presented in the last example.

The first change is to hide the Bootstrap image using the .hidden-xs class. After this, create another image tag with the Bootstrap image, in a new column after the PACKT image column. The final code of the row should be as follows:

<div class="row">
  <div class="col-md-3 hidden-xs">
    <img src="imgs/bs.png" class="img-responsive">
  </div>
  <div class="col-md-6 col-xs-offset-1 col-xs-11">
    <blockquote>
      <p>Lorem ipsum dolor sit amet, consectetur
        adipiscing elit. Integer posuere erat a ante.</p>
      <footer>Testimonial from someone at
        <cite title="Source Title">Source Title</cite></footer>
    </blockquote>
  </div>
  <div class="col-md-3 col-xs-7">
    <img src="imgs/packt.png" class="img-responsive">
  </div>
  <div class="col-xs-5 visible-xs">
    <img src="imgs/bs.png" class="img-responsive">
  </div>
</div>

We did plenty of things there; all the changes are highlighted. The first is the .hidden-xs class in the first column of the Bootstrap image, which hides the column for this viewport. Afterward, in the testimonial, we changed the grid for mobile, adding a column offset of size 1 and making the testimonial fill the rest of the row with the .col-xs-11 class. Lastly, as we said, we want to split the PACKT and Bootstrap images in the same row. For this, make the first image column fill seven columns with the .col-xs-7 class. The other image column is a little more complicated: as it should be visible just for mobile devices, we add the .col-xs-5 class, which makes the element span five columns on extra small devices, and we hide the column for other viewports with the .visible-xs class.

As you can see, this row has more than 12 columns (one offset column, 11 for the testimonial, seven for the PACKT image, and five for the Bootstrap image). This process is called column wrapping, and it happens when you put more than 12 columns in the same row: the groups of extra columns wrap to the next lines.

Availability classes

Just like .hidden-*, there are .visible-*-* classes for each viewport variation. The .visible-*-* classes can also change the display CSS property, where the last * means block, inline, or inline-block. Use this to set the proper display value for different viewports.

The following image shows you the final result of the changes.
Note that we made the testimonial appear first, with one column of offset, and both images appear below it:

Tablets and small devices

Completing the mobile viewports, we move on to tablets and small devices, which range from 544 px (34 em) to 768 px (48 em). Most devices of this kind are tablets or old desktop monitors. To work with this example, we are using the iPad mini in the portrait position.

For this resolution, Bootstrap handles the rows just as on extra small devices, stacking up each of the columns and making them fill the total width of the page. So, if we do not want this to happen, we have to set the column span for each element manually with the .col-sm-* class. If you look at how our example is presented now, there are two main problems. The first is that the headings are on separate lines, whereas they could share the same line. For this, we just need to apply the grid classes for small devices, with the .col-sm-6 class on each column, splitting them into equal sizes:

<div class="row">
  <div class="col-md-offset-4 col-md-4 col-sm-6">
    <h3>
      Some text with <small>secondary text</small>
    </h3>
  </div>
  <div class="col-md-4 col-sm-6">
    <h3>
      Add to your favorites
      <small>
        <kbd class="nowrap"><kbd>ctrl</kbd> + <kbd>d</kbd></kbd>
      </small>
    </h3>
  </div>
</div>

The result should be as follows:

The second problem in this viewport is, again, the testimonial row! Due to the classes that we added for the mobile viewport, the testimonial now has an offset column and a different column span. We must add the classes for small devices to render this row with the Bootstrap image on the left, the testimonial in the middle, and the PACKT image on the right:

<div class="row">
  <div class="col-md-3 hidden-xs col-sm-3">
    <img src="imgs/bs.png" class="img-responsive">
  </div>
  <div class="col-md-6 col-xs-offset-1 col-xs-11
    col-sm-6 col-sm-offset-0">
    <blockquote>
      <p>Lorem ipsum dolor sit amet, consectetur
        adipiscing elit. Integer posuere erat a ante.</p>
      <footer>Testimonial from someone at
        <cite title="Source Title">Source Title</cite></footer>
    </blockquote>
  </div>
  <div class="col-md-3 col-xs-7 col-sm-3">
    <img src="imgs/packt.png" class="img-responsive">
  </div>
  <div class="col-xs-5 hidden-sm hidden-md hidden-lg">
    <img src="imgs/bs.png" class="img-responsive">
  </div>
</div>

As you can see, we had to reset the column offset in the testimonial column. This happened because the column kept the offset that we had added for extra small devices. Moreover, we ensure that each image column spans just three columns by adding the .col-sm-3 class to both images. The result of the row is as follows:

Everything else seems fine! These viewports were easier to set up. See how much Bootstrap helps us? Let's move on to the final viewport: the desktop, or large devices.

Desktop and large devices

Last but not least, we come to the grid layout for desktops and large devices. We skipped medium devices because we coded for that viewport first. Deactivate the Device mode in Chrome and put your page in a viewport with a width larger than or equal to 1200 pixels. The grid prefix that we will be using is .col-lg-*, and if you take a look at the page, you will see that everything is well placed and we don't need to make any changes! (Although we would like to make some tweaks to make our layout fancier and learn some more about the Bootstrap grid.)
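Before we do, it is worth seeing why no changes are needed here. The following snippet is a minimal sketch of our own (it is not part of the example page): Bootstrap grid classes apply from their breakpoint upward, so the .col-md-* classes we set earlier keep working on large screens unless a .col-lg-* class overrides them:

<!-- A minimal sketch, not from the example layout: both columns keep
     their six-column span on large screens too, because .col-md-*
     applies from the medium breakpoint upward unless overridden. -->
<div class="row">
  <div class="col-md-6">Left column</div>
  <div class="col-md-6">Right column</div>
</div>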
We want to talk about a feature called column ordering. It is possible to change the order of the columns in the same row by applying the .col-lg-push-* and .col-lg-pull-* classes. (Note that we are using the large-devices prefix, but any other grid class prefix can be used.) The .col-lg-push-* class pushes the column to the right by * columns, where * is the number of columns to push. On the other hand, .col-lg-pull-* pulls the column to the left by * columns. Let's test this trick in the second row by swapping the order of both columns:

<div class="row">
  <div class="col-md-offset-4 col-md-4 col-sm-6 col-lg-push-4">
    <h3>
      Some text with <small>secondary text</small>
    </h3>
  </div>
  <div class="col-md-4 col-sm-6 col-lg-pull-4">
    <h3>
      Add to your favorites
      <small>
        <kbd class="nowrap"><kbd>ctrl</kbd> + <kbd>d</kbd></kbd>
      </small>
    </h3>
  </div>
</div>

We just added the .col-lg-push-4 class to the first column and .col-lg-pull-4 to the other one to get this result. By doing this, we have changed the order of both columns in the second row, as shown in the following image:

Summary

In this article, you learned a little about mobile first development and how Bootstrap can help us in this task. We started from an existing Bootstrap template that was not ready for mobile visualization and fixed that. While fixing it, we used a lot of Bootstrap scaffolding properties and Bootstrap helpers, which made everything much easier to fix. We did all of this without a single line of CSS or JavaScript; we used only Bootstrap and its inner powers!

Resources for Article:

Further resources on this subject:
Bootstrap in a Box [article]
The Bootstrap grid system [article]
Deep Customization of Bootstrap [article]
Customizing and Automating Google Applications

Packt
27 Jan 2016
7 min read
In this article by Ramalingam Ganapathy, the author of the book Learning Google Apps Script, we will see how to create new projects in Sheets and send an email with an inline image and attachments. You will also learn to create clickable buttons, a custom menu, and a sidebar.

(For more resources related to this topic, see here.)

Creating new projects in Sheets

Open any newly created Google spreadsheet (Sheets). You will see a number of menu items at the top of the window. Point your mouse to the menu bar and click on Tools. Then, click on Script editor as shown in the following screenshot:

A new browser tab or window with a new project selection dialog will open. Click on Blank Project or close the dialog. You have now created a new untitled project with one script file (Code.gs), which has one default empty function (myFunction). To rename the project, click on the project title (at the top left-hand side of the window) and a rename dialog will open. Enter your preferred project name, and then click on the OK button.

Creating clickable buttons

Open the script editor in a newly created or any existing Google sheet. Select the cell B3 or any other cell. Click on Insert and Drawing as shown in the following screenshot:

A drawing editor window will open. Click on the Textbox icon and click anywhere on the canvas area. Type Click Me. Resize the object so that it only encloses the text, as shown in the screenshot here:

Click on Save & Close to exit the drawing editor. Now, the Click Me image will be inserted at the top of the active cell (B3), as shown in the following screenshot:

You can drag this image anywhere around the spreadsheet. In Google Sheets, images are not anchored to a particular cell; they can be dragged or moved around. If you right-click on the image, a drop-down arrow at the top right corner of the image will be visible. Click on the Assign script menu item. A script assignment window will open as shown here:

Type "greeting" or any other name you like, but remember it (you will create a function with the same name in the next steps). Click on the OK button. Now, open the script editor in the same spreadsheet. When you open the script editor, the project selector dialog will open; close it or select Blank Project. A default function called myFunction will be there in the editor. Delete everything in the editor and insert the following code:

function greeting() {
  Browser.msgBox("Greeting", "Hello World!", Browser.Buttons.OK);
}

Click on the save icon and enter a project name if asked. You have finished coding your greeting function. Activate the spreadsheet tab/window and click on your Click Me button. An authorization window will open; click on Continue. In the successive Request for Permission window, click on the Allow button. As soon as you click on Allow and the permission dialog is disposed, your actual greeting message box will open, as shown here:

Click on OK to dispose of the message box. Whenever you click on your button, this message box will open.

Creating a custom menu

Can you execute the greeting function without the help of the button? Yes — in the script editor, there is a Run menu. If you click on Run and greeting, the greeting function will be executed and the message box will open. However, creating a button for every function may not be feasible. Although you cannot alter or add items to the application's standard menus (except the Add-on menu), such as File, Edit, and View, you can add a custom menu and its items.
For this task, create a new Google Docs document or open any existing document. Open the script editor and type these two functions:

function createMenu() {
  DocumentApp.getUi()
    .createMenu("PACKT")
    .addItem("Greeting", "greeting")
    .addToUi();
}

function greeting() {
  var ui = DocumentApp.getUi();
  ui.alert("Greeting", "Hello World!", ui.ButtonSet.OK);
}

In the first function, you use the DocumentApp class, invoke the getUi method, and consecutively invoke the createMenu, addItem, and addToUi methods by method chaining. The second function is familiar to you — you created it in the previous task — but this time it uses the DocumentApp class and its associated methods. Now, run the createMenu function and flip to the document window/tab. You will notice a new menu item called PACKT added next to the Help menu. You can see the custom menu PACKT with an item Greeting, as shown next. The item labeled Greeting is associated with the greeting function:

The Greeting menu item works the same way as the button created in the previous task. The drawback of this method of inserting a custom menu is that, to make the menu show up, you need to run createMenu from within the script editor every time. Imagine how your users could run this greeting function if they don't know about GAS and the script editor — your users might not be programmers like you. To enable your users to execute the selected GAS functions, you should create a custom menu and make it visible as soon as the application is opened. To do so, rename the createMenu function to onOpen — that's it.

Creating a sidebar

A sidebar is a static dialog box that is included on the right-hand side of the document editor window. To create a sidebar, type the following code in your editor:

function onOpen() {
  var htmlOutput = HtmlService
    .createHtmlOutput('<button onclick="alert(\'Hello World!\');">Click Me</button>')
    .setTitle('My Sidebar');
  DocumentApp.getUi()
    .showSidebar(htmlOutput);
}

In the preceding code, you use HtmlService, invoke its createHtmlOutput method, and consecutively invoke the setTitle method. (Note that the single quotes inside the onclick attribute are escaped so that they do not terminate the surrounding string literal.) To test this code, run the onOpen function or reload the document. The sidebar will open on the right-hand side of the document window, as shown in the following screenshot. The sidebar layout has a fixed size, which means you cannot change, alter, or resize it:

The button in the sidebar is an HTML element, not a GAS element; if clicked, it opens the browser interface's alert box.

Sending an email with inline image and attachments

To embed images such as a logo in your email message, you may use HTML code instead of plain text. Upload your image to Google Drive, get its file ID, and use the file ID in the code:

function sendEmail(){
  var file = SpreadsheetApp.getActiveSpreadsheet()
    .getAs(MimeType.PDF);
  var image = DriveApp.getFileById("[[image file's id in Drive ]]").getBlob();
  var to = "[[receiving email id]]";
  var message = '<p><img src="cid:logo" /> Embedding inline image example.</p>';
  MailApp.sendEmail(
    to,
    "Email with inline image and attachment",
    "",
    { htmlBody: message, inlineImages: { logo: image }, attachments: [file] }
  );
}

Summary

In this article, you learned how to customize and automate Google applications with a few examples. Many more useful and interesting applications are described in the book itself.
Resources for Article: Further resources on this subject: How to Expand Your Knowledge [article] Google Apps: Surfing the Web [article] Developing Apps with the Google Speech Apis [article]

Accessing Data with Spring

Packt
25 Jan 2016
8 min read
In this article written by Shameer Kunjumohamed and Hamidreza Sattari, authors of the book Spring Essentials, we will learn how to access data with Spring.

(For more resources related to this topic, see here.)

Data access or persistence is a major technical feature of data-driven applications. It is a critical area where careful design and expertise are required. Modern enterprise systems use a wide variety of data storage mechanisms, ranging from traditional relational databases such as Oracle, SQL Server, and Sybase to more flexible, schema-less NoSQL databases such as MongoDB, Cassandra, and Couchbase. Spring Framework provides comprehensive support for data persistence in multiple flavors of mechanisms, ranging from convenient template components to smart abstractions over popular Object Relational Mapping (ORM) tools and libraries, making them much easier to use. Spring's data access support is another great reason for choosing it to develop Java applications.

Spring Framework offers developers the following primary approaches for data persistence mechanisms to choose from:

Spring JDBC
ORM data access
Spring Data

Furthermore, Spring standardizes the preceding approaches under a unified Data Access Object (DAO) notation called @Repository. Another compelling reason for using Spring is its first-class transaction support. Spring provides consistent transaction management, abstracting different transaction APIs such as JTA, JDBC, JPA, Hibernate, JDO, and other container-specific transaction implementations. In order to make development and prototyping easier, Spring provides embedded database support, smart data source abstractions, and excellent test integration. This article explores the various data access mechanisms provided by Spring Framework and its comprehensive support for transaction management in both standalone and web environments, with relevant examples.

Why use Spring Data Access when we have JDBC?

JDBC (short for Java Database Connectivity), the Java Standard Edition API for data connectivity from Java to relational databases, is a very low-level framework. Data access via JDBC is often cumbersome; the boilerplate code that the developer needs to write makes it error-prone. Moreover, JDBC exception handling is not sufficient for most use cases; there is a real need for simplified but extensive and configurable exception handling for data access. Spring JDBC encapsulates the often-repeated code, simplifying the developer's code tremendously and letting them focus entirely on the business logic. Spring Data Access components abstract the technical details, including the lookup and management of persistence resources such as connections, statements, and result sets, and accept the specific SQL statements and relevant parameters to perform the operation. They use the same JDBC API under the hood while exposing simplified, straightforward interfaces for the client's use. This approach helps produce a much cleaner and hence more maintainable data access layer for Spring applications.

DataSource

The first step of connecting to a database from any Java application is obtaining a connection object specified by JDBC. DataSource, part of Java SE, is a generalized factory of java.sql.Connection objects that represents the physical connection to the database and is the preferred means of producing a connection. DataSource handles transaction management, connection lookup, and pooling functionalities, relieving the developer from these infrastructural issues.
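To make the role of a DataSource concrete, here is a minimal sketch of our own (it is not from the original text) of how application code typically obtains and releases a pooled connection; the DataSource instance itself would be injected by Spring or looked up via JNDI, as described next:

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

public class ConnectionCheck {

  private final DataSource dataSource; // injected by Spring

  public ConnectionCheck(DataSource dataSource) {
    this.dataSource = dataSource;
  }

  public boolean isDatabaseUp() throws SQLException {
    // try-with-resources closes the connection, returning it to the pool
    try (Connection con = dataSource.getConnection()) {
      return con.isValid(1); // standard JDBC validity check, 1-second timeout
    }
  }
}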
DataSource objects are often implemented by database driver vendors and typically looked up via JNDI. Application servers and servlet engines provide their own implementations of DataSource, a connector to the one provided by the database vendor, or both. Typically configured inside XML-based server descriptor files, server-supplied DataSource objects generally provide built-in connection pooling and transaction support. As a developer, you just configure your DataSource objects inside the server (configuration files) declaratively in XML and look them up from your application via JNDI. In a Spring application, you configure your DataSource reference as a Spring bean and inject it as a dependency into your DAOs or other persistence resources.

The Spring <jee:jndi-lookup/> tag (of the http://www.springframework.org/schema/jee namespace) shown here allows you to easily look up and construct JNDI resources, including a DataSource object defined from inside an application server. For applications deployed on a J2EE application server, a JNDI DataSource object provided by the container is recommended.

<jee:jndi-lookup id="taskifyDS" jndi-name="java:jboss/datasources/taskify"/>

For standalone applications, you need to create your own DataSource implementation or use a third-party implementation, such as Apache Commons DBCP, C3P0, or BoneCP. The following is a sample DataSource configuration using Apache Commons DBCP2:

<bean id="taskifyDS" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close">
  <property name="driverClassName" value="${driverClassName}" />
  <property name="url" value="${url}" />
  <property name="username" value="${username}" />
  <property name="password" value="${password}" />
  . . .
</bean>

Make sure you add the corresponding dependency (of your DataSource implementation) to your build file. The following is the one for DBCP2:

<dependency>
  <groupId>org.apache.commons</groupId>
  <artifactId>commons-dbcp2</artifactId>
  <version>2.1.1</version>
</dependency>

Spring provides a simple implementation of DataSource called DriverManagerDataSource, which is intended only for testing and development purposes, not for production use. Note that it does not provide connection pooling. Here is how you configure it inside your application:

<bean id="taskifyDS" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
  <property name="driverClassName" value="${driverClassName}" />
  <property name="url" value="${url}" />
  <property name="username" value="${username}" />
  <property name="password" value="${password}" />
</bean>

It can also be configured in a pure JavaConfig model, as shown in the following code:

@Bean
DataSource getDatasource() {
  DriverManagerDataSource dataSource =
    new DriverManagerDataSource(pgDsProps.getProperty("url"));
  dataSource.setDriverClassName(pgDsProps.getProperty("driverClassName"));
  dataSource.setUsername(pgDsProps.getProperty("username"));
  dataSource.setPassword(pgDsProps.getProperty("password"));
  return dataSource;
}

Never use DriverManagerDataSource in production environments. Use third-party DataSources such as DBCP, C3P0, and BoneCP for standalone applications, and the JNDI DataSource provided by the container for J2EE containers instead.

Using embedded databases

For prototyping and test environments, it is a good idea to use Java-based embedded databases to quickly ramp up the project. Spring natively supports the HSQL, H2, and Derby database engines for this purpose.
Here is a sample DataSource configuration for an embedded HSQL database:

@Bean
DataSource getHsqlDatasource() {
  return new EmbeddedDatabaseBuilder().setType(EmbeddedDatabaseType.HSQL)
    .addScript("db-scripts/hsql/db-schema.sql")
    .addScript("db-scripts/hsql/data.sql")
    .addScript("db-scripts/hsql/storedprocs.sql")
    .addScript("db-scripts/hsql/functions.sql")
    .setSeparator("/").build();
}

The XML version of the same would look as shown in the following code:

<jdbc:embedded-database id="dataSource" type="HSQL">
  <jdbc:script location="classpath:db-scripts/hsql/db-schema.sql" />
  . . .
</jdbc:embedded-database>

Handling exceptions in the Spring data layer

In traditional JDBC-based applications, exception handling is based on java.sql.SQLException, which is a checked exception. It forces the developer to write catch and finally blocks carefully for proper handling and to avoid resource leakages. Spring, with its smart exception hierarchy based on runtime exceptions, saves the developer from this nightmare. With DataAccessException as the root, Spring bundles a big set of meaningful exceptions that translate the traditional JDBC exceptions. Besides JDBC, Spring covers Hibernate, JPA, and JDO exceptions in a consistent manner.

Spring uses SQLErrorCodeExceptionTranslator, which implements the SQLExceptionTranslator interface, in order to translate SQLExceptions to DataAccessExceptions. We can extend this class to customize the default translations. We can replace the default translator with our custom implementation by injecting it into the persistence resources (such as JdbcTemplate, which we will cover soon).

DAO support and the @Repository annotation

The standard way of accessing data is via specialized DAOs that perform persistence functions under the data access layer. Spring follows the same pattern by providing DAO components and allowing developers to mark their data access components as DAOs using an annotation called @Repository. This approach ensures consistency over various data access technologies, such as JDBC, Hibernate, JPA, and JDO, as well as project-specific repositories. Spring applies SQLExceptionTranslator across all these methods consistently.

Spring recommends that your data access components be annotated with the stereotype @Repository. The term "repository" was originally defined in Domain-Driven Design, Eric Evans, Addison-Wesley, as "a mechanism for encapsulating storage, retrieval, and search behavior which emulates a collection of objects." This annotation makes the class eligible for DataAccessException translation by Spring Framework (a minimal sketch of such a component follows at the end of this article). Spring Data, another standard data access mechanism provided by Spring, revolves around @Repository components.

Summary

We have so far explored Spring Framework's comprehensive coverage of all the technical aspects around data access and transactions. Spring provides multiple convenient data access methods, which take away much of the hard work involved in building the data layer and also standardize business components. Correct usage of Spring data access components will ensure that our data layer is clean and highly maintainable.

Resources for Article:

Further resources on this subject:
So, what is Spring for Android? [article]
Getting Started with Spring Security [article]
Creating a Spring Application [article]
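As promised in the DAO discussion above, here is a minimal sketch of a Spring DAO marked with @Repository. This is our own illustration — the TaskDao name, the task table, and its title column are invented for the example:

import java.util.List;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;

@Repository
public class TaskDao {

  private final JdbcTemplate jdbcTemplate; // wraps an injected DataSource

  public TaskDao(JdbcTemplate jdbcTemplate) {
    this.jdbcTemplate = jdbcTemplate;
  }

  // Failures surface as Spring's unchecked DataAccessException
  // subclasses rather than checked SQLExceptions.
  public List<String> findAllTitles() {
    return jdbcTemplate.queryForList("select title from task", String.class);
  }
}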

Advanced Fetching

Packt
21 Jan 2016
6 min read
In this article by Ramin Rad, author of the book Mastering Hibernate, we discuss various ways of fetching data from the permanent store. We will focus a little more on the annotations related to data fetching.

(For more resources related to this topic, see here.)

Fetching strategy

In the Java Persistence API (JPA), you can provide a hint to fetch data lazily or eagerly using the FetchType. However, some implementations may ignore the lazy strategy and just fetch everything eagerly. Hibernate's default strategy is FetchType.LAZY, to reduce the memory footprint of your application. Hibernate offers additional fetch modes on top of the commonly used JPA fetch types. Here, we will discuss how they are related and provide an explanation, so you understand when to use which.

JOIN fetch mode

The JOIN fetch mode forces Hibernate to create a SQL join statement to populate both the entities and the related entities using just one SQL statement. However, the JOIN fetch mode also implies that the fetch type is EAGER, so there is no need to specify the fetch type. To understand this better, consider the following classes:

@Entity
public class Course {
  @Id @GeneratedValue
  private long id;
  private String title;
  @OneToMany(cascade=CascadeType.ALL, mappedBy="course")
  @Fetch(FetchMode.JOIN)
  private Set<Student> students = new HashSet<Student>();
  // getters and setters
}

@Entity
public class Student {
  @Id @GeneratedValue
  private long id;
  private String name;
  private char gender;
  @ManyToOne
  private Course course;
  // getters and setters
}

In this case, we are instructing Hibernate to use JOIN to fetch the course and the students in one SQL statement, and this is the SQL that is composed by Hibernate:

select course0_.id as id1_0_0_, course0_.title as title2_0_0_,
  students1_.course_id as course_i4_0_1_, students1_.id as id1_1_1_,
  students1_.gender as gender2_1_2_, students1_.name as name3_1_2_
from Course course0_
left outer join Student students1_ on course0_.id=students1_.course_id
where course0_.id=?

As you can see, Hibernate is using a left outer join between the course and any students who may have signed up for it. Another important thing to note is that if you use HQL, Hibernate will ignore the JOIN fetch mode and you'll have to specify the join in the HQL (a short sketch of this follows at the end of this section). In other words, if you fetch a course entity using a statement such as this:

List<Course> courses = session
  .createQuery("from Course c where c.id = :courseId")
  .setLong("courseId", chemistryId)
  .list();

then Hibernate will use SELECT mode; but if you don't use HQL, as shown in the next example, Hibernate will pay attention to the fetch mode instructions provided by the annotation:

Course course = (Course) session.get(Course.class, chemistryId);

SELECT fetch mode

In SELECT mode, Hibernate uses an additional SELECT statement to fetch the related entities. This mode doesn't affect the behavior of the fetch type (LAZY, EAGER), so both will work as expected. To demonstrate this, consider the same example used in the last section, and let's examine the output:

select id, title from Course where id=?
select course_id, id, gender, name from Student where course_id=?

Note that Hibernate first fetches and populates the Course entity and then uses the course ID to fetch the related students. Also, if your fetch type is set to LAZY and you never reference the related entities, the second SELECT is never executed.
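As a short sketch of the HQL behavior mentioned above (our own example, following standard HQL rather than code from the original article), you can request the join explicitly with a fetch join, so that the students are still loaded in one SQL statement:

// Hypothetical HQL variant of the earlier query: "join fetch" asks
// Hibernate to load the students in the same SQL statement.
List<Course> courses = session
  .createQuery("from Course c join fetch c.students where c.id = :courseId")
  .setLong("courseId", chemistryId)
  .list();

Note that a fetch join over a collection can return duplicate Course rows; a select distinct in the HQL removes them if needed.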
SUBSELECT fetch mode

The SUBSELECT fetch mode is used to minimize the number of SELECT statements executed to fetch the related entities. If you first fetch the owner entities and then try to access the associated owned entities, without SUBSELECT, Hibernate will issue an additional SELECT statement for every one of the owner entities. Using SUBSELECT, you instruct Hibernate to use a SQL sub-select to fetch all the owned entities for the list of owner entities already fetched. To understand this better, let's explore the following entity classes:

@Entity
public class Owner {
  @Id @GeneratedValue
  private long id;
  private String name;
  @OneToMany(cascade=CascadeType.ALL, mappedBy="owner")
  @Fetch(FetchMode.SUBSELECT)
  private Set<Car> cars = new HashSet<Car>();
  // getters and setters
}

@Entity
public class Car {
  @Id @GeneratedValue
  private long id;
  private String model;
  @ManyToOne
  private Owner owner;
  // getters and setters
}

If you fetch from the Owner table, Hibernate will issue only two select statements: one to fetch the owners and another to fetch the cars for those owners by using a sub-select, as follows:

select id, name from Owner
select owner_id, id, model from Car where owner_id in (select id from Owner)

Without the SUBSELECT fetch mode, instead of the second select statement shown in the preceding section, Hibernate would execute a select statement for every entity returned by the first statement. This is known as the n+1 problem: one SELECT statement is executed, and then, for each returned entity, another SELECT statement is executed to fetch the associated entities. Finally, the SUBSELECT fetch mode is not supported in ToOne associations, such as OneToOne or ManyToOne, because it was designed for relationships where the ownership of the entities is clear.

Batch fetching

Another strategy offered by Hibernate is batch fetching. The idea is very similar to SUBSELECT, except that instead of using a sub-select, the entity IDs are explicitly listed in the SQL, and the list size is determined by the @BatchSize annotation. This may perform slightly better for smaller batches. (Note that all commercial database engines also perform query optimization.) To demonstrate this, let's consider the following entity classes:

@Entity
public class Owner {
  @Id @GeneratedValue
  private long id;
  private String name;
  @OneToMany(cascade=CascadeType.ALL, mappedBy="owner")
  @BatchSize(size=10)
  private Set<Car> cars = new HashSet<Car>();
  // getters and setters
}

@Entity
public class Car {
  @Id @GeneratedValue
  private long id;
  private String model;
  @ManyToOne
  private Owner owner;
  // getters and setters
}

Using @BatchSize, we instruct Hibernate to fetch the related entities (cars) using a SQL statement with a WHERE IN clause that lists the relevant owner IDs, as shown:

select id, name from Owner
select owner_id, id, model from Car where owner_id in (?, ?)

In this case, the first select statement returned only two rows, but if it returned more than the batch size, there would be multiple select statements to fetch the owned entities, each fetching 10 entities at a time.

Summary

In this article, we covered many ways of fetching datasets from the database.

Resources for Article:

Further resources on this subject:
Hibernate Types [article]
Java Hibernate Collections, Associations, and Advanced Concepts [article]
Integrating Spring Framework with Hibernate ORM Framework: Part 1 [article]
Introducing Sails.js

Packt
15 Jan 2016
5 min read
In this article by Shahid Shaikh, author of the book Sails.js Essentials, you will learn a few basics of Sails.js. Sails.js is a modern, production-ready Node.js framework for developing Node.js web applications following the MVC pattern. If you are not looking to reinvent the wheel, as we do in the MongoDB, Express.js, AngularJS, and Node.js (MEAN) stack, and want to focus on business logic instead, then Sails.js is the answer. Sails.js is a powerful, enterprise-ready framework: you can write your business logic and deploy the application with the assurance that the application won't fail due to other factors. Sails.js uses Express.js as its web framework and Socket.io to handle WebSocket messages. Both are integrated into Sails.js, so you don't need to install and configure them separately. Sails.js also supports all the popular databases, such as MySQL, MongoDB, and PostgreSQL. It also comes with an autogenerated API feature that lets you create an API on the go (a short example appears at the end of this article).

(For more resources related to this topic, see here.)

Brief about MVC

We know that Model-View-Controller (MVC) is a software architecture pattern coined by Smalltalk engineers. MVC separates the application into three internal components, and data is passed between them. Each component is responsible for its own task and passes its result to the next component. This separation provides a great opportunity for code reusability and loose coupling.

MVC components

The following are the components of the MVC architecture:

Model

The main component of MVC is the model. The model represents knowledge. It could be a single object or a nested structure of objects. The model directly manages the data (it stores the data as well), logic, and rules of the application.

View

The view is the visual representation of the model. The view takes data from the model and presents it (in a browser, a console, and so on). The view gets updated as soon as the model is changed. An ideal MVC application must have a system to notify the other components about changes. In a web application, the view is the HTML or Embedded JavaScript (EJS) pages that we present in a web browser.

Controller

As the name implies, the task of a controller is to accept input and convert it into a proper command for the model or view. The controller can send commands to the model to make changes or update its state, and it can also send commands to the view to update the information. For example, consider Google Docs as an MVC application: the view is the screen where you type. At defined intervals, the document is automatically saved to the Google server — the controller notifies the model (the Google backend server) to update the changes.

Installing Sails.js

Make sure that you have the latest version of Node.js and npm installed on your system before installing Sails.js. You can install it using the following command:

npm install -g sails

You can then use Sails.js as a command-line tool, as shown in the following:

Creating a new project

You can create a new project using the Sails.js command-line tool. The command is as follows:

sails new appName

Once the project is created, you need to install the dependencies that it needs. You can do so by running the following command:

npm install

Adding database support

Sails.js provides object-relational mapping (ORM) adapters for widely used databases such as MySQL, MongoDB, and PostgreSQL. In order to add these to your project, you need to install the respective packages, such as sails-mysql for MySQL, sails-mongo for MongoDB, and so on.
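For example, to add MySQL support, the adapter is typically installed with npm inside the project directory (a sketch of the usual command; check the adapter's documentation for the exact package version):

npm install sails-mysql --save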
Once the adapter is added, you need to change the connections.js and models.js files located in the /config folder. Add the connection parameters, such as the host, user, password, and database name, in the connections.js file. For example, for MySQL, add the following:

module.exports.connections = {
  mysqlAdapter: {
    adapter: 'sails-mysql',
    host: 'localhost',
    user: 'root',
    password: '',
    database: 'sampleDB'
  }
};

In the models.js file, change the connection parameter to this one. Here is a snippet for that:

module.exports.models = {
  connection: 'mysqlAdapter'
};

Now, Sails.js will communicate with MySQL using this connection.

Adding a grunt task

Sails.js uses grunt as its default task builder and provides an effective way to add or remove tasks. If you take a look at the tasks folder in the Sails.js project directory, you will see the config and register folders, which hold the tasks and register them with the grunt runner. In order to add a new task, you can create a new file in the /config folder and add the grunt task using the following snippet as a starting point:

module.exports = function(grunt) {
  // Your grunt task code
};

Once done, you can register it with the default task runner, or create a new file in the /register folder and add the task using the following code:

module.exports = function (grunt) {
  grunt.registerTask('task-name', [ 'your custom task']);
};

Run this task using grunt <task-name>.

Summary

In this article, you learned that you can develop a rich web application very effectively with Sails.js, as you don't have to do a lot of extra work for configuration and setup. Sails.js also provides an autogenerated REST API and built-in WebSocket integration in every route, which will help you develop real-time applications easily.

Resources for Article:

Further resources on this subject:
Using Socket.IO and Express together [article]
Parallel Programming Patterns [article]
Planning Your Site in Adobe Muse [article]
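As promised earlier, here is a quick look at the autogenerated API feature (a sketch based on the standard Sails.js command-line tool; it is not covered in detail in this article). Running the generator creates a model and a controller, and the built-in blueprint routes then expose REST endpoints such as GET /user without any extra code:

sails generate api user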

ECMAScript 6 Standard

Packt
15 Jan 2016
21 min read
In this article by Ved Antani, the author of the book Mastering JavaScript, we will learn about the ECMAScript 6 standard. ECMAScript 6 (ES6) is the latest version of the ECMAScript standard. The standard is still evolving, and the last round of modifications was completed in June 2015. ES2015 is significant in its scope, and its recommendations are being implemented in most JavaScript engines. This is great news. ES6 introduces a huge number of features that add syntactic forms and helpers that enrich the language significantly. The pace at which ECMAScript standards keep evolving makes it a bit difficult for browsers and JavaScript engines to support new features. It is also a practical reality that most programmers have to write code that can be supported by older browsers. The notorious Internet Explorer 6 was once the most widely used browser in the world. Making sure that your code is compatible with the largest number of browsers is a daunting task. So, while you may want to jump to the next set of awesome ES6 features, you will have to consider the fact that several of the ES6 features may not be supported by the most popular browsers or JavaScript frameworks.

This may look like a dire scenario, but things are not that dark. Node.js uses the latest version of the V8 engine, which supports the majority of ES6 features. Facebook's React also supports them. Mozilla Firefox and Google Chrome are two of the most used browsers today, and they support a majority of ES6 features. To avoid such pitfalls and unpredictability, certain solutions have been proposed. The most useful among these are polyfills/shims and transpilers.

(For more resources related to this topic, see here.)

ES6 syntax changes

ES6 brings in significant syntactic changes to JavaScript. These changes need careful study and some getting used to. In this section, we will study some of the most important syntax changes and see how you can use Babel to start using these newer constructs in your code right away.

Block scoping

We discussed earlier that variables in JavaScript are function-scoped: variables created in a nested scope are available to the entire function. Several programming languages provide you with a default block scope, where any variable declared within a block of code (usually delimited by {}) is scoped (available) only within that block. To achieve a similar block scope in JavaScript, a prevalent method is to use immediately-invoked function expressions (IIFE). Consider the following example:

var a = 1;
(function blockscope(){
    var a = 2;
    console.log(a);   // 2
})();
console.log(a);       // 1

Using the IIFE, we are creating a block scope for the a variable. When a variable is declared in the IIFE, its scope is restricted to the function. This is the traditional way of simulating block scope. ES6 supports block scoping without using IIFEs. In ES6, you can enclose any statement(s) in a block defined by {}. Instead of using var, you can declare a variable using let to define the block scope. The preceding example can be rewritten using ES6 block scopes as follows:

"use strict";
var a = 1;
{
  let a = 2;
  console.log( a ); // 2
}
console.log( a ); // 1

Using standalone brackets {} may seem unusual in JavaScript, but this convention is fairly common for creating a block scope in many languages. The block scope kicks in with other constructs, such as if { } or for () { }, as well. When you use a block scope in this way, it is generally preferred to put the variable declaration at the top of the block.
One difference between variables declared using var and let is that variables declared with var are attached to the entire function scope, while variables declared using let are attached to the block scope and are not initialized until they appear in the block. Hence, you cannot access a variable declared with let earlier than its declaration, whereas with variables declared using var, the ordering doesn't matter:

function fooey() {
  console.log(foo); // ReferenceError
  let foo = 5000;
}

One specific use of let is in for loops. When we use a variable declared using var in a for loop, it is created in the global or parent scope. We can create a block-scoped variable in the for loop scope by declaring a variable using let. Consider the following example:

for (let i = 0; i<5; i++) {
  console.log(i);
}
console.log(i); // i is not defined

As i is created using let, it is scoped to the for loop. You can see that the variable is not available outside that scope.

One more use of block scopes in ES6 is the ability to create constants. Using the const keyword, you can create constants in the block scope. Once the value is set, you cannot change the value of such a constant:

if(true){
  const a=1;
  console.log(a);
  a=100;  // "a" is read-only; you will get a TypeError
}

A constant has to be initialized while being declared. The same block scope rules apply to functions also. When a function is declared inside a block, it is available only within that scope.

Default parameters

Defaulting is very common. You always set some default value for parameters passed to a function or for variables that you initialize. You may have seen code similar to the following:

function sum(a,b){
  a = a || 0;
  b = b || 0;
  return (a+b);
}
console.log(sum(9,9)); //18
console.log(sum(9));   //9

Here, we are using || (the OR operator) to default the variables a and b to 0 if no value was supplied when the function was invoked. With ES6, you have a standard way of defaulting function arguments. The preceding example can be rewritten as follows:

function sum(a=0, b=0){
  return (a+b);
}
console.log(sum(9,9)); //18
console.log(sum(9));   //9

You can pass any valid expression or function call as part of the default parameter list.

Spread and rest

ES6 has a new operator, …. Based on how it is used, it is called either spread or rest. Let's look at a trivial example:

function print(a, b){
  console.log(a,b);
}
print(...[1,2]);  //1,2

What's happening here is that when you add … before an array (or an iterable), it spreads the elements of the array into individual variables in the function parameters. The a and b function parameters were assigned two values from the array when it was spread out. Extra parameters are ignored while spreading an array:

print(...[1,2,3 ]);  //1,2

This would still print 1 and 2 because there are only two function parameters available. Spreads can be used in other places also, such as array assignments:

var a = [1,2];
var b = [ 0, ...a, 3 ];
console.log( b ); //[0,1,2,3]

There is another use of the … operator that is the very opposite of the one that we just saw. Instead of spreading the values, the same operator can gather them into one:

function print (a,...b){
  console.log(a,b);
}
print(1,2,3,4,5,6,7);  //1 [2,3,4,5,6,7]

In this case, the variable b takes the rest of the values. The a variable takes the first value, 1, and b takes the rest of the values as an array.

Destructuring

If you have worked with a functional language such as Erlang, you will relate to the concept of pattern matching.
Destructuring in JavaScript is something very similar. Destructuring allows you to bind values to variables using pattern matching. Consider the following example:

var [start, end] = [0,5];
for (let i=start; i<end; i++){
  console.log(i);
}
//prints - 0,1,2,3,4

We are assigning two variables with the help of array destructuring:

var [start, end] = [0,5];

As shown in the preceding example, we want the pattern to match when the first value is assigned to the first variable (start) and the second value is assigned to the second variable (end). Consider the following snippet to see how the destructuring of array elements works:

function fn() {
  return [1,2,3];
}
var [a,b,c]=fn();
console.log(a,b,c); //1 2 3
//We can skip one of them
var [d,,f]=fn();
console.log(d,f);   //1 3
//Rest of the values are not used
var [e,] = fn();
console.log(e);     //1

Let's discuss how object destructuring works. Let's say that you have a function f that returns an object as follows:

function f() {
  return {
    a: 'a',
    b: 'b',
    c: 'c'
  };
}

When we destructure the object being returned by this function, we can use a syntax similar to the one we saw earlier; the difference is that we use {} instead of []:

var { a: a, b: b, c: c } = f();
console.log(a,b,c); //a b c

Similar to arrays, we use pattern matching to assign variables to their corresponding values returned by the function. There is an even shorter way of writing this if you are using the same variable name as the property being matched. The following example would do just fine:

var { a,b,c } = f();

However, you would mostly be using a variable name different from the property being returned by the function. It is important to remember that the syntax is source: target and not the usual target: source. Carefully observe the following example:

//this is target: source - which is incorrect
var { x: a, x: b, x: c } = f();
console.log(x,y,z); //x is undefined, y is undefined z is undefined
//this is source: target - correct
var { a: x, b: y, c: z } = f();
console.log(x,y,z); // a b c

This is the opposite of the usual target = source way of assigning values and hence will take some time to get used to.

Object literals

Object literals are everywhere in JavaScript. You would think that there is no scope for improvement there; however, ES6 wants to improve this too. ES6 introduces several shortcuts to create a concise syntax around object literals:

var firstname = "Albert", lastname = "Einstein",
  person = {
    firstname: firstname,
    lastname: lastname
  };

If you intend to use the same property name as the variable that you are assigning, you can use the concise property notation of ES6:

var firstname = "Albert", lastname = "Einstein",
  person = {
    firstname,
    lastname
  };

Similarly, when you assign functions to properties as follows:

var person = {
  getName: function(){
    // ..
  },
  getAge: function(){
    //..
  }
}

instead of the preceding lines, you can write the following:

var person = {
  getName(){
    // ..
  },
  getAge(){
    //..
  }
}

Template literals

I am sure you have done things like the following:

function SuperLogger(level, clazz, msg){
  console.log(level+": Exception happened in class:"+clazz+" - Exception :"+ msg);
}

This is a very common way of substituting variable values to form a string literal. ES6 provides you with a new type of string literal using the backtick (`) delimiter. You can use string interpolation to put placeholders in a template string literal. The placeholders will be parsed and evaluated.
The preceding example can be rewritten as follows:

function SuperLogger(level, clazz, msg){
  console.log(`${level} : Exception happened in class: ${clazz} - Exception : ${msg}`);
}

We are using `` around a string literal. Within this literal, any expression of the ${..} form is parsed immediately. This parsing is called interpolation. While parsing, the variable's value replaces the placeholder within ${}. The resulting string is just a normal string with the placeholders replaced by actual variable values.

With string interpolation, you can also split a string into multiple lines, as shown in the following code (very similar to Python):

var quote =
`Good night, good night!
Parting is such sweet sorrow,
that I shall say good night
till it be morrow.`;
console.log( quote );

You can use function calls or valid JavaScript expressions as part of the string interpolation:

function sum(a,b){
  console.log(`The sum seems to be ${a + b}`);
}
sum(1,2); //The sum seems to be 3

The final variation of template strings is called tagged template strings. The idea is to modify the template string using a function. Consider the following example:

function emmy(key, ...values){
  console.log(key);
  console.log(values);
}
let category="Best Movie";
let movie="Adventures in ES6";
emmy`And the award for ${category} goes to ${movie}`;

//["And the award for "," goes to ",""]
//["Best Movie","Adventures in ES6"]

The strangest part is how we call the emmy function with the template literal; it's not the traditional function call syntax. We are not writing emmy(); we are just tagging the literal with the function. When this function is called, the first argument is an array of all the plain strings (the strings between the interpolated expressions). The second argument is the array where all the interpolated expressions are evaluated and stored. Now, what this means is that the tag function can actually change the resulting string:

function priceFilter(s, ...v){
  //Bump up discount
  return s[0]+ (v[0] + 5);
}
let default_discount = 20;
let greeting = priceFilter `Your purchase has a discount of ${default_discount} percent`;
console.log(greeting);  //Your purchase has a discount of 25

As you can see, we modified the value of the discount in the tag function and returned the modified value.

Maps and Sets

ES6 introduces four new data structures: Map, WeakMap, Set, and WeakSet. We discussed earlier that objects are the usual way of creating key-value pairs in JavaScript. The disadvantage of objects is that you cannot use non-string values as keys. The following snippets demonstrate how Maps are created in ES6:

let m = new Map();
let s = { 'seq' : 101 };

m.set('1','Albert');
m.set('MAX', 99);
m.set(s,'Einstein');

console.log(m.has('1')); //true
console.log(m.get(s));   //Einstein
console.log(m.size);     //3
m.delete(s);
m.clear();

You can initialize the Map while declaring it:

let m = new Map([
  [ 1, 'Albert' ],
  [ 2, 'Douglas' ],
  [ 3, 'Clive' ],
]);

If you want to iterate over the entries in the Map, you can use the entries() function, which will return an iterator.
You can iterate through all the keys using the keys() function, and you can iterate through the values of the Map using the values() function:

let m2 = new Map([
    [ 1, 'Albert' ],
    [ 2, 'Douglas' ],
    [ 3, 'Clive' ],
]);
for (let a of m2.entries()){
  console.log(a);
}
//[1,"Albert"] [2,"Douglas"] [3,"Clive"]
for (let a of m2.keys()){
  console.log(a);
}
//1 2 3
for (let a of m2.values()){
  console.log(a);
}
//Albert Douglas Clive

A variation of JavaScript Maps is the WeakMap — a WeakMap does not prevent its keys from being garbage-collected. Keys for a WeakMap must be objects, and the values can be arbitrary values. While a WeakMap behaves in the same way as a normal Map, you cannot iterate through it and you can't clear it. There are reasons behind these restrictions: as the state of the Map is not guaranteed to remain static (keys may get garbage-collected), you cannot ensure correct iteration. There are not many cases where you may want to use a WeakMap; most uses of a Map can be written using normal Maps.

While Maps allow you to store arbitrary values, Sets are collections of unique values. Sets have methods similar to Maps; however, set() is replaced with add(), and the get() method does not exist. The reason that the get() method is not there is that a Set stores unique values, so you are only interested in checking whether the Set contains a value or not. Consider the following example:

let x = {'first': 'Albert'};
let s = new Set([1,2,'Sunday',x]);
//console.log(s.has(x));  //true
s.add(300);
//console.log(s);  //[1,2,"Sunday",{"first":"Albert"},300]

for (let a of s.entries()){
  console.log(a);
}
//[1,1]
//[2,2]
//["Sunday","Sunday"]
//[{"first":"Albert"},{"first":"Albert"}]
//[300,300]
for (let a of s.keys()){
  console.log(a);
}
//1
//2
//Sunday
//{"first":"Albert"}
//300
for (let a of s.values()){
  console.log(a);
}
//1
//2
//Sunday
//{"first":"Albert"}
//300

The keys() and values() iterators both return a list of the unique values in the Set. The entries() iterator yields a list of entry arrays, where both items of the array are the unique Set values. The default iterator for a Set is its values() iterator.

Symbols

ES6 introduces a new data type called Symbol. A Symbol is guaranteed to be unique and immutable. Symbols are usually used as identifiers for object properties. They can be considered uniquely generated IDs. You can create Symbols with the Symbol() factory method — remember that this is not a constructor, and hence you should not use the new operator:

let s = Symbol();
console.log(typeof s); //symbol

Unlike strings, Symbols are guaranteed to be unique and hence help in preventing name clashes. With Symbols, we have an extensibility mechanism that works for everyone. ES6 comes with a number of predefined built-in Symbols that expose various meta behaviors on JavaScript object values.

Iterators

Iterators have been around in other programming languages for quite some time. They provide convenient methods for working with collections of data. ES6 introduces iterators for the same use case. ES6 iterators are objects with a specific interface. Iterators have a next() method that returns an object. The returned object has two properties — value (the next value) and done (which indicates whether the last result has been reached). ES6 also defines an Iterable interface, which describes objects that must be able to produce iterators.
Let's look at an array, which is an iterable, and the iterator that it can produce to consume its values:

var a = [1,2];
var i = a[Symbol.iterator]();
console.log(i.next());      // { value: 1, done: false }
console.log(i.next());      // { value: 2, done: false }
console.log(i.next());      // { value: undefined, done: true }

As you can see, we are accessing the array's iterator via Symbol.iterator and calling the next() method on it to get each successive element. Both value and done are returned by the next() method call. When you call next() past the last element in the array, you get an undefined value and done: true, indicating that you have iterated over the entire array.

For..of loops

ES6 adds a new iteration mechanism in the form of the for..of loop, which loops over the set of values produced by an iterator. The value that we iterate over with for..of is an iterable. Let's compare for..of to for..in:

var list = ['Sunday','Monday','Tuesday'];
for (let i in list){
  console.log(i);  //0 1 2
}
for (let i of list){
  console.log(i);  //Sunday Monday Tuesday
}

As you can see, using the for..in loop, you iterate over the indexes of the list array, while the for..of loop lets you iterate over the values stored in the list array.

Arrow functions

One of the most interesting new parts of ECMAScript 6 is arrow functions. Arrow functions are, as the name suggests, functions defined with a new syntax that uses an arrow (=>). Let's first see how arrow functions look:

//Traditional Function
function multiply(a,b) {
  return a*b;
}
//Arrow
var multiply = (a,b) => a*b;
console.log(multiply(1,2)); //2

The arrow function definition consists of a parameter list (of zero or more parameters, with surrounding ( .. ) if there's not exactly one parameter), followed by the => marker, which is followed by a function body. The body of the function can be enclosed by { .. } if there's more than one expression in the body. If there's only one expression and you omit the surrounding { .. }, there's an implied return in front of the expression. There are several variations of how you can write arrow functions. The following are the most commonly used:

// single argument, single statement
//arg => expression;
var f1 = x => console.log("Just X");
f1(); //Just X

// multiple arguments, single statement
//(arg1 [, arg2]) => expression;
var f2 = (x,y) => x*y;
console.log(f2(2,2)); //4

// single argument, multiple statements
// arg => {
//     statements;
// }
var f3 = x => {
  if(x>5){
    console.log(x);
  }
  else {
    console.log(x+5);
  }
}
f3(6); //6

// multiple arguments, multiple statements
// ([arg] [, arg]) => {
//   statements
// }
var f4 = (x,y) => {
  if(x!=0 && y!=0){
    return x*y;
  }
}
console.log(f4(2,2));//4

// with no arguments, single statement
//() => expression;
var f5 = () => 2*2;
console.log(f5()); //4

//IIFE
console.log(( x => x * 3 )( 3 )); // 9

It is important to remember that all the characteristics of normal function parameters are available to arrow functions, including default values, destructuring, and rest parameters. Arrow functions offer a convenient and short syntax, which gives your code a very functional programming flavor. Arrow functions are popular because they offer an attractive promise of writing concise functions by dropping function, return, and { .. } from the code. However, arrow functions are also designed to fundamentally solve a particular and common pain point with this-aware coding.
In normal ES5 functions, every new function defines its own value of this (a new object in the case of a constructor, undefined in strict mode function calls, the context object if the function is called as an object method, and so on). JavaScript functions always have their own this, and this prevents you from accessing, for example, the this of a surrounding method from inside a callback. To understand this problem, consider the following example:

function CustomStr(str){
  this.str = str;
}
CustomStr.prototype.add = function(s){   // --> 1
  'use strict';
  return s.map(function (a){             // --> 2
    return this.str + a;                 // --> 3
  });
};

var customStr = new CustomStr("Hello");
console.log(customStr.add(["World"]));
//Cannot read property 'str' of undefined

On the line marked with 3, we are trying to get this.str, but the anonymous function also has its own this, which shadows the this of the method from line 1. To fix this in ES5, we can assign this to a variable and use the variable instead:

function CustomStr(str){
  this.str = str;
}
CustomStr.prototype.add = function(s){
  'use strict';
  var that = this;                       // --> 1
  return s.map(function (a){             // --> 2
    return that.str + a;                 // --> 3
  });
};

var customStr = new CustomStr("Hello");
console.log(customStr.add(["World"]));
//["HelloWorld"]

On the line marked with 1, we are assigning this to a variable, that, and in the anonymous function we are using the that variable, which holds a reference to this from the correct context.

ES6 arrow functions have lexical this, meaning that arrow functions capture the this value of the enclosing context. We can convert the preceding function to an equivalent arrow function as follows:

function CustomStr(str){
  this.str = str;
}
CustomStr.prototype.add = function(s){
  return s.map((a)=> {
    return this.str + a;
  });
};
var customStr = new CustomStr("Hello");
console.log(customStr.add(["World"]));
//["HelloWorld"]

Summary

In this article, we discussed a few important features being added to the language in ES6. It's an exciting collection of new language features and paradigms and, using polyfills and transpilers, you can start with them right away. JavaScript is an ever-growing language, and it is important to understand what the future holds. ES6 features make JavaScript an even more interesting and mature language.

Resources for Article:

Further resources on this subject:
Using JavaScript with HTML [article]
Getting Started with Tableau Public [article]
Façade Pattern – Being Adaptive with Façade [article]